Posts Tagged 'Network'

March 14, 2012

Game On: SoftLayer + Game Developers + GDC

Last week, I spent a few days at GDC in San Francisco, getting a glimpse into the latest games hitting the market. Game developers are a unique bunch, and that uniqueness goes beyond the unbelievable volume of NOS Energy Drinks they consume ... They like to test and push the IT envelope, making games more diverse, interactive and social.

The new crop of games showcased at GDC is more resource-intensive — it's almost like watching an IT arms race, with developers upping the ante for all online gaming companies. The public's appetite remains relentless, and the pay-off can be huge: Gaming industry research firm DFC Intelligence predicts that worldwide market revenue generated solely from online games will reach $26.4 billion in 2015, more than double the $11.9 billion achieved in 2009.

That's where SoftLayer comes in. We understand the high stakes in the gaming world and have tailored our IaaS offerings for an optimal end-user experience that stretches from initial release to everyday play. Take a look at what game developer OMGPOP (a SoftLayer customer) achieved with Draw Something: Almost overnight it became the #1 application in Apple's App Store, tallying more than 26 million downloads in just a few weeks. To put the volume of gameplay into perspective, the game is generating more than 30 hours of drawings per second. That's what we refer to as "Internet Scale." When YouTube hit one hour of video uploads per second, they came up with a pretty impressive presentation to talk about that scale ... and that's only one hour per second.

Draw Something

Gamers require a high-performance, always-on, graphically attractive and quick-responding experience. If they don't get that experience, they move on to the next game that can give it to them. With our core strengths of automation and extensive network reach, game developers come to us to enable that experience easily, and in return, they get a platform where they can develop, test, deploy and yes, play their latest games. True "Internet Scale" with easy consumptive billing ... Get in and out quickly, and use only what you need.

Some of the most interesting and innovative use cases for our platform come from the gaming industry. Because we make it easy to rapidly provision resources (dedicated servers in less than two hours and cloud servers in as few as five minutes) in an automated way (via our API), many developers have started incorporating cloud-like functions into their games and applications, adding dedicated resources to their infrastructure on demand in a way you'd normally only expect from a virtual environment. Now that Flex Images are available, we're expecting to see a lot more of that.
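
To make that a little more concrete, here's a rough sketch of the kind of API-driven provisioning described above, using curl against the SoftLayer REST API. This is my own illustration rather than code from any customer: the endpoint format, field names and values reflect the public API documentation as I understand it, and the username, API key, hostname, domain and data center are placeholders you'd replace with your own.

#Order an hourly cloud server through the API (all values illustrative)
curl -u SL_USERNAME:SL_API_KEY \
     -X POST -H "Content-Type: application/json" \
     -d '{"parameters":[{"hostname":"game01","domain":"example.com","startCpus":2,"maxMemory":4096,"hourlyBillingFlag":true,"localDiskFlag":true,"operatingSystemReferenceCode":"UBUNTU_LATEST","datacenter":{"name":"dal05"}}]}' \
     https://api.softlayer.com/rest/v3/SoftLayer_Virtual_Guest.json

Calls like that can be scripted into a game's back end so capacity scales up (and back down) with player demand instead of waiting on a human.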

As I was speaking with a few customers on the show floor, I was amazed to hear how passionate they were about what one called the "secret ingredient" at SoftLayer: Our network. He talked about his trials and tribulations in delivering global reach and performance before he transitioned his infrastructure to SoftLayer, and hearing what our high-bandwidth and low-latency architecture has meant for his games was an affirmation for all of the work we've put into creating (and continuing to build) the network.

The rapid pace of innovation and change that keeps the gaming industry going is almost electric ... When you walk into a room filled with game developers, their energy is contagious. We ended GDC with an opportunity to do just that. We were proud to sponsor a launch party for our friends at East Side Game Studios as they celebrated the release of two new games — Zombinis and Ruby Skies. Since their NomNom Combo puzzle game is one of the most addictive games on my iPhone, it was a no-brainer to hook up with them at GDC. If you want a peek into the party, check out our GDC photo album on Facebook.

Draw Something

To give you an idea of how much the gaming culture permeates the SoftLayer offices, I need only point out a graffiti mural on one of the walls in our HQ office in Dallas. Because we sometimes get nostalgic for the days of misspent youth in video arcades playing Pac Man, Donkey Kong and Super Mario, we incorporated those iconic games in a piece of artwork in our office:

Retro Gaming Mural

If you are an aspiring game developer, we'd like to hear from you and help enable the next Internet gaming sensation ... Our experience with the game developers already on our platform should assure you that we know what we're talking about. For now, though, it's my turn to go "Draw Something."

-@gkdog

February 15, 2012

SoftLayer + OpenStack Swift = SoftLayer Object Storage

Since our inception in 2005, SoftLayer's goal has been to provide an array of on-demand data center and hosting services that combine exceptional access, control, scalability and security with unparalleled network robustness and ease of use ... That's why we're so excited to unveil SoftLayer Object Storage to our customers.

Based on OpenStack Object Storage (codenamed Swift) — open-source software that allows the creation of redundant, scalable object storage on clusters of standardized servers — SoftLayer Object Storage provides customers with new opportunities to leverage cost-effective cloud-based storage and to simultaneously realize significant capex-related cost savings.

OpenStack has been phenomenally successful thanks to a global software community of developers and other technologists that has built and tweaked a standards-based, massively scalable open-source platform for public and private cloud computing. The simple goal of the OpenStack project is to deliver code that enables any organization to create and offer feature-rich cloud computing services from industry-standard hardware. The overarching OpenStack technology consists of several interrelated project components: one for compute, one for an image service, one for object storage, and a few more projects in development.

SoftLayer Object Storage
Like the OpenStack Swift system on which it is based, SoftLayer Object Storage is not a file system or real-time data-storage system; rather, it's a long-term storage system for more permanent, static data that can be retrieved, leveraged and updated when necessary. Typical applications for this type of storage include virtual machine images, photo storage, email storage and backup archiving.

One of the primary benefits of Object Storage is the role that it can play in automating and streamlining data storage in cloud computing environments. SoftLayer Object Storage offers rich metadata features and search capability that can be leveraged to automate the way unstructured data gets accessed. In this way, SoftLayer Object Storage will provide organizations with new capabilities for improving overall data management and storage efficiency.

File Storage v. Object Storage
To better understand the difference between file storage and object storage, let's look at how the two approaches handle metadata and search for a simple photo. When a digital camera or camera-enabled phone snaps a photo, it embeds a series of metadata values in the image. If you save the image as a standard file, you can search for it only by standard file properties like name, date and size. If you save the same image as an object, you can set additional object metadata values for the image (after reading them from the image file). That detail enables granular search based on the metadata keys and values, in addition to the standard object properties. Here is a sample comparison of an image's metadata in both systems:

File Metadata              Object Metadata
Name: img01.jpg            Name: img01.jpg
Date: 2012-02-13           Date: 2012-02-13
Size: 1.2MB                Size: 1.2MB
                           Manufacturer: CASIO
                           Model: QV-4000
                           x-Resolution: 72.00
                           y-Resolution: 72.00
                           PixelXDimension: 2240
                           PixelYDimension: 1680
                           FNumber: f/4.0
                           Exposure Time: 1/659 sec.

Using the rich metadata and search capability enabled by object storage, you would be able to find all images with dimensions of 2240x1680 or a resolution of 72x72 in a quick, automated fashion. The object storage system "understands" more about what is being stored because it can differentiate objects based on characteristics that you define.
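
As a quick illustration of what those metadata keys look like on the wire, here's how you could attach them to a stored image using the standard OpenStack Swift HTTP API that SoftLayer Object Storage is compatible with. This is a sketch with placeholder values: $STORAGE_URL and $AUTH_TOKEN come from authenticating against the API (there's an example of that after the feature list below), "photos" is a hypothetical container, and the metadata keys mirror the table above.

#Attach custom metadata to an object (a POST replaces any existing custom metadata)
curl -i -X POST \
     -H "X-Auth-Token: $AUTH_TOKEN" \
     -H "X-Object-Meta-PixelXDimension: 2240" \
     -H "X-Object-Meta-PixelYDimension: 1680" \
     -H "X-Object-Meta-Model: QV-4000" \
     "$STORAGE_URL/photos/img01.jpg"

#Read the object's headers back, including the metadata you just set
curl -I -H "X-Auth-Token: $AUTH_TOKEN" "$STORAGE_URL/photos/img01.jpg"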

What Makes SoftLayer Object Storage Different?
SoftLayer Object Storage offers several unique features and ways for SoftLayer customers to upload, access and manage data:

  • Search — Quickly access information through user-defined metadata key-value pairs, file name or unique identifier
  • CDN — Serve your content globally over our high-performance content delivery network
  • Private Network — Free, secure private network traffic between all data centers and storage cluster nodes
  • API — Access to a full-featured, OpenStack-compatible API with additional support for CDN and search integration (see the quick example after this list)
  • Portal — Web application integrated into the SoftLayer portal
  • Mobile — iPhone and Android mobile apps, with Windows Phone app coming soon
  • Language Bindings — Feature-complete bindings for Java, PHP, Python and Ruby*

*Language bindings, documentation, and guides are available on SLDN.
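
If you'd rather poke at the API directly before reaching for the bindings, the Swift-style v1.0 authentication flow looks roughly like this. Treat it as a sketch: the auth endpoint shown is a placeholder for the one listed in your portal, and the X-Auth-User/X-Auth-Key values stand in for your object storage account and API key.

#Authenticate to get a storage URL and token (endpoint and credentials are placeholders)
curl -i \
     -H "X-Auth-User: SLOS123456-1:SL_USERNAME" \
     -H "X-Auth-Key: SL_API_KEY" \
     https://dal05.objectstorage.softlayer.net/auth/v1.0

#Use the returned X-Storage-Url and X-Auth-Token to list your containers
curl -H "X-Auth-Token: $AUTH_TOKEN" "$STORAGE_URL"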

We think SoftLayer Object Storage will be attractive to a broad range of current and prospective customers, from web-centric businesses dependent on file sharing and content distribution to legal/medical/financial-services companies which possess large volumes of data that must be stored securely while remaining readily accessible.

SoftLayer Object Storage significantly extends our cloud-services portfolio while substantially enriching the storage capabilities that we bring to our customers. What are you waiting for? Go order yourself some object storage @ $0.12/GB!

-Marc

January 19, 2012

IPv6 Milestone: "World IPv6 Launch Day"

On Tuesday, the Internet Society announced "World IPv6 Launch Day", a huge step in the transition from IPv4 to IPv6. Scheduled for June 6, 2012, this "launch day" comes almost one year after the similarly noteworthy World IPv6 Day, during which many prominent Internet businesses enabled IPv6 AAAA record resolution for their primary websites for a 24-hour period.

With IPv6 Day serving as a "test run," we confirmed a lot of what we know about IPv6 compatibility and interoperability with deployed systems throughout the Internet, and we even learned about a few areas that needed a little additional attention. Access troubles for end users were measured in fractions of a percent, and while some sites left IPv6 running, many of them disabled their AAAA records at the end of the event, resuming their legacy IPv4-only configurations.

We're past the "testing" phase now. Many of the IPv6-related issues observed in desktop operating systems (think: your PCs, phones, and tablets) and consumer network equipment (think: your home router) have been resolved. In response – and in an effort to kick IPv6 deployment in the butt – the same businesses which ran the 24-hour field test last year have committed to turning on IPv6 for their content and keeping it on as of 6/6/2012.

But that's not all, folks!

In the past, IPv6 availability would have simply impacted customers connecting to the Internet from a few universities, international providers and smaller technology-forward ISPs. What's great about this event is that a significant number of major broadband ISPs (think: your home and business Internet connection) have committed to enabling IPv6 for their subscribers. June 6, 2012, marks the day when at least 1% of the participating ISPs' downstream customers will be receiving IPv6 addresses.

While 1% may not seem all that impressive at first, in order to survive the change, these ISPs must slowly roll out IPv6 availability to ensure that they can handle the potential volume of resulting customer support issues. There will be new training and technical challenges that I suspect all of these ISPs will face, and this type of approach is a good way to ensure success. Again, we must appreciate that the ISPs are turning it on for good now.

What does this mean for SoftLayer customers? Well, the good news is that our network is already IPv6-enabled ... In fact, it has been for a few years now. Those of you who have taken advantage of running a dual stack of IPv4 and IPv6 addresses may have noticed surprisingly low IPv6 traffic volume. When 6/6/2012 comes around, you should see that volume rise (and continue to rise consistently from there). For those of you without IPv6 addresses, now's the time to get started and get your feet wet. You need to be prepared for the day when new "eyeballs" come online with IPv6-only addresses. If you don't know where to start, go back through this article and click on a few of the hyperlinks, and if you want more information, ARIN has a great informational IPv6 wiki that has been enjoying community input for a couple of years now.
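
If you're not sure where your own setup stands, a couple of quick checks from a Linux or Mac command line will tell you whether a hostname publishes an AAAA record and whether you can actually reach it over IPv6 (the hostname below is just an example; substitute your own):

#Does the name publish an IPv6 (AAAA) record?
dig AAAA www.example.com +short

#Can this machine reach it over IPv6?
ping6 -c 4 www.example.com

#What does the IPv6 path look like?
traceroute6 www.example.com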

The long-term benefit of this June 6th milestone is that with some of the "big guys" playing in this space, the visibility of IPv6 should improve. This will help motivate the "little guys" who otherwise couldn't get motivated – or more often couldn't justify the budgetary requirements – to start implementing IPv6 throughout their organizations. The Internet is growing rapidly, and as our collective attention is focused on how current legislation (SOPA/PIPA) could impede that growth, we should be intentional about fortifying the Internet's underlying architecture.

-Dani

December 29, 2011

Using iPerf to Troubleshoot Speed/Throughput Issues

Two of the most common network characteristics we look at when investigating network-related concerns in the NOC are speed and throughput. You may have experienced the following scenario yourself: You just provisioned a new bad-boy server with a gigabit connection in a data center on the opposite side of the globe. You begin to upload your data, and to your shock, you see "Time Remaining: 10 Hours." "What's wrong with the network?" you wonder. "The traceroute and MTR look fine, so where's the performance and bandwidth I'm paying for?"

This issue is all too common, and it has nothing to do with the network; in fact, the culprits are none other than TCP and the laws of physics.

In data transmission, TCP sends a certain amount of data and then pauses. To ensure proper delivery, it doesn't send more until it receives an acknowledgement from the remote host that all of the data was received. This is called the "TCP Window." Data travels at the speed of light, and typically, most hosts are fairly close together, so this "windowing" happens so fast we don't even notice it. But as the distance between two hosts increases, the speed of light remains constant, so the further apart the two hosts are, the longer it takes for the sender to receive the acknowledgement, reducing overall throughput. This limitation is described by the "Bandwidth Delay Product," or BDP.
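
To put rough numbers on that (mine, for illustration, not measurements from this article): a single TCP flow can move at most about one window of data per round trip, so maximum throughput is roughly the TCP window divided by the round-trip time. With a common 64 KByte default window and a 250 ms round trip between continents, that works out to 64 KB / 0.25 s, or about 256 KB/s (roughly 2 Mbits/sec), no matter how fat the pipe is.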

We can overcome BDP to some degree by sending more data at a time. We do this by adjusting the "TCP Window" – telling TCP to send more data per flow than the default parameters. Each OS is different and the default values will vary, but almost all operating systems allow tweaking of the TCP stack and/or using parallel data streams. So what is iPerf, and how does it fit into all of this?
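
On Linux, for example, that tweaking is usually done through sysctl. The values below are only an illustrative starting point (roughly a 16 MB ceiling with auto-tuning left on), not a recommendation for your particular OS or workload:

#Raise the socket buffer ceilings (run as root; values are illustrative)
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216

#Let TCP auto-tune its receive/send windows up to the new ceiling
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"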

What is iPerf?

iPerf is a simple, open-source, command-line network diagnostic tool that runs on Linux, BSD and Windows and that you install on two endpoints. One side runs in 'server' mode, listening for requests; the other runs in 'client' mode, sending data. When activated, it tries to push as much data down your pipe as it can, spitting out transfer statistics as it goes. What's so cool about iPerf is that you can test any number of TCP window settings in real time, even using parallel streams. There's also a Java-based GUI called JPerf that runs on top of it (JPerf is beyond the scope of this article, but I recommend looking into it). What's even cooler is that because iPerf runs entirely in memory, there are no files to clean up.

How do I use iPerf?

iPerf can be downloaded quickly from SourceForge and installed on both hosts. It uses port 5001 by default, and the bandwidth it displays is from the client to the server. Each test runs for 10 seconds by default, but virtually every setting is adjustable. Once installed, simply bring up the command line on both hosts and run these commands.

On the server side:
iperf -s

On the client side:
iperf -c [server_ip]

The output on the client side will look like this:

#iperf -c 10.10.10.5
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.10.10.10 port 46956 connected with 10.10.10.5 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  10.0 MBytes  8.39 Mbits/sec

There are a lot of things we can do to make this output more meaningful. For example, let's say we want the test to run for 20 seconds instead of 10 (-t 20), we want to display transfer data every 2 seconds instead of the default of 10 (-i 2), and we want to test on port 8000 instead of 5001 (-p 8000). For the purposes of this exercise, let's use those customizations as our baseline. This is what the command string would look like on both ends:

Client Side:

#iperf -c 10.10.10.5 -p 8000 -t 20 -i 2
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 8000
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.10.10.10 port 46956 connected with 10.10.10.5 port 8000
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 2.0 sec  6.00 MBytes  25.2 Mbits/sec
[  3]  2.0- 4.0 sec  7.12 MBytes  29.9 Mbits/sec
[  3]  4.0- 6.0 sec  7.00 MBytes  29.4 Mbits/sec
[  3]  6.0- 8.0 sec  7.12 MBytes  29.9 Mbits/sec
[  3]  8.0-10.0 sec  7.25 MBytes  30.4 Mbits/sec
[  3] 10.0-12.0 sec  7.00 MBytes  29.4 Mbits/sec
[  3] 12.0-14.0 sec  7.12 MBytes  29.9 Mbits/sec
[  3] 14.0-16.0 sec  7.25 MBytes  30.4 Mbits/sec
[  3] 16.0-18.0 sec  6.88 MBytes  28.8 Mbits/sec
[  3] 18.0-20.0 sec  7.25 MBytes  30.4 Mbits/sec
[  3]  0.0-20.0 sec  70.1 MBytes  29.4 Mbits/sec

Server Side:

#iperf -s -p 8000 -i 2
------------------------------------------------------------
Server listening on TCP port 8000
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[852] local 10.10.10.5 port 8000 connected with 10.10.10.10 port 58316
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 2.0 sec  6.05 MBytes  25.4 Mbits/sec
[  4]  2.0- 4.0 sec  7.19 MBytes  30.1 Mbits/sec
[  4]  4.0- 6.0 sec  6.94 MBytes  29.1 Mbits/sec
[  4]  6.0- 8.0 sec  7.19 MBytes  30.2 Mbits/sec
[  4]  8.0-10.0 sec  7.19 MBytes  30.1 Mbits/sec
[  4] 10.0-12.0 sec  6.95 MBytes  29.1 Mbits/sec
[  4] 12.0-14.0 sec  7.19 MBytes  30.2 Mbits/sec
[  4] 14.0-16.0 sec  7.19 MBytes  30.2 Mbits/sec
[  4] 16.0-18.0 sec  6.95 MBytes  29.1 Mbits/sec
[  4] 18.0-20.0 sec  7.19 MBytes  30.1 Mbits/sec
[  4]  0.0-20.0 sec  70.1 MBytes  29.4 Mbits/sec

There are many, many other parameters you can set that are beyond the scope of this article, but for our purposes, the main use is to prove out our bandwidth. This is where the TCP window options and parallel streams come in. To set a new TCP window, use the -w switch; to set parallel streams, use -P.

Increased TCP window commands:

Server side:
#iperf -s -w 1024k -i 2

Client side:
#iperf -i 2 -t 20 -c 10.10.10.5 -w 1024k

And here are the iperf results from two SoftLayer file servers – one in Washington, D.C., acting as client, and the other in Seattle acting as server:

Client Side:

# iperf -i 2 -t 20 -c 10.10.10.5 -p 8000 -w 1024k
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 8000
TCP window size: 1.00 MByte (WARNING: requested 1.00 MByte)
------------------------------------------------------------
[  3] local 10.10.10.10 port 53903 connected with 10.10.10.5 port 8000
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 2.0 sec  25.9 MBytes   109 Mbits/sec
[  3]  2.0- 4.0 sec  28.5 MBytes   120 Mbits/sec
[  3]  4.0- 6.0 sec  28.4 MBytes   119 Mbits/sec
[  3]  6.0- 8.0 sec  28.9 MBytes   121 Mbits/sec
[  3]  8.0-10.0 sec  28.0 MBytes   117 Mbits/sec
[  3] 10.0-12.0 sec  29.0 MBytes   122 Mbits/sec
[  3] 12.0-14.0 sec  28.0 MBytes   117 Mbits/sec
[  3] 14.0-16.0 sec  29.0 MBytes   122 Mbits/sec
[  3] 16.0-18.0 sec  27.9 MBytes   117 Mbits/sec
[  3] 18.0-20.0 sec  29.0 MBytes   122 Mbits/sec
[  3]  0.0-20.0 sec   283 MBytes   118 Mbits/sec

Server Side:

#iperf -s -w 1024k -i 2 -p 8000
------------------------------------------------------------
Server listening on TCP port 8000
TCP window size: 1.00 MByte
------------------------------------------------------------
[  4] local 10.10.10.5 port 8000 connected with 10.10.10.10 port 53903
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 2.0 sec  25.9 MBytes   109 Mbits/sec
[  4]  2.0- 4.0 sec  28.6 MBytes   120 Mbits/sec
[  4]  4.0- 6.0 sec  28.3 MBytes   119 Mbits/sec
[  4]  6.0- 8.0 sec  28.9 MBytes   121 Mbits/sec
[  4]  8.0-10.0 sec  28.0 MBytes   117 Mbits/sec
[  4] 10.0-12.0 sec  29.0 MBytes   121 Mbits/sec
[  4] 12.0-14.0 sec  28.0 MBytes   117 Mbits/sec
[  4] 14.0-16.0 sec  29.0 MBytes   122 Mbits/sec
[  4] 16.0-18.0 sec  28.0 MBytes   117 Mbits/sec
[  4] 18.0-20.0 sec  29.0 MBytes   121 Mbits/sec
[  4]  0.0-20.0 sec   283 MBytes   118 Mbits/sec

We can see here that by increasing the TCP window from the default value to 1MB (1024k), we roughly quadrupled our throughput compared to our baseline. Unfortunately, this is the limit of this OS in terms of window size. So what more can we do? Parallel streams! With multiple simultaneous streams, we can fill the pipe close to its maximum usable amount.

Parallel Stream Command:
#iperf -c 10.10.10.5 -p 8000 -w 1024k -i 2 -t 20 -P 7

Client Side:

#iperf -c 10.10.10.5 -p 8000 -w 1024k -i 2 -t 20 -P 7
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 8000
TCP window size: 1.00 MByte (WARNING: requested 1.00 MByte)
------------------------------------------------------------
 [ ID] Interval       Transfer     Bandwidth
[  9]  0.0- 2.0 sec  24.9 MBytes   104 Mbits/sec
[  4]  0.0- 2.0 sec  24.9 MBytes   104 Mbits/sec
[  7]  0.0- 2.0 sec  25.6 MBytes   107 Mbits/sec
[  8]  0.0- 2.0 sec  24.9 MBytes   104 Mbits/sec
[  5]  0.0- 2.0 sec  25.8 MBytes   108 Mbits/sec
[  3]  0.0- 2.0 sec  25.9 MBytes   109 Mbits/sec
[  6]  0.0- 2.0 sec  25.9 MBytes   109 Mbits/sec
[SUM]  0.0- 2.0 sec   178 MBytes   746 Mbits/sec
 
(output omitted for brevity on server & client)
 
[  7] 18.0-20.0 sec  28.2 MBytes   118 Mbits/sec
[  8] 18.0-20.0 sec  28.8 MBytes   121 Mbits/sec
[  5] 18.0-20.0 sec  28.0 MBytes   117 Mbits/sec
[  4] 18.0-20.0 sec  28.0 MBytes   117 Mbits/sec
[  3] 18.0-20.0 sec  28.9 MBytes   121 Mbits/sec
[  9] 18.0-20.0 sec  28.8 MBytes   121 Mbits/sec
[  6] 18.0-20.0 sec  28.9 MBytes   121 Mbits/sec
[SUM] 18.0-20.0 sec   200 MBytes   837 Mbits/sec
[SUM]  0.0-20.0 sec  1.93 GBytes   826 Mbits/sec 

Server Side:

#iperf -s -w 1024k -i 2 -p 8000
------------------------------------------------------------
Server listening on TCP port 8000
TCP window size: 1.00 MByte
------------------------------------------------------------
[  4] local 10.10.10.5 port 8000 connected with 10.10.10.10 port 53903
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0- 2.0 sec  25.7 MBytes   108 Mbits/sec
[  8]  0.0- 2.0 sec  24.9 MBytes   104 Mbits/sec
[  4]  0.0- 2.0 sec  24.9 MBytes   104 Mbits/sec
[  9]  0.0- 2.0 sec  24.9 MBytes   104 Mbits/sec
[ 10]  0.0- 2.0 sec  25.9 MBytes   108 Mbits/sec
[  7]  0.0- 2.0 sec  25.9 MBytes   109 Mbits/sec
[  6]  0.0- 2.0 sec  25.9 MBytes   109 Mbits/sec
[SUM]  0.0- 2.0 sec   178 MBytes   747 Mbits/sec
 
[  4] 18.0-20.0 sec  28.8 MBytes   121 Mbits/sec
[  5] 18.0-20.0 sec  28.3 MBytes   119 Mbits/sec
[  7] 18.0-20.0 sec  28.8 MBytes   121 Mbits/sec
[ 10] 18.0-20.0 sec  28.1 MBytes   118 Mbits/sec
[  9] 18.0-20.0 sec  28.0 MBytes   118 Mbits/sec
[  8] 18.0-20.0 sec  28.8 MBytes   121 Mbits/sec
[  6] 18.0-20.0 sec  29.0 MBytes   121 Mbits/sec
[SUM] 18.0-20.0 sec   200 MBytes   838 Mbits/sec
[SUM]  0.0-20.1 sec  1.93 GBytes   825 Mbits/sec

As you can see from the tests above, we were able to increase throughput from 29Mb/s with a single stream and the default TCP window to roughly 825Mb/s using a larger window and parallel streams. On a Gigabit link, this is about the maximum throughput one could hope to achieve before saturating the link and causing packet loss. The bottom line is that I was able to prove out the network and verify that bandwidth capacity was not an issue. From that conclusion, I could focus on tweaking TCP to get the most out of my network.

I'd like to point out that we will never get 100% out of any link. Typically, 90% utilization is about the real-world maximum anyone will achieve; push any harder and you'll begin to saturate the link and incur packet loss. I should also point out that SoftLayer doesn't directly support iPerf, so it's up to you to install it and play around with it. It's such a versatile and easy-to-use little piece of software that it's become invaluable to me, and I think it will become invaluable to you as well!

-Andrew

December 2, 2011

Global Network: The Proof is in the Traceroute

You've probably heard a lot about SoftLayer's global expansion into Asia and Europe, and while the idea of geographically diversifying is impressive in itself, one of the most significant implications of our international expansion is what it's done for the SoftLayer Network.

As George explained in "Globalization and Hosting: The World Wide Web is Flat," our strategic objective is to get a network point of presence within 40ms of all of our users and our users' users to provide the best network stability and performance possible anywhere on the planet. The reasoning is simple: The sooner a user gets on our network, the quicker we can efficiently route them through our points of presence to a server in one of our data centers.

The cynics in the audience are probably yawning and shrugging that idea off as marketing mumbo jumbo, so I thought it would be good to demonstrate how the network expansion immediately and measurably improved our customers' network experience from Asia to the United States. Just look at the traceroutes.

As you're probably aware, a traceroute shows the "hops" or routers along the network path from an origin IP to a destination IP. When we were building out the Singapore data center (before the network points of presence were turned up in Asia), I ran a traceroute from Singapore to SoftLayer.com, and immediately after the launch of the data center, I ran another one:

Pre-Launch Traceroute to SoftLayer.com from Singapore

traceroute to softlayer.com (66.228.118.53), 64 hops max, 52 byte packets
 1  10.151.60.1 (10.151.60.1)  1.884 ms  1.089 ms  1.569 ms
 2  10.151.50.11 (10.151.50.11)  2.006 ms  1.669 ms  1.753 ms
 3  119.75.13.65 (119.75.13.65)  3.380 ms  3.388 ms  4.344 ms
 4  58.185.229.69 (58.185.229.69)  3.684 ms  3.348 ms  3.919 ms
 5  165.21.255.37 (165.21.255.37)  9.002 ms  3.516 ms  4.228 ms
 6  165.21.12.4 (165.21.12.4)  3.716 ms  3.965 ms  5.663 ms
 7  203.208.190.21 (203.208.190.21)  4.442 ms  4.117 ms  4.967 ms
 8  203.208.153.241 (203.208.153.241)  6.807 ms  55.288 ms  56.211 ms
 9  so-2-0-3-0.laxow-cr1.ix.singtel.com (203.208.149.238)  187.953 ms  188.447 ms  187.809 ms
10  ge-4-0-0-0.laxow-dr2.ix.singtel.com (203.208.149.34)  184.143 ms
    ge-4-1-1-0.sngc3-dr1.ix.singtel.com (203.208.149.138)  189.510 ms
    ge-4-0-0-0.laxow-dr2.ix.singtel.com (203.208.149.34)  289.039 ms
11  203.208.171.98 (203.208.171.98)  187.645 ms  188.700 ms  187.912 ms
12  te1-6.bbr01.cs01.lax01.networklayer.com (66.109.11.42)  186.482 ms  188.265 ms  187.021 ms
13  ae7.bbr01.cs01.lax01.networklayer.com (173.192.18.166)  188.569 ms  191.100 ms  188.736 ms
14  po5.bbr01.eq01.dal01.networklayer.com (173.192.18.140)  381.645 ms  410.052 ms  420.311 ms
15  ae0.dar01.sr01.dal01.networklayer.com (173.192.18.211)  415.379 ms  415.902 ms  418.339 ms
16  po1.slr01.sr01.dal01.networklayer.com (66.228.118.138)  417.426 ms  417.301 ms
    po2.slr01.sr01.dal01.networklayer.com (66.228.118.142)  416.692 ms
17  * * *

Post-Launch Traceroute to SoftLayer.com from Singapore

traceroute to softlayer.com (66.228.118.53), 64 hops max, 52 byte packets
 1  192.168.206.1 (192.168.206.1)  2.850 ms  1.409 ms  1.206 ms
 2  174.133.118.65-static.reverse.networklayer.com (174.133.118.65)  1.550 ms  1.680 ms  1.394 ms
 3  ae4.dar01.sr03.sng01.networklayer.com (174.133.118.136)  1.812 ms  1.341 ms  1.734 ms
 4  ae9.bbr01.eq01.sng02.networklayer.com (50.97.18.198)  35.550 ms  1.999 ms  2.124 ms
 5  50.97.18.169-static.reverse.softlayer.com (50.97.18.169)  174.726 ms  175.484 ms  175.491 ms
 6  po5.bbr01.eq01.dal01.networklayer.com (173.192.18.140)  203.821 ms  203.749 ms  205.803 ms
 7  ae0.dar01.sr01.dal01.networklayer.com (173.192.18.253)  306.755 ms
    ae0.dar01.sr01.dal01.networklayer.com (173.192.18.211)  208.669 ms  203.127 ms
 8  po1.slr01.sr01.dal01.networklayer.com (66.228.118.138)  203.518 ms
    po2.slr01.sr01.dal01.networklayer.com (66.228.118.142)  305.534 ms
    po1.slr01.sr01.dal01.networklayer.com (66.228.118.138)  204.150 ms
 9  * * *

I won't dive too deep into what these traceroutes are telling us because that'll need to be an entirely different blog. What I want to draw your attention to are a few key differences between the pre- and post-launch traceroutes:

  • Getting onto SoftLayer's network: The first reference to "networklayer" in the pre-launch trace is in hop 12 (~187ms). In the post-launch trace, we were on "networklayer" by the second hop (~1.5ms).
  • Number of hops: Pre-launch, our network path took 16 hops to get to SoftLayer.com. Post-launch, it took 8.
  • Response times from the destination: The average response time from SoftLayer.com to Singapore before the launch of our network points of presence in Asia was about 417 milliseconds. After the launch, it dropped to an average of about 250ms.

These traceroutes demonstrate that users in Singapore travel a much better network path to a server in one of our U.S. data centers than they had before we turned up the network in Asia, and that experience isn't limited to users in Singapore ... users throughout Europe and Asia will see fewer hops and better speeds now that the data centers and points of presence on those continents are live. And that's without buying a server in either of those markets or making any changes to how they interact with us.
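
If you're curious what the path from your own network looks like, the comparison is easy to reproduce from any Linux or Mac terminal (mtr simply re-runs the same probe continuously and summarizes per-hop loss and latency):

#One-shot view of the path
traceroute softlayer.com

#Per-hop loss and latency statistics over 100 probes
mtr --report --report-cycles 100 softlayer.com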

Managing a worldwide network for a worldwide customer base with thousands of different ISPs and millions of possible routes is not a "set it and forget it" endeavor, so we have a team of engineers in our Network Operations Center that focuses on tweaking and optimizing routes 24x7. Branching out into Europe and Asia introduces a slew of challenges when working with providers on the other side of the globe, but I guess it's true: "If it were easy, everyone would do it."

Innovate or die.

-@toddmitchell

November 25, 2011

Online in Amsterdam: Innovators Wanted

Since I started with SoftLayer a couple of months ago, I have been asked by industry analysts, customers, interviewees and my drinking friends ... ahem, I mean networking event associates, "Why did SoftLayer choose Amsterdam for its European headquarters?"

My answer has always been consistent: It's all about the products and the people.

On the product side, having our data center on the AMS-IX gives us lightning-fast connectivity to one of the biggest Internet exchanges in Europe. Combined with our 10Gbps PoPs in Frankfurt and London, that means minimal latency, so your customers are happy. With these arrangements, we're able to extend the ability for customers to pay only for outbound public traffic. Did I mention that the three-tier network is up and running? Public, private and management ... Okay, okay, you get it: Being in Amsterdam extends our industry-leading global network.

Amsterdam is not the only game in town where we could get a great connection, though. SoftLayer wanted to make the other kinds of connections to grow a global business ... connections with the right people.

It was not that long ago that ten guys were working out of a living room to change the way hosting was done. Now you're reading the blog of a global company with several hundred million in turnover, and the entrepreneurial spirit is stronger than ever. SoftLayer wanted to be in a place where we could hire and conspire with other global pioneers, and with Amsterdam's long history of creativity, innovation and global trade (not to mention Oliebollen), SoftLayer selected Amsterdam for its EMEA HQ.

This video from Don Ritzen and the Rockstart Accelerator team articulates the environment we are glad to be a part of:

With the Amsterdam data center officially online, we've had a chance to get out of the facility and into the community, and we are fitting right in. A couple of weeks ago, I was honored to speak at the Appsterdam Launch Party 2.0 Overwinter. The Appsterdam team is developing an infrastructure so that startups can more easily thrive and focus on what they do best: innovate.

Mike Lee, mayor of Appsterdam, asked all the speakers to tell the pan-European audience why we were speaking at the event and what we had to offer the developer community. For me, it was an easy answer: We bring automated, on-demand hosting infrastructure to the community so people can focus on building great products. We also support the community with a referral program, so if developers refer clients to SoftLayer, we will pay them a generous commission ... Not to mention that empowerment and innovation are core SoftLayer values, so we will continue to improve our platform so our customers can control their IT environment with the latest and greatest technologies in the industry.

Needless to say, the audience was intrigued. And I didn't even show them what a SoftLayer pod looks like ...

SoftLayer Amsterdam

We're looking at the tip of the iceberg in Europe, and we're ecstatic about the opportunities and possibilities that await us as we build on our foothold here and continue our worldwide expansion. If you want to join a young startup-like team in Amsterdam, we want to hear from you ... We're hiring like crazy right now: SoftLayer Careers

-Jonathan

November 21, 2011

The SoftLayer Server Challenge - ad:tech Expertise

If you've visited SoftLayer at a large conference this year, you probably came face-to-rack with our Server Challenge. Your task: Reassemble our miniature rack of SuperMicro servers in the fastest time at the conference. To do this, you need to install twenty drive trays in five servers and connect network cables to the correct switches to mirror the server rack setup in our data centers. If you score the best time, you win an iPad 2!

In the sometimes-boring world of collateral and T-shirts at trade shows, the activity around this competition stands in stark contrast. It's been a huge hit everywhere we go, so if you haven't had a chance to try your hand at the challenge, I'm sure we'll bring it to several of our 2012 shows. As a way of rewarding those of you who loyally follow our blog, I thought I'd give you an advantage by sharing some tips for when you're in front of the Server Challenge rack ... And to give you an idea of how important these tips can be, look at how close the top two times were at ad:tech NYC:

That's right. 17 hundredths of a second between victory and defeat. Now are you ready to take some notes?

SoftLayer Server Challenge

The Start
When you start the challenge, don't look at the timer to see if your time started ... If it doesn't start, we'll stop you. By focusing your attention on the network cables or drive trays (whichever you choose to start with), you can save yourself half a second.

SoftLayer Server Challenge

Network Cables
You don't have to connect the network cables first, but I have to choose something to complete first, so the network cables won the coin flip. When you're connecting the network cables, it's best to grab all three cables of the same color and try to snap them in together. Plugging in the cables one-by-one requires three times the work.

SoftLayer Server Challenge

Hard Drives
When you're tackling the hard drives, the key is to line up the drives and have them installed completely before moving on. My tip for installing the drives is to tilt them in at a sideways angle, not an upwards angle. If you tilt the drives upwards, you'll most likely get the drive tray stuck and have to remove it to try again. If you can do it precisely, picking up two drives at a time works well, and our all-time record of around 54 seconds took that approach.

SoftLayer Server Challenge

SoftLayer Server Challenge

SoftLayer Server Challenge

One last pointer: Lock them in place immediately after installing them. If you leave the latch open, it makes it harder to get neighboring drives installed, and it's such a small incremental effort to close the latch that even if you perfect a "close all the latches" technique at the end, you'd still end up spending more time.

SoftLayer Server Challenge

The Finish
Don't forget to put both hands back on the timer to stop your time. :-)

SoftLayer Server Challenge

Now that you're equipped with some of the best Server Challenge tips and tricks, we want you to start training. In 2012, we expect to see someone complete it in under 50 seconds ... And that person probably will carry the all-time record home – along with a new iPad 2!

Keep an eye on our Event Schedule for upcoming shows, and if there's a conference where you really want to see the Server Challenge, let us know and we'll see if we can set it up.

Good Luck!

-Summer

November 18, 2011

Four Years of SLaying in Seattle

How are we already in mid-November? Did 2011 just fly by us or what? As we approach 2012, I will be celebrating my fourth anniversary with SoftLayer in our Seattle data center. Seattle was SoftLayer's first data center outside of the Dallas area when it opened four years ago, and since then, I've seen the launch of Washington, D.C., the Dallas HQ + DAL05, San Jose, Singapore and Amsterdam, and we picked up a few data centers in Houston and Dallas in the merger with The Planet last year. We've gone from ~15,000 servers when I started to around 100,000 servers in 13 data centers with 16 network PoPs on three continents. It's safe to say we've grown.

In the four years since our Seattle facility launched, over 60% of our original team – the folks our Dallas team trained – are still here. Being part of such a huge team and watching SoftLayer roll out data centers around the world is exciting, and seeing our customers grow with us is even better. In the midst of all of that growth, our team is always trying to figure out new technologies and techniques to share with customers to help them meet their ever-evolving needs. The goal: Give our customers total control.

One great example of this focus was our recent launch of QuantaStor Storage Servers. We teamed up with industry leader OS Nexus to bring our customers a production-ready mass storage appliance: a combined SAN and NAS storage system built on Ubuntu Server that provides features such as snapshots, compression, remote replication and thin provisioning. A customer could use it in a number of environments, from virtualized systems to video production to web and application servers, or as a backup server. If you're looking for a mass storage system, I highly recommend it.

If we've grown this much in my first four years, I can only imagine what the business will look like four years from now. A SoftLayer data center on every corner? Maybe we can get PHIL to figure out how we can put a SoftLayer pod in the space normally occupied by a coffee shop ... making sure to keep as much coffee as possible, obviously.

-Bill

October 27, 2011

SoftLayer Features and Benefits - Data Centers

When we last talked, I broke down the differences between features and benefits. To recap: a feature is something prominent about a person, place or thing, while a benefit is a feature that is useful to you. In that blog, I discussed our customer portal and the automation within, so with this next installment, let's move into my favorite place: the data center ... Our pride and joy!

If you have not had a chance to visit a SoftLayer data center, you're missing out. The number one response I get when I begin a tour through any of our facilities is, "I have been through several data centers before, and they're pretty boring," or my favorite, "We don't have to go in, they all look the same." Then they get a glimpse at the SoftLayer facility through the window in our lobby:

Data Center Window

What makes a SoftLayer DC so different and unique?

We deploy data centers in a pod concept. A pod, or server room, is designed to be an identical installation of balanced power, cooling and redundant best-in-class equipment in under 10,000 square feet. Each pod supports about 5,000 dedicated servers and is built to the same specifications as every other pod. We use the same hardware vendor for servers, the majority of our internal network is powered by Cisco gear, and our edge equipment is now powered by Juniper. Even the paint on the walls matches from pod to pod, city to city and now country to country. That's standardization!

That all sounds great, but what does that mean for you? How do all these things benefit you as the end user?

First of all, setting standards improves our efficiency in support and operations. We can pluck any of our technicians in DAL05 and drop him into SJC01, and he'll feel right at home despite the outside world looking a bit different. No facility quirks, no learning curve. In fact, the Go Live Crews in Singapore and Amsterdam are all experienced SoftLayer technicians from our US facilities, so they help us make sure all of the details are exactly alike.

Beyond the support aspect, having data centers in multiple cities around the world is a benefit within itself: You have the option to host your solution as close or as far away from you as you wish. Taking that a step further, disaster recovery becomes much easier with our unique network-within-a-network topology.

The third biggest benefit customers get from SoftLayer's data centers is the quality of the server chassis. Because we standardize our SuperMicro chassis in every facility, we're able to troubleshoot and resolve issues faster when a customer contacts us. Let's say the mainboard is having a problem, and your Linux server is in kernel panic. Instead of taking time to try and fix the part, I can hot-swap all the drives into an identical chassis and use the portal to automatically move all of your IP addresses and network configurations to a new location in the DC. The server boots right up and is back in service with minimal downtime.

Try to do that with "similar" hardware (not "identical"), and see where that gets you.

The last obvious customer benefit we'll talk about here is the data center's internal network performance. Powered by Cisco internal switches and Juniper routers on the edge, we can provide unmatched bandwidth capacity to our data centers as well as low latency links between servers. In one rack on the data center floor, you can see 80Gbps of bandwidth. Our automated, high-speed network allows us to provision a server anywhere in a pod and an additional server anywhere else in the same pod, and they will perform as if they are sitting right next to each other. That means you don't need to reserve space in the same rack for a server that you think you'll need in the future, so when your business grows, your infrastructure can grow seamlessly with you.

In the last installment of this little "SoftLayer Features and Benefits" series, we'll talk about the global network and learn why no one in the industry can match it.

-Harold

October 22, 2011

Content Streaming = Living Like Kings

As a video gaming and movie addict, I've always followed the latest trends and news in these two areas. Because there always seems to be some "breaking news" every day due to technology advancing so rapidly, sometimes it's tough to keep up.

In gaming, I remember it all started for me back when my parents decided to buy me the first Nintendo console. Pointing that light sensor gun at unsuspecting ducks and watching them fall was all the rage ... It marked a big step in the evolution of home gaming. What initially seemed like a good investment to keep me out of trouble soon turned into a headache for my parents. I frequently begged for more games, and they were not cheap. Look at how much new video games cost these days, and you'll see that not much has changed in that regard. The fire to play all the latest games was never extinguished, so a chunk of my income was always earmarked for the next amazing game I needed.

As for movies, I also found myself collecting as many as possible to rewatch whenever I chose. While each individual movie didn't cost as much as a video game, the aggregate cost definitely built up over time. My family and friends warned me that my "extravagant lifestyle" was reserved for the rich and would only lead me to financial ruin.

Fast forward to today, and I can say that I've learned a lot and found ways to sustainably feed my addiction without driving myself to financial ruin. How is it possible that I am able to live like a king without breaking the bank? It's all thanks to content streaming, made possible by the Internet. I no longer have to buy every single game to be able to play whenever I feel like it, thanks to services like OnLive that stream numerous games to my TV (and a few other supported devices). Beyond the fact that I save money by not buying the game, I don't even need the latest computer hardware to play graphics-intensive games like Crysis:

Crysis

You might not be familiar with OnLive just yet, but most people know about content streaming from companies like Netflix and Amazon. You can stream countless movies to your devices on demand for a monthly fee or on a per-movie basis. With these services readily available, it's possible for just about anyone to have that "kid in a candy store" experience of pulling up essentially any content whenever we want to watch or play.

If either form of entertainment appeals to you, you'll agree that our quality of life has improved significantly over time. Streaming services from companies like Netflix and OnLive have really taken advantage of the capabilities offered by high-speed Internet, which also reminds us of the significance of web hosting. Deciding which hosting company to build such a service on is often a very difficult decision, especially since there are so many out there, and it makes complete business sense to find an extremely reliable provider. Having worked in the industry, I can assure you with much pride that SoftLayer certainly shines in this area.

As an employee, I see how we're building our network to provide the best experience around the world, and if there's ever a problem, we treat all outages with extreme urgency. Customers get better turnaround times, and they can provide better service for their customers. If a content stream were ever to become unavailable, it wouldn't be long before it was back online.

It's pretty safe to say that the Internet has spoiled me ... Now all I need is a crown.

-Danny
