May 14, 2015

Update - VENOM Vulnerability

Yesterday, a security advisory designated CVE-2015-3456 / XSA-133 was publicly announced. The advisory identified a vulnerability, commonly known as "VENOM", through which an attacker could exploit the virtual floppy disk controller in QEMU to escalate their privileges.

SoftLayer engineers, in concert with our technology partners, completed a deep analysis of the vulnerability and determined that SoftLayer virtual servers are not affected by this issue.

We're always committed to ensuring our customers' operations and data are well protected. If you have any questions or concerns, don't hesitate to reach out to SoftLayer support or your direct SoftLayer contacts.

-Sonny

May 12, 2015

The SLayer Standard Vol. 1, No. 12

The week in review. All the IBM Cloud and SoftLayer headlines in one place.

We've got the power
What makes an existing partnership better? More power, of course. IBM and SAP strengthened the bond by adding a new set of integrated Power Systems solutions for SAP HANA in-memory computing applications: POWER8 servers. Welcome to a new era of high-speed, high-volume data processing.

Straight from the horse’s mouth
On the subject of IBM’s cloudy future, Forbes sat down with none other than Robert LeBlanc, SVP of IBM’s Cloud Business, to clear the haze. Ambition, AWS envy, and giving up on the public cloud? It’s all there.

Friending Facebook
If your company could target the right folks on Facebook, would it be interested? That’s what IBM’s latest ad partnership with the social network is all about. A write-up in Fast Company provides all the details behind the collaboration, which aims to “more accurately identify which of [a company’s] customers are among the 1.44 billion people active on Facebook.” After all, learning to leverage the social web just makes sense.

We’re so happy for you
When big things happen for our customers, we love to highlight them. Longtime IBM business partner Manhattan Associates chose IBM Cloud as a preferred cloud provider for its clients, a choice that includes tech support for those running their applications on SoftLayer. And Distribution Central is now offering its 1,000 resellers access to AWS, Azure, and IBM Cloud’s SoftLayer cloud services through a single interface. Way to go, everyone.

No autographs, please!
Oh, and it’s come to our attention that we were mentioned on the latest episode of HBO’s Silicon Valley. Although the scenario in which we were mentioned wasn't quite factually accurate, being famous looks good on us, if we do say so ourselves. Now if you’ll excuse us, we’re going to inquire into our star on the Hollywood Walk of Fame.

-Fayza

April 29, 2015

The SLayer Standard Vol. 1, No. 11

The week in review. All the IBM Cloud and SoftLayer headlines in one place.

Q1
A recent study found SoftLayer to be the top-mentioned hosting provider for cloud services, named by 50 percent of the IT decision makers surveyed. This news comes on the heels of IBM’s first quarter earnings report, which announced a 75 percent increase in cloud revenue (with yearly revenue at $7.7 billion). Forbes explains IBM’s rise over the competition in “Move Over Amazon, IBM Can Also Claim Top Spot In Cloud Services.” Additionally, Marc Jones, SoftLayer’s chief technology officer, told CRN how IBM expects to stay on top of the cloud competition by offering pricing benefits over its market-leading rivals.

SoftLayer opens data center in The Netherlands…again.
Last week, in an effort to continue delivering on our promise to expand data centers worldwide, SoftLayer opened a second data center in the Netherlands—just outside Amsterdam in Almere. “The new facility demonstrates the demand and success IBM Cloud is having at delivering high-value services right to the doorstep of our clients,” said James Comfort, IBM cloud services general manager.

Building Applications in the Cloud with SoftLayer
For those who enjoy broadcast over print, our lead technology evangelist, Phil Jackson, sat down with Jacob Goldstein of Wireframes to discuss how to choose the right servers for your needs. Listen to the podcast.

-JRL

April 27, 2015

Good Documentation: A How-to Guide

As part of my job in Development Support, I write internal technical documentation for employee use only. My department is also the last line of support before a developer is called in on a customer issue, so we maintain a lot of the troubleshooting documentation. Some of what I write is for my own position; the rest is troubleshooting material for other roles within the company. I have a few guidelines that I use to improve the quality of my documentation. They’re by no means definitive, but they’re helpful tips I’ve picked up over the years.

Readability

I’m sure everyone has met the frustration of reading a long-winded sentence that should have been three separate sentences. Keeping your sentences as short as possible helps ensure that your advice won’t go in one ear and out the other. If you can write things in a simpler way, you should do so. The goal of your documentation is to make your readers smarter.

Avoid phrasing things in a confusing way. A good example of this is how you employ parentheses. Sometimes they’re necessary to convey helpful asides to your readers, but if you write something with parentheses in it and can’t read it aloud without it sounding confusing, reword it or run it by someone else.

Good: It should have "limited connectivity" (the computer icon with the exclamation point) or "active" status (the green checkmark) and NOT "retired" (the red X).
Bad: It should have the icon “limited connectivity” (basically the computer icon with the exclamation point that appears in the list) (you can see the “limited connectivity” text if you hover over it) or “active” (the green checkmark) status and NOT the red “retired” X icon.

Ideally, you should use the same formatting for all of your documentation. At the very least, keep your formatting consistent within a document. All of our transaction troubleshooting documentation at SoftLayer uses a standardized error format that is consistent and easy to read. Sometimes it might be necessary to break the convention if doing so improves readability, but very often it just makes things more difficult. For example, collapsible menus may tidy up a page, but they make it hard to search the entire page using Ctrl+F.

And finally, if people continually have a slew of questions, it’s probably time to revise your documentation and make it clearer. If it’s too complex, break it down into simpler terms. Add more examples to help clarify things so that it makes sense to your end reader.

Simplicity

Use bullet points or numbered lists when listing things instead of a paragraph block. I mention this because good formatting saves man-hours. There’s a difference between one person having to search a document for five minutes, versus 100 people having to search a document for five minutes each. That’s over eight man-hours lost. Bullet points are much faster to skim through when you are looking for something specific in the middle of a page somewhere. Avoid the “TL;DR” effect and don’t send your readers a wall of text.

Avoid superfluous information. If you have extra information beyond what is necessary, it can have an adverse effect on your readers. Your document may be the first your readers have read on your topic, so don’t overload them with too much information.

Don’t create duplicate information. If your documentation source is electronic, keep your documentation from repeating information, and just link to it in a central location. If you have the same information in five different places, you’ll have to update it in five different places if something changes.

Break up longer documents into smaller, logical sections. Organize your information first. Figure out headings and main points. If your page seems too long, try to break it down into smaller sections. For example, you might want to separate a troubleshooting section from the product information section. If your troubleshooting section grows too large, consider moving it to its own page.

Thoroughness

Don’t make assumptions about what the users already know. If it wasn’t covered in your basic training when you were hired, consider adding it to the documentation. This is especially important when you are documenting things for your own job position. Don’t leave out important details just because you can remember them offhand. You’re doing yourself a favor as well. Six months from now, you may need to use your documentation and you may not remember those details.

Bad: SSH to the image server and delete the offending RGX folder.
Good: SSH to the image server (imageserver.mycompany.local), and run ls -al /dev/rgx_files/ | grep blah to find the offending RGX folder and then use rm -rf /dev/rgx_files/<folder> to delete it.

Make sure your documentation covers as much ground as possible. Cover every error and every possible scenario that you can think of. Collaborate with other people to identify any areas you may have missed.

Account for errors. Error messages often give very helpful information. The error might be as straightforward as “Error: You have entered an unsupported character: ‘$’.” Make sure to document its cause and fix in detail. If there are unsupported characters, it’s a good idea to provide a list of them.

If something is confusing, provide a good example. It’s usually pretty easy to identify the pain points—the things you struggle with are probably going to be difficult for your readers as well. Sometimes things can be explained better in an example than they can in a lengthy paragraph. If you were documenting a command, it might be worthwhile to provide a good example first and then break it down and explain it in detail. Images can also be very helpful in getting your point across. In documenting user interfaces, an image can be a much better choice than words. Draw red boxes or arrows to guide the reader on the procedure.

-Mark

April 24, 2015

Working Well With Your Employees

In the past 17 years, I’ve worked in a clean-room laboratory environment as an in-house tech support person managing Windows machines around dangerous lasers and chemicals, in the telecommunications industry as a systems analyst and software engineer, and in the hosting industry as a lead developer, software architect, and manager of development. In every case, the following guiding principles have served me well, both as an employee striving to learn more and be a better contributor and as a manager striving to be a worthy employer of rising talent. Whether you are a manager or a startup CEO, this advice will help you cultivate success for you and your employees.

Hire up.
When you’re starting out, you will likely wear many hats out of necessity, but as your company grows, these hats need to be given to others. Hire the best talent you can, and rely on their expertise. Don’t be intimidated by intelligence—embrace it and don’t let your ego stand in the way. Also, be aware that faulty assumptions about someone’s skill set can throw off deadlines and cause support issues down the road. Empowering people increases a sense of ownership and pride in one’s work.

Stay curious.
IBM has reinvented itself over and over. It has done this to keep up with the ever-changing industry with the help of curious employees. Curious people ask more questions, dig deeper, and they find creative solutions to current industry needs. Don’t pour cold water on your employees who want to do things differently. Listen to them with an open mind. Change is sometimes required, and it comes through innovation by curious people.

Integrate and automate everything.
Take a cue from SoftLayer: If you find yourself performing a repetitive task, automate and document it. We’ve focused on automation since day one. Not only do we automate server provisioning, but we’ve also automated our development build processes so that we can achieve repeatable success in code releases. Do your best to automate yourself out of a job and encourage others to live by this mantra. Don’t trade efficiency for job security—those who excel in this should be given more responsibility.

Peace of mind is worth a lot.
Once, a coworker and I applied to take on a contract internally because our company was about to spend millions farming the work out to a third party. We knew we could do it faster and cheaper, but the company went with the third party anyway. Losing that contract taught me that companies are willing to pay handsomely for peace of mind. If you can build a team that is the source of that peace of mind for your company, you will go far.

When things don’t go right.
Sometimes things go off the rails, and there’s nothing you can do about it. People make mistakes. Deadlines are missed. Contracts fall through. In these situations, it’s important to focus on where the process went wrong and put changes in place to keep it from happening again. This is more beneficial to your team than finger pointing. If you can learn from your mistakes, you will create an environment that is agile and successful.

- Jason

April 20, 2015

The SLayer Standard Vol. 1, No. 10

The week in review. All the IBM Cloud and SoftLayer headlines in one place.

The Battle for Global Market Share
Warmer weather must be around the corner—or it could just be the cloud industry heating up. How will cloud providers profit as more and more providers push for world domination? The Economist predicts an industry change as prices drop.

IBM Partners with TI on Secure APIs for IoT
Allow me to translate: the International Business Machines Corporation is partnering with Texas Instruments to secure application program interfaces (APIs) for the Internet of Things (IoT). Through its collaboration with TI, IBM will create a Secure Registry Service that provides trust and authentication practices and protocols across the value chain, from silicon embedded in devices and products to businesses and homes.

(Join the conversation at #IoTNow or #IoT.)

The U.S. Army Goes Hybrid
The U.S. Army is hoping to see a 50 percent cost savings by utilizing IBM cloud services and products. Like many customers, the Army opted for a hybrid solution for security, flexibility, and ease of scale. Read more about what IBM Cloud and SoftLayer are doing for the U.S. Army and other U.S. government departments.

The Only Constant is Change
Or so said Heraclitus of Ephesus. And to keep up with the changing times, IBM has reinvented itself over and over again to stay relevant and successful. This interesting read discusses why big corporations just aren't what they used to be, what major factors have transformed the IT industry over the last couple of decades, and how IBM has been leading the change, time after time.

-JRL

April 17, 2015

A Grandmother’s Advice for Startups: You Never Know ‘til You Ask

Today my grandmother turns 95. She's in amazing shape for someone who's nearly a century old. She drives herself around, does her own grocery shopping, and still goes to the beauty parlor every other week to get her hair set.

I grew up less than a mile from her and my granddad, so we spent a lot of time with them over the years. Of all of the support, comfort, and wisdom they imparted to me over that time, one piece of advice from my grandmother has stood the test of time. No matter where I was in the world, or what I was doing, it has been relevant and helpful. That advice is:

You never know ‘til you ask.

Simple and powerful, it has guided me throughout my life. Here are some ways you can put this to work for you.

Ask for the Introduction
Whether you're fundraising, hiring, selling, or just looking for feedback, you need to expand your network to reach the right people. The best way to do this is through strategic introductions. In the Catalyst program, making connections is part of our offering to companies. Introductions are such a regular part of my work in the startup community. In my experience, people want to help other people, so as long as you're not taking advantage of it, ask for introductions. You're likely to get a nice warm introduction, which can lead to a meeting.

Ask for the Meeting
Now that you have that introduction, ask for a meeting with a purpose in mind. Even if you don't have an introduction, many people in the startup world are approachable with a cold email.

Guy Kawasaki, former chief evangelist for Apple, and author of 13 books including The Art of the Start 2.0, wrote a fantastic post, "The Effective Emailer," on how to craft that all-important message with your ask.

Another great take on the email ask is from venture capitalist Brad Feld, "If You Want a Response, Ask Specific Questions." This post offers advice on how not to approach someone. The title says it all: if you want a response, ask a specific question.

Ask for the Sale
Many startup founders don't have sales experience and so often miss this incredibly simple, yet incredibly important part of sales: asking for the sale. Even in mass-market B2C businesses, you'll be surprised how easy and effective it is to ask people to sign up. Your first sales will be high-touch and likely require a big time investment from your team. But all of that work will go to waste if you don't say, "Will you sign up to be our customer?" And if the answer is a no, then ask, "What are the next steps for working with you?"

Empower Yourself
It's empowering to ask for something that you want. This is the heart of my grandmother's advice. She is and has always been an empowered woman. I believe a big part of that came from not being afraid to ask for what she wanted. As long as you're polite and respectful in your approach, step up and ask.

The opposite of this is to meekly watch the world go by. If you do not ask, the world will sweep you along in other people's directions. This is the path to failure as an entrepreneur.

The way to empower yourself in this world starts with asking for what you want. Whether it's something as simple as asking for a special order at a restaurant or as big as asking for an investment, make that ask. After all, you'll never know unless you ask.

-Rich

April 10, 2015

The SLayer Standard Vol. 1, No. 9

The week in review. All the IBM Cloud and SoftLayer headlines in one place.

Welcome to the Masters
If you’re not practicing your swing this weekend, you’re watching the Masters. Over the next couple of days, professional golfers will seek their shot at landing the coveted Green Jacket. And while everyone might be watching the leaderboard, IBM will be hard at work in what they are calling the “bunker,” located in a small green building at the Augusta National Golf Club.

What does IBM have to do with the Masters? Everything.

Read how IBM, backed by the power of the SoftLayer cloud, is making the Masters website virtually uncrashable.

And for those who can’t line the greens to watch their favorite players, IBM is utilizing the lasers the Golf Club has placed around the course to track the ball as it flies from hole to hole. Learn more about the golf-ball tracking technology here.

Open Happiness
In a move to streamline tech operations and cut costs, Coca-Cola Amatil is partnering with IBM Cloud to move some of its platforms to SoftLayer data centers in Sydney and Melbourne—a deal sure to open happiness.

"The move to SoftLayer will provide us with a game-changing level of flexibility, resiliency and reliability to ramp up and down capacity as needed. It will also remove the need for large expenditure on IT infrastructure." - Barry Simpson, CIO, Coca-Cola Amatil

Read more about the new CCA cloud environment and the five-year, multimillion-dollar deal.

-JRL

April 1, 2015

The SLayer Standard Vol. 1 No. 8

The week in review. All the IBM Cloud and SoftLayer headlines in one place.

Sunny Skies for IBM Cloud and The Weather Company
IBM made big headlines on Tuesday when it announced it would team up with The Weather Company, boasting a “100 percent chance of smarter business forecasts.”

Bloomberg sits down with Bob Picciano, senior vice president of IBM Analytics, and David Kenny, CEO of The Weather Company, to discuss what makes this partnership different from past efforts to analyze the weather. Using Watson Analytics and the Internet of Things, the partnership will transform business decision-making based on weather behavior. Read how IBM’s $3 billion investment in the Internet of Things will collect weather data from 100,000 weather stations around the world and turn it into meaningful data for business owners.

Indian Startups Choose SoftLayer
According to the National Association of Software and Services Companies (NASSCOM), India has the world’s third-largest and fastest-growing startup ecosystem. Like many SoftLayer startup customers, Goldstar Healthcare, Vtiger, Clematix, and Ecoziee Marketing utilize the SoftLayer cloud infrastructure platform to “begin on a small scale and then expand rapidly to meet workload demands without having to worry about large investments in infrastructure development.”

New SoftLayer Storage Offerings
Last week, SoftLayer announced the launch of block storage and file storage, complete with Endurance- and Performance-class tiers. The media was quick to report on the new offerings, which give customers more choice, flexibility, and control over their storage needs and workloads.

“ … SoftLayer’s focus on tailored capacity and performance needs coincides with the trend in the cloud market of customizing technology based on different application requirements.”– IBM Splits SoftLayer Cloud Storage Into Endurance, Performance Tiers

“In the age of the cloud, the relationship between cloud storage capacity and I/O performance has officially become divorced.” – IBM Falls Into Cloud Storage Pricing Line

Pick your favorite online tech media and read all about it: SiliconANGLE, Computer Weekly, Data Center Knowledge, CRN, V3, Cloud Computing Intelligence, Storage Networking Solutions UK, and DCS Europe.

#IBMandTwitter
There are more than half a billion tweets posted to Twitter every day. IBM is teaming up with Twitter to turn those “tweets into insights for more than 100 organizations around the world.” Leon Sun of The Motley Fool takes a closer look at what the deal means to IBM and Twitter.

“Twitter provides a powerful new lens through which to look at the world. This partnership, drawing on IBM’s leading cloud-based analytics platform, will help clients enrich business decisions with an entirely new class of data. This is the latest example of how IBM is reimagining work.” – Ginni Rometty, IBM Chairman, President and CEO

-JRL

March 30, 2015

The Importance of Data's Physical Location in the Cloud

If top-tier cloud providers use similar network hardware in their data centers and connect to the same transit and peering bandwidth providers, how can SoftLayer claim to provide the best network performance in the cloud computing industry?

Over the years, I've heard variations of that question asked dozens of times, and it's fairly easy to answer with impressive facts and figures. All SoftLayer data centers and network points of presence (PoPs) are connected to our unique global network backbone, which carries public, private, and management traffic to and from servers. Using our network connectivity table, some back-of-the-envelope calculations reveal that we have more than 2,500Gbps of bandwidth connectivity with some of the largest transit and peering bandwidth providers in the world (and that total doesn't even include the private peering relationships we have with other providers in various regional markets). Additionally, customers may order servers with up to 10Gbps network ports in our data centers.

For the most part, those stats explain our differentiation, but part of the bigger network performance story is still missing, and to a certain extent it has been untold—until today.

The 2,500+Gbps of bandwidth connectivity we break out in the network connectivity table only accounts for the on-ramps and off-ramps of our network. Our global network backbone is actually made up of an additional 2,600+Gbps of bandwidth connectivity ... and all of that backbone connectivity transports SoftLayer-related traffic.

This robust network architecture streamlines the access to and delivery of data on SoftLayer servers. When you access a SoftLayer server, the network is designed to bring you onto our global backbone as quickly as possible at one of our network PoPs, and when you're on our global backbone, you'll experience fewer hops (and a more direct route that we control). When one of your users requests data from your SoftLayer server, that data travels across the global backbone to the nearest network PoP, where it is handed off to another provider to carry the data the "last mile."

With this controlled environment, I decided to undertake an impromptu science experiment to demonstrate how location and physical distance affect network performance in the cloud.

Speed Testing on the SoftLayer Global Network Backbone

I work in the SoftLayer office in downtown Houston, Texas. In network-speak, this location is HOU04. You won't find that location on any data center or network tables because it's just an office, but it's connected to the same global backbone as our data centers and network points of presence. From my office, the "last mile" doesn't exist; when I access a SoftLayer server, my bits and bytes only travel across the SoftLayer network, so we're effectively cutting out a number of uncontrollable variables in the process of running network speed tests.

For better or worse, I didn't tell any network engineers that I planned to run speed tests to every available data center and share the results I found, so you're seeing exactly what I saw with no tomfoolery. I just fired up my browser, headed to our Data Centers page, and made my way down the list using the SpeedTest option for each facility. Customers often go through this process when trying to determine the latency, speeds, and network path that they can expect from servers in each data center, but if we look at the results collectively, we can learn a lot more about network performance in general.

With the results, we'll discuss how network speed tests work, what the results mean, and why some might be surprising. If you're feeling scientific and want to run the tests yourself, you're more than welcome to do so.
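If you do put together your own rough version of these measurements, the latency and jitter half is easy to approximate in a few lines of code. The sketch below times repeated TCP connections from Python and reports the median and spread. The hostname and port are placeholders (point it at a test endpoint you're allowed to probe), and a TCP handshake time is only an approximation of an ICMP ping, so treat it as an illustration rather than a substitute for the Ookla test.

    # Minimal latency/jitter sketch: time repeated TCP connections.
    # The hostname below is a placeholder, not an actual SoftLayer test server.
    import socket
    import statistics
    import time

    def tcp_ping(host, port=80, count=10, timeout=5.0):
        """Return TCP connect times in milliseconds (a rough stand-in for ping)."""
        samples = []
        for _ in range(count):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=timeout):
                pass  # we only care about how long the handshake took
            samples.append((time.perf_counter() - start) * 1000)
        return samples

    if __name__ == "__main__":
        times = tcp_ping("speedtest.example.com")  # placeholder test host
        print("latency: %.1f ms (median)" % statistics.median(times))
        print("jitter : %.1f ms (stdev)" % statistics.stdev(times))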

The Ookla SpeedTests we link to from the data centers table measured the latency (ping time), jitter (variation in latency), download speeds, and upload speeds between the user's computer and the data center's test server. To run this experiment, I connected my MacBook Pro via Ethernet to a 100Mbps wired connection. At the end of each speed test, I took a screenshot of the performance stats:

SoftLayer Network Speed Test

To save you the trouble of trying to read all of the stats on each data center as they cycle through that animated GIF, I also put them into a table (click the data center name to see its results screenshot in a new window):

Data Center Latency (ms) Download Speed (Mbps) Upload Speed (Mbps) Jitter (ms)
AMS01 121 77.69 82.18 1
DAL01 9 93.16 87.43 0
DAL05 7 93.16 83.77 0
DAL06 7 93.11 83.50 0
DAL07 8 93.08 83.60 0
DAL09 11 93.05 82.54 0
FRA02 128 78.11 85.08 0
HKG02 184 50.75 78.93 2
HOU02 2 93.12 83.45 1
LON02 114 77.41 83.74 2
MEL01 186 63.40 78.73 1
MEX01 27 92.32 83.29 1
MON01 52 89.65 85.94 3
PAR01 127 82.40 83.38 0
SJC01 44 90.43 83.60 1
SEA01 50 90.33 83.23 2
SNG01 195 40.35 72.35 1
SYD01 196 61.04 75.82 4
TOK02 135 75.63 82.20 2
TOR01 40 90.37 82.90 1
WDC01 43 89.68 84.35 0

By performing these speed tests on the SoftLayer network, we can actually learn a lot about how speed tests work and how physical location affects network performance. But before we get into that, let's take note of a few interesting results from the table above:

  • The lowest latency from my office is to the HOU02 (Houston, Texas) data center. That data center is about 14.2 miles away as the crow flies.
  • The highest latency results from my office are to the SYD01 (Sydney, Australia) and SNG01 (Singapore) data centers. Those data centers are at least 8,600 and 10,000 miles away, respectively.
  • The fastest download speed observed is 93.16Mbps, and that number was seen from two data centers: DAL01 and DAL05.
  • The slowest download speed observed is 40.35Mbps from SNG01.
  • The fastest upload speed observed is 87.43Mbps to DAL01.
  • The slowest upload speed observed is 72.35Mbps to SNG01.
  • The upload speeds observed are faster than the download speeds from every data center outside of North America.

Are you surprised that we didn't see any results closer to 100Mbps? Is our server in Singapore underperforming? Are servers outside of North America more selfish to receive data and stingy to give it back?

Those are great questions, and they actually jumpstart an explanation of how the network tests work and what they're telling us.

Maximum Download Speed on 100Mbps Connection

If my office is 2 milliseconds from the test server in HOU02, why is my download speed only 93.12Mbps? To answer this question, we need to understand that to perform these tests, a connection is made using Transmission Control Protocol (TCP) to move the data, and TCP does a lot of work in the background. The download is broken into a number of tiny chunks called packets and sent from the sender to the receiver. TCP wants to ensure that each packet that is sent is received, so the receiver sends an acknowledgement back to the sender to confirm that the packet arrived. If the sender is unable to verify that a given packet was successfully delivered to the receiver, the sender will resend the packet.

This system is pretty simple, but in practice it's very dynamic. TCP wants to be as efficient as possible, moving the entire message across with as little waiting and retransmission as it can manage. To accomplish this, the receiver advertises a receive window that tells the sender how much data it will accept before the next acknowledgement. That window starts conservatively, and the receiver analyzes and adjusts it to keep as much data in flight as possible without the connection becoming unstable. Some operating systems are better than others when it comes to tweaking and optimizing TCP transfer rates, but the work TCP does to ensure that packets are sent and received without error adds overhead, and that overhead limits the maximum speed we can achieve.

Understanding the SNG01 Results

Why did my SNG01 speed test max out at a meager 40.35Mbps on my 100Mbps connection? Well, now that we understand how TCP works behind the scenes, we can see why our download speeds from Singapore are lower than we'd expect. The latency between sending a packet and receiving its acknowledgement factors into how TCP judges the stability of the connection. Higher ping times mean acknowledgements take longer to come back, so TCP keeps less data in flight at any given moment and stays conservative so that no sizable chunk of data is lost (and has to be reproduced and resent).

Even with our global backbone optimizing the network path of the packets between Houston and Singapore, the more than 10,000-mile journey, the nature of TCP, and my computer's TCP receive window adjustments all factor into the download speeds recorded from SNG01. Looked at in the context of the distance the data has to travel, the results are actually well within the expected performance.
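To put rough numbers on that, TCP can keep at most about one receive window of data in flight per round trip, so the throughput ceiling is approximately the window size divided by the round-trip time. The window size below is an assumption about what my laptop might have negotiated, not a measured value, so this is an illustration of the effect rather than an exact model of the test:

    # Rough TCP throughput ceiling: about one receive window per round trip.
    # The 1 MiB window is an assumed value for illustration, not a measurement.
    def tcp_ceiling_mbps(window_bytes, rtt_ms):
        """Approximate throughput limit in Mbps for a given window and RTT."""
        return (window_bytes * 8) / (rtt_ms / 1000.0) / 1e6

    window = 1 * 1024 * 1024  # assumed ~1 MiB effective receive window
    print("HOU02 (  2 ms RTT): %7.1f Mbps ceiling" % tcp_ceiling_mbps(window, 2))
    print("SNG01 (195 ms RTT): %7.1f Mbps ceiling" % tcp_ceiling_mbps(window, 195))

Under that assumption, the 2ms Houston round trip leaves the 100Mbps port (not the window) as the bottleneck, while the 195ms Singapore round trip caps the same window in the low-40Mbps range, which is in the neighborhood of the 40.35Mbps we actually observed.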

Because the default behavior of TCP is partially to blame for the results, we could tweak the test and tune our configurations to deliver faster speeds. To confirm that improvements can be made relatively easily, we can just look at the answer to our third question...

Upload > Download?

Why are the upload speeds faster than the download speeds after latency jumps from 50ms to 114ms? Every location in North America is within 2,000 miles of Houston, while the closest location outside of North America is about 5,000 miles away. With what we've learned about how TCP and physical distance play into download speeds, that jump in distance explains why the download speeds drop from 90.33Mbps to 77.41Mbps as soon as we cross an ocean, but how can the upload speeds to Europe (and even APAC) stay on par with their North American counterparts? The only difference between our download path and upload path is which side is sending and which side is receiving. And if the receiver determines the size of the TCP receive window, the most likely culprit in the discrepancy between download and upload speeds is TCP windowing.

A Linux server is built and optimized to be a server, whereas my Mac OS X laptop has a lot of other responsibilities, so it shouldn't come as a surprise that the default TCP receive window handling is better on the server side. With changes to the way my laptop handles TCP, download speeds would likely improve significantly. Additionally, if we wanted to push the envelope even further, we might consider using a different transfer protocol to take advantage of the consistent, controlled network environment.
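As a concrete (and deliberately simplified) example of that kind of client-side tuning, an application can at least inspect and request a larger socket receive buffer, which is what backs the TCP receive window. The sketch below is illustrative only: whether the operating system honors the request depends on its own limits, and real-world tuning usually happens at the OS level rather than per socket.

    # Inspect and request a larger socket receive buffer (backs the TCP window).
    # Whether the OS grants the full request depends on kernel limits, so this
    # is an illustration of the knob, not a recommended tuning recipe.
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print("default receive buffer: %d bytes"
          % sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))

    # Ask for 4 MiB so more data can be in flight on high-latency paths.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
    print("granted receive buffer: %d bytes"
          % sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
    sock.close()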

The Importance of Physical Location in Cloud Computing

These real-world test results, gathered under controlled conditions, demonstrate how much the physical distance between data and its user affects the user's perceived network performance. We know that the network latency on a 14-mile trip will be lower than the latency on a 10,000-mile trip, but we often don't think about the ripple effect latency has on other network performance indicators. And this experiment controls many of the other variables that can exacerbate the performance impact of geographic distance. The tests were run on a 100Mbps connection because that's a pretty common maximum port speed, but if we ran the same tests on a GigE line, the difference would be even more dramatic. Proof: HOU02 @ 1Gbps v. SNG01 @ 1Gbps

Let's apply our experiment to a real-world example: Half of our site's user base is in Paris and the other half is in Singapore. If we chose to host our cloud infrastructure exclusively from Paris, our users would see dramatically different results. Users in Paris would have sub-10ms latency while users in Singapore have about 300ms of latency. Obviously, operating cloud servers in both markets would be the best way to ensure peak performance in both locations, but what if you can only afford to provision your cloud infrastructure in one location? Where would you choose to provision that infrastructure to provide a consistent user experience for your audience in both markets?

Given what we've learned, we should probably choose a location with roughly the same latency to both markets. We can use the SoftLayer Looking Glass to see that San Jose, California (SJC01) would be a logical midpoint ... At this second, the latency between SJC and PAR on the SoftLayer backbone is 149ms, and the latency between SJC and SNG is 162ms, so both markets would experience very similar performance (all else being equal). Our users in the two markets won't experience mind-blowing speeds, but neither will they experience mind-numbing speeds.
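The reasoning behind that choice is simple enough to sketch in a few lines: for each candidate data center, look at the worst latency any of your markets would see, and pick the candidate that minimizes it. The figures below reuse the rough numbers from this discussion (with assumed local latencies for the "home" data centers), so the sketch is purely illustrative:

    # Toy placement decision: minimize the worst-case latency across markets.
    # Figures are rough values from the discussion above (local latencies are
    # assumed), so this is an illustration rather than a real planning tool.
    candidate_latency_ms = {
        "PAR01": {"Paris": 5,   "Singapore": 300},
        "SNG01": {"Paris": 300, "Singapore": 5},
        "SJC01": {"Paris": 149, "Singapore": 162},
    }

    def worst_case(latencies):
        return max(latencies.values())

    for dc, latencies in candidate_latency_ms.items():
        print("%s worst-case latency: %d ms" % (dc, worst_case(latencies)))

    best = min(candidate_latency_ms,
               key=lambda dc: worst_case(candidate_latency_ms[dc]))
    print("Pick:", best)  # SJC01 under these numbers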

The network performance implications of physical distance apply to all cloud providers, but because of the SoftLayer global network backbone, we're able to control many of the variables that lead to higher (or inconsistent) latency to and from a given data center. The longer traffic stays on a single provider's network, the more efficiently it can be routed. You might see similar latency to another provider's cloud infrastructure from a given location at a given time across the public Internet, but you certainly won't see the same consistency from all locations at all times. SoftLayer has spent millions of dollars to build, maintain, and grow our global network backbone to transport public and private network traffic, and as a result, we feel pretty good about claiming to provide the best network performance in cloud computing.

-@khazard
