Posts Tagged 'Speed'

August 12, 2015

Network Performance 101: What is latency, and why does it matter?

We’ve all been there. Waiting for a web page to load can be so frustrating that we end up just closing out. You might ask yourself, “Hey, I have high-speed Internet. Why is this happening to me?” Well, there are a lot of factors outside your control that … control page loads. And whether you run an online store, crunch big data, or have employees around the world accessing files over your network, you never want to hear that slow data transfer is costing you a sale or dragging down employee productivity.

So why are some pages so much slower to load than others?
It could be that poorly written code or large images are slowing the load on the backend, but slow page loads can also be caused by network latency. This might sound elementary, but data is not just floating out there in some non-physical Internet space. In reality, data is stored on hard drives … somewhere. Network connectivity provides a path for that data to travel to end users around the world, and that connectivity can vary significantly—depending on how far it’s going, how many times the data has to hop between service providers, how much bandwidth is available along the way, the other data traveling across the same path, and a number of other variables.

The measurement of how long it takes data to travel between two connected points is called network latency. Network latency is an expression of the amount of time it takes a packet of data to get from one place to another.
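
If you want to get a rough feel for the latency between your machine and a server, you don't need any special tooling. Here's a minimal sketch in Python that times how long a TCP connection takes to complete (the hostname is a placeholder, and a TCP handshake time is only an approximation of the round-trip time a ping would report):

# Rough latency check: time how long a TCP handshake takes.
# The hostname below is a placeholder -- point it at your own server.
import socket
import time

def connect_latency_ms(host, port=80, attempts=5):
    """Average time (in milliseconds) to complete a TCP connection."""
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # we only care how long the handshake took
        samples.append((time.perf_counter() - start) * 1000)
    return sum(samples) / len(samples)

if __name__ == "__main__":
    print("%.1f ms" % connect_latency_ms("your-server.example.com"))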

Understanding Network Latency
Theoretically, data can travel at the speed of light across optical fiber network cables, but in practice, data typically travels slower than light due to the variables we referenced in the previous section. If a network connection doesn’t have any available bandwidth capacity, data might temporarily queue up to wait for its turn to travel across the line. If a service provider’s network doesn’t route a network path optimally, data could be sent hundreds or thousands of miles away from the destination in the process of routing to the destination. These kinds of delays and detours lead to higher network latency, which lead to slower page loads and download speeds.

We express network latency in milliseconds (that’s 1,000 milliseconds per second), and while a few thousandths of a second may not mean much to us as we’re living our daily lives, those milliseconds are often the deciding factors for whether we stay on a webpage or give up and try another site. As consumers of high-speed Internet, we like what we like, and we want what we want when we want it. In the financial sector, milliseconds can mean billions of dollars in gains or losses from trade transactions on a day-to-day basis.

Logical conclusion: Everyone wants the lowest network latency to the greatest number of users.

Common Approaches to Minimize Network Latency
If our shared goal is to minimize latency for our data, the most common approaches to addressing network latency involve limiting the number of potential variables that can impact the speed of data’s movement. While we don’t have complete control over how our data travels across the Internet, we can do a few things to keep our network latency in line:

  • Distribute data around the world: Users in different locations can pull data from a location that’s geographically close to them. Because the data is closer to the users, it is handed off fewer times, it has a shorter distance to travel, and inefficient routing is less likely to cause a significant performance impact.
  • Provision servers with high-capacity network ports: Huge volumes of data can travel to and from the server every second. If packets are delayed due to fully saturated ports, milliseconds of time pass, pages load slower, download speeds drop, and users get unhappy.
  • Understand how your providers route traffic: When you know how your data is transferred to users around the world, you can make better decisions about where you host your data.

How SoftLayer Minimizes Network Latency
To minimize latency, we took a unique approach to building our network. All of our data centers are connected to network points of presence. All of our network points of presence are connected to each other via our global backbone network. And by maintaining our own global backbone network, our network operations team is able to control network paths and data handoffs much more granularly than if we relied on other providers to move data between geographies.

SoftLayer Private Network

For example, if a user in Berlin wants to watch a cat video hosted on a SoftLayer server in Dallas, the packets of data that make up that cat video will travel across our backbone network (which is exclusively used by SoftLayer traffic) to Frankfurt, where the packets would be handed off to one of our peering or transit public network partners to get to the user in Berlin.

Without a global backbone network, the packets would be handed off to a peering or transit public network provider in Dallas, and that provider would route the packets across its network and/or hand the packets off to another provider at a network hop, and the packets would bounce their way to Germany. It’s entirely possible that the packets could get from Dallas to Berlin with the same network latency with or without the global backbone network, but without the global backbone network, there are a lot more variables.

In addition to building a global backbone network, we also segment public, private, and management traffic onto different network ports so that different types of traffic can be transferred without interfering with each other.

SoftLayer Private Network

But at the end of the day, all of that network planning and forethought doesn’t amount to a hill of beans if you can’t see the results for yourself. That’s why we put speed tests on our website so you can check out our network yourself (for more on speed tests, check out this blog post).

TL;DR: Network Latency
Your users want your data as quickly as you can get it to them. The time it takes for your data to get to them across the Internet is called network latency. The more control you (or your provider) have over your data’s network path, the more consistent (and lower) your network latency will be.

Stay tuned. Next month we will be discussing Network Performance 101: Security, where we’ll cover all things cloud security—including answering your burning questions: Can other people see or access my data in a public cloud? Is my data more prone to hackers? And what safeguards does SoftLayer have in place to protect data?

-JRL

March 30, 2015

The Importance of Data's Physical Location in the Cloud

If top-tier cloud providers use similar network hardware in their data centers and connect to the same transit and peering bandwidth providers, how can SoftLayer claim to provide the best network performance in the cloud computing industry?

Over the years, I've heard variations of that question asked dozens of times, and it's fairly easy to answer with impressive facts and figures. All SoftLayer data centers and network points of presence (PoPs) are connected to our unique global network backbone, which carries public, private, and management traffic to and from servers. Using our network connectivity table, some back-of-the-envelope calculations reveal that we have more than 2,500Gbps of bandwidth connectivity with some of the largest transit and peering bandwidth providers in the world (and that total doesn't even include the private peering relationships we have with other providers in various regional markets). Additionally, customers may order servers with up to 10Gbps network ports in our data centers.

For the most part, those stats explain our differentiation, but part of the bigger network performance story is still missing, and to a certain extent it has been untold—until today.

The 2,500+Gbps of bandwidth connectivity we break out in the network connectivity table only accounts for the on-ramps and off-ramps of our network. Our global network backbone is actually made up of an additional 2,600+Gbps of bandwidth connectivity ... and all of that backbone connectivity transports SoftLayer-related traffic.

This robust network architecture streamlines the access to and delivery of data on SoftLayer servers. When you access a SoftLayer server, the network is designed to bring you onto our global backbone as quickly as possible at one of our network PoPs, and when you're on our global backbone, you'll experience fewer hops (and a more direct route that we control). When one of your users requests data from your SoftLayer server, that data travels across the global backbone to the nearest network PoP, where it is handed off to another provider to carry the data the "last mile."

With this controlled environment, I decided to undertake an impromptu science experiment to demonstrate how location and physical distance affect network performance in the cloud.

Speed Testing on the SoftLayer Global Network Backbone

I work in the SoftLayer office in downtown Houston, Texas. In network-speak, this location is HOU04. You won't find that location on any data center or network tables because it's just an office, but it's connected to the same global backbone as our data centers and network points of presence. From my office, the "last mile" doesn't exist; when I access a SoftLayer server, my bits and bytes only travel across the SoftLayer network, so we're effectively cutting out a number of uncontrollable variables in the process of running network speed tests.

For better or worse, I didn't tell any network engineers that I planned to run speed tests to every available data center and share the results I found, so you're seeing exactly what I saw with no tomfoolery. I just fired up my browser, headed to our Data Centers page, and made my way down the list using the SpeedTest option for each facility. Customers often go through this process when trying to determine the latency, speeds, and network path that they can expect from servers in each data center, but if we look at the results collectively, we can learn a lot more about network performance in general.

With the results, we'll discuss how network speed tests work, what the results mean, and why some might be surprising. If you're feeling scientific and want to run the tests yourself, you're more than welcome to do so.

The Ookla SpeedTests we link to from the data centers table measured the latency (ping time), jitter (variation in latency), download speeds, and upload speeds between the user's computer and the data center's test server. To run this experiment, I connected my MacBook Pro via Ethernet to a 100Mbps wired connection. At the end of each speed test, I took a screenshot of the performance stats:

SoftLayer Network Speed Test

To save you the trouble of trying to read all of the stats on each data center as they cycle through that animated GIF, I also put them into a table (click the data center name to see its results screenshot in a new window):

Data Center Latency (ms) Download Speed (Mbps) Upload Speed (Mbps) Jitter (ms)
AMS01 121 77.69 82.18 1
DAL01 9 93.16 87.43 0
DAL05 7 93.16 83.77 0
DAL06 7 93.11 83.50 0
DAL07 8 93.08 83.60 0
DAL09 11 93.05 82.54 0
FRA02 128 78.11 85.08 0
HKG02 184 50.75 78.93 2
HOU02 2 93.12 83.45 1
LON02 114 77.41 83.74 2
MEL01 186 63.40 78.73 1
MEX01 27 92.32 83.29 1
MON01 52 89.65 85.94 3
PAR01 127 82.40 83.38 0
SJC01 44 90.43 83.60 1
SEA01 50 90.33 83.23 2
SNG01 195 40.35 72.35 1
SYD01 196 61.04 75.82 4
TOK02 135 75.63 82.20 2
TOR01 40 90.37 82.90 1
WDC01 43 89.68 84.35 0

By performing these speed tests on the SoftLayer network, we can actually learn a lot about how speed tests work and how physical location affects network performance. But before we get into that, let's take note of a few interesting results from the table above:

  • The lowest latency from my office is to the HOU02 (Houston, Texas) data center. That data center is about 14.2 miles away as the crow flies.
  • The highest latency results from my office are to the SYD01 (Sydney, Australia) and SNG01 (Singapore) data centers. Those data centers are at least 8,600 and 10,000 miles away, respectively.
  • The fastest download speed observed is 93.16Mbps, and that number was seen from two data centers: DAL01 and DAL05.
  • The slowest download speed observed is 40.35Mbps from SNG01.
  • The fastest upload speed observed is 87.43Mbps to DAL01.
  • The slowest upload speed observed is 72.35Mbps to SNG01.
  • The upload speeds observed are faster than the download speeds from every data center outside of North America.

Are you surprised that we didn't see any results closer to 100Mbps? Is our server in Singapore underperforming? Are servers outside of North America more selfish to receive data and stingy to give it back?

Those are great questions, and they actually jumpstart an explanation of how the network tests work and what they're telling us.

Maximum Download Speed on 100Mbps Connection

If my office is 2 milliseconds from the test server in HOU02, why is my download speed only 93.12Mbps? To answer this question, we need to understand that to perform these tests, a connection is made using Transmission Control Protocol (TCP) to move the data, and TCP does a lot of work in the background. The download is broken into a number of tiny chunks called packets and sent from the sender to the receiver. TCP wants to ensure that each packet that is sent is received, so the receiver sends an acknowledgement back to the sender to confirm that the packet arrived. If the sender is unable to verify that a given packet was successfully delivered to the receiver, the sender will resend the packet.

This system is pretty simple, but in actuality, it's very dynamic. TCP wants to be as efficient as possible ... to send the fewest number of packets to get the entire message across. To accomplish this, TCP adjusts how much data it keeps "in flight" at any given time. The receiver advertises a receive window that tells the sender how much unacknowledged data it will accept, and TCP analyzes and adjusts that window to move as much data as possible per round trip without the connection becoming unstable. Some operating systems are better than others when it comes to tweaking and optimizing TCP transfer rates, but the work TCP does to ensure that packets are sent and received without error takes overhead, and that overhead limits the maximum speed we can achieve.

Understanding the SNG01 Results

Why did my SNG01 speed test max out at a meager 40.35Mbps on my 100Mbps connection? Well, now that we understand how TCP is working behind the scenes, we can see why our download speeds from Singapore are lower than we'd expect. The latency between sending a packet and receiving its acknowledgement plays into TCP's judgment of how much data it can safely keep in flight. Higher ping times mean the sender spends more of its time waiting on acknowledgements before it is allowed to send more data, so less data moves per second than it would over a lower-latency path (and any packet that is lost along the way has to be reproduced and resent over that same long round trip).

With our global backbone optimizing the network path of the packets between Houston and Singapore, the more than 10,000-mile journey, the nature of TCP, and my computer's TCP receive window adjustments all factor into the download speeds recorded from SNG01. Looking at the results in the context of the distance the data has to travel, our results are actually well within the expected range.
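
To put rough numbers behind that, TCP can only have one receive window's worth of data "in flight" per round trip, so the window required to keep a connection busy grows with latency. Here's a quick back-of-the-envelope sketch using the round-trip times and download speeds from the table above (it ignores packet loss and TCP's gradual ramp-up, so treat it as an illustration rather than a measurement):

# Bandwidth-delay product: how much data has to stay "in flight"
# (unacknowledged) to sustain a given throughput at a given RTT.
# RTTs and speeds are taken from the speed test table above.

def window_needed_kb(throughput_mbps, rtt_ms):
    """Receive window (in KB) needed to keep the pipe full."""
    bytes_in_flight = (throughput_mbps * 1_000_000 / 8) * (rtt_ms / 1000.0)
    return bytes_in_flight / 1024

for name, rtt_ms, mbps in [("HOU02", 2, 93.12), ("AMS01", 121, 77.69), ("SNG01", 195, 40.35)]:
    print("%s: ~%.0f KB must stay in flight to sustain %.2f Mbps" % (name, window_needed_kb(mbps, rtt_ms), mbps))

# HOU02 needs only ~23 KB; SNG01 needs close to 1 MB even for 40 Mbps,
# which is why default window sizes start to matter on long paths.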

Because the default behavior of TCP is partially to blame for the results, we could tweak the test and tune our configurations to deliver faster speeds. To confirm that improvements can be made relatively easily, we can just look at the answer to our third question...

Upload > Download?

Why are the upload speeds faster than the download speeds after latency jumps from 50ms to 114ms? Every location in North America is within 2,000 miles of Houston, while the closest location outside of North America is about 5,000 miles away. With what we've learned about how TCP and physical distance play into download speeds, that jump in distance explains why the download speeds drop from 90.33Mbps to 77.41Mbps as soon as we cross an ocean, but how can the upload speeds to Europe (and even APAC) stay on par with their North American counterparts? The only difference between our download path and upload path is which side is sending and which side is receiving. And if the receiver determines the size of the TCP receive window, the most likely culprit in the discrepancy between download and upload speeds is TCP windowing.

A Linux server is built and optimized to be a server, whereas my Mac OS X laptop has a lot of other responsibilities, so it shouldn't come as a surprise that the default TCP receive window handling is better on the server side. With changes to the way my laptop handles TCP, download speeds would likely improve significantly. Additionally, if we wanted to push the envelope even further, we might consider using a different transfer protocol to take advantage of the consistent, controlled network environment.

The Importance of Physical Location in Cloud Computing

These real-world test results under controlled conditions demonstrate how significantly data's geographic proximity to its users affects the users' perceived network performance. We know that the network latency in a 14-mile trip will be lower than the latency in a 10,000-mile trip, but we often don't think about the ripple effect latency has on other network performance indicators. And this experiment actually controls a lot of other variables that can exacerbate the performance impact of geographic distance. The tests were run on a 100Mbps connection because that's a pretty common maximum port speed, but if we ran the same tests on a GigE line, the difference would be even more dramatic. Proof: HOU02 @ 1Gbps v. SNG01 @ 1Gbps

Let's apply our experiment to a real-world example: Half of our site's user base is in Paris and the other half is in Singapore. If we chose to host our cloud infrastructure exclusively from Paris, our users would see dramatically different results: users in Paris would have sub-10ms latency, while users in Singapore would see about 300ms of latency. Obviously, operating cloud servers in both markets would be the best way to ensure peak performance in both locations, but what if you can only afford to provision your cloud infrastructure in one location? Where would you choose to provision that infrastructure to provide a consistent user experience for your audience in both markets?

Given what we've learned, we should probably choose a location with roughly the same latency to both markets. We can use the SoftLayer Looking Glass to see that San Jose, California (SJC01) would be a logical midpoint ... At this second, the latency between SJC and PAR on the SoftLayer backbone is 149ms, and the latency between SJC and SNG is 162ms, so both would experience very similar performance (all else being equal). Our users in the two markets won't experience mind-blowing speeds, but they won't experience mind-numbing speeds either.
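
If you want to make that choice a little more systematically, you can treat it as picking the location that minimizes the worst latency any of your markets would see. A small sketch of that logic (the SJC01 numbers are the ones quoted above; the other latencies are illustrative placeholders, not Looking Glass measurements):

# Pick the host location with the smallest worst-case latency across
# user markets. SJC01 figures match the ones quoted above; the rest
# are illustrative placeholders.
candidate_latencies_ms = {
    "PAR01": {"Paris": 5,   "Singapore": 300},
    "SNG01": {"Paris": 300, "Singapore": 5},
    "SJC01": {"Paris": 149, "Singapore": 162},
}

def best_location(candidates):
    """Return (location, worst-case latency in ms) with the smallest worst case."""
    return min(
        ((location, max(markets.values())) for location, markets in candidates.items()),
        key=lambda pair: pair[1],
    )

print(best_location(candidate_latencies_ms))  # ('SJC01', 162)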

The network performance implications of physical distance apply to all cloud providers, but because of the SoftLayer global network backbone, we're able to control many of the variables that lead to higher (or inconsistent) latency to and from a given data center. The longer a single provider controls the route your traffic takes, the more efficiently that traffic will move. You might see the same latency to another provider's cloud infrastructure from a given location at a given time across the public Internet, but you certainly won't see the same consistency from all locations at all times. SoftLayer has spent millions of dollars to build, maintain, and grow our global network backbone to transport public and private network traffic, and as a result, we feel pretty good about claiming to provide the best network performance in cloud computing.

-@khazard

December 17, 2014

Does physical location matter “in the cloud”?

By now everyone understands that the cloud is indeed a place on Earth, but there still seems to be confusion around why global expansion by way of adding data centers is such a big deal. After all, if data is stored “in the cloud,” why wouldn’t adding more servers in our existing data centers suffice? Well, there’s a much more significant reason for adding more data centers than just being able to host more data.

As we’ve explained in previous blog posts, Globalization and Hosting: The World Wide Web is Flat and Global Network: The Proof is in the Traceroute, our strategic objective is to get a network point of presence (PoP) within 40ms of all our users (and our users' users) in order to provide the best network stability and performance possible anywhere on the planet.

Data can travel across the Internet quickly, but just like anything, the farther something has to go, the longer it will take to get there. Seems pretty logical, right? But we also need to take into account that not all routes are created equal. So to deliver the best network performance, we designed our global network so that data gets onto our network at the closest possible point. Think of each SoftLayer PoP as an on-ramp to our global network backbone. The sooner a user is able to get onto our network, the quicker we can efficiently route them through our PoPs to a server in one of our data centers. Furthermore, once traffic is on our network, we control how it flows.

Let’s take a look at this traceroute example from the abovementioned blog post. As you are probably aware, a traceroute shows the "hops" or routers along the network path from an origin IP to a destination IP. When we were building out the Singapore data center (before the network points of presence were turned up in Asia), the author ran a traceroute from Singapore to SoftLayer.com, and immediately after the launch of the data center, ran another one.

Pre-Launch Traceroute to SoftLayer.com from Singapore

traceroute to softlayer.com (66.228.118.53), 64 hops max, 52 byte packets
 1  10.151.60.1 (10.151.60.1)  1.884 ms  1.089 ms  1.569 ms
 2  10.151.50.11 (10.151.50.11)  2.006 ms  1.669 ms  1.753 ms
 3  119.75.13.65 (119.75.13.65)  3.380 ms  3.388 ms  4.344 ms
 4  58.185.229.69 (58.185.229.69)  3.684 ms  3.348 ms  3.919 ms
 5  165.21.255.37 (165.21.255.37)  9.002 ms  3.516 ms  4.228 ms
 6  165.21.12.4 (165.21.12.4)  3.716 ms  3.965 ms  5.663 ms
 7  203.208.190.21 (203.208.190.21)  4.442 ms  4.117 ms  4.967 ms
 8  203.208.153.241 (203.208.153.241)  6.807 ms  55.288 ms  56.211 ms
 9  so-2-0-3-0.laxow-cr1.ix.singtel.com (203.208.149.238)  187.953 ms  188.447 ms  187.809 ms
10  ge-4-0-0-0.laxow-dr2.ix.singtel.com (203.208.149.34)  184.143 ms
    ge-4-1-1-0.sngc3-dr1.ix.singtel.com (203.208.149.138)  189.510 ms
    ge-4-0-0-0.laxow-dr2.ix.singtel.com (203.208.149.34)  289.039 ms
11  203.208.171.98 (203.208.171.98)  187.645 ms  188.700 ms  187.912 ms
12  te1-6.bbr01.cs01.lax01.networklayer.com (66.109.11.42)  186.482 ms  188.265 ms  187.021 ms
13  ae7.bbr01.cs01.lax01.networklayer.com (173.192.18.166)  188.569 ms  191.100 ms  188.736 ms
14  po5.bbr01.eq01.dal01.networklayer.com (173.192.18.140)  381.645 ms  410.052 ms  420.311 ms
15  ae0.dar01.sr01.dal01.networklayer.com (173.192.18.211)  415.379 ms  415.902 ms  418.339 ms
16  po1.slr01.sr01.dal01.networklayer.com (66.228.118.138)  417.426 ms  417.301 ms
    po2.slr01.sr01.dal01.networklayer.com (66.228.118.142)  416.692 ms
17  * * *

Post-Launch Traceroute to SoftLayer.com from Singapore

traceroute to softlayer.com (66.228.118.53), 64 hops max, 52 byte packets
 1  192.168.206.1 (192.168.206.1)  2.850 ms  1.409 ms  1.206 ms
 2  174.133.118.65-static.reverse.networklayer.com (174.133.118.65)  1.550 ms  1.680 ms  1.394 ms
 3  ae4.dar01.sr03.sng01.networklayer.com (174.133.118.136)  1.812 ms  1.341 ms  1.734 ms
 4  ae9.bbr01.eq01.sng02.networklayer.com (50.97.18.198)  35.550 ms  1.999 ms  2.124 ms
 5  50.97.18.169-static.reverse.softlayer.com (50.97.18.169)  174.726 ms  175.484 ms  175.491 ms
 6  po5.bbr01.eq01.dal01.networklayer.com (173.192.18.140)  203.821 ms  203.749 ms  205.803 ms
 7  ae0.dar01.sr01.dal01.networklayer.com (173.192.18.253)  306.755 ms
    ae0.dar01.sr01.dal01.networklayer.com (173.192.18.211)  208.669 ms  203.127 ms
 8  po1.slr01.sr01.dal01.networklayer.com (66.228.118.138)  203.518 ms
    po2.slr01.sr01.dal01.networklayer.com (66.228.118.142)  305.534 ms
    po1.slr01.sr01.dal01.networklayer.com (66.228.118.138)  204.150 ms
 9  * * *
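
If you'd like to pull the headline numbers out of traceroute output like the two runs above, here's a rough parsing sketch. It assumes the plain-text format shown here, counts every numbered hop (including ones that answered with "* * *"), and averages the times on the last hop that actually responded:

# Summarize a traceroute: hop count plus the latency of the last hop
# that responded. Assumes the plain-text format shown above.
import re

def summarize_traceroute(output):
    hops = 0
    last_latency_ms = None
    for line in output.splitlines():
        if re.match(r"\s*\d+\s", line):            # a numbered hop line
            hops += 1
        times = re.findall(r"([\d.]+) ms", line)   # timed responses (including continuation lines)
        if times:
            last_latency_ms = sum(map(float, times)) / len(times)
    return hops, last_latency_ms

# hops, ms = summarize_traceroute(open("traceroute.txt").read())
# print("%d hops, ~%.0f ms at the last responding hop" % (hops, ms))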

After the Singapore data center launch, the number of hops was reduced by 50 percent, and the response time (in milliseconds) was reduced by 40 percent. Those are pretty impressive numbers from just lighting up a couple PoPs and a data center, and that was just the beginning of our global expansion in 2012.

That’s why we are so excited to announce the three new data centers launching this month: Mexico City, Tokyo, and Frankfurt.



Of course, this is great news for customers who require data residency in Mexico, Japan, and Germany. And yes, these new locations provide additional in-region redundancy within APAC, EMEA, and the Americas. But even customers without servers in these new facilities have reason to celebrate: Our global network backbone is expanding, so users in these markets will see even better network stability and speed to servers in every other SoftLayer data center around the world!

-JRL

January 24, 2013

Startup Series: SPEEDILICIOUS

Research from the Aberdeen Group shows the average website is losing 9% of its business because the speed of the site frustrates visitors into leaving. 9% of your traffic might be leaving your site because they feel like it's too slow. That thought is staggering, and any site owner would be foolish not to fix the problem. SPEEDILICIOUS — one of our new Catalyst partners — has an innovative solution that optimizes website performance and helps businesses deliver content to their end users faster.

SPEEDILICIOUS

I recently had the chance to chat with SPEEDILICIOUS founders Seymour Segnit and Chip Krauskopf, and Seymour rephrased that "9%" statistic in a pretty alarming way: "Losing 9% of your business is the equivalent of simply allowing your website to go offline, down, dark, dead, 404 for over a MONTH each year!" There is ample data to back this up from high-profile sites like Amazon, Microsoft and Walmart.com, but intuitively, you know it already ... A slow site (even a slightly slow site) is annoying.

The challenge many website owners have when it comes to their loading speeds is that problems might not be noticeable from their own workstations. Thanks to caching and the Internet connections most of us have, when we visit our own sites, we don't have any trouble accessing our content quickly. Unfortunately, many of our customers don't share that experience when they visit our sites over mobile, hotel, airport and (worst of all) conference connections. The most common approach to speeding up load times is to throw bigger servers or a CDN (content delivery network) at the problem, but while those improvements make a difference, they only address part of the problem ... Even with the most powerful servers in SoftLayer's fleet, your page can load at a crawl if your code can't be rendered quickly by a browser.

That makes life as a website developer difficult. The process of optimizing code and tweaking settings to speed up load times can be time-consuming and frustrating. Or as Chip explained to me, "Speeding up your site is essential; it shouldn't be slow and complicated. We fix that problem." Take a look:

The idea that your site performance can be sped up significantly overnight seems a little crazy, but if it works (which it clearly does), wouldn't it be crazier not to try it? SPEEDILICIOUS offers a $1 trial for you to see the results on your own site, and they regularly host a free webinar called "How to Grow Your Business 5-15% Overnight" which covers the critical techniques for speeding up any website.

As technology continues to improve and purchasing behavior migrates away from the mall and onto our computers and smartphones, SPEEDILICIOUS has a tremendous opportunity to capture a ripe market, so they're clearly a great fit for Catalyst. If you're interested in learning more or would like to speak to Seymour, Chip or anyone on their team, please let me know and I'll make the direct introduction any time.

-@JoshuaKrammes

December 17, 2012

Big Data at SoftLayer: The Importance of IOPS

The jet flow gates in the Hoover Dam can release up to 73,000 cubic feet — the equivalent of 546,040 gallons — of water per second at 120 miles per hour. Imagine replacing those jet flow gates with a single garden hose that pushes 25 gallons per minute (or 0.42 gallons per second). Things would get ugly pretty quickly. In the same way, a massive "big data" infrastructure can be crippled by insufficient IOPS.

IOPS — Input/Output Operations Per Second — measure computer storage in terms of the number of read and write operations it can perform in a second. IOPS are a primary concern for database environments where content is being written and queried constantly, and when we take those database environments to the extreme (big data), the importance of IOPS can't be overstated: If you aren't able to perform database reads and writes quickly in a big data environment, it doesn't matter how many gigabytes, terabytes or petabytes you have in your database ... You won't be able to efficiently access, add to or modify your data set.

As we worked with 10gen to create, test and tweak SoftLayer's MongoDB engineered servers, our primary focus was performance. Since the performance of massively scalable databases is dictated by the read and write operations to that database's data set, we invested significant resources into maximizing the IOPS for each engineered server ... And that involved a lot more than just swapping hard drives out of servers until we found a configuration that worked best. Yes, "Disk I/O" — the amount of input/output operations a given disk can perform — plays a significant role in big data IOPS, but many other factors limit big data performance. How is performance impacted by network-attached storage? At what point will a given CPU become a bottleneck? How much RAM should be included in a base configuration to accommodate the load we expect our users to put on each tier of server? Are there operating system changes that can optimize the performance of a platform like MongoDB?

The resulting engineered servers are a testament to the blood, sweat and tears that were shed in the name of creating a reliable, high-performance big data environment. And I can prove it.

Most shared virtual instances — the scalable infrastructure many users employ for big data — use network-attached storage for their platform's storage. When data has to be queried over a network connection (rather than from a local disk), you introduce latency and more "moving parts" that have to work together. Disk I/O might be amazing on the enterprise SAN where your data lives, but because that data is not stored on-server with your processor or memory resources, performance can sporadically go from "Amazing" to "I Hate My Life" depending on network traffic. When I tested the IOPS for network-attached storage from a large competitor's virtual instances, I saw an average of around 400 IOPS per mount. It's difficult to say whether that's "not good enough" because every application will have different needs in terms of concurrent reads and writes, but it certainly could be better. We performed some internal testing of the IOPS for the hard drive configurations in our Medium and Large MongoDB engineered servers to give you an apples-to-apples comparison.

Before we get into the tests, here are the specs for the servers we're using:

Medium (MD) MongoDB Engineered Server
Dual 6-core Intel 5670 CPUs
CentOS 6 64-bit
36GB RAM
1Gb Network - Bonded

Large (LG) MongoDB Engineered Server
Dual 8-core Intel E5-2620 CPUs
CentOS 6 64-bit
128GB RAM
1Gb Network - Bonded

The numbers shown in the table below reflect the average number of IOPS we recorded with a 100% random read/write workload on each of these engineered servers. To measure these IOPS, we used a tool called fio with an 8k block size and iodepth at 128. Remembering that the virtual instance using network-attached storage was able to get 400 IOPS per mount, let's look at how our "base" configurations perform:

Medium - 2 x 64GB SSD RAID1 (Journal) - 4 x 300GB 15k SAS RAID10 (Data)
Random Read IOPS - /var/lib/mongo/logs 2937
Random Write IOPS - /var/lib/mongo/logs 1306
Random Read IOPS - /var/lib/mongo/data 1720
Random Write IOPS - /var/lib/mongo/data 772
Random Read IOPS - /var/lib/mongo/data/journal 19659
Random Write IOPS - /var/lib/mongo/data/journal 8869
   
Medium - 2 x 64GB SSD RAID1 (Journal) - 4 x 400GB SSD RAID10 (Data)
Random Read IOPS - /var/lib/mongo/logs 30269
Random Write IOPS - /var/lib/mongo/logs 13124
Random Read IOPS - /var/lib/mongo/data 33757
Random Write IOPS - /var/lib/mongo/data 14168
Random Read IOPS - /var/lib/mongo/data/journal 19644
Random Write IOPS - /var/lib/mongo/data/journal 8882
   
Large - 2 x 64GB SSD RAID1 (Journal) - 6 x 600GB 15k SAS RAID10 (Data)
Random Read IOPS - /var/lib/mongo/logs 4820
Random Write IOPS - /var/lib/mongo/logs 2080
Random Read IOPS - /var/lib/mongo/data 2461
Random Write IOPS - /var/lib/mongo/data 1099
Random Read IOPS - /var/lib/mongo/data/journal 19639
Random Write IOPS - /var/lib/mongo/data/journal 8772
 
Large - 2 x 64GB SSD RAID1 (Journal) - 6 x 400GB SSD RAID10 (Data)
Random Read IOPS - /var/lib/mongo/logs 32403
Random Write IOPS - /var/lib/mongo/logs 13928
Random Read IOPS - /var/lib/mongo/data 34536
Random Write IOPS - /var/lib/mongo/data 15412
Random Read IOPS - /var/lib/mongo/data/journal 19578
Random Write IOPS - /var/lib/mongo/data/journal 8835
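
If you want to run a comparable test against your own server or cloud instance, this is roughly the shape of the fio runs behind the numbers above. It's a sketch rather than our exact job: the 8k block size and iodepth of 128 match what we used, but the target directories, file size, and runtime here are illustrative.

# Roughly the kind of fio run behind the IOPS numbers above. The block
# size and iodepth match the post; file size and runtime are illustrative.
import subprocess

def run_fio(directory, workload):
    """Run a 100% random read or random write test against one mount point."""
    subprocess.run([
        "fio",
        "--name=%s" % workload,
        "--directory=%s" % directory,   # e.g. /var/lib/mongo/data
        "--rw=%s" % workload,           # randread or randwrite
        "--bs=8k",                      # 8k block size
        "--iodepth=128",                # iodepth of 128
        "--ioengine=libaio",
        "--direct=1",                   # bypass the page cache
        "--size=4g",                    # illustrative test file size
        "--runtime=60",
        "--time_based",
        "--group_reporting",
    ], check=True)

for mount in ("/var/lib/mongo/logs", "/var/lib/mongo/data", "/var/lib/mongo/data/journal"):
    for workload in ("randread", "randwrite"):
        run_fio(mount, workload)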

Clearly, the 400 IOPS per mount you'd see from SAN-based storage can't hold a candle to the performance of local disks, regardless of whether they're SAS or SSD. As you'd expect, the "Journal" reads and writes show roughly the same IOPS across all four configurations because they all use 2 x 64GB SSD drives in RAID1. In both server sizes, SSD drives provide better Data mount read/write performance than the 15K SAS drives, and the results suggest that having more physical drives in a Data mount will provide higher average IOPS. To put that observation to the test, I maxed out the number of hard drives in both configurations (10 in the 2U MD server and 34 in the 4U LG server) and recorded the results:

Medium - 2 x 64GB SSD RAID1 (Journal) - 10 x 300GB 15k SAS RAID10 (Data)
Random Read IOPS - /var/lib/mongo/logs 7175
Random Write IOPS - /var/lib/mongo/logs 3481
Random Read IOPS - /var/lib/mongo/data 6468
Random Write IOPS - /var/lib/mongo/data 1763
Random Read IOPS - /var/lib/mongo/data/journal 18383
Random Write IOPS - /var/lib/mongo/data/journal 8765
   
Medium - 2 x 64GB SSD RAID1 (Journal) - 10 x 400GB SSD RAID10 (Data)
Random Read IOPS - /var/lib/mongo/logs 32160
Random Write IOPS - /var/lib/mongo/logs 12181
Random Read IOPS - /var/lib/mongo/data 34642
Random Write IOPS - /var/lib/mongo/data 14545
Random Read IOPS - /var/lib/mongo/data/journal 19699
Random Write IOPS - /var/lib/mongo/data/journal 8764
   
Large - 2 x 64GB SSD RAID1 (Journal) - 34 x 600GB 15k SAS RAID10 (Data)
Random Read IOPS - /var/lib/mongo/logs 17566
Random Write IOPS - /var/lib/mongo/logs 11918
Random Read IOPS - /var/lib/mongo/data 9978
Random Write IOPS - /var/lib/mongo/data 6526
Random Read IOPS - /var/lib/mongo/data/journal 18522
Random Write IOPS - /var/lib/mongo/data/journal 8722
 
Large - 2 x 64GB SSD RAID1 (Journal) - 34 x 400GB SSD RAID10 (Data)
Random Read IOPS - /var/lib/mongo/logs 34220
Random Write IOPS - /var/lib/mongo/logs 15388
Random Read IOPS - /var/lib/mongo/data 35998
Random Write IOPS - /var/lib/mongo/data 17120
Random Read IOPS - /var/lib/mongo/data/journal 17998
Random Write IOPS - /var/lib/mongo/data/journal 8822

It should come as no surprise that by adding more drives into the configuration, we get better IOPS, but you might be wondering why the results aren't "betterer" when it comes to the IOPS in the SSD drive configurations. While the IOPS numbers improve going from four to ten drives in the medium engineered server and from six to thirty-four drives in the large engineered server, they don't increase as dramatically as they do for the SAS drives. This is what I meant when I explained that several factors contribute to and potentially limit IOPS performance. In this case, the limiting factor throttling the (ridiculously high) IOPS is the RAID card we are using in the servers. We've been working with our RAID card vendor to test a new card that would open a little more headroom for SSD IOPS, but so far that replacement card doesn't provide the consistency and reliability we need for these servers (which is just as important as speed).

There are probably a dozen other observations I could point out about how each result compares with the others (and why), but I'll stop here and open the floor for you. Do you notice anything interesting in the results? Does anything surprise you? What kind of IOPS performance have you seen from your server/cloud instance when running a tool like fio?

-Kelly

August 21, 2012

High Performance Computing - GPU v. CPU

Sometimes, technical conversations can sound like people are just making up tech-sounding words and acronyms: "If you want HPC to handle Gigaflops of computational operations, you probably need to supplement your server's CPU and RAM with a GPU or two." It's like hearing a shady auto mechanic talk about replacing gaskets on double overhead flange valves or hearing Chris Farley (in Tommy Boy) explain that he was "just checking the specs on the endline for the rotary girder" ... You don't know exactly what they're talking about, but you're pretty sure they're lying.

When we talk about high performance computing (HPC), a natural tendency is to go straight into technical specifications and acronyms, but that makes the learning curve steeper for people who are trying to understand why a solution is better suited for certain types of workloads than technology they are already familiar with. With that in mind, I thought I'd share a quick explanation of graphics processing units (GPUs) in the context of central processing units (CPUs).

The first thing that usually confuses people about GPUs is the name: "Why do I need a graphics processing unit on a server? I don't need to render the visual textures from Crysis on my database server ... A GPU is not going to benefit me." It's true that you don't need cutting-edge graphics on your server, but a GPU's power isn't limited to "graphics" operations. The "graphics" part of the name reflects the original intention for the kind of processing GPUs perform, but in the last ten years or so, developers and engineers have adapted that processing power for more general-purpose computing.

GPUs were designed with a highly parallel structure that allows large blocks of data to be processed at one time — similar computations are being made on data at the same time (rather than in order). If you assigned the task of rendering a 3D environment to a CPU, it would slow to a crawl because it handles requests more linearly. Because GPUs are better at performing repetitive tasks on large blocks of data than CPUs, you start to see the benefit of enlisting a GPU in a server environment.
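
To make "large blocks of data processed at one time" a little more concrete, here's the programming-model difference in miniature. This isn't GPU code (it's just NumPy running on a CPU), but the shift from visiting elements one at a time to expressing one operation over a whole block of data is the same shift in thinking that GPU programming asks for:

# Not GPU code -- just an illustration of the programming-model shift:
# element-by-element processing versus one operation expressed over a
# whole block of data, which is how GPU kernels are written.
import numpy as np

pixels = np.random.rand(1_000_000)        # stand-in for a big block of data

# "CPU-style": visit each element in order.
brightened_loop = np.empty_like(pixels)
for i in range(pixels.size):
    brightened_loop[i] = min(pixels[i] * 1.2, 1.0)

# "Block-style": apply the same computation to the whole array at once
# and let the library/hardware decide how to run it in parallel.
brightened_block = np.minimum(pixels * 1.2, 1.0)

assert np.allclose(brightened_loop, brightened_block)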

The Folding@home project and bitcoin mining are two of the most visible distributed computing projects that GPUs are accelerating, and they're perfect examples of workloads made exponentially faster with the parallel processing power of graphics processing units. You don't need to be folding protein or completing a blockchain to get the performance benefits, though; if you are taxing your CPUs with repetitive compute tasks, a GPU could make your life a lot easier.

If that still doesn't make sense, I'll turn the floor over to the Mythbusters in a presentation for our friends at NVIDIA:

SoftLayer uses NVIDIA Tesla GPUs in our high performance computing servers, so developers can use "Compute Unified Device Architecture" (CUDA) to easily take advantage of their GPU's capabilities.

Hopefully, this quick rundown is helpful in demystifying the "technobabble" about GPUs and HPC ... As a quick test, see if this sentence makes more sense now than it did when you started this blog: "If you want HPC to handle Gigaflops of computational operations, you probably need to supplement your server's CPU and RAM with a GPU or two."

-Phil

August 17, 2012

SoftLayer Private Clouds - Provisioning Speed

SoftLayer Private Clouds are officially live, and that means you can now order and provision your very own private cloud infrastructure on Citrix CloudPlatform quickly and easily. Chief Scientist Nathan Day introduced SoftLayer Private Clouds on the blog when the offering was announced at Cloud Expo East, and CTO Duke Skarda followed up with an explanation of the architecture powering SoftLayer Private Clouds. The most amazing claim: You can order a private cloud infrastructure and spin up its first virtual machines in a matter of hours rather than days, weeks or months.

If you've ever looked at building your own private cloud in the past, the "days, weeks or months" timeline isn't very surprising — you have to get the hardware provisioned, the software installed and the network configured ... and it all has to work together. Hearing that SoftLayer Private Clouds can be provisioned in "hours" probably seems too good to be true to administrators who have tried building a private cloud in the past, so I thought I'd put it to the test by ordering a private cloud and documenting the experience.

At 9:30am, I walked over to Phil Jackson's desk and asked him if he would be interested in helping me out with the project. By 9:35am, I had him convinced (proof), and the clock was started.

When we started the order process, part of our work was already done for us:

SoftLayer Private Clouds

To guarantee peak performance of the CloudPlatform management server, SoftLayer selected the hardware for us: A single processor quad core Xeon 5620 server with 6GB RAM, GigE, and two 2.0TB SATA II HDDs in RAID1. With the management server selected, our only task was choosing our host server and where we wanted the first zone (host server and management server) to be installed:

SoftLayer Private Clouds

For our host server, we opted for a dual processor quad core Xeon 5504 with the default specs, and we decided to spin it up in DAL05. We added (and justified) a block of 16 secondary IP addresses for our first zone, and we submitted the order. The time: 9:38am.

At this point, it would be easy for us to game the system to shave off a few minutes from the provisioning process by manually approving the order we just placed (since we have access to the order queue), but we stayed true to the experiment and let it be approved as it normally would be. We didn't have to wait long:

SoftLayer Private Clouds

At 9:42am, our order was approved, and the pressure was on. How long would it take before we were able to log into the CloudStack portal to create a virtual machine? I'd walked over to Phil's desk 12 minutes ago, and we still had to get two physical servers online and configured to work with each other on CloudPlatform. Luckily, the automated provisioning process took on the brunt of that pressure.

Both server orders were sent to the data center, and the provisioning system selected two pieces of hardware that best matched what we needed. Our exact configurations weren't available, so an SBT in the data center was dispatched to make the appropriate hardware changes to meet our needs, and the automated system kicked into high gear. IP addresses were assigned to the management and host servers, and we were able to monitor each server's progress in the customer portal. The hardware was tested and prepared for OS install, and when it was ready, the base operating systems were loaded — CentOS 6 on the management server and Citrix XenServer 6 on the host server. After CentOS 6 finished provisioning on the management server, CloudStack was installed. Then we got an email:

SoftLayer Private Clouds

At 11:24am, less than two hours from when I walked over to Phil's desk, we had two servers online and configured with CloudStack, and we were ready to provision our first virtual machines in our private cloud environment.

We logged into CloudStack and added our first instance:

SoftLayer Private Clouds

We configured our new instance in a few clicks, and we clicked "Launch VM" at 11:38am. It came online in just over 3 minutes (11:42am):

SoftLayer Private Clouds

I got from "walking to Phil's desk" to having a multi-server private cloud infrastructure running a VM in exactly two hours and twelve minutes. For fun, I created a second VM on the host server, and it was provisioned in 31.7 seconds. It's safe to say that the claim that SoftLayer takes "hours" to provision a private cloud has officially been confirmed, but we thought it would be fun to add one more wrinkle to the system: What if we wanted to add another host server in a different data center?

From the "Hardware" tab in the SoftLayer portal, we selected "Add Zone" from the "Actions" menu in the "Private Clouds" section, and we chose a host server with four portable IP addresses in WDC01. The zone was created, the host server went through the same hardware provisioning process as our initial deployment, and our new host server was online in < 2 hours. We jumped into CloudStack, and the new zone was created with our host server ready to provision VMs in Washington, D.C.

Given how quickly the instances were spinning up in the first zone, we timed a few in the second zone ... The first instance was online in about 4 minutes, and the second was running in 26.8 seconds.

SoftLayer Private Clouds

By the time I went out for a late lunch at 1:30pm, we'd spun up a new private cloud infrastructure with geographically dispersed zones that launched new cloud instances in under 30 seconds. Not bad.

Don't take my word for it, though ... Order a SoftLayer Private Cloud and see for yourself.

-@khazard

July 5, 2012

Bandwidth Utilization: Managing a Global Network

SoftLayer has over 1,750 Gbit/s of network capacity. In each of our data centers and points of presence, we have an extensive library of peering relationships and multiple 10 Gbit/s connections to independent Tier 1 carriers. We operate one of the fastest, most reliable networks on the planet, and our customers love it.

From a network operations standpoint, that means we have our work cut out for us to keep everything running smoothly while continuing to build the network to accommodate a steady increase in customer demand. It might be easier to rest on our laurels and simply maintain what we already have in place, but when you look at the trend of bandwidth usage over the past 18 months, you'll see why we need to be proactive about expanding our network:

Long Term Bandwidth Usage Trend

The purple line above plots the 95th percentile of weekly outbound bandwidth utilization on the SoftLayer network, and the red line shows the linear trend of that consumption over time. From week to week, the total usage appears relatively consistent, growing at a steady rate, but when you look a little deeper, you get a better picture of how dynamic our network actually is:
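
For anyone wondering what "the 95th percentile of weekly outbound bandwidth utilization" actually means: take all of the usage samples for the period, sort them, and discard the top 5 percent; the highest remaining sample is the 95th percentile. A quick sketch with made-up sample data:

# 95th percentile utilization: sort the samples, drop the top 5 percent,
# and take the highest remaining value. Sample data is made up.
import random

def percentile_95(samples_mbps):
    ordered = sorted(samples_mbps)
    keep = int(len(ordered) * 0.95)        # number of samples kept
    return ordered[max(keep - 1, 0)]

# Pretend these are 5-minute outbound readings over one week (2,016 samples).
week_of_samples = [random.uniform(400, 900) for _ in range(2016)]
week_of_samples[:20] = [1500] * 20         # a handful of short-lived spikes

print("95th percentile: %.0f Mbps" % percentile_95(week_of_samples))
print("Peak:            %.0f Mbps" % max(week_of_samples))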

SoftLayer Weekly Bandwidth Usage

The animated gif above shows the 2-hour average of bandwidth usage on our entire network over a seven-week period (times in CDT). As you can see, on a day-to-day basis, consumption fluctuates pretty significantly. The NOC (Network Operations Center) needs to be able to accommodate every spike of usage at any time of day, and our network engineering and strategy teams have to stay ahead of the game when it comes to planning points of presence and increasing bandwidth capacity to accommodate our customers' ever-expanding needs.

But wait. There's more.

Let's go one level deeper and look at a graph of the 95th percentile bandwidth usage on 5-minute intervals from one week in a single data center:

Long Term Bandwidth Usage Trend

The variations in usage are even more dramatic. Because we have thirteen data centers geographically dispersed around the world with an international customer base, the variations you see in total bandwidth utilization understate the complexity of our network's bandwidth usage. Customers targeting the Asian market might host content in SNG01, and the peaks in bandwidth consumption from Singapore will counterbalance the valleys of consumption at the same time in the United States and Europe.

With that in mind, here's a challenge for you: Looking at the graph above, if the times listed are in CDT, which data center do you think that data came from?

It would be interesting to look at weekly usage trends, how those trends are changing and what those trends tell us about our customer base, but that assessment would probably be "information overload" in this post, so I'll save that for another day.

-Dani

P.S. If you came to this post expecting to see "a big truck" or "a series of tubes," I'm sorry I let you down.

January 10, 2012

Web Development - HTML5 - Custom Data Attributes

I recently worked on a project that involved creating promotion codes for our clients. I wanted to make this tool as simple as possible to use, and because it involved dealing with thousands of our products in dozens of categories with custom pricing for each of those products, I had to find a generic way to handle client-side form validation. I didn't want to write custom JavaScript functions for each of the required inputs, so I decided to use custom data attributes.

Last month, we started a series focusing on web development tips and tricks with a post about JavaScript optimization. In this installment, we'll cover how to use HTML5 custom data attributes to assist you in validating forms.

Custom data attributes for elements are "[attributes] in no namespace whose name starts with the string 'data-', has at least one character after the hyphen, is XML-compatible, and contains no characters in the range U+0041 to U+005A (LATIN CAPITAL LETTER A to LATIN CAPITAL LETTER Z)." Thanks W3C. That definition is bookish, so let's break it down and look at some examples.

Valid:

<div data-name="Philip">Mr. Thompson is an okay guy.</div>
<a href="softlayer.com" data-company-name="SoftLayer" data-company-state="TX">SoftLayer</a>
<li data-color="blue">Smurfs</li>

Invalid:

// This attribute is not prefixed with 'data-'
    <h2 database-id="244">Food</h2>
 
// These 2 attributes contain capital letters in the attribute names
    <p data-firstName="Ashley" data-lastName="Thompson">...</p>
 
// This attribute does not have any valid characters following 'data-'
    <img src="/images/pizza.png" data-="Sausage" />

Now that you know what custom data attributes are, why would we use them? Custom attributes allow us to relate specific information to particular elements. This information is hidden from the end user, so we don't have to worry about the data cluttering screen space, and we don't have to create separate hidden elements whose purpose is to hold custom data (which is just bad practice). A JavaScript programmer can use this data to many different ends. Among the most common use cases are manipulating elements, providing custom styles (using CSS) and performing form validation. In this post, we'll focus on form validation.

Let's start out with a simple form with two radio inputs. Since I like food, our labels will be food items!

<input type="radio" name="food" value="pizza" data-sl-required="food" data-sl-show=".pizza" /> Pizza
<input type="radio" name="food" value="sandwich" data-sl-required="food" data-sl-show="#sandwich" /> Sandwich
<div class="hidden required" data-sl-error-food="1">A food item must be selected</div>

Here we have two standard radio inputs with some custom data attributes and a hidden div (using CSS) that contains our error message. The first input has a name of food and a value of pizza – these will be used on the server side once our form is submitted. There are two custom data attributes for this input: data-sl-required and data-sl-show. I am defining the data attribute data-sl-required to indicate that this radio button is required in the form and one of them must be selected to pass validation. Note that I have prefixed required with sl- to namespace my data attribute. required is generic and I don't want to have any conflicts with any other attributes – especially ones written for other projects! The value of data-sl-required is food, which I have tied to the div with the attribute data-sl-error-food (more on this later).

The second custom attribute is data-sl-show with a selector of .pizza. (To review selectors, jump back to the JavaScript optimization post.) This data attribute will be used to show a hidden div that contains the class pizza when the radio button is clicked. The sandwich radio input has the same data attributes but with slightly different values.

Now we can move on to our HTML with the hidden text inputs:

<div class="hidden pizza">
    Pizza type: <input type="text" name="pizza" data-sl-required="pizza" data-sl-depends="{&quot;type&quot;:&quot;radio&quot;,&quot;name&quot;:&quot;food&quot;,&quot;value&quot;:&quot;pizza&quot;}" />
    <div class="hidden required" data-sl-error-pizza="1">The pizza type must not be empty</div>
</div>
 
<div class="hidden" id="sandwich">
    Sandwich: <input type="text" name="sandwich" data-sl-required="sandwich" data-sl-depends="{&quot;type&quot;:&quot;radio&quot;,&quot;name&quot;:&quot;food&quot;,&quot;value&quot;:&quot;sandwich&quot;}" />
    <div class="hidden required" data-sl-error-sandwich="1">The sandwich must not be empty</div>
</div>

These are hidden divs that contain text inputs that will be used to input more data depending on the radio input selected. Notice that the first outer div has a class of pizza and the second outer div has an id of sandwich. These values tie back to the data-sl-show selectors for the radio inputs. These text inputs also contain the data-sl-required attribute like the previous radio inputs. The data-sl-depends data attribute is a fun attribute that contains a JSON-encoded object that's been run through PHP's htmlentities() function. For readability, the data-sl-depends values contain:

{"type":"radio","name":"food","value":"pizza"}
{"type":"radio","name":"food","value":"sandwich"}

This first attribute says that our text input “depends” on the radio input with the name food and value pizza to be visible and selected in order to be processed as required. You can just imagine the possibilities of combinations you can create to make very custom functionality with very generic JavaScript.

Now we can examine our JavaScript to make sense of all these custom data attributes. Note that I'll be using Mootools' syntax, but the same can fairly easily be accomplished using jQuery or your favorite JavaScript framework. I'm going to start by creating a DataForm class. This will be generic enough so that you can use it in multiple forms and it's not tied to this specific instance. Reuse is good! To have it fit our needs, we're going to pass some options when instantiating it.

new DataForm({
    ...,
    dataAttributes: {
        required: 'data-sl-required',
        errorPrefix: 'data-sl-error',
        depends: 'data-sl-depends',
        show: 'data-sl-show'
    }
});

As you can see, I'm using the data attribute names from our form as the options – this will allow you to create your own data attribute names depending on your uses. We first need to make our hidden divs visible whenever our radio buttons are clicked – we'll use our initData() method for that.

initData: function() {
    var attrs = this.options.dataAttributes,
        divs = [];
 
    $$('input[type=radio]['+attrs.show+']').each(function(input) {
        var div = $$(input.get(attrs.show));
        divs.push(div);
        input.addEvent('change', function(e) {
            divs.each(function(div) { div.hide(); });
            div.show();
        });
    });
}

This method grabs all the radio inputs with the show attribute (data-sl-show) and adds an onchange event to each of them. When a radio input is checked, it first hides all the divs, and then shows the div that's associated with that radio input.
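Incidentally, the class shell that ties those options and initData() together isn't shown here. Assuming MooTools' Class and Options mixin, a skeleton might look something like this (validate() is a method name I'm assuming for the checks that follow; the method bodies are elided):

var DataForm = new Class({
    Implements: [Options],

    // Defaults; the attribute names can be overridden per form, as shown above.
    options: {
        dataAttributes: {
            required: 'data-sl-required',
            errorPrefix: 'data-sl-error',
            depends: 'data-sl-depends',
            show: 'data-sl-show'
        }
    },

    initialize: function(options) {
        this.setOptions(options); // merge caller options over the defaults
        this.initData();          // wire up the show/hide behavior right away
    },

    initData: function() {
        // ... as shown above
    },

    validate: function() {
        // ... the radio and text-input checks that follow
    }
});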

Great! Now we have our text inputs showing and hiding depending on which radio button is selected. Now onto the actual validation. First, we'll make sure our required radio inputs are checked:

$$('input[type=radio]['+attrs.required+']:not(:checked)').each(function(input) {
    var name = input.get('name');
    checkError(name, isRadioWithNameSelected(name));
});

This grabs all the unchecked radio inputs with the required attribute (data-sl-required) and uses isRadioWithNameSelected() to check whether any radio with that same name is selected. The checkError() function then shows or hides the error div (data-sl-error-*) depending on whether the group passed. Don't worry: you'll see how these functions are implemented in the JSFiddle below.
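In the meantime, here's a rough idea of what those two helpers could look like. This is only a sketch based on the attribute naming used above (the JSFiddle version may differ), and it assumes the same hide()/show() element shortcuts used earlier:

function isRadioWithNameSelected(name) {
    // True if any radio button in the group is currently checked.
    return $$('input[type=radio][name="' + name + '"]:checked').length > 0;
}

function checkError(name, isValid) {
    // Show or hide every element carrying data-sl-error-<name> (e.g. data-sl-error-food).
    $$('[data-sl-error-' + name + ']').each(function(div) {
        if (isValid) {
            div.hide();
        } else {
            div.show();
        }
    });
}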

Now we can check our text inputs:

$$('input[type=text]['+attrs.required+']:visible').each(function(input) {
    var name = input.get('name'),
        depends = input.get(attrs.depends),
        value = input.get('value').trim();
 
    if (depends) {
        depends = JSON.decode(depends); // parse the JSON string stored in the attribute
        switch (depends.type) {
            case 'radio':
                if (depends.name && depends.value) {
                    var radio = $$('input[type=radio][name="'+depends.name+'"][value="'+depends.value+'"]:checked');
                    if (!radio.length) { // $$ always returns a collection, so test its length
                        checkError(input.get(attrs.required), true);
                        return;
                    }
                }
                break;
        }
    }
 
    checkError(name, value!='');
});

This grabs the required, visible text inputs and determines whether they are empty. For each one, we look at its depends object to find the associated radio input. If that radio is checked and the text input is empty, the error message appears; if that radio is not checked, the text input isn't evaluated at all. Keeping depends.type in a switch statement allows for easy expansion to other types. There are plenty of other relationships you could evaluate ... I'll let you come up with more for yourself.
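As one hypothetical example (not part of the original code), a 'checkbox' case could be dropped into the switch above so that a text input is only required while a related checkbox is ticked:

case 'checkbox':
    // Only require the text input while a checkbox with the given name is ticked.
    if (depends.name) {
        var box = $$('input[type=checkbox][name="' + depends.name + '"]:checked');
        if (!box.length) {
            checkError(input.get(attrs.required), true); // dependency not met: clear any error and skip
            return;
        }
    }
    break;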

That concludes our usage of custom data attributes in form validation. This really only scratches the surface. Data attributes allow you – the creative developer – to interact in a new, HTML-valid way with your web pages.

Check out the working example of the above code at jsfiddle.net. To read more on custom data attributes, see what Google has to say. If you want to see really cool functionality that uses data attributes plus so much more, check out Aaron Newton's Behavior module over at clientcide.com.

Happy coding!

-Philip

December 27, 2011

186,282.4 Miles Per Second

Let's say there are 2495 miles separating me and the world's foremost authority on orthopedics, who lives in Vancouver, Canada. If I needed medical advice on how to remove a screwdriver from the palm of my hand, courtesy of a Christmas toy with "some assembly required," I'd be pretty happy I live in the year 2011. Here are a few of the communication methods I might have had to settle for in years past:

On Foot: The average human can sustain a walking pace of about 3.5 mph. At that rate, it would take a messenger 29.7 days of non-stop walking to get a description of the problem and a drawing of the damage to that doctor. Because the doctor in this theoretical scenario is the only person on the planet who knows how to perform the screwdriver removal surgery, the doctor would have to accompany the messenger back to Texas, and I am fairly sure that by the time they arrived, they'd have to visit a grave with a terrible epitaph like "He got screwed," or they'd find me answering to a crass nickname like "Stumpy."

On Horseback: A galloping horse can sustain around 30 mph, so with the help of a couple of equestrian friends, the message could reach the doctor in about 3.5 days if the horse ran the whole journey without stopping. The doctor could then saddle up and hit the trail back to Houston, arriving about 7 days after the message left. In that span of time, I'd only be able to wave to him with one hand, given the inevitable amputation.

Via High-Speed Rail: With an average speed of 101 mph, it would take a mere 24.7 hours to get from Houston to Vancouver, so if this means of communication were the only one used, I could have the doctor at my bedside in a little over 48 hours. That turnaround time might mean my hand would be saved, but the delay would still yield a terrible headache and a lot of embarrassment ... seeing as a screwdriver in your hand is relatively noticeable at Christmas parties.

Via Commercial Flight: If the message was taken by plane and the doctor returned by plane, the round trip would be around 12.4 hours at an average rate of 400 mph ... I'd only have to endure half a day of mockery.

Via E-mail: With the multimedia capabilities of email, the doctor could be sent a picture of the damage instantly and a surgeon in Houston could be instructed on how to best save my hand. There would be little delay, but there are no guarantees that the stand-in surgeon would be able to correctly execute on the instructions given by this theoretical world's only orthopedic surgeon.

Via Video Chat: In milliseconds, a video connection could be made between the stand-in surgeon and the orthopedic specialist. The specialist could watch and instruct the stand-in surgeon on how to complete the surgery, and I'd be using both hands again by Christmas morning. Technology is also getting to the point where the specialist could perform parts of the surgery remotely ... Let's just hope they use a good network connection on both ends, since any latency would be pretty significant.

I started thinking about the amazing speed with which we access information when I met with CTO Duke Skarda. He gave a few examples of our customers that piqued his interest, given the innovative nature of their businesses, and one in particular made me realize how far we've come in the availability and speed of our access to information:

The company facilitates advertising on the Internet by customizing the advertising experience for each visitor, auctioning off ad space to companies that fit that particular visitor's profile. In the simplest sense, a website has a blank area for an advertisement, and the site sends non-sensitive information about the visitor to an advertising network. The advertising network then distributes that information to multiple advertisers, who process it, generate targeted ads, and place bids to "purchase" the space for that visitor. The winner of the auction is determined, and the winner's ad is placed on the website.

All of this is done in under a second, before the visitor even knows the process took place.
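To make that sequence concrete, here's a deliberately simplified sketch of the auction in JavaScript. Every function and property name is hypothetical, and it glosses over all of the network hops that make the timing so impressive:

// A purely illustrative model of the ad auction; every name here is made up.
function fillAdSlot(slotElement, visitorProfile, advertisers) {
    // 1. The ad network fans the non-sensitive visitor profile out to its advertisers.
    var bids = advertisers.map(function(advertiser) {
        return advertiser.bidFor(visitorProfile); // each bid: { amount: ..., adHtml: ... }
    });

    // 2. The highest bid wins the auction.
    bids.sort(function(a, b) { return b.amount - a.amount; });
    var winner = bids[0];

    // 3. The winner's ad fills the blank slot, ideally before the page finishes rendering.
    slotElement.innerHTML = winner.adHtml;
}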

We live in a time of instant access. We are only limited by the speed of light, a blazing 186,282.4 miles/second. At that speed, a message could theoretically travel all the way around the world (roughly 24,900 miles) in about 134 milliseconds. Businesses use this speed to create and market products and services to the global market, and I can't wait to see what tomorrow holds ... Maybe some kind of technology that prevents screwdrivers from piercing hands?

-Clayton
