Posts Tagged 'Network'

August 12, 2015

Network Performance 101: What is latency, and why does it matter?

We’ve all been there. Waiting for a web page to load can be so frustrating that we end up just closing out. You might ask yourself, “Hey, I have high-speed Internet. Why is this happening to me?” Well, there are a lot of factors outside your control that … control page loads. And whether you have an online store, run big data solutions, or have employees around the world accessing files over your network, you never want to hear that slow data transfer is costing you a sale or dragging down employee productivity.

So why are some pages so much slower to load than others?
It could be that poorly written code or large images are slowing the load on the backend, but slow page loads can also be caused by network latency. This might sound elementary, but data is not just floating out there in some non-physical Internet space. In reality, data is stored on hard drives … somewhere. Network connectivity provides a path for that data to travel to end users around the world, and that connectivity can vary significantly—depending on how far it’s going, how many times the data has to hop between service providers, how much bandwidth is available along the way, the other data traveling across the same path, and a number of other variables.

The measurement of how long it takes data to travel between two connected points is called network latency: the amount of time it takes a packet of data to get from one place to another.

Understanding Network Latency
Theoretically, data can travel at the speed of light across optical fiber network cables, but in practice, data typically travels slower than that due to the variables we referenced in the previous section. If a network connection doesn’t have any available bandwidth capacity, data might temporarily queue up to wait for its turn to travel across the line. If a service provider’s network doesn’t route a network path optimally, data could be sent hundreds or thousands of miles away from the destination in the process of routing to the destination. These kinds of delays and detours lead to higher network latency, which in turn leads to slower page loads and download speeds.
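
For a rough sense of that theoretical floor, here's a small Python sketch of the best-case round trip physics allows. The distances and the two-thirds-of-light-speed figure for fiber are rule-of-thumb assumptions, not measurements of any particular route:

# Rough lower bound on round-trip latency imposed by physics alone: light in
# optical fiber travels at roughly two-thirds of its speed in a vacuum.
SPEED_OF_LIGHT_KM_PER_S = 299_792
FIBER_SPEED_KM_PER_S = SPEED_OF_LIGHT_KM_PER_S * 2 / 3

def min_rtt_ms(distance_km):
    """Best-case round trip, in milliseconds, over a perfectly straight fiber path."""
    return (distance_km / FIBER_SPEED_KM_PER_S) * 2 * 1000

# Hypothetical straight-line distances, purely for illustration.
for route, km in [("Dallas to Frankfurt", 8200), ("Dallas to Singapore", 16000)]:
    print(f"{route}: at least {min_rtt_ms(km):.0f} ms before any queuing or detours")

Every real route adds distance, queuing, and handoffs on top of that floor, which is where the variables above come in.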

We express network latency in milliseconds (that’s 1,000 milliseconds per second), and while a few thousandths of a second may not mean much to us as we’re living our daily lives, those milliseconds are often the deciding factors for whether we stay on a webpage or give up and try another site. As consumers of high-speed Internet, we like what we like, and we want what we want when we want it. In the financial sector, milliseconds can mean billions of dollars in gains or losses from trade transactions on a day-to-day basis.

Logical conclusion: Everyone wants the lowest network latency to the greatest number of users.

Common Approaches to Minimize Network Latency
Since our shared goal is to minimize latency, the most common approaches to addressing network latency involve limiting the number of variables that can slow data down. While we don’t have complete control over how our data travels across the Internet, we can do a few things to keep our network latency in line:

  • Distribute data around the world: Users in different locations can pull data from a location that’s geographically close to them. Because the data is closer to the users, it is handed off fewer times, it has a shorter distance to travel, and inefficient routing is less likely to cause a significant performance impact.
  • Provision servers with high-capacity network ports: Huge volumes of data can travel to and from the server every second. If packets are delayed due to fully saturated ports, milliseconds of time pass, pages load slower, download speeds drop, and users get unhappy.
  • Understand how your providers route traffic: When you know how your data is transferred to users around the world, you can make better decisions about where you host your data.

How SoftLayer Minimizes Network Latency
To minimize latency, we took a unique approach to building our network. All of our data centers are connected to network points of presence. All of our network points of presence are connected to each other via our global backbone network. And by maintaining our own global backbone network, our network operations team is able to control network paths and data handoffs much more granularly than if we relied on other providers to move data between geographies.

SoftLayer Private Network

For example, if a user in Berlin wants to watch a cat video hosted on a SoftLayer server in Dallas, the packets of data that make up that cat video will travel across our backbone network (which is exclusively used by SoftLayer traffic) to Frankfurt, where the packets would be handed off to one of our peering or transit public network partners to get to the user in Berlin.

Without a global backbone network, the packets would be handed off to a peering or transit public network provider in Dallas, and that provider would route the packets across its network and/or hand the packets off to another provider at a network hop, and the packets would bounce their way to Germany. It’s entirely possible that the packets could get from Dallas to Berlin with the same network latency with or without the global backbone network, but without the global backbone network, there are a lot more variables.

In addition to building a global backbone network, we also segment public, private, and management traffic onto different network ports so that different types of traffic can be transferred without interfering with each other.

SoftLayer Private Network

But at the end of the day, all of that network planning and forethought doesn’t amount to a hill of beans if you can’t see the results for yourself. That’s why we put speed tests on our website so you can check out our network yourself (for more on speed tests, check out this blog post).

TL;DR: Network Latency
Your users want your data as quickly as you can get it to them. The time it takes for your data to get to them across the Internet is called network latency. The more control you (or your provider) have over your data’s network path, the more consistent (and lower) your network latency will be.

Stay tuned. Next month we will be covering Network Performance 101: Security, where we’ll discuss all things cloud security—including answering your burning questions: Can other people see or access my data in a public cloud? Is my data more prone to hackers? And what safeguards does SoftLayer have in place to protect data?

-JRL

March 30, 2015

The Importance of Data's Physical Location in the Cloud

If top-tier cloud providers use similar network hardware in their data centers and connect to the same transit and peering bandwidth providers, how can SoftLayer claim to provide the best network performance in the cloud computing industry?

Over the years, I've heard variations of that question asked dozens of times, and it's fairly easy to answer with impressive facts and figures. All SoftLayer data centers and network points of presence (PoPs) are connected to our unique global network backbone, which carries public, private, and management traffic to and from servers. Some back-of-the-envelope calculations using our network connectivity table reveal that we have more than 2,500Gbps of bandwidth connectivity with some of the largest transit and peering bandwidth providers in the world (and that total doesn't even include the private peering relationships we have with other providers in various regional markets). Additionally, customers may order servers with up to 10Gbps network ports in our data centers.

For the most part, those stats explain our differentiation, but part of the bigger network performance story is still missing, and to a certain extent it has been untold—until today.

The 2,500+Gbps of bandwidth connectivity we break out in the network connectivity table only accounts for the on-ramps and off-ramps of our network. Our global network backbone is actually made up of an additional 2,600+Gbps of bandwidth connectivity ... and all of that backbone connectivity transports SoftLayer-related traffic.

This robust network architecture streamlines the access to and delivery of data on SoftLayer servers. When you access a SoftLayer server, the network is designed to bring you onto our global backbone as quickly as possible at one of our network PoPs, and when you're on our global backbone, you'll experience fewer hops (and a more direct route that we control). When one of your users requests data from your SoftLayer server, that data travels across the global backbone to the nearest network PoP, where it is handed off to another provider to carry the data the "last mile."

With this controlled environment, I decided to undertake an impromptu science experiment to demonstrate how location and physical distance affect network performance in the cloud.

Speed Testing on the SoftLayer Global Network Backbone

I work in the SoftLayer office in downtown Houston, Texas. In network-speak, this location is HOU04. You won't find that location on any data center or network tables because it's just an office, but it's connected to the same global backbone as our data centers and network points of presence. From my office, the "last mile" doesn't exist; when I access a SoftLayer server, my bits and bytes only travel across the SoftLayer network, so we're effectively cutting out a number of uncontrollable variables in the process of running network speed tests.

For better or worse, I didn't tell any network engineers that I planned to run speed tests to every available data center and share the results I found, so you're seeing exactly what I saw with no tomfoolery. I just fired up my browser, headed to our Data Centers page, and made my way down the list using the SpeedTest option for each facility. Customers often go through this process when trying to determine the latency, speeds, and network path that they can expect from servers in each data center, but if we look at the results collectively, we can learn a lot more about network performance in general.

With the results, we'll discuss how network speed tests work, what the results mean, and why some might be surprising. If you're feeling scientific and want to run the tests yourself, you're more than welcome to do so.

The Ookla SpeedTests we link to from the data centers table measured the latency (ping time), jitter (variation in latency), download speeds, and upload speeds between the user's computer and the data center's test server. To run this experiment, I connected my MacBook Pro via Ethernet to a 100Mbps wired connection. At the end of each speed test, I took a screenshot of the performance stats:

SoftLayer Network Speed Test

To save you the trouble of trying to read all of the stats on each data center as they cycle through that animated GIF, I also put them into a table (click the data center name to see its results screenshot in a new window):

Data Center   Latency (ms)   Download Speed (Mbps)   Upload Speed (Mbps)   Jitter (ms)
AMS01         121            77.69                   82.18                 1
DAL01         9              93.16                   87.43                 0
DAL05         7              93.16                   83.77                 0
DAL06         7              93.11                   83.50                 0
DAL07         8              93.08                   83.60                 0
DAL09         11             93.05                   82.54                 0
FRA02         128            78.11                   85.08                 0
HKG02         184            50.75                   78.93                 2
HOU02         2              93.12                   83.45                 1
LON02         114            77.41                   83.74                 2
MEL01         186            63.40                   78.73                 1
MEX01         27             92.32                   83.29                 1
MON01         52             89.65                   85.94                 3
PAR01         127            82.40                   83.38                 0
SJC01         44             90.43                   83.60                 1
SEA01         50             90.33                   83.23                 2
SNG01         195            40.35                   72.35                 1
SYD01         196            61.04                   75.82                 4
TOK02         135            75.63                   82.20                 2
TOR01         40             90.37                   82.90                 1
WDC01         43             89.68                   84.35                 0

By performing these speed tests on the SoftLayer network, we can actually learn a lot about how speed tests work and how physical location affects network performance. But before we get into that, let's take note of a few interesting results from the table above:

  • The lowest latency from my office is to the HOU02 (Houston, Texas) data center. That data center is about 14.2 miles away as the crow flies.
  • The highest latency results from my office are to the SYD01 (Sydney, Australia) and SNG01 (Singapore) data centers. Those data centers are at least 8,600 and 10,000 miles away, respectively.
  • The fastest download speed observed is 93.16Mbps, and that number was seen from two data centers: DAL01 and DAL05.
  • The slowest download speed observed is 40.35Mbps from SNG01.
  • The fastest upload speed observed is 87.43Mbps to DAL01.
  • The slowest upload speed observed is 72.35Mbps to SNG01.
  • The upload speeds observed are faster than the download speeds from every data center outside of North America.

Are you surprised that we didn't see any results closer to 100Mbps? Is our server in Singapore underperforming? Are servers outside of North America more selfish to receive data and stingy to give it back?

Those are great questions, and they actually jumpstart an explanation of how the network tests work and what they're telling us.

Maximum Download Speed on 100Mbps Connection

If my office is 2 milliseconds from the test server in HOU02, why is my download speed only 93.12Mbps? To answer this question, we need to understand that to perform these tests, a connection is made using Transmission Control Protocol (TCP) to move the data, and TCP does a lot of work in the background. The download is broken into a number of tiny chunks called packets and sent from the sender to the receiver. TCP wants to ensure that each packet that is sent is received, so the receiver sends an acknowledgement back to the sender to confirm that the packet arrived. If the sender is unable to verify that a given packet was successfully delivered to the receiver, the sender will resend the packet.

This system is pretty simple, but in actuality, it's very dynamic. TCP wants to be as efficient as possible ... to send the fewest packets needed to get the entire message across. To accomplish this, TCP adjusts how much data it allows to be in flight (sent but not yet acknowledged) at any given time. The receiver dictates that amount by advertising a receive window, and it continually analyzes and adjusts that window to keep as much data moving as possible without the connection becoming unstable. Some operating systems are better than others when it comes to tweaking and optimizing TCP transfer rates, but the work TCP does to ensure that packets are sent and received without error adds overhead, and that overhead limits the maximum speed we can achieve.

Understanding the SNG01 Results

Why did my SNG01 speed test max out at a meager 40.35Mbps on my 100Mbps connection? Well, now that we understand how TCP works behind the scenes, we can see why our download speeds from Singapore are lower than we'd expect. Latency between the sending and successful receipt of a packet plays into TCP’s assessment of how stable the connection is. With higher ping times, TCP keeps less data in flight than it would with lower ping times, so that less work is lost (and has to be reproduced and resent) if a packet goes missing, and every acknowledgement has to make a long round trip before the next batch of data can be sent.

With our global backbone optimizing the network path of the packets between Houston and Singapore, the more than 10,000-mile journey, the nature of TCP, and my computer's TCP receive window adjustments all factor into the download speeds recorded from SNG01. Looking at the results in the context of the distance the data has to travel, our results are actually well within the expected performance.
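
To sanity-check that expectation, consider the rough ceiling TCP hits when one receive window's worth of data must be acknowledged across each round trip. This Python sketch is illustrative only; the 1 MB window is an assumption, not a value measured during these tests:

# Throughput is roughly capped at (window size) / (round-trip time), because the
# sender has to pause for acknowledgements once a full window is in flight.
def throughput_ceiling_mbps(window_bytes, rtt_ms):
    return (window_bytes * 8) / (rtt_ms / 1000.0) / 1_000_000

WINDOW = 1_000_000  # assume an effective receive window of about 1 MB

for dc, rtt_ms in [("HOU02", 2), ("SNG01", 195)]:
    print(f"{dc}: ~{throughput_ceiling_mbps(WINDOW, rtt_ms):,.0f} Mbps ceiling at {rtt_ms} ms")

With that assumed window, the 2 ms path to Houston is limited by the 100Mbps port long before the window matters, while the 195 ms path to Singapore hits a ceiling in the same ballpark as the download speed recorded in the table above.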

Because the default behavior of TCP is partially to blame for these results, we could tweak the test and tune our configurations to deliver faster speeds. To confirm that improvements can be made relatively easily, we can just look at the answer to our third question...

Upload > Download?

Why are the upload speeds faster than the download speeds once latency jumps from 50ms to 114ms? Every location in North America is within 2,000 miles of Houston, while the closest location outside of North America is about 5,000 miles away. With what we've learned about how TCP and physical distance play into download speeds, that jump in distance explains why the download speeds drop from 90.33Mbps to 77.41Mbps as soon as we cross an ocean, but how can the upload speeds to Europe (and even APAC) stay on par with their North American counterparts? The only difference between our download path and upload path is which side is sending and which side is receiving. Since the receiver determines the size of the TCP receive window, the most likely culprit in the discrepancy between download and upload speeds is TCP windowing.

A Linux server is built and optimized to be a server, whereas my Mac OS X laptop has a lot of other responsibilities, so it shouldn't come as a surprise that the default TCP receive window handling is better on the server side. With changes to the way my laptop handles TCP, download speeds would likely improve significantly. Additionally, if we wanted to push the envelope even further, we might consider using a different transfer protocol to take advantage of the consistent, controlled network environment.
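
As a purely illustrative sketch of the kind of change involved, an application can ask the operating system for a larger receive buffer on its own sockets; whether the OS grants the full request, and how aggressively it auto-tunes beyond it, varies by platform:

import socket

# Ask for a bigger receive buffer so TCP can keep more data in flight on
# high-latency paths; the OS may clamp this to its configured maximum.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)  # request 4 MB

granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"Receive buffer the OS actually granted: {granted} bytes")

System-wide settings have a similar effect, and server operating systems tend to ship with more generous defaults, which is consistent with the upload numbers above.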

The Importance of Physical Location in Cloud Computing

These real-world test results under controlled conditions demonstrate how significantly data's geographic proximity to its users affects the network performance those users perceive. We know that the network latency in a 14-mile trip will be lower than the latency in a 10,000-mile trip, but we often don't think about the ripple effect latency has on other network performance indicators. And this experiment actually controls a lot of other variables that can exacerbate the performance impact of geographic distance. The tests were run on a 100Mbps connection because that's a pretty common maximum port speed, but if we ran the same tests on a GigE line, the difference would be even more dramatic. Proof: HOU02 @ 1Gbps v. SNG01 @ 1Gbps

Let's apply our experiment to a real-world example: Half of our site's user base is in Paris and the other half is in Singapore. If we chose to host our cloud infrastructure exclusively from Paris, our users would see dramatically different results: users in Paris would have sub-10ms latency, while users in Singapore would see about 300ms of latency. Obviously, operating cloud servers in both markets would be the best way to ensure peak performance in both locations, but what if you can only afford to provision your cloud infrastructure in one location? Where would you choose to provision that infrastructure to provide a consistent user experience for your audience in both markets?

Given what we've learned, we should probably choose a location with roughly the same latency to both markets. We can use the SoftLayer Looking Glass to see that San Jose, California (SJC01) would be a logical midpoint ... At this second, the latency between SJC and PAR on the SoftLayer backbone is 149ms, and the latency between SJC and SNG is 162ms, so both markets would experience very similar performance (all else being equal). Our users in the two markets won't experience mind-blowing speeds, but they won't suffer mind-numbing speeds either.

The network performance implications of physical distance apply to all cloud providers, but because of the SoftLayer global network backbone, we're able to control many of the variables that lead to higher (or inconsistent) latency to and from a given data center. The longer a single provider controls a packet's route, the more efficiently that traffic will move. You might see the same latency to another provider's cloud infrastructure from a given location at a given time across the public Internet, but you certainly won't see the same consistency from all locations at all times. SoftLayer has spent millions of dollars to build, maintain, and grow our global network backbone to transport public and private network traffic, and as a result, we feel pretty good about claiming to provide the best network performance in cloud computing.

-@khazard

December 17, 2014

Does physical location matter “in the cloud”?

By now everyone understands that the cloud is indeed a place on Earth, but there still seems to be confusion around why global expansion by way of adding data centers is such a big deal. After all, if data is stored “in the cloud,” why wouldn’t adding more servers in our existing data centers suffice? Well, there’s a much more significant reason for adding more data centers than just being able to host more data.

As we’ve explained in previous blog posts, Globalization and Hosting: The World Wide Web is Flat and Global Network: The Proof is in the Traceroute, our strategic objective is to get a network point of presence (PoP) within 40ms of all our users (and our users' users) in order to provide the best network stability and performance possible anywhere on the planet.

Data can travel across the Internet quickly, but just like anything, the farther something has to go, the longer it will take to get there. Seems pretty logical, right? But we also need to take into account that not all routes are created equal. So to deliver the best network performance, we designed our global network to bring traffic onto it at the closest possible point. Think of each SoftLayer PoP as an on-ramp to our global network backbone. The sooner a user is able to get onto our network, the quicker and more efficiently we can route them through our PoPs to a server in one of our data centers. Furthermore, once traffic is on our network, we control how it flows.

Let’s take a look at this traceroute example from the abovementioned blog post. As you are probably aware, a traceroute shows the "hops" or routers along the network path from an origin IP to a destination IP. When we were building out the Singapore data center (before the network points of presence were turned up in Asia), the author ran a traceroute from Singapore to SoftLayer.com, and immediately after the launch of the data center, ran another one.

Pre-Launch Traceroute to SoftLayer.com from Singapore

traceroute to softlayer.com (66.228.118.53), 64 hops max, 52 byte packets
 1  10.151.60.1 (10.151.60.1)  1.884 ms  1.089 ms  1.569 ms
 2  10.151.50.11 (10.151.50.11)  2.006 ms  1.669 ms  1.753 ms
 3  119.75.13.65 (119.75.13.65)  3.380 ms  3.388 ms  4.344 ms
 4  58.185.229.69 (58.185.229.69)  3.684 ms  3.348 ms  3.919 ms
 5  165.21.255.37 (165.21.255.37)  9.002 ms  3.516 ms  4.228 ms
 6  165.21.12.4 (165.21.12.4)  3.716 ms  3.965 ms  5.663 ms
 7  203.208.190.21 (203.208.190.21)  4.442 ms  4.117 ms  4.967 ms
 8  203.208.153.241 (203.208.153.241)  6.807 ms  55.288 ms  56.211 ms
 9  so-2-0-3-0.laxow-cr1.ix.singtel.com (203.208.149.238)  187.953 ms  188.447 ms  187.809 ms
10  ge-4-0-0-0.laxow-dr2.ix.singtel.com (203.208.149.34)  184.143 ms
    ge-4-1-1-0.sngc3-dr1.ix.singtel.com (203.208.149.138)  189.510 ms
    ge-4-0-0-0.laxow-dr2.ix.singtel.com (203.208.149.34)  289.039 ms
11  203.208.171.98 (203.208.171.98)  187.645 ms  188.700 ms  187.912 ms
12  te1-6.bbr01.cs01.lax01.networklayer.com (66.109.11.42)  186.482 ms  188.265 ms  187.021 ms
13  ae7.bbr01.cs01.lax01.networklayer.com (173.192.18.166)  188.569 ms  191.100 ms  188.736 ms
14  po5.bbr01.eq01.dal01.networklayer.com (173.192.18.140)  381.645 ms  410.052 ms  420.311 ms
15  ae0.dar01.sr01.dal01.networklayer.com (173.192.18.211)  415.379 ms  415.902 ms  418.339 ms
16  po1.slr01.sr01.dal01.networklayer.com (66.228.118.138)  417.426 ms  417.301 ms
    po2.slr01.sr01.dal01.networklayer.com (66.228.118.142)  416.692 ms
17  * * *

Post-Launch Traceroute to SoftLayer.com from Singapore

traceroute to softlayer.com (66.228.118.53), 64 hops max, 52 byte packets
 1  192.168.206.1 (192.168.206.1)  2.850 ms  1.409 ms  1.206 ms
 2  174.133.118.65-static.reverse.networklayer.com (174.133.118.65)  1.550 ms  1.680 ms  1.394 ms
 3  ae4.dar01.sr03.sng01.networklayer.com (174.133.118.136)  1.812 ms  1.341 ms  1.734 ms
 4  ae9.bbr01.eq01.sng02.networklayer.com (50.97.18.198)  35.550 ms  1.999 ms  2.124 ms
 5  50.97.18.169-static.reverse.softlayer.com (50.97.18.169)  174.726 ms  175.484 ms  175.491 ms
 6  po5.bbr01.eq01.dal01.networklayer.com (173.192.18.140)  203.821 ms  203.749 ms  205.803 ms
 7  ae0.dar01.sr01.dal01.networklayer.com (173.192.18.253)  306.755 ms
    ae0.dar01.sr01.dal01.networklayer.com (173.192.18.211)  208.669 ms  203.127 ms
 8  po1.slr01.sr01.dal01.networklayer.com (66.228.118.138)  203.518 ms
    po2.slr01.sr01.dal01.networklayer.com (66.228.118.142)  305.534 ms
    po1.slr01.sr01.dal01.networklayer.com (66.228.118.138)  204.150 ms
 9  * * *

After the Singapore data center launch, the number of hops was reduced by 50 percent, and the response time (in milliseconds) was reduced by 40 percent. Those are pretty impressive numbers from just lighting up a couple PoPs and a data center, and that was just the beginning of our global expansion in 2012.

That’s why we are so excited to announce the three new data centers launching this month: Mexico City, Tokyo, and Frankfurt.



Of course, this is great news for customers who require data residency in Mexico, Japan, and Germany. And yes, these new locations provide additional in-region redundancy within APAC, EMEA, and the Americas. But even customers without servers in these new facilities have reason to celebrate: Our global network backbone is expanding, so users in these markets will see even better network stability and speed to servers in every other SoftLayer data center around the world!

-JRL

November 18, 2014

Your Direct Link into the SoftLayer Cloud

Remember the days when cellular companies charged additional fees for calls placed during peak hours or for text messages that exceeded your plan?

The good news is those days are pretty much over for cellular services thanks to unlimited text and data plans. The bad news is there are cloud and hosting providers who adhere to those same old billing practices of charging customers for every single communication their servers send or receive.

At SoftLayer we do things differently. All of our servers come with terabytes of outbound bandwidth included—5TB for virtual servers and 20TB for bare metal servers. Now you probably noticed I specifically said outbound bandwidth, and that's because we charge nothing (zip, zilch) for traffic inbound to any of our servers, nor do we charge for any bandwidth usage across our Global Private Network.

Imagine the possibilities of what you could build on a Global Private Network that essentially comes free of charge just by being a SoftLayer customer.

  • How about building that true disaster recovery solution that you’re always talking about?
  • How about moving all of your backups offsite now that the necessary bandwidth requirements and costs aren’t standing in your way?
  • Or maybe it’s time to offer your app a little GSLB (global server load balancing) now that replicating data across remote sites, which hasn’t been feasible over the public Internet due to latency or security concerns, is finally practical?

Direct Link helps put all of these dreams within reach. Tap directly into our Global Private Network at connection speeds of 1Gbps or 10Gbps to establish a Direct Link at any of our 19 network PoPs (more PoPs are being added regularly). You’ll have the ability to seamlessly extend your private networks directly into SoftLayer. Not only does a Direct Link give you access to one of the world’s largest and fastest private networks, it also lets you elastically scale your compute and storage on demand.

Many companies look to the cloud as a way to reduce capex and adjust spending on demand but hesitate to move workloads due to latency or security concerns. I'd like to say that latency isn’t even worth thinking twice about at SoftLayer. But don't take my word for it; take a peek at our Looking Glass, and see for yourself. As for security, a SoftLayer Direct Link enables you to build and deliver secure services on our private network without having to expose your servers to the public Internet.

For more information on Direct Link and connectivity check out KnowledgeLayer or this blog where the author digs into the technical details and explains how enterprise customers benefit from Direct Link with GRE Tunnels.

Thanks,
JD Wells

November 11, 2014

Which storage solution is best for your project?

Before building applications around our network storage, here’s a refresher on what network storage is, how it is used, the different types available, and the best uses for each.

What is network storage? Why would you use it?

Appropriately named, network storage is storage attached to a server over our network, not to be confused with directly attached storage (DAS), which is a hard drive located in the server itself (or connected with a device like a SCSI or USB cable). Although DAS can move data to and from a server faster than network storage can (network latency and system caching work in DAS's favor), there is still a strong place for network storage.

Many different servers can access network storage, and with some network storage solutions, more than one server can get data from the same shared storage volume simultaneously. This comes in handy if one server dies, because another can pick up a storage device and start where the first left off.

With DAS, planned downtime for server upgrades, potential data loss, and provisioning larger or more servers can slow down productivity. The physical constraints of internal drives and costs associated with servers do not affect network storage.

Because SoftLayer manages the disk space of our network storage products, there’s no need to worry about rebuilding a redundant array of inexpensive disks (RAID) or replacing failed disks. If a disk fails, SoftLayer automatically replaces it and rebuilds the RAID—in most cases you would be unaware that the changes occurred.

Select network storage solutions are available with tools for your important data. Schedule snapshots of your data, promote snapshots to full volumes, or reset your data to the snapshot point.

And with network storage, downtime is minimal. Disaster recovery tools available on select storage solutions let you send a command to quickly fail over to a different data center so you can access your data if our network is ever down in a data center.

Types of Network Storage And How They Are Different

Storage Area Network (SAN) or Block Storage

Block storage works like DAS, just remotely—only a single server can access a block storage volume at a time. Using the Internet small computer system interface (iSCSI) protocol over a secure transmission control protocol/Internet protocol (TCP/IP) connection, SoftLayer's block storage has excellent features for backup and disaster recovery, and adding snapshot schedules and failover redundancy makes it a powerful enterprise solution.

Network Attached Storage (NAS) or File Storage

File storage acts like a remote file system. It has a slim operating system that allows servers to treat it like a remote directory structure. Multiple servers can share files on the same storage simultaneously. Our new consistent performance storage lets you share files quickly and easily using a network file system (NFS) with your choice of performance level and secure connections.

We also offer common Internet file system (CIFS) file storage for Windows, which requires a credential that grants access to any server on our private network. File storage can only be accessed by SoftLayer servers.

Object Storage

Object storage is a standalone storage entity with its own representational state transfer (REST) API that grants applications (not operating systems) access to the files stored there. Because object storage sits on the public network, servers in any of our data centers can access those files directly. Object storage also differs in the way files are stored: there is no directory structure; instead, metadata tags are used to categorize and search for files. In conjunction with a content delivery network (CDN), you can quickly serve files to your users or to a mobile device in close proximity.
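
As a rough sketch of what working with that REST API looks like from an application, here's an illustrative upload and download using Python's requests library. The endpoint, container, token, and metadata header below are placeholders, not real SoftLayer values:

import requests

# Placeholder endpoint and credentials -- illustrative only.
CONTAINER_URL = "https://objectstorage.example.com/v1/AUTH_account/my_container"
HEADERS = {"X-Auth-Token": "REPLACE_WITH_TOKEN"}

# Upload an object and tag it with metadata instead of filing it under a directory.
with open("cat-video.mp4", "rb") as f:
    requests.put(
        CONTAINER_URL + "/cat-video.mp4",
        headers={**HEADERS, "X-Object-Meta-Category": "videos"},
        data=f,
    )

# Any application with the right credentials can fetch the object back over HTTP.
response = requests.get(CONTAINER_URL + "/cat-video.mp4", headers=HEADERS)
print(response.status_code, len(response.content), "bytes")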

With pay-as-you-go pricing, you don’t have to worry about running out of space. We only charge based on the greatest usage in any given day. That means you can get started right now for free!

Which storage solution is best for your project?

If you are still confused about which network storage option you should build your applications around, take this eight-question quiz to find out whether object, file, or block storage will work best for you.

-Kevin

October 14, 2014

Enterprise Customers See Benefits of Direct Link with GRE Tunnels

We’ve had an overwhelming response to our Direct Link product launch over the past few months, and with good reason. Customers can cross-connect into the SoftLayer global private network with a Direct Link at any of our 22 points of presence (PoPs), providing fast, secure, and unmetered access to their SoftLayer infrastructure from their remote data center locations.

Many of our enterprise customers who’ve set up a Direct Link want to balance the simplicity of a layer three cross connection with their sophisticated routing and access control list (ACL) requirements. To achieve this balance, many are using GRE tunnels from their on-premises routers to their SoftLayer Vyatta Gateway Appliance.

In previous blogs about Vyatta Gateway Appliance, we’ve described some typical use cases as well as highlighted the differences between the Vyatta OS and the Vyatta Appliance. So we’ll focus specifically on using GRE tunnels here.

What is GRE?
Generic Routing Encapsulation (GRE) is a protocol for packet encapsulation that facilitates routing other protocols over IP networks (RFC 2784). Customers typically create two endpoints for the tunnel: one on their remote router and the other on their Vyatta Gateway Appliance at SoftLayer.
How does GRE work?
GRE encapsulates a payload, an inner packet that needs to be delivered to a destination network, within an outer IP packet. Between the two GRE endpoints, routers look only at the outer IP packet and forward it toward the far endpoint, where the inner packet is unwrapped and routed to its ultimate destination.
Why use GRE tunnels?
If a customer has multiple subnets at SoftLayer that need to be reached, each would normally require its own tunnel. Since GRE encapsulates traffic within an outer packet, customers can route other protocols within the tunnel and reach multiple subnets without building multiple tunnels. A GRE endpoint on the Vyatta appliance parses the packets and routes them, eliminating that challenge.
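
To make the inner-packet/outer-packet idea concrete, here's an illustrative sketch that builds a GRE-encapsulated packet in memory using scapy. The addresses are made up, and this only demonstrates the encapsulation itself; it is not how the Vyatta appliance or a customer router is configured:

from scapy.all import GRE, ICMP, IP

# Inner packet: traffic between two private subnets that the tunnel carries.
inner = IP(src="10.10.1.5", dst="10.20.2.9") / ICMP()

# Outer packet: addressed between the two tunnel endpoints (the on-premises
# router and the gateway appliance). Routers in between only read this header.
outer = IP(src="203.0.113.10", dst="198.51.100.20") / GRE() / inner

outer.show()  # the inner IP header and payload ride inside the GRE envelope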

Many of our enterprise customers have complex rules governing what servers and networks can communicate with each other. They typically build ACLs on their routers to enforce those rules. Having a GRE endpoint on a Vyatta Gateway Appliance allows customers to route and manage internal packets based on specific rules so that security models stay intact.

GRE tunnels also allow customers to keep their existing addressing scheme, meaning they can add IP addresses to their SoftLayer servers and access them directly, eliminating routing problems that could otherwise occur.

And because GRE itself doesn't encrypt anything, customers can run the GRE tunnel inside an IPsec VPN tunnel to make it more secure.

Learn More on KnowledgeLayer

If you are considering Direct Link to achieve fast and unmetered access with the help of GRE tunnels and Vyatta Gateway Appliance but need more information, the SoftLayer KnowledgeLayer is continually updated with new information and best practices. Be sure to check out the entire section devoted to the Vyatta Gateway Appliance.

- Seth

October 8, 2014

An Insider’s Look at Our Data Centers

I’ve been with SoftLayer for over four years now. It’s been a journey that has taken me around the world—from Dallas to Singapore to Washington, D.C., and back again. Along the way, I’ve met amazingly brilliant people who have helped me sharpen the tools in my ‘data center toolbox,’ allowing me to enhance the customer experience by assisting customers in a complex compute environment.

I like to think of our data centers as masterpieces of elegant design. We currently have 14 of these works of art, with many more on the way. Here’s an insider’s look at the design:

Keeping It Cool
Our POD layouts use a raised floor system. Air conditioning units push chilled air up in front of the servers on the ‘cold rows’; that air passes through the servers and exhausts into the ‘warm rows,’ which have ceiling vents to rapidly clear the warm air from the backs of the servers.

Jackets are recommended for this arctic environment.

Pumping up the POWER
Nothing is as important to us as keeping the lights on. Every data center has a three-tiered approach to keeping your servers and services running. The first tier is street power. Each rack has two power strips to distribute the load and offer true redundancy for redundant servers and switches, with the remote ability to power down an individual port on either power strip.

The second tier is our battery backup for each POD. It provides an emergency bridge for seamless failover when street power is lost.

That leads to the third tier in our model: generators. We have generators in place to sustain power until street power returns. Check out the 2-megawatt diesel generator installation at the DAL05 data center here.

The Ultimate Social Network
Neither power nor cooling matters if you can’t connect to your server, which is where our proprietary network topology comes into play. Each bare metal server and each virtual server resides in a rack that connects to three switches. Each of those switches connects to an aggregate switch for the row, and the aggregate switch connects to a router.

The first switch, our private backend network, allows for SSL and VPN connectivity to manage your server. It also enables server-to-server communication without worrying about bandwidth overages.

The second switch, our public network, provides public Internet access to your device, which is perfect for shopping, gaming, coding, or whatever you want to use it for. With 20TB of bandwidth coming standard on this network, the possibilities are endless.

The third and final switch, management, allows you to connect to the Intelligent Platform Management Interface, which provides tools such as KVM, hardware monitoring, and even virtual CDs to install an image of your choosing! The cables from the switches to your devices are color-coded, labeled port-number-to-rack-unit, and masterfully arranged to maximize identification and airflow.

A Soft Place for Hardware
The heart and soul of our business is the computing hardware. We use enterprise-grade hardware from the ground up, ranging from our smallest offering of a 1 core, 1GB RAM, 25GB HDD virtual server to one of our largest bare metal servers with quad 10-core processors, 512GB RAM, and multiple 4TB HDDs. With excellent hardware come excellent options. There is almost always a path to improvement: unless you already have the top of the line, you can always add more, whether it's an additional drive, more RAM, or even another processor.

I hope you enjoyed the view from the inside. If you want to see the data centers up close and personal, I am sorry to say, those are closed to the public. But you can take a virtual tour of some of our data centers via YouTube: AMS01 and DAL05

-Joshua Fox

January 17, 2014

What's Next? $1.2 Billion Investment. 15 New Data Centers.

SoftLayer was founded in a living room on May 5, 2005. We bootstrapped our vision of becoming the de facto platform for cloud computing by maxing out our credit cards and draining our savings accounts. Over the course of eight years, we built a unique global offering, and in the middle of last year, our long-term vision was validated (and supercharged) by IBM.

When I posted about IBM acquiring SoftLayer last June, I explained that becoming part of IBM "will enable us to continue doing what we've done since 2005, but on an even bigger scale and with greater opportunities." To give you an idea of what "bigger scale" and "greater opportunities" look like, I need only direct you to today's press release: IBM Commits $1.2 Billion to Expand Global Cloud Footprint.

IBM Cloud Investment

It took us the better part of a decade to build a worldwide network of 13 data centers. As part of IBM, we'll more than double our data center footprint in a fraction of that time. In 2006, we were making big moves when we built facilities on the East and West coasts of the United States. Now, we're expanding into places like China, Hong Kong, London, Japan, India, Canada and Mexico City. We had a handful of founders pushing for SoftLayer's success, and now we've got 430,000+ IBM peers to help us reach our goal. This is a whole new ballgame.

The most important overarching story about this planned expansion is what each new facility will mean for our customers. When any cloud provider builds a data center in a new location, it's great news for customers and users in that geographic region: Content in that facility will be geographically closer to them, and they'll see lower pings and better performance from that data center. When SoftLayer builds a data center in a new location, customers and users in that geographic region see performance improvements from *all* of our data centers. The new facility serves as an on-ramp to our global network, so content on any server in any of our data centers can be accessed faster. To help illustrate that point, let's look at a specific example:

If you're in India, and you want to access content from a SoftLayer server in Singapore, you'll traverse the public Internet to reach our network, and the content will traverse the public Internet to get back to you. Third-party peering and transit providers pass the content to/from our network and your ISP, and you'll get the content you requested.

When we add a SoftLayer data center in India, you'll obviously access servers in that facility much more quickly, and when you want content from a server in our Singapore data center, you'll be routed through that new data center's network point of presence in India so that the long haul from India to Singapore will happen entirely on the private network we control and optimize.

Users around the world will have faster, more reliable access to servers in every other SoftLayer data center because we're bringing our network to their front doors. When you combine that kind of connectivity and access with our unique hybrid offering of powerful bare metal servers and scalable virtual server instances, it's easy to see how IBM, the most powerful technology company of the last 100 years, is positioned to remain the most powerful technology company in the world for the next century.

Now it's time to get to work.

-@lavosby

October 14, 2013

Product Spotlight: Vyatta Network Gateway Appliance

In the wake of our recent Vyatta network gateway appliance product launch, I thought I'd address some of the most common questions customers have asked me about the new offering. With inquiries spanning the spectrum from broad and general to detailed and specific, I might not be able to cover everything in this blog post, but at the very least, it should give a little more context for our new network gateway offering.

To begin, let's explore the simplest question I've been asked: "What is a network gateway?" A network gateway provides tools to manage traffic into and out of one or more VLANs (Virtual Local Area Networks). The network gateway serves as a customer-configurable routing device that sits in front of designated VLANs. The servers in those VLANs route through the network gateway appliance as their first hop instead of the Front-end Customer Routers (FCR) or Back-end Customer Routers (BCR). From an infrastructure perspective, SoftLayer's network gateway offering consists of a single server, and in the future, the offering will be expanded to multi-server configurations to support high availability needs and larger clustered configurations.

The general function of a network gateway may seem a little abstract, so let's look at a couple real world use cases to see how you can put that functionality to work in your own cloud environment.

Example 1: Complex Traffic Management
You have a multi-server cloud environment and a complex set of firewall rules that allow certain types of traffic to certain servers from specific addresses. Without a network gateway, you would need to configure multiple hardware and software firewalls throughout your topology and maintain multiple rule sets, but with the network gateway appliance, you streamline your configuration into a single point of control on both the public and private networks.

After you order a gateway appliance in the SoftLayer portal and configure which VLANs route through the appliance, the process of configuring the device is simple: You define your production, development and QA environments with distinct traffic rules, and the network gateway handles the traffic segmentation. If you wanted to create your own VPN to connect your hosted environment to your office or in-house data center, that configuration is quick and easy as well. The high-touch challenge of managing several sets of network rules across multiple devices is simplified and streamlined.

Example 2: Creating a Static NAT
You want to create a static NAT (Network Address Translation) so that you can direct traffic through a public IP address to an internal IP address. With the IPv4 address pool dwindling and new allocations harder to come by, this configuration is becoming extremely popular as a way to accommodate users who can't yet reach IPv6 addresses. This challenge would normally require a significant level of effort from even the most seasoned systems administrator, but with the gateway appliance, it's a painless process.
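
Conceptually, a static NAT is just a fixed one-to-one mapping applied to packet headers in both directions. The addresses in this toy Python sketch are placeholders, and in practice the gateway appliance, not application code, performs the translation:

# One-to-one mapping between a public address and an internal server.
STATIC_NAT = {"198.51.100.25": "10.60.4.12"}           # inbound: public -> internal
REVERSE_NAT = {v: k for k, v in STATIC_NAT.items()}    # outbound: internal -> public

def translate_inbound(dst_ip):
    """Rewrite the destination of traffic arriving on the public address."""
    return STATIC_NAT.get(dst_ip, dst_ip)

def translate_outbound(src_ip):
    """Rewrite the source of replies so they appear to come from the public address."""
    return REVERSE_NAT.get(src_ip, src_ip)

print(translate_inbound("198.51.100.25"))   # -> 10.60.4.12
print(translate_outbound("10.60.4.12"))     # -> 198.51.100.25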

In addition to the IPv4 address-saving benefits, your static NAT adds a layer of protection for your internal web servers from the public network, and as we discussed in the first example, your gateway device also serves as a single configuration point for both inbound and outbound firewall rules.

If you have complex network-related needs, and you want granular control of the traffic to and from your servers, a gateway appliance might be the perfect tool for you. You get the control you want and save yourself a significant amount of time and effort configuring and tweaking your environment on-the-fly. You can terminate IPSec VPN tunnels, execute your own network address translation, and run diagnostic commands such as traffic monitoring (tcpdump) on your global environment. And in addition to that, your gateway serves as a single point of contact to configure sophisticated firewall rules!

If you want to learn more about the gateway appliance, check out KnowledgeLayer or contact our friendly sales team directly with your questions: sales@softlayer.com

-Ben

August 22, 2013

Network Cabling Controversy: Zip Ties v. Hook & Loop Ties

More than 210,000 users have watched a YouTube video of our data center operations team cabling a row of server racks in San Jose. More than 95 percent of the ratings left on the video are positive, and more than 160 comments have been posted in response. To some, those numbers probably seem unbelievable, but to anyone who has ever cabled a data center rack or dealt with a poorly cabled data center rack, the time-lapse video is enthralling, and it seems to have catalyzed a healthy debate: At least a dozen comments on the video question/criticize how we organize and secure the cables on each of our server racks. It's high time we addressed this "zip ties v. hook & loop (Velcro®)" cable bundling controversy.

The most widely recognized standards for network cabling have been published by the Telecommunications Industry Association and Electronic Industries Alliance (TIA/EIA). Unfortunately, those standards don't specify the physical method used to secure cables, but it's generally understood that if you tie cables too tightly, the cable's geometry will be affected, possibly deforming the copper, modifying the twisted pairs, or otherwise physically causing performance degradation. That understanding raises the question of whether zip ties are inherently inferior to hook & loop ties for network cabling applications.

As you might have observed in the "Cabling a Data Center Rack" video, SoftLayer uses nylon zip ties when we bundle and secure the network cables on our data center server racks. The decision to use zip ties rather than hook & loop ties was made during SoftLayer's infancy. Our team had a vision for an automated data center that wouldn't require much server/cable movement after a rack is installed, and zip ties were much stronger and more "permanent" than hook & loop ties. Zip ties allow us to tighten our cable bundles easily so those bundles are more structurally solid (and prettier). In short, zip ties were better for SoftLayer data centers than hook & loop ties.

That conclusion is contrary to the prevailing opinion in the world of networking that zip ties are evil and that hook & loop ties are among only a few acceptable materials for "good" network cabling. We hear audible gasps from some network engineers when they see those little strips of nylon bundling our Ethernet cables. We know exactly what they're thinking: Zip ties negatively impact network performance because they're easily over-tightened, and cables in zip-tied bundles are more difficult to replace. After they pick their jaws up off the floor, we debunk those myths.

The first myth (that zip ties can negatively impact network performance) is entirely valid, but its significance is much greater in theory than it is in practice. While I couldn't track down any scientific experiments that demonstrate the maximum tension a cable tie can exert on a bundle of cables before the traffic through those cables is affected, I have a good amount of empirical evidence to fall back on from SoftLayer data centers. Since 2006, SoftLayer has installed more than 400,000 patch cables in data centers around the world (using zip ties), and we've *never* encountered a fault in a network cable that was the result of a zip tie being over-tightened ... And we're not shy about tightening those ties.

The fact that nylon zip ties are cheaper than most (all?) of the other more "acceptable" options is a fringe benefit. By securing our cable bundles tightly, we keep our server racks clean and uniform:

SoftLayer Cabling

The second myth (that cables in zip-tied bundles are more difficult to replace) is also somewhat flawed when it comes to SoftLayer's use case. Every rack is pre-wired to deliver five Ethernet cables — two public, two private and one out-of-band management — to each "rack U," which provides enough connections to support a full rack of 1U servers. If larger servers are installed in a rack, we won't need all of the network cables wired to the rack, but if those servers are ever replaced with smaller servers, we don't have to re-run network cabling. Network cables aren't exposed to the tension, pressure or environmental changes of being moved around (even when servers are moved), so external forces don't cause much wear. The most common physical "failures" of network cables are typically associated with RJ45 jack crimp issues, and those RJ45 ends are easily replaced.

Let's say a cable does need to be replaced, though. Servers in SoftLayer data centers have redundant public and private network connections, but in this theoretical example, we'll assume network traffic can only travel over one network connection and a data center technician has to physically replace the cable connecting the server to the network switch. With all of those zip ties around those cable bundles, how long do you think it would take to bring that connection back online? (Hint: That's kind of a trick question.) See for yourself:

The answer in practice is "less than one minute" ... The "trick" in that trick question is that the zip ties around the cable bundles are irrelevant when it comes to physically replacing a network connection. Data center technicians use temporary cables to make a direct server-to-switch connection, and they schedule an appropriate time to perform a permanent replacement (which actually involves removing and replacing zip ties). In the video above, we show a temporary cable being installed in about 45 seconds, and we also demonstrate the process of creating, installing and bundling a permanent network cable replacement. Even with all of those villainous zip ties, everything is done in less than 18 minutes.

Many of the comments on YouTube bemoan the idea of having to replace a single cable in one of these zip-tied bundles, but as you can see, the process isn't very laborious, and it doesn't vary significantly from the amount of time it would take to perform the same maintenance with a Velcro®-secured cable bundle.

Zip ties are inferior to hook & loop ties for network cabling? Myth(s): Busted.

-@khazard

P.S. Shout-out to Elijah Fleites at DAL05 for expertly replacing the network cable on an internal server for the purposes of this video!
