Posts Tagged 'Demand'

July 5, 2012

Bandwidth Utilization: Managing a Global Network

SoftLayer has over 1,750 Gbit/s of network capacity. In each of our data centers and points of presence, we have an extensive set of peering relationships and multiple 10 Gbit/s connections to independent Tier 1 carriers. We operate one of the fastest, most reliable networks on the planet, and our customers love it.

From a network operations standpoint, that means we have our work cut out for us to keep everything running smoothly while continuing to build the network to accommodate a steady increase in customer demand. It might be easier to rest on our laurels and simply maintain what we already have in place, but when you look at the trend of bandwidth usage over the past 18 months, you'll see why we need to be proactive about expanding our network:

Long Term Bandwidth Usage Trend

The purple line above plots the 95th percentile of weekly outbound bandwidth utilization on the SoftLayer network, and the red line shows the linear trend of that consumption over time. From week to week, the total usage appears relatively consistent, growing at a steady rate, but when you look a little deeper, you get a better picture of how dynamic our network actually is:
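For readers curious how a 95th percentile figure like this is derived, here is a minimal Python sketch of the common burstable-billing method (the sample values are invented for illustration; this is not SoftLayer's actual metering code):

```python
# Illustrative sketch of a 95th percentile bandwidth calculation.
def percentile_95(samples):
    """Return the 95th percentile of a list of bandwidth samples.

    The top 5% of samples are discarded and the highest remaining
    value is reported -- spikes are forgiven, sustained usage counts.
    """
    ordered = sorted(samples)
    index = int(len(ordered) * 0.95) - 1  # position after dropping the top 5%
    return ordered[max(index, 0)]

# One week of hypothetical 2-hour average readings (in Gbit/s):
week = [820, 790, 845, 910, 1050, 980, 875, 1120, 940, 860]
print(percentile_95(week))  # 1050 -- the 1120 spike is discarded
```

With only ten samples the 5% cut rounds to discarding a single reading; on a real week of 5-minute samples (2,016 data points) the top ~100 readings would be dropped.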

SoftLayer Weekly Bandwidth Usage

The animated gif above shows the 2-hour average of bandwidth usage on our entire network over a seven-week period (times in CDT). As you can see, on a day-to-day basis, consumption fluctuates pretty significantly. The NOC (Network Operations Center) needs to be able to accommodate every spike of usage at any time of day, and our network engineering and strategy teams have to stay ahead of the game when it comes to planning points of presence and increasing bandwidth capacity to accommodate our customers' ever-expanding needs.

But wait. There's more.

Let's go one level deeper and look at a graph of 95th percentile bandwidth usage, sampled at 5-minute intervals, for one week in a single data center:

Long Term Bandwidth Usage Trend

The variations in usage are even more dramatic. Because we have thirteen data centers geographically dispersed around the world with an international customer base, the variations you see in total bandwidth utilization understate the complexity of our network's bandwidth usage. Customers targeting the Asian market might host content in SNG01, and the peaks in bandwidth consumption from Singapore will counterbalance the valleys of consumption at the same time in the United States and Europe.

With that in mind, here's a challenge for you: Looking at the graph above, if the times listed are in CDT, which data center do you think that data came from?

It would be interesting to look at weekly usage trends, how those trends are changing, and what they tell us about our customer base, but that assessment would probably be "information overload" in this post, so I'll save it for another day.

-Dani

P.S. If you came to this post expecting to see "a big truck" or "a series of tubes," I'm sorry I let you down.

September 5, 2011

How Scalable Are You?

The northeastern United States saw two natural disasters within five days of each other. The first, a magnitude 5.8 earthquake, struck the Washington, D.C. area on August 23, 2011. On August 28, Hurricane Irene made her way up the east coast, leaving nearly 5.5 million people without power. We do everything we can to prepare our facilities for natural disasters (generator power backup, staffing, redundant bandwidth links and providers, etc.), and given the recent events, now might be a good time to start thinking about how your servers respond when something out of the ordinary happens ... Let's look at two relatively easy ways you can set your business up to scale and recover.

The first option you may consider would be to set up a multi-tiered environment by deploying multiple servers in various geographical locations. Your servers in each location could be accessed via load balancing or round robin DNS. In this kind of high-availability environment, your servers could handle incoming requests more quickly, with the load split among multiple data centers. Failover would take just a few seconds should you lose connectivity to one of the locations.
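To make the round-robin-with-failover idea concrete, here is a small Python sketch. The hostnames and the health check are placeholders, and a real setup would do this in your DNS or load balancer rather than in application code:

```python
# Sketch of client-side round robin with failover across data centers.
class RoundRobinPool:
    def __init__(self, endpoints):
        self.endpoints = list(endpoints)
        self.index = 0  # rotation position, advances on every attempt

    def next_server(self, is_healthy):
        """Try each endpoint in rotation; return the first healthy one."""
        for _ in range(len(self.endpoints)):
            host = self.endpoints[self.index]
            self.index = (self.index + 1) % len(self.endpoints)
            if is_healthy(host):
                return host
        raise RuntimeError("no healthy endpoints available")

pool = RoundRobinPool(["dal01.example.com", "sea01.example.com", "wdc01.example.com"])
# Pretend dal01 just lost connectivity; traffic shifts to the next host:
print(pool.next_server(lambda host: host != "dal01.example.com"))  # sea01.example.com
```

The same rotate-and-skip logic is what a load balancer or round robin DNS resolver effectively gives you: requests spread across locations, and an unreachable location is simply passed over.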

The second option to consider would be the private image repository for our CloudLayer Computing. This option allows you to save a private image template in different data centers, each ready for quick deployment without having to install and configure the same operating system and applications. Should you need additional resources or lose connectivity to your instance in one facility, you can deploy the saved image in another facility. The failover time would be limited to the provisioning of the new Computing Instance ... which doesn't take too long.
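The redeploy-elsewhere logic can be sketched in a few lines of Python. Note that `deploy_from_image` and `is_reachable` below are hypothetical stand-ins for your provider's API calls, not actual SoftLayer client methods, and the facility codes are just examples:

```python
# Hedged sketch: redeploy a saved image template in the first reachable facility.
FACILITIES = ["sng01", "dal05", "ams01"]  # preferred order, examples only

def failover_deploy(image_id, deploy_from_image, is_reachable):
    """Walk the facility list and deploy the image in the first one reachable."""
    for facility in FACILITIES:
        if is_reachable(facility):
            return deploy_from_image(image_id, facility)
    raise RuntimeError("no facility reachable")

# Example: Singapore is down, so the image lands in Dallas instead.
print(failover_deploy("img-123",
                      lambda img, fac: (img, fac),   # stand-in for a real deploy call
                      lambda fac: fac != "sng01"))   # stand-in for a real health check
```

Because the image template already contains the configured OS and applications, the only wait is the provisioning step itself.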

Scalability makes sense no matter what situation you may be facing – from a natural disaster to hitting the front page of Reddit. If you have any questions about these scalability options, "Click to Chat" on our site or give us a call and a sales rep can help you get prepared. Your infrastructure may have come through these recent events unscathed, but don't let that lull you into a false sense of security. "Better safe than sorry" is a cliché for a reason: It's worth saying often.

-Greg
