Our second Dallas data center went live 10 days ago, and we are already pushing 10 Gbps of sustained traffic out the door. I have spent some time in the DC with some of our ops guys, and the place is impressive.
A terrific amount of computing power sits in row after row of server racks, driving a diverse array of businesses in more than 110 countries. Each rack features powerful processors, lots of RAM and heaps of storage. There is very little that our customers are unable to do on SoftLayer's infrastructure. And if they need more, SoftLayer can add servers very quickly to meet that demand. I wish the rest of our business were as simple as this.
Despite the state-of-the-art infrastructure that sits in the DC, it remains a challenge to meet the needs of our customers. Why? The network, that's why. SoftLayer's challenge is to stay continuously ahead of our customers' demands, primarily on the network. If the network cannot support the traffic pushed across our DC, everything comes tumbling down.
To a degree, we are victims of our own success. As we add servers to racks, we are placing increasing demand on the network. The more successful we are, the more pressure we place on the network.
Consider the following statistics:
- When SoftLayer went live five years ago, we used two carriers and pushed 20 Gbps out the door.
- Four years ago, this had gone up to four carriers and eight 10 Gbps links.
- In January 2009 we pushed about 70 Gbps of sustained traffic. And this doubled for President Obama’s inauguration.
- Today we use over ten carriers, with over 1000 Gbps of capacity.
- In addition to the demand our customers drive, we cannot forget DDoS attacks, which add significant load to the network. We routinely absorb and successfully defend against attacks of 5 Gbps, 10 Gbps or more, and peak attack sizes have grown by a factor of ten since SoftLayer went live.
The trend is significant: in five years, the amount of traffic sustained over our network has increased by more than ten times. And it shows little sign of slowing down.
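To put that growth in perspective, a tenfold increase over five years works out to roughly 58% compound growth per year. A quick back-of-the-envelope sketch (the figures are illustrative, taken from the launch number above, not exact measurements):

```python
# Rough capacity-planning arithmetic: a 10x traffic increase over 5 years
# implies a compound annual growth rate of 10^(1/5) - 1, or about 58%.
years = 5
growth_factor = 10  # traffic grew more than tenfold over the period

annual_rate = growth_factor ** (1 / years) - 1
print(f"Implied annual growth: {annual_rate:.0%}")

# Projecting forward from the 20 Gbps we pushed at launch: at that rate,
# capacity needs roughly triple every two and a half years.
traffic = 20.0  # Gbps at launch (illustrative starting point)
for year in range(1, years + 1):
    traffic *= 1 + annual_rate
    print(f"Year {year}: ~{traffic:.0f} Gbps")
```

That compounding is exactly why the network has to be built out ahead of demand rather than in response to it.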
Suffice it to say, we spend a significant amount of time designing our networks to ensure that we can handle the traffic loads our customers generate – we have to. Aggressively overbuilding the network brings some short-term pain, but if we are going to stay ahead of demand, it is simply good business (and it keeps our customers happy). The new DC in Dallas is a great example of how we stay ahead of the game.
Each server has five NICs: two 1 Gbps ports (bonded) for the public network, two 1 Gbps ports (bonded) for the private network, and one for management. The net of this is that customers can push 2 Gbps to the Internet, assuming the server's processors can handle the load.
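On Linux, a bonded pair like that might look something like the following Debian-style `/etc/network/interfaces` sketch. This is an illustrative assumption about the host OS, the interface names (eth0/eth1, bond0) and the addresses, not SoftLayer's actual provisioning config; 802.3ad (LACP) is one common bonding mode that lets the two ports behave as a single 2 Gbps link:

```
# /etc/network/interfaces (illustrative sketch, not an actual config)
# Two 1 Gbps NICs bonded into a single 2 Gbps public-facing interface.
auto bond0
iface bond0 inet static
    address 203.0.113.10        # example public address (RFC 5737 range)
    netmask 255.255.255.0
    gateway 203.0.113.1
    bond-slaves eth0 eth1       # the two physical 1 Gbps ports
    bond-mode 802.3ad           # LACP; requires switch-side support
    bond-miimon 100             # link-check interval in ms
```

The private pair would get a second bond (e.g. bond1) on the private network, and the management NIC stays unbonded.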