Posts Tagged 'Bandwidth'

April 16, 2013

iptables Tips and Tricks - Track Bandwidth with iptables

As I mentioned in my last post about CSF configuration in iptables, I'm working on a follow-up post about integrating CSF into cPanel, but I thought I'd inject a simple iptables use case for bandwidth tracking. You probably think about iptables in terms of firewalls and security, but it also includes a great diagnostic tool for counting the bandwidth that hits individual rules or sets of rules. If you can block it, you can track it!

The best part about using iptables to track bandwidth is that the tracking is enabled by default. To see this feature in action, add the "-v" flag to the command:

[root@server ~]$ iptables -vnL
Chain INPUT (policy ACCEPT 2495 packets, 104K bytes)

The output includes counters for both the policies and the rules. To track the rules, you can create a new chain for tracking bandwidth:

[root@server ~]$ iptables -N tracking
[root@server ~]$ iptables -vnL
...
Chain tracking (0 references)
 pkts bytes target prot opt in out source           destination

Then you need to set up new rules to match the traffic that you wish to track. In this scenario, let's look at inbound http traffic on port 80:

[root@server ~]$ iptables -I INPUT -p tcp --dport 80 -j tracking
[root@server ~]$ iptables -vnL
Chain INPUT (policy ACCEPT 35111 packets, 1490K bytes)
 pkts bytes target prot opt in out source           destination
    0   0 tracking    tcp  --  *  *   0.0.0.0/0        0.0.0.0/0       tcp dpt:80

Now let's generate some traffic and check it again:

[root@server ~]$ iptables -vnL
Chain INPUT (policy ACCEPT 35216 packets, 1500K bytes)
 pkts bytes target prot opt in out source           destination
  101  9013 tracking    tcp  --  *  *   0.0.0.0/0        0.0.0.0/0       tcp dpt:80

You can see the packet and byte counts for the INPUT side (traffic to a destination port on your server). If you want to track the amount of data the server is generating, you'd look at OUTPUT from the source port on your server:

[root@server ~]$ iptables -I OUTPUT -p tcp --sport 80 -j tracking
[root@server ~]$ iptables -vnL
...
Chain OUTPUT (policy ACCEPT 26149 packets, 174M bytes)
 pkts bytes target prot opt in out source           destination
  488 3367K tracking    tcp  --  *  *   0.0.0.0/0        0.0.0.0/0       tcp spt:80

Now that we know how the tracking chain works, we can add a few more layers to get even more information while keeping the INPUT and OUTPUT chains looking clean:

[root@server ~]$ iptables -N tracking
[root@server ~]$ iptables -N tracking2
[root@server ~]$ iptables -I INPUT -j tracking
[root@server ~]$ iptables -I OUTPUT -j tracking
[root@server ~]$ iptables -A tracking -p tcp --dport 80 -j tracking2
[root@server ~]$ iptables -A tracking -p tcp --sport 80 -j tracking2
[root@server ~]$ iptables -vnL
 
Chain INPUT (policy ACCEPT 96265 packets, 4131K bytes)
 pkts bytes target prot opt in out source           destination
 4002  184K tracking    all  --  *  *   0.0.0.0/0        0.0.0.0/0
 
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source           destination
 
Chain OUTPUT (policy ACCEPT 33751 packets, 231M bytes)
 pkts bytes target prot opt in out source           destination
 1399 9068K tracking    all  --  *  *   0.0.0.0/0        0.0.0.0/0
 
Chain tracking (2 references)
 pkts bytes target prot opt in out source           destination
 1208 59626 tracking2   tcp  --  *  *   0.0.0.0/0        0.0.0.0/0       tcp dpt:80
  224 1643K tracking2   tcp  --  *  *   0.0.0.0/0        0.0.0.0/0       tcp spt:80
 
Chain tracking2 (2 references)
 pkts bytes target prot opt in out source           destination

Keep in mind that every time a packet passes through one of your rules, it will eat CPU cycles. Diverting all your traffic through 100 rules that track bandwidth may not be the best idea, so it's important to have an efficient ruleset. If your server has eight processor cores and plenty of headroom available, that concern might be inconsequential, but if you're running lean, you could conceivably run into issues.

The easiest way to think about building efficient rulesets is to eat the largest slice of pie first: understand how iptables processes rules and put the rules that match the most traffic higher in your list. Conversely, save the tiniest slices of your pie for last. If you run all of your traffic past a rule that only applies to a tiny segment before you screen out the larger segments, you're wasting processing power.
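As a rough illustration of that idea (the rule number and port below are hypothetical, not taken from the examples above), you can use the exact counters to see which rules are doing the most work and then re-order the chain so the busiest match is evaluated first:

# -x prints exact counters; --line-numbers shows each rule's position in the chain
[root@server ~]$ iptables -vnxL INPUT --line-numbers
# If, say, rule 5 is matching far more packets than the rules above it,
# move it to the top of the chain: delete it by number, then re-insert it at position 1
[root@server ~]$ iptables -D INPUT 5
[root@server ~]$ iptables -I INPUT 1 -p tcp --dport 443 -j tracking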

Another thing to keep in mind is that you do not need to specify a target (in our examples above, we established tracking and tracking2 as our targets). If you're used to each rule having a specific purpose of either blocking, allowing, or diverting traffic, this simple tidbit might seem revolutionary. For example, we could use this rule:

[root@server ~]$ iptables -A INPUT

If that seems a little bare to you, don't worry ... It is! The output will show that it is a rule that tracks all traffic in the chain at that point. We're appending the rule to the end of the chain in this example ("-A"), but we could also insert it ("-I") at the top of the chain instead. This command could be helpful if you are using a number of different chains and you want to see the exact volume of packets that are filtered at any given point. Additionally, this strategy could show how much traffic a potential rule would filter before you run it on your production system. Because having several of these kinds of rules can get a little messy, it's also helpful to add comments to help sort things out:

[root@server ~]$ iptables -A INPUT -m comment --comment "track all data"
 
[root@server ~]$ iptables -vnL
Chain INPUT (policy ACCEPT 11M packets, 5280M bytes)
 pkts bytes target prot opt in out source           destination
   98  9352        all  --  *  *   0.0.0.0/0        0.0.0.0/0       /* track all data */
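If you want to sample these counters over time, here's a minimal sketch (the log path and the idea of running this pair of commands from cron are my own assumptions, not part of the original how-to): append the exact counters to a log, then zero them so each snapshot only covers the interval since the last run.

# -x prints exact packet/byte counts instead of rounded K/M values
[root@server ~]$ iptables -vnxL tracking >> /var/log/tracking-counters.log
# reset the counters in the tracking chain so the next snapshot starts from zero
[root@server ~]$ iptables -Z tracking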

Nothing terribly complicated about using iptables to count bandwidth, right? If you have iptables rulesets and you want to get a glimpse at how your traffic is being affected, this little trick could be useful. You can rely on the information iptables gives you about your bandwidth usage, and you won't be the only one ... cPanel actually uses iptables to track bandwidth.

-Mark

July 5, 2012

Bandwidth Utilization: Managing a Global Network

SoftLayer has over 1,750 Gbit/s of network capacity. In each of our data centers and points of presence, we have an extensive library of peering relationships and multiple 10 Gbit/s connections to independent Tier 1 carriers. We operate one of the fastest, most reliable networks on the planet, and our customers love it.

From a network operations standpoint, that means we have our work cut out for us to keep everything running smoothly while continuing to build the network to accommodate a steady increase in customer demand. It might be easier to rest on our laurels and simply maintain what we already have in place, but when you look at the trend of bandwidth usage over the past 18 months, you'll see why we need to be proactive about expanding our network:

Long Term Bandwidth Usage Trend

The purple line above plots the 95th percentile of weekly outbound bandwidth utilization on the SoftLayer network, and the red line shows the linear trend of that consumption over time. From week to week, the total usage appears relatively consistent, growing at a steady rate, but when you look a little deeper, you get a better picture of how dynamic our network actually is:

SoftLayer Weekly Bandwidth Usage

The animated gif above shows the 2-hour average of bandwidth usage on our entire network over a seven-week period (times in CDT). As you can see, on a day-to-day basis, consumption fluctuates pretty significantly. The NOC (Network Operations Center) needs to be able to accommodate every spike of usage at any time of day, and our network engineering and strategy teams have to stay ahead of the game when it comes to planning points of presence and increasing bandwidth capacity to accommodate our customers' ever-expanding needs.
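A quick aside on the metric behind the first chart: a 95th percentile figure is calculated by sorting all of the utilization samples for the period, throwing out the top 5 percent, and reporting the highest value that remains. Here's a minimal sketch of that calculation (samples.txt is a hypothetical file with one Mbps reading per line, not SoftLayer data):

# drop the top 5% of samples and report the highest remaining value
sort -n samples.txt | awk '{v[NR]=$1} END {print "95th percentile:", v[int(NR*0.95)]}'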

But wait. There's more.

Let's go one level deeper and look at a graph of the 95th percentile bandwidth usage on 5-minute intervals from one week in a single data center:

Long Term Bandwidth Usage Trend

The variations in usage are even more dramatic. Because we have thirteen data centers geographically dispersed around the world with an international customer base, the variations you see in total bandwidth utilization understate the complexity of our network's bandwidth usage. Customers targeting the Asian market might host content in SNG01, and the peaks in bandwidth consumption from Singapore will counterbalance the valleys of consumption at the same time in the United States and Europe.

With that in mind, here's a challenge for you: Looking at the graph above, if the times listed are in CDT, which data center do you think that data came from?

It would be interesting to look at weekly usage trends, how those trends are changing and what those trends tell us about our customer base, but that assessment would probably be "information overload" in this post, so I'll save that for another day.

-Dani

P.S. If you came to this post expecting to see "a big truck" or "a series of tubes," I'm sorry I let you down.

December 29, 2011

Using iPerf to Troubleshoot Speed/Throughput Issues

Two of the most common network characteristics we look at when investigating network-related concerns in the NOC are speed and throughput. You may have experienced the following scenario yourself: You just provisioned a new bad-boy server with a gigabit connection in a data center on the opposite side of the globe. You begin to upload your data and, to your shock, you see "Time Remaining: 10 Hours." "What's wrong with the network?" you wonder. The traceroute and MTR look fine, so where's the performance and bandwidth you're paying for?

This issue is all too common, and it has nothing to do with the network; the culprits are none other than TCP and the laws of physics.

In data transmission, TCP sends a certain amount of data and then pauses. To ensure proper delivery, it doesn't send more until it receives an acknowledgement from the remote host that all of that data was received. This is called the "TCP window." Data travels at the speed of light, and typically most hosts are fairly close together, so this "windowing" happens so fast we don't even notice it. But as the distance between two hosts increases, the speed of light remains constant, so the further apart the two hosts are, the longer it takes for the sender to receive the acknowledgement, and the lower the overall throughput. The amount of data that has to be in flight to keep a long link busy is called the "Bandwidth Delay Product," or BDP: the link's capacity multiplied by its round-trip time.
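To put a rough number on that (the 1 Gbit/s link speed and 70 ms round-trip time below are assumed values for a long cross-country path, not measurements from this article), here's a quick back-of-the-envelope calculation:

# window needed to keep a 1 Gbit/s pipe full at a 70 ms round-trip time
awk 'BEGIN { bw=1e9; rtt=0.070; printf "BDP = %.1f MB\n", bw*rtt/8/1e6 }'
# BDP = 8.8 MB

That is far larger than the 8 KB and 16 KB default windows you'll see in the iPerf output below, which is why a single default-window stream can't come close to filling a long-haul gigabit link.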

We can overcome the effects of BDP to some degree by sending more data at a time. We do this by adjusting the "TCP window" – telling TCP to send more data per flow than the default parameters allow. Each OS is different and the default values vary, but almost all operating systems allow tweaking of the TCP stack and/or using parallel data streams. So what is iPerf, and how does it fit into all of this?

What is iPerf?

iPerf is a simple, open-source, command-line network diagnostic tool that runs on Linux, BSD or Windows and that you install on two endpoints. One side runs in 'server' mode, listening for requests; the other runs in 'client' mode, sending data. When activated, it tries to send as much data down your pipe as it can, spitting out transfer statistics as it goes. What's so cool about iPerf is that you can test any number of TCP window settings in real time, even using parallel streams. There's even a Java-based GUI that runs on top of it called JPerf (JPerf is beyond the scope of this article, but I recommend looking into it). What's even cooler is that because iPerf runs in memory, there are no files to clean up.

How do I use iPerf?

iPerf can be downloaded quickly from SourceForge and installed. It uses port 5001 by default, and the bandwidth it displays is measured from the client to the server. Each test runs for 10 seconds by default, but virtually every setting is adjustable. Once installed, simply bring up the command line on both hosts and run these commands:

On the server side:
iperf -s

On the client side:
iperf -c [server_ip]

The output on the client side will look like this:

#iperf -c 10.10.10.5
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.10.10.10 port 46956 connected with 10.10.10.5 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 10.0 sec  10.0 MBytes  1.00 Mbits/sec

There are a lot of things we can do to make this output better with more meaningful data. For example, let's say we want the test to run for 20 seconds instead of 10 (-t 20), we want to display transfer data every 2 seconds instead of a single summary at the end (-i 2), and we want to test on port 8000 instead of 5001 (-p 8000). For the purposes of this exercise, let's use those customizations as our baseline. This is what the command string would look like on both ends:

Client Side:

#iperf -c 10.10.10.5 -p 8000 -t 20 -i 2
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 8000
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.10.10.10 port 46956 connected with 10.10.10.5 port 8000
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 2.0 sec  6.00 MBytes  25.2 Mbits/sec
[  3]  2.0- 4.0 sec  7.12 MBytes  29.9 Mbits/sec
[  3]  4.0- 6.0 sec  7.00 MBytes  29.4 Mbits/sec
[  3]  6.0- 8.0 sec  7.12 MBytes  29.9 Mbits/sec
[  3]  8.0-10.0 sec  7.25 MBytes  30.4 Mbits/sec
[  3] 10.0-12.0 sec  7.00 MBytes  29.4 Mbits/sec
[  3] 12.0-14.0 sec  7.12 MBytes  29.9 Mbits/sec
[  3] 14.0-16.0 sec  7.25 MBytes  30.4 Mbits/sec
[  3] 16.0-18.0 sec  6.88 MBytes  28.8 Mbits/sec
[  3] 18.0-20.0 sec  7.25 MBytes  30.4 Mbits/sec
[  3]  0.0-20.0 sec  70.1 MBytes  29.4 Mbits/sec

Server Side:

#iperf -s -p 8000 -i 2
------------------------------------------------------------
Server listening on TCP port 8000
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[852] local 10.10.10.5 port 8000 connected with 10.10.10.10 port 58316
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 2.0 sec  6.05 MBytes  25.4 Mbits/sec
[  4]  2.0- 4.0 sec  7.19 MBytes  30.1 Mbits/sec
[  4]  4.0- 6.0 sec  6.94 MBytes  29.1 Mbits/sec
[  4]  6.0- 8.0 sec  7.19 MBytes  30.2 Mbits/sec
[  4]  8.0-10.0 sec  7.19 MBytes  30.1 Mbits/sec
[  4] 10.0-12.0 sec  6.95 MBytes  29.1 Mbits/sec
[  4] 12.0-14.0 sec  7.19 MBytes  30.2 Mbits/sec
[  4] 14.0-16.0 sec  7.19 MBytes  30.2 Mbits/sec
[  4] 16.0-18.0 sec  6.95 MBytes  29.1 Mbits/sec
[  4] 18.0-20.0 sec  7.19 MBytes  30.1 Mbits/sec
[  4]  0.0-20.0 sec  70.1 MBytes  29.4 Mbits/sec

There are many, many other parameters you can set that are beyond the scope of this article, but for our purposes, the main use is to prove out our bandwidth. This is where we'll use the TCP window options and parallel streams. To set a new TCP window, use the -w switch; to set parallel streams, use -P.

Increased TCP window commands:

Server side:
#iperf -s -w 1024k -i 2 -p 8000

Client side:
#iperf -i 2 -t 20 -c 10.10.10.5 -p 8000 -w 1024k

And here are the iPerf results from two SoftLayer file servers – one in Washington, D.C., acting as the client, the other in Seattle acting as the server:

Client Side:

# iperf -i 2 -t 20 -c 10.10.10.5 -p 8000 -w 1024k
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 8000
TCP window size: 1.00 MByte (WARNING: requested 1.00 MByte)
------------------------------------------------------------
[  3] local 10.10.10.10 port 53903 connected with 10.10.10.5 port 8000
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 2.0 sec  25.9 MBytes   109 Mbits/sec
[  3]  2.0- 4.0 sec  28.5 MBytes   120 Mbits/sec
[  3]  4.0- 6.0 sec  28.4 MBytes   119 Mbits/sec
[  3]  6.0- 8.0 sec  28.9 MBytes   121 Mbits/sec
[  3]  8.0-10.0 sec  28.0 MBytes   117 Mbits/sec
[  3] 10.0-12.0 sec  29.0 MBytes   122 Mbits/sec
[  3] 12.0-14.0 sec  28.0 MBytes   117 Mbits/sec
[  3] 14.0-16.0 sec  29.0 MBytes   122 Mbits/sec
[  3] 16.0-18.0 sec  27.9 MBytes   117 Mbits/sec
[  3] 18.0-20.0 sec  29.0 MBytes   122 Mbits/sec
[  3]  0.0-20.0 sec   283 MBytes   118 Mbits/sec

Server Side:

#iperf -s -w 1024k -i 2 -p 8000
------------------------------------------------------------
Server listening on TCP port 8000
TCP window size: 1.00 MByte
------------------------------------------------------------
[  4] local 10.10.10.5 port 8000 connected with 10.10.10.10 port 53903
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 2.0 sec  25.9 MBytes   109 Mbits/sec
[  4]  2.0- 4.0 sec  28.6 MBytes   120 Mbits/sec
[  4]  4.0- 6.0 sec  28.3 MBytes   119 Mbits/sec
[  4]  6.0- 8.0 sec  28.9 MBytes   121 Mbits/sec
[  4]  8.0-10.0 sec  28.0 MBytes   117 Mbits/sec
[  4] 10.0-12.0 sec  29.0 MBytes   121 Mbits/sec
[  4] 12.0-14.0 sec  28.0 MBytes   117 Mbits/sec
[  4] 14.0-16.0 sec  29.0 MBytes   122 Mbits/sec
[  4] 16.0-18.0 sec  28.0 MBytes   117 Mbits/sec
[  4] 18.0-20.0 sec  29.0 MBytes   121 Mbits/sec
[  4]  0.0-20.0 sec   283 MBytes   118 Mbits/sec

We can see here that by increasing the TCP window from the default value to 1MB (1024k), we roughly quadrupled the throughput of our baseline test. Unfortunately, this is the limit of this OS in terms of window size. So what more can we do? Parallel streams! With multiple simultaneous streams, we can fill the pipe close to its maximum usable capacity.

Parallel Stream Command:
#iperf -i 2 -t 20 -c 10.10.10.5 -p 8000 -w 1024k -P 7

Client Side:

#iperf -i 2 -t 20 -c 10.10.10.5 -p 8000 -w 1024k -P 7
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 8000
TCP window size: 1.00 MByte (WARNING: requested 1.00 MByte)
------------------------------------------------------------
 [ ID] Interval       Transfer     Bandwidth
[  9]  0.0- 2.0 sec  24.9 MBytes   104 Mbits/sec
[  4]  0.0- 2.0 sec  24.9 MBytes   104 Mbits/sec
[  7]  0.0- 2.0 sec  25.6 MBytes   107 Mbits/sec
[  8]  0.0- 2.0 sec  24.9 MBytes   104 Mbits/sec
[  5]  0.0- 2.0 sec  25.8 MBytes   108 Mbits/sec
[  3]  0.0- 2.0 sec  25.9 MBytes   109 Mbits/sec
[  6]  0.0- 2.0 sec  25.9 MBytes   109 Mbits/sec
[SUM]  0.0- 2.0 sec   178 MBytes   746 Mbits/sec
 
(output omitted for brevity on server & client)
 
[  7] 18.0-20.0 sec  28.2 MBytes   118 Mbits/sec
[  8] 18.0-20.0 sec  28.8 MBytes   121 Mbits/sec
[  5] 18.0-20.0 sec  28.0 MBytes   117 Mbits/sec
[  4] 18.0-20.0 sec  28.0 MBytes   117 Mbits/sec
[  3] 18.0-20.0 sec  28.9 MBytes   121 Mbits/sec
[  9] 18.0-20.0 sec  28.8 MBytes   121 Mbits/sec
[  6] 18.0-20.0 sec  28.9 MBytes   121 Mbits/sec
[SUM] 18.0-20.0 sec   200 MBytes   837 Mbits/sec
[SUM]  0.0-20.0 sec  1.93 GBytes   826 Mbits/sec 

Server Side:

#iperf -s -w 1024k -i 2 -p 8000
------------------------------------------------------------
Server listening on TCP port 8000
TCP window size: 1.00 MByte
------------------------------------------------------------
[  4] local 10.10.10.5 port 8000 connected with 10.10.10.10 port 53903
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0- 2.0 sec  25.7 MBytes   108 Mbits/sec
[  8]  0.0- 2.0 sec  24.9 MBytes   104 Mbits/sec
[  4]  0.0- 2.0 sec  24.9 MBytes   104 Mbits/sec
[  9]  0.0- 2.0 sec  24.9 MBytes   104 Mbits/sec
[ 10]  0.0- 2.0 sec  25.9 MBytes   108 Mbits/sec
[  7]  0.0- 2.0 sec  25.9 MBytes   109 Mbits/sec
[  6]  0.0- 2.0 sec  25.9 MBytes   109 Mbits/sec
[SUM]  0.0- 2.0 sec   178 MBytes   747 Mbits/sec
 
[  4] 18.0-20.0 sec  28.8 MBytes   121 Mbits/sec
[  5] 18.0-20.0 sec  28.3 MBytes   119 Mbits/sec
[  7] 18.0-20.0 sec  28.8 MBytes   121 Mbits/sec
[ 10] 18.0-20.0 sec  28.1 MBytes   118 Mbits/sec
[  9] 18.0-20.0 sec  28.0 MBytes   118 Mbits/sec
[  8] 18.0-20.0 sec  28.8 MBytes   121 Mbits/sec
[  6] 18.0-20.0 sec  29.0 MBytes   121 Mbits/sec
[SUM] 18.0-20.0 sec   200 MBytes   838 Mbits/sec
[SUM]  0.0-20.1 sec  1.93 GBytes   825 Mbits/sec

As you can see from the tests above, we were able to increase throughput from 29 Mb/s with a single stream and the default TCP window to roughly 825 Mb/s using a larger window and parallel streams. On a Gigabit link, this is about the maximum throughput one could hope to achieve before saturating the link and causing packet loss. The bottom line is that I was able to prove out the network and verify that bandwidth capacity was not an issue. From that conclusion, I could focus on tweaking TCP to get the most out of my network.

I'd like to point out that we will never get 100% out of any link. Typically, 90% utilization is about the real-world maximum anyone will achieve; push beyond that and you'll begin to saturate the link and incur packet loss. I should also point out that SoftLayer doesn't directly support iPerf, so it's up to you to install it and play around with it. It's such a versatile and easy-to-use little piece of software that it's become invaluable to me, and I think it will become invaluable to you as well!

-Andrew

November 29, 2011

SoftLayer Mobile v. 1.1 on Windows Phone: New Features

I was on a Caribbean cruise during the second week of November, and I kept telling myself that the first thing I needed to taste was a delicious mango. Even though I knew it was out of season, I still had hopes. I got the chance to indulge in that tropical fruit, and I couldn't help but think about a mango that gets tastier every day: the new Windows Phone OS 7.1, codenamed "Mango."

I'm not going to talk about Mango or its sensational new features, but I do want to share a few of the changes that we pushed out to the Windows Phone Marketplace as version 1.1 of SoftLayer Mobile. I could ramble for pages about all of the updates and our strategy for building out and improving the mobile platform, but I'll try to be brief and only share four of the biggest new features the team included in this release.

Verisign Authentication
The first update you'll notice when you fire up SoftLayer Mobile 1.1 on Windows Phone is the inclusion of VeriSign authentication. You can activate an additional layer of security by requiring that users confirm their identity with a trusted third-party tool before they get access to your account. In this case, the third-party vendor is VeriSign. Every customer looking to bake additional security into their account will appreciate this addition.

SoftLayer Mobile WP

VeriSign authentication in SoftLayer Mobile on WP7

Device-Based Bandwidth
The next big addition to this Windows Phone app release is the inclusion of device-based bandwidth for two billing cycles – your current cycle and the previous cycle. In v. 1.0 of SoftLayer Mobile, users were only able to see bandwidth data for the current billing cycle ... It's useful, but you don't have a frame of reference immediately available. This release provides that frame of reference. One of the coolest parts is the aesthetically pleasing presentation: our metro-style container, "pivot control." Just slide through and see your billing cycles in one long view!

SoftLayer Mobile WP

Billing cycle view along with a button to view graph for that cycle

Bandwidth Graphs
If you didn't notice from the picture, its caption or the heading of this section, the next big update is the inclusion of bandwidth graphs! The bandwidth graph page gives you a bird's eye view of your bandwidth activity for any selected billing cycle. You'll see the max "Inbound," "Outbound" and "Total" values. Those different marks are very useful if you're tracking which days your device uses the most bandwidth and when those surges subside. The application uses the built-in charting functionality that comes with Silverlight libraries. Since we're taking advantage of those goodies, you can bet it looks beautiful. No, it's not a bitmap image ... it's a real bandwidth chart. As with the other bandwidth update, the graphs are available for both the current and the previous billing cycle.

SoftLayer Mobile WP

Bandwidth chart for a previous billing cycle

Ticket Updates
The next addition to the family is a new way to visually distinguish unread ticket updates while viewing the ticket list page. The "toast" notification for the ticket list view flags unread ticket updates, and the ticket list shows the ticket's subject in bold text if that ticket has an "unread update" – meaning someone has added an update to that ticket that you haven't seen yet. This is very much Outlook-style and very native to Windows Phone.

SoftLayer Mobile WP

Toast notification along with Outlook-style unread ticket

What's Next?
With this release, we're not resting on our laurels, so what are we doing in our labs? Right now we're working on an OS migration to move our existing app from OS 7.0 to the new Mango-flavored Windows Phone 7 version I mentioned a little earlier. Now you see why I was so fixated on mangoes while I was on vacation. The migrated mango app will only be available to devices that are mango-licious (upgraded to 7.1).

Stay tuned, and you'll see some of the other new features we're working on very soon. If you have a Windows Phone, you need to download SoftLayer Mobile, rate it and give us your feedback!

-Imran

September 28, 2011

A Whole New World: SoftLayer on Windows Phone 7

As SLayers, our goal is always to bring creativity to every aspect of the work we do at SoftLayer. Not too long ago, the Interface Development team was presented with a new and exciting challenge: to develop a Windows Phone 7 app. Like me, many questioned whether we should tap into the Windows Phone OS market ... What was the scope of this OS? What is the future of Windows Phone smartphones? The business relationship NOKIA and Microsoft signed to produce smartphones running Windows Phone 7 will provide consumers with a new interface and unique features, so smartphone users are paying attention ... And we are too.

The SoftLayer Mobile world had already made huge strides with iPhone and Android based apps, so our work was cut out for us as we entered the Windows Phone 7 world. We put together a small, energetic and skilled group of SLayers who wanted to make SoftLayer proud, and I am proud to be a member of that team!

Our focus was to design and develop an application that would not only provide the portal functionality on a mobile phone but also incorporate the awesome features of Windows Phone 7. Keeping all of that in consideration, choosing an enterprise-quality framework was essential. After a lot of research, we put our finger on Microsoft's Patterns and Practices-backed Prism framework for Windows Phone 7. Prism is a well-known and recognized name among Silverlight and Windows Presentation Foundation (WPF) developers, and since Windows Phone 7 is built upon the Silverlight and XNA frameworks, our choice was clearly justified.

After selecting the framework, we wanted to make the whole asynchronous experience of talking to SoftLayer's mobile API smooth. That's where we met the cool kid on the block: Reactive Extensions for .NET (also known as Rx). Rx is a library for composing asynchronous and event-based programs. The learning curve was pretty intense for the team, but we operate under the mantra of CBNO (Challenging But Not Overwhelming), so it was learning we knew would bear fruit.

The team's plan was to create an app that had the most frequently used features from the portal. The features to be showcased in the first release were to be basic but at the same time essential. The features we pinpointed were ticket management, hardware management, bandwidth and account management. Bringing these features to the phone posed a challenge, though ... How do we add a little more spice to what could be a rather plain and basic app?

Windows Phone 7 controls came to our rescue, and we used the Pivot and Panorama controls to design the ticket list and ticket details views. The pivot control works like a tabbed control that you move through by sliding left or right. This lets us put the ticket categories in a single view so users don't have to navigate back and forth to see different types of tickets. It also provides context-menu-style navigation when you press and hold a ticket item, giving the option to view or edit the ticket with one tap. Here is a screenshot of the pivot control in use to view tickets by category and the device list:

Win7 Phone Screen

We made similar use of the panorama control, which works like one long page with different sections of related content. We used it to show a snapshot of a ticket: the view displays basic ticket details, updates, attachments and any hardware attached to the ticket. This makes editing a ticket as easy as a tap! This is a screenshot of the panorama control in use to view ticket detail:

Win7 Phone Screen

The device list view shows dedicated and virtual devices in a pivot control, giving a clear visual distinction between the two. The list can be searched by tapping the filter icon in the application bar. The filtering is search-as-you-type and can be turned off by tapping the icon again. This screenshot shows the device list with the filtering option:

Win7 Phone Screen

You can also use the hardware detail view to perform further hardware operations like pinging, rebooting and power cycling the server. The bandwidth view may not be as flashy, but it's a very useful representation of a server's bandwidth information. Charting is not available in this release but will arrive in upcoming releases.

If you own a Windows Phone 7 device, go ahead and download "SoftLayer Mobile" and send us feedback on which features you would like to see next and, most importantly, whether or not you love the app. We always have and always will strive for excellence, and we know there's always room to improve!

-Imran

February 15, 2011

Five Ways to Use Your VPN

One of the many perks of being a SoftLayer customer is having access to your own private network. Perhaps you started out with a server in Dallas, later expanded to Seattle, and are now considering a new box in Washington, D.C. for complete geographic diversity. No matter the distance or how many servers you have, the private network bridges the gaps between you, your servers, and SoftLayer's internal services by bringing all of these components together into a secure, integrated environment that can be accessed as conveniently as if you were sitting right in the data center.

As if our cutting-edge management portal and API weren't enough, SoftLayer offers complimentary VPN access to the private network. This often-underestimated feature allows you to integrate your SoftLayer private network into your personal or corporate LAN, making it possible to access your servers with the same security and flexibility that a local network can offer.

Let's look at a few of the many ways you can take advantage of your VPN connection:

1. Unmetered Bandwidth

Unlike the public network that connects your servers to the outside world, the traffic on your private network is unlimited. This allows you to transfer as much data as you wish from one server to another, as well as between your servers and SoftLayer's backup and network storage devices – all for free.

When you use the VPN service to tap into the private network from your home or office, you can download and upload as much data as you want without having to worry about incurring additional charges.
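As a concrete example (a sketch with made-up addresses and paths; 10.4.26.12 stands in for your server's private IP), once the VPN is connected you can pull a backup across the private network with an ordinary tool like rsync and never touch the public interface:

# copy over the private network, so no public bandwidth charges apply
rsync -avz root@10.4.26.12:/var/www/ /local/backups/www/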

2. Secure Data Transfer

Because your VPN connection is encrypted, all traffic between you and your private network is automatically secure — even when transferring data over unencrypted protocols like FTP.

3. Protect Sensitive Services

Even with strong passwords, leaving your databases and remote access services exposed to the outside world is asking for trouble. With SoftLayer, you don't have to take these risks. Simply configure sensitive services to only listen for connections from your private network, and use your secure VPN to access them.

If you run Linux or BSD, securing your SSH daemon is as easy as adding the line ListenAddress a.b.c.d to your /etc/ssh/sshd_config file (replace a.b.c.d with the IP address assigned to your private network interface).
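Here is a minimal sketch of that change (10.4.26.10 is a placeholder for your private IP, and the restart command varies by distribution). Make sure your VPN access is working first, or you could lock yourself out of SSH entirely:

# bind sshd to the private interface only, then restart the daemon
echo "ListenAddress 10.4.26.10" >> /etc/ssh/sshd_config
service sshd restart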

4. Lock Down Your Server in Case of Emergency

In the unfortunate event of a security breach or major software bug, SoftLayer allows you to virtually "pull the plug" on your server, effectively cutting off all communication with the outside world.

The difference with the competition? Because you have a private network, you can still access your server over the VPN to work on the problem – all with the peace of mind that your server is completely off-limits until you're ready to bring it back online.

5. Remote Management

SoftLayer's dedicated servers sport a neat Intelligent Platform Management Interface (IPMI) that takes remote management to a whole new level. From reboots to power supply control to serial console and keyboard-video-mouse (KVM) access, you can do anything yourself.

Using tools like SuperMicro's IPMIView, you can connect to your server's management interface over the VPN to perform a multitude of low-level management tasks, even when your server is otherwise unreachable. Has your server shut itself off? You can power it back on. Frozen system? Reboot from anywhere in the world. Major crash? Feeling adventurous? Mount a CD-ROM image and use the KVM interface to install a new operating system yourself.
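If you prefer a command line to a GUI, the same sort of power control can be scripted with ipmitool, a common open-source alternative (this is my own sketch rather than anything SoftLayer documents here; the management address and credentials are placeholders):

# query and cycle power over the VPN-reachable management address
ipmitool -I lanplus -H 10.4.26.20 -U ADMIN -P yourpassword chassis power status
ipmitool -I lanplus -H 10.4.26.20 -U ADMIN -P yourpassword chassis power cycle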

This list is just the beginning. Once you've gotten a taste of the infinite possibilities that come with having out-of-band access to your hosted environment, you'll never want to go back.

Now, go have some fun!

-Nick

October 27, 2010

Oh No CoLo, Go Go Godzilla (Apologies to Blue Oyster Cult)

Traditional colocation has certain advantages, and for some customers it makes a great deal of sense. At least it does at first blush. Take a look:

  • Colo is cheaper than doing it yourself as physical infrastructure costs are shared across a number of customers.
  • The hardware is yours, not the co-location company’s. This means you can scale in the manner you please versus what suits the business model of the co-location company. The potential downside is that this assumes you were smart enough to pre-buy the space to grow into…
  • The software is yours, too. You are not limited to the management suite provided by the co-location company. Use what you wish.
  • Colo centers are usually more robust than a typical business environment. They deploy more physical security in an environment that is designed to properly manage power (multiple generators on-site for example) and the risks associated with fire and other natural disasters.
  • Upgrade paths are determined by you versus the hosting provider.

But what about the cost side of the equation? What does that look like? It goes without saying that it is (usually) cheaper to use a provider like SoftLayer to host your gear, but by how much? We have built a relatively simple model to get at some of these answers.

Assumptions:

  • A mix of 75 small servers (Xeon 5503, 2 GB RAM, 250 GB SATA) and 75 large servers (Xeon 5520, 3 GB RAM, 250 GB SATA)
  • Colo pricing was based on $100 per U per month, or $2,500 per 40U rack per month. Colo capex assumed the same base configuration but at current market prices.
  • We assumed a $199 price point for SoftLayer’s small servers and $359 for large servers
  • Bandwidth consumption of 2500 GB per server per month (this is about 50% of what we see in house), priced at $50 per Mbps (see the rough conversion sketch just after this list)
  • A refresh schedule of 50% at 36 months, 25% at 48 months and 25% at 60 months
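To make the bandwidth assumption a little more concrete, here's a rough average-rate conversion (a simplification that ignores 95th-percentile billing and peak-to-average ratios):

# 2,500 GB/month spread evenly over a 30-day month, per server and across all 150 servers
awk 'BEGIN { gb=2500; secs=30*24*3600; mbps=gb*8*1000/secs; printf "per server: %.1f Mbps, 150 servers: %.0f Mbps\n", mbps, mbps*150 }'
# per server: 7.7 Mbps, 150 servers: 1157 Mbps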

So what do the numbers tell us? Well, I think they paint a pretty compelling picture for SoftLayer. The 60-month Total Cash Outlay (TCO) for colocation is 131% of the SoftLayer cost.

Total Cash Outlay

                                            Colocation     SoftLayer
Initial Capital Expenditure (Cash Outlay)   $341,700       $0
Monthly Recurring Charges                   $64,778        $60,450
60-Month TCO                                $4,740,917     $3,627,000

In addition to the total cash outlay, we can add in a bunch of additional “hassle costs” – the hassle of driving to the DC in the middle of the night for an emergency, the hassle of doing your own software patching, setting up your own monitoring, waiting on hardware delivery (and you are not going to be first in line given your volumes are likely to be low compared to SoftLayer), the hassle of booking assets to the balance sheet, depreciation entries, salvage accounting entries, actual equipment disposal, downtime while you perform upgrades – ugh, the list is almost endless.

The argument for a SoftLayer solution is pretty strong based on the numbers alone, and I think they ought to be persuasive enough for most to rethink a colocation decision. That said, colocation decisions are not made from a cost perspective alone.

For example:

  • Issues around data integrity and security often drive companies to adopt a corporate philosophy that dictates colocation (or an on-premises solution) over an outsourced solution. There is a deemed corporate need to have data and applications running on their own iron. Indeed, for many, colocation represents a significant and progressive decision.
  • Many companies have infrastructure in place, and a decision will not be made to veer from the current solution until a technology refresh is in order. Never mind the fact that a transition to an outsourced solution (and this is the case when lots of things are outsourced, not just infrastructure) can generate significant internal anxiety.

Many outsourcing adoption models seem to show a similar trend. To a degree much of this becomes a market evolution consideration.

  1. Adoption is very slow to start. Companies do not understand the new model and as a result do not trust vendor promises of cost savings and service delivery. To be fair to customers, service delivery for many solutions is poor at the beginning and cost savings often disappear as a result.
  2. The vendor population responds to initial concerns regarding service delivery and perceptions around cost savings. Innovation drives significant improvements from a product and service delivery perspective. The solution now seems more viable and adoption picks up.
  3. For some services (payroll is a good example), the cost savings of outsourcing the solution are realized across the marketplace with excellent service delivery and support being commonplace. We are close to mass market adoption, but some companies will opt to keep things in house regardless.

So where are we on the evolutionary curve? That is a difficult question to answer as there are numerous things to consider dependent upon where you want to look.

For most SMBs, outsourcing functions like HR/Payroll or their IT infrastructure is a no brainer – capital is not as readily available and existing staff is likely overburdened making sure everything else works. At the end of the day, the desire is to focus on running their business, not the technology that enables it. The decision is relatively easy to make.

As we go further up the food chain, the decision matrix gets infinitely more complex, driven by an increase in geographic reach (local – national – international), an increase in the complexity of requirements, an increase in the number (and complexity) of systems being used, and a typically large IT organization that can be a terrific driving (or drowning?) force in the organization. The end result is that decisions to outsource anything are not easy to reach. Outsourcing occurs in pockets, and SoftLayer certainly sees some of this where enterprise customers use us for a few things versus everything.

At the end of the day, the hosting market will continue to be multifaceted. All businesses are not alike and different needs (real or otherwise) will drive different business decisions. While I believe colocation will remain a viable solution, I believe that it will be less important in the future. The advantages presented by companies like SoftLayer only get more powerful over time, and we are going to be ready.

-Mike

September 23, 2010

Movies are Becoming Like Books

One thing that I’ve noticed about our customer behavior at SoftLayer is that as these Internet-centric businesses grow and they add more servers, their bandwidth usage per server also grows. A lot. Why? Their customers are using more bandwidth. I’ll wager that this trend is not unique to SoftLayer customers, but it’s something that’s happening across the board.

Here’s how I’ve been contributing to this end user bandwidth demand. Back in June, I ordered an iPad. Since I was already a Netflix customer, I downloaded their free iPad app. I found that the instant movie streaming is awesome. Every few days now, I look at what’s been newly released for instant streaming, put it in list view and sort by star rating high to low. It’s not only new movies but also old movies just newly set up for instant streaming. Then I pick something I’ve never seen and start watching.

What I really like is that I don’t have to budget the time to watch the whole movie. With my iPad, I can catch 15 minutes here, 20 minutes there, and watch the movie at my leisure over two or three days. Netflix restarts the movie where I left off when I open the app again.

This makes watching a movie much like reading a book. You can mark the spot you left off and pick it up again when you get a chance. You can stop the movie and back up to a particular time stamp to review a plot twist that you didn’t fully understand, or see that action sequence once again, just like my DVR at home. I’m currently working through “Eight Men Out” in this way.

So if I run to the car wash (which provides free wifi) and I know I’ll be waiting 15-20 minutes, I can grab my iPad and I have a choice of reading a book, watching a movie, playing games, or even getting some work done. If I go the movie route, I’m helping to increase the demand for bandwidth.

I’d actually like Netflix to let watching a movie become even more like reading a book. Like allowing “highlighting” to mark a beginning and ending timestamp to a clip you can save for future use. Or the ability to save notes at a particular timestamp. Or even better – allow you to do vocal commentary on a separate audio track. There are a couple of clips in “Eight Men Out” that I’d like to save for future use.

So, get to work on all that, Netflix. :-)

April 14, 2010

The “Truth” (Or Common Sentiments) of Data Center and IT Professionals

A recent column at searchdatacenter.com presented a list of 20 universal truths of the data center. It's a pretty funny list, but as an outsourced, on-demand data center services provider, we are often catering to the IT operator's mentality that resides in these truths. A good subset of our customers fall into many of these statements, and we are continuously working to address, help and augment, with the idea of completing the IT story rather than competing with the IT strategy and needs of our customers. Below, I pulled out a few of the "truths" listed and added SoftLayer's view of each.

#2 – Upgrading hardware is cheaper than improving software – In the SoftLayer world, our services cater to this theory as a baseline for our offerings. We constantly allow customers to "right-size" their compute needs, and we are able to do this because of our robust compute offering and the flexible structure embedded in our business model.

#9 – Bandwidth is the same as energy. As more is provided, more is used – We have seen bandwidth usage grow almost threefold over the last four years, a result of internet applications demanding more bandwidth for things like video and voice. Also, linear pricing models make bandwidth less of an unknown and move usage toward a very predictable model.

#14 – It is always costlier and more time-consuming to wait and fix it later – Because we can quickly assess problems through metrics and functionality reviews, we fully subscribe to the idea that if it's broke, you fix it quickly and remove the legacy of the deficiencies. We are all human and will make errors and mistakes; being forthright enough to recognize and repair them will continue to assure your customers, employers and employees that you have a handle on your business. Have you seen Lance Crosby's printer stand?

#15 – By the time the CEO has learned enough to ask about a technology, it's no longer a strategic advantage – My favorite. And have you met Lance Crosby?

#16 – Exactly what you want will cost you more than you budget – In the spirit of full disclosure, our CFO, Mike Jones, takes the numbers we budget for purchases and adds an "actual factor" of 20-30%!

The full list of 20 is well worth the quick read, and I imagine that, as I did the first time I read it, many of you will feel like you could have written it yourself. IT and data centers are tough. The goal for all of us is to increase efficiency, reduce costs and ensure that we spend more time moving forward and progressing rather than spending the bulk of our time fixing the past!

January 20, 2009

Hope and Change

Hope and Change (oh, and make that change quick and it better be robust)

Remember when the internet used to be about bulletin boards, e-mail and other random tasks like keeping up with CNN, ESPN or whatever news outlet you fancy? It wasn't that long ago, but after some time in the internet industry, I have to tell you that I was amazed today by a real-life representation of the evolution not just of the internet, but of communications as we know them.

As I write this, it’s 4:00pm CST on January 20, 2009. The significance of this day will be marked in history by the inauguration of the 44th president, Barack Obama. Love him, hate him, whatever your position is, you cannot deny the sheer volume of intrigue as we enter into this presidency and its influence on the next 4 or 8 years, depending on how history plays itself out.

This volume of intrigue has officially impacted the internet in a manner not seen prior to today, but in a manner that is likely to be seen more and more as technology continues to progress. At SoftLayer HQ, we have a U-shaped office that spans two sides of a corporate office building, with the glass walls of the exterior forming the outer barrier, while the interior barriers are your typical sheetrock, egg-white-colored walls. In between the glass and the sheetrock lie some 60-100 cubicles. As I walked from conference room to conference room, I could easily see streaming video of the inauguration on dozens of our employees' computers. Some used the really cool CNN/Facebook stream, some used the MSNBC stream, some used others, but you get the idea. The fact that live streaming video of monumental events plays on a screen while the tasks at hand are being completed is something that old movies portrayed as beyond belief. It's really impressive what technologies are at our fingertips and how we can use them in our daily lives.

SoftLayer had the opportunity to experience a real-life "so what does this mean for the internet going forward" example today. Recently, we were approached by a large-scale content delivery firm that had been contracted to do live streaming of the inauguration. With a simple introduction, we indicated that we were well prepared to provide the turnkey infrastructure to accomplish their task. Without going into great detail, the infrastructure included 200+ servers, multiple load balancers, firewalls and other ancillary devices. Thanks to the on-demand nature of our business, we were able to get the infrastructure functional within a four-hour period. Even though we stated as much to the customer, they had their reservations, but true to our quoted deployment times, we met their expectations with flying colors.

So, the real test: performance! While streaming what is likely to be one of the biggest, most-watched events on the internet, SoftLayer pushed sustained bandwidth more than an additional 30 Gbps over and above our usual sustained IP traffic levels. Utilizing the 200+ Gbps of capacity throughout our network, we were in the fortunate position of having the capacity and the infrastructure in place to support such a large event. I am sure the cellular firms wish they had prepped for better capacity in terms of spikes in usage. With many hearts racing throughout the office, especially in the network department as the bandwidth graphs climbed, all of us here at SoftLayer are excited that we were part of the day's events. The many, many meetings that involved robust network discussions, capacity planning, future growth models and so on were all validated today by this event. The "we'll never use that much" and "that's overkill" discussions have all been put to rest. By deploying 40 Gbps to each rack and building upstream capacity that serves not as a constraint but as a planning and growth tool, we are extremely excited about what the future holds for online communications. We are looking forward to the next generation of internet technology as it becomes more and more robust. Our mantra remains firm as the leader in next-generation virtualized data center services, and we look forward to realizing the things that movies portray as beyond belief.
