Posts Tagged 'NOC'

July 5, 2012

Bandwidth Utilization: Managing a Global Network

SoftLayer has over 1,750 Gbit/s of network capacity. In each of our data centers and points of presence, we have an extensive library of peering relationships and multiple 10 Gbit/s connections to independent Tier 1 carriers. We operate one of the fastest, most reliable networks on the planet, and our customers love it.

From a network operations standpoint, that means we have our work cut out for us to keep everything running smoothly while continuing to build the network to accommodate a steady increase in customer demand. It might be easier to rest on our laurels and simply maintain what we already have in place, but when you look at the trend of bandwidth usage over the past 18 months, you'll see why we need to be proactive about expanding our network:

Long Term Bandwidth Usage Trend

The purple line above plots the 95th percentile of weekly outbound bandwidth utilization on the SoftLayer network, and the red line shows the linear trend of that consumption over time. From week to week, the total usage appears relatively consistent, growing at a steady rate, but when you look a little deeper, you get a better picture of how dynamic our network actually is:

SoftLayer Weekly Bandwidth Usage

The animated gif above shows the 2-hour average of bandwidth usage on our entire network over a seven-week period (times in CDT). As you can see, on a day-to-day basis, consumption fluctuates pretty significantly. The NOC (Network Operations Center) needs to be able to accommodate every spike of usage at any time of day, and our network engineering and strategy teams have to stay ahead of the game when it comes to planning points of presence and increasing bandwidth capacity to accommodate our customers' ever-expanding needs.

But wait. There's more.

Let's go one level deeper and look at a graph of the 95th percentile bandwidth usage, sampled at 5-minute intervals, over one week in a single data center:

Long Term Bandwidth Usage Trend

The variations in usage are even more dramatic. Because we have thirteen data centers geographically dispersed around the world with an international customer base, the variations you see in total bandwidth utilization understate the complexity of our network's bandwidth usage. Customers targeting the Asian market might host content in SNG01, and the peaks in bandwidth consumption from Singapore will counterbalance the valleys of consumption at the same time in the United States and Europe.

With that in mind, here's a challenge for you: Looking at the graph above, if the times listed are in CDT, which data center do you think that data came from?

It would be interesting to look at weekly usage trends, how those trends are changing and what those trends tell us about our customer base, but that assessment would probably be "information overload" in this post, so I'll save that for another day.

-Dani

P.S. If you came to this post expecting to see "a big truck" or "a series of tubes," I'm sorry I let you down.

December 29, 2011

Using iPerf to Troubleshoot Speed/Throughput Issues

Two of the most common network characteristics we look at when investigating network-related concerns in the NOC are speed and throughput. You may have experienced the following scenario yourself: You just provisioned a new bad-boy server with a gigabit connection in a data center on the opposite side of the globe. You begin to upload your data, and to your shock, you see "Time Remaining: 10 Hours." "What's wrong with the network?" you wonder. "The traceroute and MTR look fine, but where's the performance and bandwidth I'm paying for?"

This issue is all too common, and it has nothing to do with the network. In fact, the culprits are none other than TCP and the laws of physics.

In data transmission, TCP sends a certain amount of data and then pauses. To ensure proper delivery of that data, it doesn't send more until it receives an acknowledgement from the remote host that everything was received. This is called the "TCP window." Data travels at the speed of light, and typically most hosts are fairly close together, so this "windowing" happens so fast we don't even notice it. But as the distance between two hosts increases, the speed of light remains constant, which means the farther apart the two hosts are, the longer the sender waits for each acknowledgement and the lower the overall throughput. This relationship is quantified by the "Bandwidth Delay Product," or BDP: the amount of data that has to be in flight to keep a long link full.
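
To put rough, purely illustrative numbers on that (these are assumed values, not measurements from the tests below): with a 64 KB window and an 80 ms round-trip time between distant hosts, the sender can have at most 64 KB in flight per round trip, so the best case is roughly:

64 KB / 0.080 s ≈ 800 KB/s ≈ 6.5 Mbit/s

Conversely, keeping a 1 Gbit/s link busy across that same 80 ms round trip would require about 1 Gbit/s x 0.080 s = 80 Mbit ≈ 10 MB in flight at all times, far more than most default settings allow.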

We can overcome BDP to some degree by sending more data at a time. We do this by adjusting the "TCP window" – telling TCP to send more data per flow than the default parameters allow. Each OS is different and the default values will vary, but almost all operating systems allow tweaking of the TCP stack and/or using parallel data streams. So what is iPerf, and how does it fit into all of this?
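
Before answering that, here is a hedged illustration of what the OS-level tweaking mentioned above can look like. The lines below use standard Linux sysctls for raising the kernel's socket buffer ceilings; sensible values depend on your kernel and workload, so treat this as a sketch rather than a recommendation:

sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"

The last two take "min default max" triplets in bytes, and the 16 MB ceiling here is purely illustrative. iPerf is the tool that lets you measure whether that kind of tuning actually pays off.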

What is iPerf?

iPerf is a simple, open-source, command-line network diagnostic tool that runs on Linux, BSD, or Windows and that you install on two endpoints. One side runs in 'server' mode, listening for requests; the other runs in 'client' mode, sending data. When activated, it tries to send as much data down your pipe as it can, spitting out transfer statistics as it does. What's so cool about iPerf is that you can test any number of TCP window settings in real time, even using parallel streams. There's even a Java-based GUI that runs on top of it called JPerf (JPerf is beyond the scope of this article, but I recommend looking into it). What's even cooler is that because iPerf runs entirely in memory, there are no files to clean up.

How do I use iPerf?

iPerf can be downloaded quickly from SourceForge and installed in minutes. It uses port 5001 by default, and the bandwidth it displays is from the client to the server. Each test runs for 10 seconds by default, but virtually every setting is adjustable. Once installed, simply bring up the command line on both of the hosts and run these commands.
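
If you'd rather not grab it from SourceForge, many Linux distributions also package iPerf; assuming your distribution's repositories carry it, a one-line install works too:

On Debian or Ubuntu:
apt-get install iperf

On CentOS or Red Hat (this may require the EPEL repository):
yum install iperf

Either way, once it's installed, the commands that follow are the same.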

On the server side:
iperf -s

On the client side:
iperf -c [server_ip]

The output on the client side will look like this:

#iperf -c 10.10.10.5
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.10.10.10 port 46956 connected with 10.10.10.5 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  10.0 MBytes  8.39 Mbits/sec

There are a lot of things we can do to make this output better, with more meaningful data. For example, let's say we want the test to run for 20 seconds instead of 10 (-t 20), we want to display transfer data every 2 seconds instead of the default of 10 (-i 2), and we want to test on port 8000 instead of 5001 (-p 8000). For the purposes of this exercise, let's use those customizations as our baseline. This is what the command string would look like on both ends:

Client Side:

#iperf -c 10.10.10.5 -p 8000 -t 20 -i 2
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 8000
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.10.10.10 port 46956 connected with 10.10.10.5 port 8000
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 2.0 sec  6.00 MBytes  25.2 Mbits/sec
[  3]  2.0- 4.0 sec  7.12 MBytes  29.9 Mbits/sec
[  3]  4.0- 6.0 sec  7.00 MBytes  29.4 Mbits/sec
[  3]  6.0- 8.0 sec  7.12 MBytes  29.9 Mbits/sec
[  3]  8.0-10.0 sec  7.25 MBytes  30.4 Mbits/sec
[  3] 10.0-12.0 sec  7.00 MBytes  29.4 Mbits/sec
[  3] 12.0-14.0 sec  7.12 MBytes  29.9 Mbits/sec
[  3] 14.0-16.0 sec  7.25 MBytes  30.4 Mbits/sec
[  3] 16.0-18.0 sec  6.88 MBytes  28.8 Mbits/sec
[  3] 18.0-20.0 sec  7.25 MBytes  30.4 Mbits/sec
[  3]  0.0-20.0 sec  70.1 MBytes  29.4 Mbits/sec

Server Side:

#iperf -s -p 8000 -i 2
------------------------------------------------------------
Server listening on TCP port 8000
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[  4] local 10.10.10.5 port 8000 connected with 10.10.10.10 port 58316
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 2.0 sec  6.05 MBytes  25.4 Mbits/sec
[  4]  2.0- 4.0 sec  7.19 MBytes  30.1 Mbits/sec
[  4]  4.0- 6.0 sec  6.94 MBytes  29.1 Mbits/sec
[  4]  6.0- 8.0 sec  7.19 MBytes  30.2 Mbits/sec
[  4]  8.0-10.0 sec  7.19 MBytes  30.1 Mbits/sec
[  4] 10.0-12.0 sec  6.95 MBytes  29.1 Mbits/sec
[  4] 12.0-14.0 sec  7.19 MBytes  30.2 Mbits/sec
[  4] 14.0-16.0 sec  7.19 MBytes  30.2 Mbits/sec
[  4] 16.0-18.0 sec  6.95 MBytes  29.1 Mbits/sec
[  4] 18.0-20.0 sec  7.19 MBytes  30.1 Mbits/sec
[  4]  0.0-20.0 sec  70.1 MBytes  29.4 Mbits/sec

There are many, many other parameters you can set that are beyond the scope of this article, but for our purposes, the main use is to prove out our bandwidth. This is where we'll use the TCP window options and parallel streams. To set a new TCP window you use the -w switch and you can set the parallel streams by using -P.

Increased TCP window commands:

Server side:
#iperf -s -w 1024k -i 2

Client side:
#iperf -i 2 -t 20 -c 10.10.10.5 -w 1024k

And here are the iperf results from two Softlayer file servers – one in Washington, D.C., acting as Client, the other in Seattle acting as Server:

Client Side:

# iperf -i 2 -t 20 -c 10.10.10.5 -p 8000 -w 1024k
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 8000
TCP window size: 1.00 MByte (WARNING: requested 1.00 MByte)
------------------------------------------------------------
[  3] local 10.10.10.10 port 53903 connected with 10.10.10.5 port 8000
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 2.0 sec  25.9 MBytes   109 Mbits/sec
[  3]  2.0- 4.0 sec  28.5 MBytes   120 Mbits/sec
[  3]  4.0- 6.0 sec  28.4 MBytes   119 Mbits/sec
[  3]  6.0- 8.0 sec  28.9 MBytes   121 Mbits/sec
[  3]  8.0-10.0 sec  28.0 MBytes   117 Mbits/sec
[  3] 10.0-12.0 sec  29.0 MBytes   122 Mbits/sec
[  3] 12.0-14.0 sec  28.0 MBytes   117 Mbits/sec
[  3] 14.0-16.0 sec  29.0 MBytes   122 Mbits/sec
[  3] 16.0-18.0 sec  27.9 MBytes   117 Mbits/sec
[  3] 18.0-20.0 sec  29.0 MBytes   122 Mbits/sec
[  3]  0.0-20.0 sec   283 MBytes   118 Mbits/sec

Server Side:

#iperf -s -w 1024k -i 2 -p 8000
------------------------------------------------------------
Server listening on TCP port 8000
TCP window size: 1.00 MByte
------------------------------------------------------------
[  4] local 10.10.10.5 port 8000 connected with 10.10.10.10 port 53903
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 2.0 sec  25.9 MBytes   109 Mbits/sec
[  4]  2.0- 4.0 sec  28.6 MBytes   120 Mbits/sec
[  4]  4.0- 6.0 sec  28.3 MBytes   119 Mbits/sec
[  4]  6.0- 8.0 sec  28.9 MBytes   121 Mbits/sec
[  4]  8.0-10.0 sec  28.0 MBytes   117 Mbits/sec
[  4] 10.0-12.0 sec  29.0 MBytes   121 Mbits/sec
[  4] 12.0-14.0 sec  28.0 MBytes   117 Mbits/sec
[  4] 14.0-16.0 sec  29.0 MBytes   122 Mbits/sec
[  4] 16.0-18.0 sec  28.0 MBytes   117 Mbits/sec
[  4] 18.0-20.0 sec  29.0 MBytes   121 Mbits/sec
[  4]  0.0-20.0 sec   283 MBytes   118 Mbits/sec

We can see here that by increasing the TCP window from the default value to 1MB (1024k), we achieved roughly a fourfold increase in throughput over our baseline. Unfortunately, this is the limit of this OS in terms of window size. So what more can we do? Parallel streams! With multiple simultaneous streams we can fill the pipe close to its maximum usable amount.

Parallel Stream Command:
#iperf -i 2 -t 20 -c 10.10.10.5 -p 8000 -w 1024k -P 7

Client Side:

#iperf -i 2 -t 20 -c 10.10.10.5 -p 8000 -w 1024k -P 7
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 8000
TCP window size: 1.00 MByte (WARNING: requested 1.00 MByte)
------------------------------------------------------------
[ ID] Interval       Transfer     Bandwidth
[  9]  0.0- 2.0 sec  24.9 MBytes   104 Mbits/sec
[  4]  0.0- 2.0 sec  24.9 MBytes   104 Mbits/sec
[  7]  0.0- 2.0 sec  25.6 MBytes   107 Mbits/sec
[  8]  0.0- 2.0 sec  24.9 MBytes   104 Mbits/sec
[  5]  0.0- 2.0 sec  25.8 MBytes   108 Mbits/sec
[  3]  0.0- 2.0 sec  25.9 MBytes   109 Mbits/sec
[  6]  0.0- 2.0 sec  25.9 MBytes   109 Mbits/sec
[SUM]  0.0- 2.0 sec   178 MBytes   746 Mbits/sec
 
(output omitted for brevity on server & client)
 
[  7] 18.0-20.0 sec  28.2 MBytes   118 Mbits/sec
[  8] 18.0-20.0 sec  28.8 MBytes   121 Mbits/sec
[  5] 18.0-20.0 sec  28.0 MBytes   117 Mbits/sec
[  4] 18.0-20.0 sec  28.0 MBytes   117 Mbits/sec
[  3] 18.0-20.0 sec  28.9 MBytes   121 Mbits/sec
[  9] 18.0-20.0 sec  28.8 MBytes   121 Mbits/sec
[  6] 18.0-20.0 sec  28.9 MBytes   121 Mbits/sec
[SUM] 18.0-20.0 sec   200 MBytes   837 Mbits/sec
[SUM]  0.0-20.0 sec  1.93 GBytes   826 Mbits/sec 

Server Side:

#iperf -s -w 1024k -i 2 -p 8000
------------------------------------------------------------
Server listening on TCP port 8000
TCP window size: 1.00 MByte
------------------------------------------------------------
[  4] local 10.10.10.5 port 8000 connected with 10.10.10.10 port 53903
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0- 2.0 sec  25.7 MBytes   108 Mbits/sec
[  8]  0.0- 2.0 sec  24.9 MBytes   104 Mbits/sec
[  4]  0.0- 2.0 sec  24.9 MBytes   104 Mbits/sec
[  9]  0.0- 2.0 sec  24.9 MBytes   104 Mbits/sec
[ 10]  0.0- 2.0 sec  25.9 MBytes   108 Mbits/sec
[  7]  0.0- 2.0 sec  25.9 MBytes   109 Mbits/sec
[  6]  0.0- 2.0 sec  25.9 MBytes   109 Mbits/sec
[SUM]  0.0- 2.0 sec   178 MBytes   747 Mbits/sec
 
[  4] 18.0-20.0 sec  28.8 MBytes   121 Mbits/sec
[  5] 18.0-20.0 sec  28.3 MBytes   119 Mbits/sec
[  7] 18.0-20.0 sec  28.8 MBytes   121 Mbits/sec
[ 10] 18.0-20.0 sec  28.1 MBytes   118 Mbits/sec
[  9] 18.0-20.0 sec  28.0 MBytes   118 Mbits/sec
[  8] 18.0-20.0 sec  28.8 MBytes   121 Mbits/sec
[  6] 18.0-20.0 sec  29.0 MBytes   121 Mbits/sec
[SUM] 18.0-20.0 sec   200 MBytes   838 Mbits/sec
[SUM]  0.0-20.1 sec  1.93 GBytes   825 Mbits/sec

As you can see from the tests above, we were able to increase throughput from 29 Mb/s with a single stream and the default TCP window to roughly 825 Mb/s using a larger window and parallel streams. On a Gigabit link, this is about the maximum throughput one could hope to achieve before saturating the link and causing packet loss. The bottom line is, I was able to prove out the network and verify that bandwidth capacity was not an issue. From that conclusion, I could focus on tweaking TCP to get the most out of my network.

I'd like to point out that we will never get 100% out of any link. Typically, 90% utilization is about the real-world maximum anyone will achieve. Push any harder and you'll begin to saturate the link and incur packet loss. I should also point out that SoftLayer doesn't directly support iPerf, so it's up to you to install it and play around with it. It's such a versatile and easy-to-use little piece of software that it's become invaluable to me, and I think it will become invaluable to you as well!

-Andrew

October 28, 2010

Settling In

One of the small thrills in life is settling into a new house. While moving can be stressful, once you get settled into your place, there’s a certain feeling of pride associated with the move. In the not-too-distant past, our staff moved over to the new corporate headquarters in Dallas. Granted, there’s the obvious unpacking and exploring of every nook and cranny. Once you get settled in, though, set up all your stuff, and explore every corner of the new place, you can finally hang your name on the mailbox and call it your own.

It’s a far cry from our previous space (equate it to moving from a decent apartment to a squeaky clean new house full of nifty bells and whistles). We’ve got a brand new A/C system (that works almost too well, in the opinion of some), a Sonic-style ice machine, and room for three new datacenter pods. We’ve got coffee makers in almost every department (what’s a large-scale data provider without caffeine?). We’ve got a nifty display in the NOC that gives us an at-a-glance idea of what’s going on within our network. And that’s just a few of the things. Ask anyone in our new “house” and they’ll tell you they like the new digs.

I’ve gotten fairly well settled in and am starting to fall into my daily routine at the new home. Admittedly, I got lost the first few days, but now I can navigate with a fairly reliable degree of certainty. I can locate the coffee machines blindfolded as well. I’m also enjoying the privilege of working so closely with our other departments now that we’re all housed in the same location. I’m certainly looking forward to seeing what the future has to offer in our new home.

-Matthew

February 1, 2010

Fuel!

Ask anyone here on our staff, and they’ll tell you a few things about their position:

  1. It’s never boring
  2. It can be quite demanding
  3. We’re never technically “off duty”

That being said, we all need our fuel to keep us going at warp speed. Luckily, here in the NOC we have a fully stocked break room with all sorts of odds and ends to keep us going when the energy levels get low. Allow me to share a few of my personal favorites:

  1. Chocolate Covered Raisins
    These little buggers are great when you’re running like mad and just need a quick snack. You can scoop up a good cupful and keep them at the desk for the remainder of your shift. You can take a little detour to grab a couple while en route to your destination. You can also trick yourself into thinking that they’re healthy since they have raisins in them.
  2. Doritos
    These have made a reliable meal substitute on multiple occasions. A few bags of these can trick your hunger pangs and quiet the ache for a while until you can grab an actual meal (not always a guarantee).
  3. Coffee
    Any fan of caffeine knows why I’m adding this. It’s often the first thing ingested at the start of the day, and is famous for its energy-inducing properties. Love it or hate it, you cannot deny the eye-opening effects of this one.
  4. Dr Pepper
    My second favorite carbonated beverage provided here. A quick drink and a quick pick me up.

And for my favorite:

RedBull!

Much like many of the techs here, I have a clinical addiction to caffeine. Caffeine is the lifeblood of the NOC and keeps us working at top speed and form. To date, I have found no quicker delivery of this than through the 8.2 ounce can of this elixir.

And there you have it. These are the snacks and beverages provided that keep me going. And while it’s no health food store, it certainly spikes the blood sugar or caffeine levels enough to sustain a happy and proficient technician through the long night hours.

November 25, 2009

The Secret Mind of a SoftLayer Tech

I sit right in the middle of the NOC (Network Operations Center) here at SoftLayer. I hear all the tech calls, project discussions, and random banter from the techs on a daily basis. Most techs are also propeller heads on their own time. They have servers of their own, apps they like to run, preferences as to what hardware and software they like best, etc. Now, having worked in this field for most of my life, I know that techs are not company loyal when it comes to their personal geeky funness (yes, that's a word; I don't care if spell check, Google and the rest of the world think otherwise, but I digress). They like whatever does the best job, regardless of where it comes from.

I routinely hear techs talking about their personal servers, apps, etc. and referring back to SoftLayer with comments like, "I just host it on my server here at SoftLayer so I don't have an issue" – with "the issue" being whatever the topic of conversation might have been: network speed and stability, hardware and software reliability, ease of access (KVM over IP, the portal in general, multiple remote control options), cost, an endless amount of add-ons, and the latest and greatest in everything!

I can relate.

I realized the potential of SoftLayer from the beginning, and this place continues to exceed my expectations – and my expectations are always over the top! Simply put, after working in the corporate world and realizing what could be done with the right people and the right attitudes, I vowed only to work with a company that shared those views. And quite honestly, I never thought I would see it happen. Then along came SoftLayer.

When techs constantly refer back to SoftLayer for their own fun computer projects as being the best solution, it just confirms what I already knew:

SoftLayer Rocks!

September 25, 2009

How a great NOC team is just like a great F1 Team.

Those of you who follow auto sports understand that it’s not just a sport of physical endurance and skill. Those two traits are definitely part of it, but a large part of a team’s performance in a race also comes down to the tools and devices the team interacts with to achieve their results. If the driver is as strong as an ox, skillful and able to endure 12 hours of in-the-seat driving, it still won’t guarantee him the race unless his car and crew are up to the task and able to perform at that same level. Likewise, if the driver is not up to the task but the car and team are, you will have a similar inability to achieve. This sets auto sports apart from many of the other team sports we have come to love over time, like football, baseball, soccer, etc. These sports all involve teammates; however, their reliance on tools and other devices to achieve their results is much less than in racing. Because of this, it is incredibly important that all members of a Formula One team, from the car designers to the pit crew and driver, be performing at 100% at all times. This is not entirely unlike how a great operations team works in a datacenter. All members of that team must be able to fulfill their role to the best of their ability and then some. An ops team that has the best hardware and tools along with the best technicians and knowledge is an unstoppable force, comparable to the Ferraris and Brawn GPs of F1.

A driver can only do so much with the equipment they are handed on race day. If Sebastian Vettel is given a car with a bad engine, for example, it makes his job much more difficult, if not impossible. The same goes for datacenter equipment. That is why SoftLayer prides itself on using high quality components from high quality manufacturers for all networking and server applications. Of course, being the best requires more than just high end equipment and tools. It requires people of an equal caliber. That is why SoftLayer goes above and beyond to ensure that their staff is well informed, capable and happy. This creates an environment where people not only want to personally succeed but also want to share in the successes and failures that the company experiences, much like any well developed team would. This also creates a feeling of investment by those who are on the team, which in turn pushes each member to do their absolute best at all times. SoftLayer’s recent involvement in large media events was a huge undertaking that the company turned into a success. Just as Ferrari is always pushing for the win and to be the best, so is SoftLayer.

Innovation is another competitive trait that you see often in F1. BrawnGP and Red Bull Racing (both relatively fledgling teams in Formula 1) took an alternate interpretation of the design guidelines this season which, after much ado, was found to be a perfectly legal interpretation that many of the other teams didn’t see or use. These innovations helped BrawnGP, a newcomer to the sport (technically they are the defunct Honda team, but that is for a different discussion), lead the championship standings this season and have handed them a number of victories. Here again, the kinds of innovation you see in the top tier of racing you also see with SoftLayer. No, we didn’t add wings to our servers, but our network-within-a-network topology and CloudLayer services are great examples of how SoftLayer is taking the old rule book and innovating new ideas, products and services utilizing a different yet valid interpretation. The success yielded from these experiences continues to motivate the SoftLayer team and is proof that marching to the beat of a new drum can, in many aspects of business and sport, be a good idea.

August 11, 2008

Knowledge is Power

A few years ago, I had a couple of managers who made quite an impression on me… each of them pushed me to learn as much as I could about my given profession. Each of them had a personal guideline that really stuck with me. One’s was to “learn two new things a day”, while the other’s was to “improve yourself at every opportunity”.

To this day, I still strive to learn as much as I can about the different facets of my profession. As time permits I enjoy asking my peers questions regarding the plethora of Operating Systems we use here at SoftLayer. Needless to say, there’s a limitless amount of knowledge here to learn.

Additionally, we have such resources as the local Wiki (er, SLiki – sorry Brad) where we can find almost any answer to any question we can fathom. Between the Wiki, the brain trust here at the NOC, and the wondrous internet, there’s no shortage of resources to get the answers to the questions that baffle me.

Lucky for you, the customer, we have our KnowledgeLayer, in which our team takes their knowledge and passes it on to you so that you, too, can benefit and quite possibly learn two new things a day.

Now, of course, I sit around and ponder: two things per day? Why would he have set his bar so low?

-Matthew

March 3, 2008

I'm NOC Gonna Get Sick!

**Cough, cough, sniffle, sniffle, hack hack**

These are the famous noises that come from the NOC every so often. I swore and swore that I wouldn't get sick. To be honest, there was something going around about four months ago, and I was just about the only one that didn't get sick, and I was King-of-the-NOC!

Not this time.

Emails were sent out -- "Clean your workstations -- wash your hands -- don't throw your used Kleenex tissues at other NOC personnel -- and for the love of God, don't get sick". Oops. So, one by one, each NOC technician started getting sick. One down, two down, three down…

Then it hit me.

You know how it starts; don't act dumb. It all starts with that sore throat that isn't that sore, but makes you wonder if you're getting sick, and everything ends up becoming a psychological battle of "do's and don'ts" to get better before you get any worse. It never works. You start feeling that sore throat, which gets worse as every hour goes by, as you start overdosing on Vitamin C drops/pills. Then you think, "I don't just need Vitamin C, right!?" So then you dig around the infamous SoftLayer NOC Pharmacy and start overdosing on off-brand multivitamins, Centrum wanna-be's**.

Things get worse.

So, by the end of the shift, your throat feels like it's on fire. You have to make a Wal-Mart run at midnight (depending on your shift), and you buy every little piece of medicine you think you might need to make life better while you are…Sick.

So, for a few days, you end up chugging cough syrup, feeding on Centrum wanna-be's, Vitamin C pills/drops, Halls Mentho-lyptus "Mountain Menthol" cough drops, Airborne Formula (more on this later)...and VITAMIN GUMBALLS!!

That's right, folks. We have vitamin gumballs, and they are GROSS! The pink one is okay, probably the best out of them all, but it still tastes like rubber. Ugh. Now, as for the Airborne Formula, I just don't trust it. I mean, people say it's GREAT; however, I need proof. I mean, come on, it was created by a second grade teacher. Was this teacher a doctor before he/she decided to actually teach kids multiplication? Think, people. Think.

Overall, most of us in the NOC got sick, including me. I'm just now getting over this, while I still fight off a tickle cough, but I’m sure this will never be the end. So folks, keep taking that Vitamin C and that Centrum wanna-be, and don’t get sick. I’m sure next time I’m NOC gonna get sick!

** Centrum Wanna-be is what I call Off-Brand (Equate) Multivitamin Tablets.

-Drew
