Posts Tagged 'Performance'

April 9, 2012

Scaling SoftLayer

SoftLayer is in the business of helping businesses scale. You need 1,000 cloud computing instances? We'll make sure our system can get them online in 10 minutes. You need to spin up some beefy dedicated servers loaded with dual 8-core Intel Xeon E5-2670 processors and high-capacity SSDs for a new application's I/O-intensive database? We'll get it online anywhere in the world in under four hours. Everywhere you look, you'll see examples of how we help our customers scale, but what you don't hear much about is how our operations team scales our infrastructure to ensure we can accommodate all of our customers' growth.

When we launch a new data center, there's usually a lot of fanfare. When AMS01 and SNG01 came online, we talked about the thousands of servers that are online and ready. We meet huge demand for servers on a daily basis, and that presents us with a challenge: What happens when the inventory of available servers starts dwindling?

Truck Day.

Truck Day is not limited to a single day of the year (or even a single day in a given month) ... It's what we call any date our operations team sets for delivery and installation of new hardware. We communicate to all of our teams about the next Truck Day in each location so SLayers from every department can join the operations team in unboxing and preparing servers/racks for installation. The operations team gets more hands to speed up the unloading process, and every employee has an opportunity to get first-hand experience in how our data centers operate.

If you want a refresher course about what happens on a Truck Day, you can reference Sam Fleitman's "Truck Day Operations" blog, and if you want a peek into what it looks like, you can watch Truck Day at SR02.DAL05. I don't mean to make this post all about Truck Day, but Truck Day is instrumental in demonstrating the way SoftLayer scales our own infrastructure.

Let's say we install 1,000 servers to officially launch a new pod. Because each pod has slots for 5,000 servers, we have space/capacity for 3,000-4,000 more servers in the server room, so as soon as more server hardware becomes available, we'll order it and start preparing for our next Truck Day to supplement the pod's inventory. You'd be surprised how quickly 1,000 servers can be ordered, and because it's not very easy to overnight a pallet of servers, we have to take into account lead time and shipping speeds ... To accommodate our customers' growth, we have to stay one step ahead in our own growth.

This morning in a meeting, I saw a pretty phenomenal bullet that got me thinking about this topic:

Truck Day — 4/3 (All Sites): 2,673 Servers

In nine different data center facilities around the world, more than 2,500 servers were delivered, unboxed, racked and brought online. Last week. In one day.

Now I know the operations team wasn't looking for any kind of recognition ... They were just reporting that everything went as planned. Given the fact that an accomplishment like that is "just another day at SoftLayer" for those guys, they definitely deserve recognition for the amazing work they do. We host some of the most popular platforms, games and applications on the Internet, and the DC-Ops team plays a huge role in scaling SoftLayer so our customers can scale themselves.

-@gkdog

January 26, 2012

Up Close and Personal: Intel Xeon E7-4850

Last year, we announced that we would be the first provider to offer the Intel E7-4800 series server. This bad boy has record-breaking compute power, tons of room for RAM and some pretty amazing performance numbers, and as of right now, it's one of the most powerful servers on the market.

Reading about the server and seeing it at the bottom of the "Quad Processor Multi-core Servers" list on our dedicated servers page is pretty interesting, but the real geeks want to see the nuts and bolts that make up such an amazing machine. I took a stroll down to the inventory room in our DAL05 data center in hopes that they had one of our E7-4850s available for a quick photo shoot to share with customers, and I was in luck.

The only way to truly admire a server is to put it through its paces in production, but getting to see a few pictures of the server might be a distant second.

Intel Xeon E7-4850

When you see the 2U face of the server in a rack, it's a little unassuming. You can load it up with six of our 3TB SATA hard drives for a total of 18TB of storage if you're looking for a ton of space, and if you're focused on phenomenal disk I/O to go along with your unbelievable compute power, you can opt for SSDs. If you still need more space, you can order a 4U version that fills ten drive bays!

Intel Xeon E7-4850

The real stars of the show when it comes to the E7-4850 server are nestled right underneath these heatsinks. Each of the four processors has TEN cores @ 2.00GHz, so in this single box, you have a total of forty cores! I'm not sure how Moore's Law is going to keep up if this is the next step to jump from.

Intel Xeon E7-4850

With the abundance of CPU power, you'll probably want an abundance of RAM. Not coincidentally, we can install up to 512GB of RAM in this baby. It's pretty unbelievable to read the specs available in the decked-out version of this server, and it's even crazier to think that our servers are going to get more and more powerful.

Intel Xeon E7-4850

With all of the processing power and RAM in this box, the case fans had to get a bit of an upgrade as well. To keep enough air circulating through the server, these three case fans pull air from the cold aisle in our data center, cool the running components and exhaust the air into the data center's "hot aisle."

Intel Xeon E7-4850

Because this machine could be used to find the last digit of pi or crunch numbers to find the cure for cancer, it's important to have redundancy ... In the picture above, you see the redundant power supplies that safeguard against a single point of failure when it comes to server power. In each of our data centers, we have N+1 power redundancy, so adding N+1 power redundancy into the server isn't very redundant at all ... It's almost expected!

If your next project requires a ton of processing power, a lot of room for RAM, and redundant power, this server is up for the challenge! Configure your own quad-proc ten-core beast of a machine in our shopping cart or contact our SLales team for a customized quote on one: sales@softlayer.com

When you get done benchmarking it against your old infrastructure, let us know what you think!

-Summer

January 17, 2012

Web Development - HTML5 - Web Fonts

All but gone are the days of plain, static webpages flowered with horrible repeating neon backgrounds and covered with nauseating animated GIFs created by amateur designers that would make your mother cry and induce seizures in your grandpa. Needless to say, we have come a long way since Al Gore first "created the intarwebs" in the early '90s. For those of you born in this century, that's the 1990s ... Yes, the World Wide Web is still very new. Luckily for the seven billion people on this lovely planet, many advancements have been introduced into our web browsers that make our lives as designers and developers just a little bit more tolerable.

Welcome to the third installment in our Web Development series. If you're just joining us, the first posts in the series covered JavaScript Optimization and HTML5 Custom Data Attributes ... If you haven't read those yet, take a few minutes to catch up and then head back to this blog, where we'll be looking at how custom web fonts can add a little spice to your already-fantastic website.

If you're like me, you've probably used the same three or four fonts on most sites you've designed in the past: Arial, Courier New, Trebuchet MS and Verdana. You know that pretty much all browsers will have support for these "core" fonts, so you never ventured beyond them because you wanted the experience to remain the same for everyone, no matter what browser a user was using to surf. If you were adventurous and wanted to throw in a little typographical deviation, you might have created a custom image of the text in whatever font Photoshop would allow, but those days are in the past (or at least they should be).

Why is using an image instead of plain text unfriendly?

  1. Lack of Flexibility - Creating an image is time-consuming. Even if you have really fast fingers and know your way around Photoshop, it will never be as fast as simply typing that text into your favorite editor. Also, you can't change the styles (font-size, color, text-decoration, etc.) of an image using CSS like you can with text.
  2. Lack of Accessibility – Not everyone is alike. Some of your readers or clients may have impairments that require screen readers or a really large font. Using an image – especially one that doesn't contain a good long description – prevents those users from getting the full experience. Also, some people use text-only browsers that don't display any images. Think about your whole audience!
  3. More to Download – Plain text doesn't require the same number of bytes as an image of that same text. By not having another image, you are saving on the amount of time it takes to load your page.

Now that we're on the same page about the downsides of the "old way" of doing things, let's look at some cool HTML5-powered methods for displaying custom fonts. Before we get started, we need to have some custom fonts to use. Google has a nice interface for downloading custom fonts (http://www.google.com/webfonts), and there are plenty of other sites that provide free and non-free fonts that can suit your taste/needs. You can pick and choose which ones you'd like to use (remembering to always follow copyright guidelines), and once you've created and downloaded your collection of fonts, you'll need to setup your CSS to read them.

For simplicity, my file structure will be setup with the HTML and CSS files in the same root directory. I will have a fonts directory where I will keep all my custom fonts.

/fonts.html
/fonts.css
/styles.css
/fonts/MyCustomFont/MyCustomFont-Regular.ttf
/fonts/MyCustomFont/MyCustomFont-Bold.ttf
/fonts/...

My fonts.html file will include the two CSS files in the head section. The order in which you include the CSS files does not matter.

<link rel="stylesheet" type="text/css" href="fonts.css" />
<link rel="stylesheet" type="text/css" href="styles.css" />

The fonts.css file will include the definitions for all of our custom fonts. The styles.css file will be our main CSS file for our website. Defining our custom fonts (in fonts.css) is really simple:

@font-face {
    font-family: 'MyCustomFont';
    src: url('fonts/MyCustomFont/MyCustomFont-Regular.ttf') format('truetype');
}

It's almost too easy thanks to HTML5!

Let's break this down into its components to better understand what's going on here. The @font-face declaration will be ignored by older browsers that don't understand it, so this standards-compliant definition degrades nicely. The font-family descriptor is the name that you'll use to reference this font family in your other CSS file(s). The src descriptor contains the location of where your font is stored and the format of the font.

There are several things to note here. The quotes around MyCustomFont in the font-family descriptor are optional. If it were My Custom Font instead (in fonts.css and styles.css), it would still be successfully read. The quotes around the url portion are also optional. However, the quotes around the format portion are not optional. To keep things consistent, I have a habit of adding quotes around all of these items.

An alternative way to define the same font would be to leave off the format portion of the src descriptor. Browsers don't need the format portion if it's a standard font format (described below).

@font-face {
    font-family: 'MyCustomFont';
    src: url('fonts/MyCustomFont/MyCustomFont-Regular.ttf');
}

Like standard url inclusions in other CSS definitions, the URL item is relative to the location of the definition file (fonts.css). The URL may also be an absolute location or point to a different website altogether. If using the Google web fonts site mentioned earlier (or similar site), you may simply point the URL to the location suggested instead of downloading the actual font.
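For example, an absolute URL would look like this (the host and path below are made up purely for illustration):

@font-face {
    font-family: 'MyCustomFont';
    src: url('http://fonts.example.com/MyCustomFont/MyCustomFont-Regular.ttf') format('truetype');
}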

If you've dealt with web fonts before, you may already be familiar with the multiple formats: WOFF (Web Open Font Format, .woff), TrueType (.ttf), OpenType (.ttf, .otf), Embedded Open Type (.eot) and SVG Font (.svg, .svgz). I won't go into great detail here about these, but if you're interested in learning more, Google and W3C are great resources.

It should be noted that all browsers are not alike (no shock there) and some may not render some font formats correctly or at all. You can get around this by including multiple src descriptors in your @font-face declaration to try and support all the browsers.

@font-face {
    font-family: 'MyCustomFont';
    src: url('fonts/MyCustomFont/MyCustomFont-Regular.eot'); /* Old IE */
    src: url('fonts/MyCustomFont/MyCustomFont-Regular.ttf'); /* Cool browsers */
}
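If you need wider coverage, a common pattern (often called the "bulletproof" syntax — this is a general sketch, not from the example above, and it assumes you've also exported .woff and .svg versions of the font) combines several formats into a single src with fallbacks:

@font-face {
    font-family: 'MyCustomFont';
    src: url('fonts/MyCustomFont/MyCustomFont-Regular.eot'); /* IE9 compatibility mode */
    src: url('fonts/MyCustomFont/MyCustomFont-Regular.eot?#iefix') format('embedded-opentype'), /* IE6-8 */
         url('fonts/MyCustomFont/MyCustomFont-Regular.woff') format('woff'), /* Modern browsers */
         url('fonts/MyCustomFont/MyCustomFont-Regular.ttf') format('truetype'), /* Safari, Android, iOS */
         url('fonts/MyCustomFont/MyCustomFont-Regular.svg#MyCustomFont') format('svg'); /* Legacy iOS */
}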

Now that we have our font definition setup, we have to include our new custom font in our styles.css. You've done this plenty of times:

h1, p {
    font-family: MyCustomFont, Arial;
}

There you go! If for some reason MyCustomFont is not understood, the browser will default to Arial. This degrades gracefully and is really simple to use. One thing to note is that even though your fonts.css file may define twenty custom fonts, only the fonts that are included and used in your styles.css file will be downloaded. This is very smart of the browser – it only downloads what it's going to use.

So now you have one more tool to add to your development box. As more users adopt newer, standards-compliant browsers, it's easier to give your site some spice without the headaches of creating unnecessary images. Go forth and impress your friends with your new web font knowledge!

Happy Coding!

-Philip

P.S. As a bonus, you can check out the in-line style declaration in the source of this post to see how "Happy Coding!" is coded to use the Monofett font family.

January 10, 2012

Web Development - HTML5 - Custom Data Attributes

I recently worked on a project that involved creating promotion codes for our clients. I wanted to make this tool as simple as possible to use and because this involved dealing with thousands of our products in dozens of categories with custom pricing for each of these products, I had to find a generic way to deal with client-side form validation. I didn't want to write custom JavaScript functions for each of the required inputs, so I decided to use custom data attributes.

Last month, we started a series focusing on web development tips and tricks with a post about JavaScript optimization. In this installment, we'll cover how to use HTML5 custom data attributes to assist you in validating forms.

Custom data attributes for elements are "[attributes] in no namespace whose name starts with the string 'data-', has at least one character after the hyphen, is XML-compatible, and contains no characters in the range U+0041 to U+005A (LATIN CAPITAL LETTER A to LATIN CAPITAL LETTER Z)." Thanks W3C. That definition is bookish, so let's break it down and look at some examples.

Valid:

<div data-name="Philip">Mr. Thompson is an okay guy.</div>
<a href="softlayer.com" data-company-name="SoftLayer" data-company-state="TX">SoftLayer</a>
<li data-color="blue">Smurfs</li>

Invalid:

// This attribute is not prefixed with 'data-'
    <h2 database-id="244">Food</h2>
 
// These 2 attributes contain capital letters in the attribute names
    <p data-firstName="Ashley" data-lastName="Thompson">...</p>
 
// This attribute does not have any valid characters following 'data-'
    <img src="/images/pizza.png" data-="Sausage" />

Now that you know what custom data attributes are, why would we use them? Custom attributes allow us to relate specific information to particular elements. This information is hidden to the end user, so we don't have to worry about the data cluttering screen space and we don't have to create separate hidden elements whose purpose is to hold custom data (which is just bad practice). This data can be used by a JavaScript programmer to many different ends. Among the most common use cases are to manipulate elements, provide custom styles (using CSS) and perform form validation. In this post, we'll focus on form validation.
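Before we dive in, here's a minimal, hypothetical example of reading one of the valid attributes above back out with Mootools (the framework used later in this post); jQuery's .data() or the native dataset API would work just as well:

// Grab every list item that carries a data-color attribute ...
$$('li[data-color]').each(function(item) {
    // ... and read the value back; for <li data-color="blue">Smurfs</li> this logs "blue"
    console.log(item.get('data-color'));
});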

Let's start out with a simple form with two radio inputs. Since I like food, our labels will be food items!

<input type="radio" name="food" value="pizza" data-sl-required="food" data-sl-show=".pizza" /> Pizza
<input type="radio" name="food" value="sandwich" data-sl-required="food" data-sl-show="#sandwich" /> Sandwich
<div class="hidden required" data-sl-error-food="1">A food item must be selected</div>

Here we have two standard radio inputs with some custom data attributes and a hidden div (using CSS) that contains our error message. The first input has a name of food and a value of pizza – these will be used on the server side once our form is submitted. There are two custom data attributes for this input: data-sl-required and data-sl-show. I am defining the data attribute data-sl-required to indicate that this radio button is required in the form and one of them must be selected to pass validation. Note that I have prefixed required with sl- to namespace my data attribute. required is generic and I don't want to have any conflicts with any other attributes – especially ones written for other projects! The value of data-sl-required is food, which I have tied to the div with the attribute data-sl-error-food (more on this later).

The second custom attribute is data-sl-show with a selector of .pizza. (To review selectors, jump back to the JavaScript optimization post.) This data attribute will be used to show a hidden div that contains the class pizza when the radio button is clicked. The sandwich radio input has the same data attributes but with slightly different values.

Now we can move on to our HTML with the hidden text inputs:

<div class="hidden pizza">
    Pizza type: <input type="text" name="pizza" data-sl-required="pizza" data-sl-depends="{&quot;type&quot;:&quot;radio&quot;,&quot;name&quot;:&quot;food&quot;,&quot;value&quot;:&quot;pizza&quot;}" />
    <div class="hidden required" data-sl-error-pizza="1">The pizza type must not be empty</div>
</div>
 
<div class="hidden" id="sandwich">
    Sandwich: <input type="text" name="sandwich" data-sl-required="sandwich" data-sl-depends="{&quot;type&quot;:&quot;radio&quot;,&quot;name&quot;:&quot;food&quot;,&quot;value&quot;:&quot;sandwich&quot;}" />
    <div class="hidden required" data-sl-error-sandwich="1">The sandwich must not be empty</div>
</div>

These are hidden divs that contain text inputs that will be used to input more data depending on the radio input selected. Notice that the first outer div has a class of pizza and the second outer div has an id of sandwich. These values tie back to the data-sl-show selectors for the radio inputs. These text inputs also contain the data-sl-required attribute like the previous radio inputs. The data-sl-depends data attribute is a fun attribute that contains a JSON-encoded object that's been run through PHP's htmlentities() function. For readability, the data-sl-depends values contain:

{"type":"radio","name":"food","value":"pizza"}
{"type":"radio","name":"food","value":"sandwich"}

This first attribute says that our text input “depends” on the radio input with the name food and value pizza to be visible and selected in order to be processed as required. You can just imagine the possibilities of combinations you can create to make very custom functionality with very generic JavaScript.

Now we can examine our JavaScript to make sense of all these custom data attributes. Note that I'll be using Mootools' syntax, but the same can fairly easily be accomplished using jQuery or your favorite JavaScript framework. I'm going to start by creating a DataForm class. This will be generic enough so that you can use it in multiple forms and it's not tied to this specific instance. Reuse is good! To have it fit our needs, we're going to pass some options when instantiating it.

new DataForm({
    ...,
    dataAttributes: {
        required: 'data-sl-required',
        errorPrefix: 'data-sl-error',
        depends: 'data-sl-depends',
        show: 'data-sl-show'
    }
});

As you can see, I'm using the data attribute names from our form as the options – this will allow you to create your own data attribute names depending on your uses. We first need to make our hidden divs visible whenever our radio buttons are clicked – we'll use our initData() method for that.

initData: function() {
    var attrs = this.options.dataAttributes,
        divs = [];
 
    $$('input[type=radio]['+attrs.show+']').each(function(input) {
        var div = $$(input.get(attrs.show));
        divs.push(div);
        input.addEvent('change', function(e) {
            divs.each(function(div) { div.hide(); });
            div.show();
        });
    });
}

This method grabs all the radio inputs with the show attribute (data-sl-show) and adds an onchange event to each of them. When a radio input is checked, it first hides all the divs, and then shows the div that's associated with that radio input.

Great! Now we have our text inputs showing and hiding depending on which radio button is selected. Now onto the actual validation. First, we'll make sure our required radio inputs are checked:

$$('input[type=radio]['+attrs.required+']:not(:checked)').each(function(input) {
    var name = input.get('name');
    checkError(name, isRadioWithNameSelected(name))
});

This grabs all the unchecked radio inputs with the required attribute (data-sl-required) and checks to see if any radio with that same name is selected using the function isRadioWithNameSelected(). The checkError() function will show or hide the error div (data-sl-error-*) depending if the radio is checked or not. Don't worry - you'll see how these functions are implemented in the JSFiddle below.
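If you'd rather not jump over to the JSFiddle just yet, hypothetical implementations of those two helpers might look something like this (the real versions may differ; this assumes the same Mootools setup used elsewhere in the post):

// Returns true if any radio input sharing this name is currently checked
function isRadioWithNameSelected(name) {
    return $$('input[type=radio][name="' + name + '"]:checked').length > 0;
}
 
// Shows the matching error div (e.g. [data-sl-error-food] for "food") when the
// check failed, and hides it when the check passed
function checkError(name, passed) {
    var errorDivs = $$('[data-sl-error-' + name + ']');
    if (passed) {
        errorDivs.hide();
    } else {
        errorDivs.show();
    }
}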

Now we can check our text inputs:

$$('input[type=text]['+attrs.required+']:visible').each(function(input) {
    var name = input.get('name'),
        depends = input.get(attrs.depends),
        value = input.get('value').trim();
 
    if (depends) {
        depends = JSON.decode(depends);
        switch (depends.type) {
            case 'radio':
                if (depends.name && depends.value) {
                    var radio = $$('input[type=radio][name="'+depends.name+'"][value="'+depends.value+'"]:checked');
                    if (!radio) {
                        checkError(input.get(attrs.required), true);
                        return;
                    }
                }
                break;
        }
    }
 
    checkError(name, value!='');
});

This obtains the required and visible text inputs and determines if they are empty or not. Here we look at our depends object to grab the associated radio inputs. If that radio is checked and the text input is empty, the error message appears. If that radio is not checked, it doesn't even evaluate that text input. The depends.type is in a switch statement to allow for easy expansion of types. There are other cases for evaluating relationships ... I'll let you come up with more for yourself.
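As one hypothetical direction (not from the original example), a 'checkbox' case could gate a text input on whether a named checkbox is ticked; a small helper for it might look like this:

// Returns true when the checkbox this input "depends" on is ticked, so the
// caller knows whether to enforce the required check at all
function isCheckboxDependencyMet(depends) {
    if (depends.type != 'checkbox' || !depends.name) {
        return true; // nothing to gate on
    }
    return $$('input[type=checkbox][name="' + depends.name + '"]:checked').length > 0;
}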

That concludes our usage of custom data attributes in form validation. This really only touched upon the very tip of the iceberg. Data attributes allow you – the creative developer – to interact in a new, HTML-valid way with your web pages.

Check out the working example of the above code at jsfiddle.net. To read more on custom data attributes, see what Google has to say. If you want to see really cool functionality that uses data attributes plus so much more, check out Aaron Newton's Behavior module over at clientcide.com.

Happy coding!

-Philip

December 29, 2011

Using iPerf to Troubleshoot Speed/Throughput Issues

Two of the most common network characteristics we look at when investigating network-related concerns in the NOC are speed and throughput. You may have experienced the following scenario yourself: You just provisioned a new bad-boy server with a gigabit connection in a data center on the opposite side of the globe. You begin to upload your data and to your shock, you see "Time Remaining: 10 Hours." "What's wrong with the network?" you wonder. The traceroute and MTR look fine, but where's the performance and bandwidth I'm paying for?

This issue is all too common and it has nothing to do with the network, but in fact, the culprits are none other than TCP and the laws of physics.

In data transmission, TCP sends a certain amount of data then pauses. To ensure proper delivery of data, it doesn't send more until it receives an acknowledgement from the remote host that all data was received. This is called the "TCP Window." Data travels at the speed of light, and typically, most hosts are fairly close together. This "windowing" happens so fast we don't even notice it. But as the distance between two hosts increases, the speed of light remains constant. Thus, the further away the two hosts, the longer it takes for the sender to receive the acknowledgement from the remote host, reducing overall throughput. This effect is called "Bandwidth Delay Product," or BDP.
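To put rough numbers on it (a hypothetical example, not taken from the tests below): only one window's worth of data can be "in flight" per round trip, so the window size divided by the round-trip time caps your throughput.

# 16 KB default window, 70 ms round trip (roughly a cross-country path):
#   16 KB / 0.070 s  =  ~234 KB/s  =  ~1.9 Mbits/s -- nowhere near a gigabit
# To keep a 1 Gbit/s pipe full at that distance, the window would need to be about:
#   (1,000,000,000 bits/s * 0.070 s) / 8  =  ~8.75 MB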

We can overcome BDP to some degree by sending more data at a time. We do this by adjusting the "TCP Window" – telling TCP to send more data per flow than the default parameters. Each OS is different and the default values will vary, but most all operating systems allow tweaking of the TCP stack and/or using parallel data streams. So what is iPerf and how does it fit into all of this?

What is iPerf?

iPerf is a simple, open-source, command-line network diagnostic tool that runs on Linux, BSD or Windows and that you install on two endpoints. One side runs in 'server' mode, listening for requests; the other runs in 'client' mode, sending data. When activated, it tries to send as much data down your pipe as it can, spitting out transfer statistics as it does. What's so cool about iPerf is that you can test any number of TCP window settings in real time, even using parallel streams. There's even a Java-based GUI called JPerf that runs on top of it (JPerf is beyond the scope of this article, but I recommend looking into it). What's even cooler is that because iPerf resides in memory, there are no files to clean up.

How do I use iPerf?

iPerf can be downloaded quickly from SourceForge and installed. It uses port 5001 by default, and the bandwidth it displays is from the client to the server. Each test runs for 10 seconds by default, but virtually every setting is adjustable. Once installed, simply bring up the command line on both of the hosts and run these commands.

On the server side:
iperf -s

On the client side:
iperf -c [server_ip]

The output on the client side will look like this:

#iperf -c 10.10.10.5
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.10.10.10 port 46956 connected with 10.10.10.5 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 10.0 sec  10.0 MBytes  1.00 Mbits/sec

There are a lot of things we can do to make this output better with more meaningful data. For example, let's say we want the test to run for 20 seconds instead of 10 (-t 20), and we want to display transfer data every 2 seconds instead of the default of 10 (-i 2), and we want to test on port 8000 instead of 5001 (-p 8000). For the purposes of this exercise, let's use those customizations as our baseline. This is what the command string would look like on both ends:

Client Side:

#iperf -c 10.10.10.5 -p 8000 -t 20 -i 2
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 8000
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.10.10.10 port 46956 connected with 10.10.10.5 port 8000
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 2.0 sec  6.00 MBytes  25.2 Mbits/sec
[  3]  2.0- 4.0 sec  7.12 MBytes  29.9 Mbits/sec
[  3]  4.0- 6.0 sec  7.00 MBytes  29.4 Mbits/sec
[  3]  6.0- 8.0 sec  7.12 MBytes  29.9 Mbits/sec
[  3]  8.0-10.0 sec  7.25 MBytes  30.4 Mbits/sec
[  3] 10.0-12.0 sec  7.00 MBytes  29.4 Mbits/sec
[  3] 12.0-14.0 sec  7.12 MBytes  29.9 Mbits/sec
[  3] 14.0-16.0 sec  7.25 MBytes  30.4 Mbits/sec
[  3] 16.0-18.0 sec  6.88 MBytes  28.8 Mbits/sec
[  3] 18.0-20.0 sec  7.25 MBytes  30.4 Mbits/sec
[  3]  0.0-20.0 sec  70.1 MBytes  29.4 Mbits/sec

Server Side:

#iperf -s -p 8000 -i 2
------------------------------------------------------------
Server listening on TCP port 8000
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[852] local 10.10.10.5 port 8000 connected with 10.10.10.10 port 58316
[ ID] Interval Transfer Bandwidth
[  4]  0.0- 2.0 sec  6.05 MBytes  25.4 Mbits/sec
[  4]  2.0- 4.0 sec  7.19 MBytes  30.1 Mbits/sec
[  4]  4.0- 6.0 sec  6.94 MBytes  29.1 Mbits/sec
[  4]  6.0- 8.0 sec  7.19 MBytes  30.2 Mbits/sec
[  4]  8.0-10.0 sec  7.19 MBytes  30.1 Mbits/sec
[  4] 10.0-12.0 sec  6.95 MBytes  29.1 Mbits/sec
[  4] 12.0-14.0 sec  7.19 MBytes  30.2 Mbits/sec
[  4] 14.0-16.0 sec  7.19 MBytes  30.2 Mbits/sec
[  4] 16.0-18.0 sec  6.95 MBytes  29.1 Mbits/sec
[  4] 18.0-20.0 sec  7.19 MBytes  30.1 Mbits/sec
[  4]  0.0-20.0 sec  70.1 MBytes  29.4 Mbits/sec

There are many, many other parameters you can set that are beyond the scope of this article, but for our purposes, the main use is to prove out our bandwidth. This is where we'll use the TCP window options and parallel streams. To set a new TCP window you use the -w switch and you can set the parallel streams by using -P.

Increased TCP window commands:

Server side:
#iperf -s -w 1024k -i 2

Client side:
#iperf -i 2 -t 20 -c 10.10.10.5 -w 1024k

And here are the iPerf results from two SoftLayer file servers – one in Washington, D.C., acting as the client, the other in Seattle acting as the server:

Client Side:

# iperf -i 2 -t 20 -c 10.10.10.5 -p 8000 -w 1024k
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 8000
TCP window size: 1.00 MByte (WARNING: requested 1.00 MByte)
------------------------------------------------------------
[  3] local 10.10.10.10 port 53903 connected with 10.10.10.5 port 8000
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 2.0 sec  25.9 MBytes   109 Mbits/sec
[  3]  2.0- 4.0 sec  28.5 MBytes   120 Mbits/sec
[  3]  4.0- 6.0 sec  28.4 MBytes   119 Mbits/sec
[  3]  6.0- 8.0 sec  28.9 MBytes   121 Mbits/sec
[  3]  8.0-10.0 sec  28.0 MBytes   117 Mbits/sec
[  3] 10.0-12.0 sec  29.0 MBytes   122 Mbits/sec
[  3] 12.0-14.0 sec  28.0 MBytes   117 Mbits/sec
[  3] 14.0-16.0 sec  29.0 MBytes   122 Mbits/sec
[  3] 16.0-18.0 sec  27.9 MBytes   117 Mbits/sec
[  3] 18.0-20.0 sec  29.0 MBytes   122 Mbits/sec
[  3]  0.0-20.0 sec   283 MBytes   118 Mbits/sec

Server Side:

#iperf -s -w 1024k -i 2 -p 8000
------------------------------------------------------------
Server listening on TCP port 8000
TCP window size: 1.00 MByte
------------------------------------------------------------
[  4] local 10.10.10.5 port 8000 connected with 10.10.10.10 port 53903
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 2.0 sec  25.9 MBytes   109 Mbits/sec
[  4]  2.0- 4.0 sec  28.6 MBytes   120 Mbits/sec
[  4]  4.0- 6.0 sec  28.3 MBytes   119 Mbits/sec
[  4]  6.0- 8.0 sec  28.9 MBytes   121 Mbits/sec
[  4]  8.0-10.0 sec  28.0 MBytes   117 Mbits/sec
[  4] 10.0-12.0 sec  29.0 MBytes   121 Mbits/sec
[  4] 12.0-14.0 sec  28.0 MBytes   117 Mbits/sec
[  4] 14.0-16.0 sec  29.0 MBytes   122 Mbits/sec
[  4] 16.0-18.0 sec  28.0 MBytes   117 Mbits/sec
[  4] 18.0-20.0 sec  29.0 MBytes   121 Mbits/sec
[  4]  0.0-20.0 sec   283 MBytes   118 Mbits/sec

We can see here that by increasing the TCP window from the default value to 1MB (1024k), we achieved roughly a fourfold increase in throughput over our baseline. Unfortunately, this is the limit of this OS in terms of window size. So what more can we do? Parallel streams! With multiple simultaneous streams, we can fill the pipe close to its maximum usable amount.

Parallel Stream Command:
#iperf -i 2 -t 20 -c 10.10.10.5 -p 8000 -w 1024k -P 7

Client Side:

#iperf -i 2 -t 20 -c 10.10.10.5 -p 8000 -w 1024k -P 7
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 8000
TCP window size: 1.00 MByte (WARNING: requested 1.00 MByte)
------------------------------------------------------------
 [ ID] Interval       Transfer     Bandwidth
[  9]  0.0- 2.0 sec  24.9 MBytes   104 Mbits/sec
[  4]  0.0- 2.0 sec  24.9 MBytes   104 Mbits/sec
[  7]  0.0- 2.0 sec  25.6 MBytes   107 Mbits/sec
[  8]  0.0- 2.0 sec  24.9 MBytes   104 Mbits/sec
[  5]  0.0- 2.0 sec  25.8 MBytes   108 Mbits/sec
[  3]  0.0- 2.0 sec  25.9 MBytes   109 Mbits/sec
[  6]  0.0- 2.0 sec  25.9 MBytes   109 Mbits/sec
[SUM]  0.0- 2.0 sec   178 MBytes   746 Mbits/sec
 
(output omitted for brevity on server & client)
 
[  7] 18.0-20.0 sec  28.2 MBytes   118 Mbits/sec
[  8] 18.0-20.0 sec  28.8 MBytes   121 Mbits/sec
[  5] 18.0-20.0 sec  28.0 MBytes   117 Mbits/sec
[  4] 18.0-20.0 sec  28.0 MBytes   117 Mbits/sec
[  3] 18.0-20.0 sec  28.9 MBytes   121 Mbits/sec
[  9] 18.0-20.0 sec  28.8 MBytes   121 Mbits/sec
[  6] 18.0-20.0 sec  28.9 MBytes   121 Mbits/sec
[SUM] 18.0-20.0 sec   200 MBytes   837 Mbits/sec
[SUM]  0.0-20.0 sec  1.93 GBytes   826 Mbits/sec 

Server Side:

#iperf -s -w 1024k -i 2 -p 8000
------------------------------------------------------------
Server listening on TCP port 8000
TCP window size: 1.00 MByte
------------------------------------------------------------
[  4] local 10.10.10.5 port 8000 connected with 10.10.10.10 port 53903
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0- 2.0 sec  25.7 MBytes   108 Mbits/sec
[  8]  0.0- 2.0 sec  24.9 MBytes   104 Mbits/sec
[  4]  0.0- 2.0 sec  24.9 MBytes   104 Mbits/sec
[  9]  0.0- 2.0 sec  24.9 MBytes   104 Mbits/sec
[ 10]  0.0- 2.0 sec  25.9 MBytes   108 Mbits/sec
[  7]  0.0- 2.0 sec  25.9 MBytes   109 Mbits/sec
[  6]  0.0- 2.0 sec  25.9 MBytes   109 Mbits/sec
[SUM]  0.0- 2.0 sec   178 MBytes   747 Mbits/sec
 
[  4] 18.0-20.0 sec  28.8 MBytes   121 Mbits/sec
[  5] 18.0-20.0 sec  28.3 MBytes   119 Mbits/sec
[  7] 18.0-20.0 sec  28.8 MBytes   121 Mbits/sec
[ 10] 18.0-20.0 sec  28.1 MBytes   118 Mbits/sec
[  9] 18.0-20.0 sec  28.0 MBytes   118 Mbits/sec
[  8] 18.0-20.0 sec  28.8 MBytes   121 Mbits/sec
[  6] 18.0-20.0 sec  29.0 MBytes   121 Mbits/sec
[SUM] 18.0-20.0 sec   200 MBytes   838 Mbits/sec
[SUM]  0.0-20.1 sec  1.93 GBytes   825 Mbits/sec

As you can see from the tests above, we were able to increase throughput from 29Mb/s with a single stream and the default TCP window to 824Mb/s using a larger window and parallel streams. On a Gigabit link, this is about the maximum throughput one could hope to achieve before saturating the link and causing packet loss. The bottom line is that I was able to prove out the network and verify that bandwidth capacity was not an issue. From that conclusion, I could focus on tweaking TCP to get the most out of my network.

I'd like to point out that we will never get 100% out of any link. Typically, 90% utilization is about the real-world maximum anyone will achieve. If you push any harder, you'll begin to saturate the link and incur packet loss. I should also point out that SoftLayer doesn't directly support iPerf, so it's up to you to install it and play around with it. It's such a versatile and easy-to-use little piece of software that it's become invaluable to me, and I think it will become invaluable to you as well!

-Andrew

December 12, 2011

Web Development - JavaScript Optimization

So you have a fast website. You've optimized your database queries and you're using the most efficient page caching mechanisms. The users of your site are pretty satisfied, but you want just a little bit more. How can you push your site to the next level by making sure you've included all the optimizations you can to get the fastest speed possible?

Optimize your JavaScript.

You might not be concerned with optimizing your code because newer browsers have gotten significantly faster at processing JavaScript. But as with any other programming language, there are minor tweaks you can make that provide a significant performance gain. Let's take a look at a few of the ways you can be certain that you're getting the most out of your client-side JavaScript code.

JavaScript in External Files
The first optimization is to write your JavaScript in external files. By using external files, your browser is able to cache your code. This prevents users from needing to re-download your JavaScript during every page load and potentially during every AJAX request. When a browser visits any website, it checks the src attribute of each <script> tag and determines whether it already has a copy of that file in its cache. If it does, it won't waste time re-downloading the exact same code. If you include your code directly in your HTML within <script> tags, that same code will be downloaded and processed on every page load. Save those precious bytes!
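As a minimal illustration (app.js here is just a hypothetical file name):

<!-- Cacheable: the browser can reuse app.js on every page that references it -->
<script type="text/javascript" src="app.js"></script>
 
<!-- Not cacheable: this code is downloaded again with every page that embeds it -->
<script type="text/javascript">
    var greeting = 'Hello, world!';
</script>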

Minification
Now that you're putting your code in external files, your next goal is to reduce the amount of time it takes to initially download that code so that it can be cached by the browser. This is where minification comes into play. Minification (or minimization) is the process of removing all unnecessary characters from source code (without changing its functionality). Essentially this means you'll remove whitespace characters and comments.

Unless you like to torture yourself (and those who follow your work), you probably write code that's pretty readable. This includes splitting code up between multiple lines, indenting your code with spaces or tabs, and writing comments that explain some esoteric portion of code. All these items are not important to the JavaScript engine because it will just ignore them, but it still takes time to download these extra bytes that will never get used.

Luckily for you, you don't have to spend time creating the tool that will remove these unneeded characters. There are several good tools out there that will minify your code, and I recommend YUI Compressor. This tool can reduce your code by up to 60% (depending upon how efficiently you write your code). In addition to removing all the unnecessary characters, YUI Compressor will rewrite your code to use smaller variable names. If you have a local variable in a function called var myDescriptiveVariable, the compressor will rename it to var a. We just took our 21-character variable name down to 1 character! If this variable is used in 10 places, we've trimmed 200 characters with this one variable, and this happens for every local variable in your script! That's a lot of bytes saved.
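To make that concrete, here's a hypothetical before-and-after (the exact output depends on the compressor and its settings):

// Before: readable, commented, full of descriptive local names
function calculateTotal(priceList) {
    // Add up every price in the list
    var runningTotal = 0;
    for (var itemIndex = 0; itemIndex < priceList.length; itemIndex++) {
        runningTotal += priceList[itemIndex];
    }
    return runningTotal;
}
 
// After: same behavior, a fraction of the bytes; note that the global function
// name survives while the local variables are shortened
function calculateTotal(a){var b=0;for(var c=0;c<a.length;c++){b+=a[c]}return b}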

If you're paying close attention, the key takeaway you'll notice is that using local variables (compared to global variables, which don't get minified) is one great way to reduce the size of your code after minification occurs.

Initial Page Load Operations
Now that your code has been minified, let's take a look at initial page load operations. Since some JavaScript operations are synchronous and will halt other browser processing, we want to make sure we don't slow down the page when the user is trying to perform an action. There are two things we can do to improve initial page loading performance. First, reduce the amount of code you actually invoke on page load (or when the DOM is ready for interaction using Mootools' domready event, jQuery's document ready event or your favorite framework's equivalent). Second, implement lazy-loading techniques. For example, instead of adding all your events to all the elements on the page initially, wait for user interaction or some other process to add the events. Let's look at a few specific examples so you can see what I mean. Note: the code samples are using Mootools syntax.

Before:

var comments = $('.articles > div.comments');
$('showComments').addEvent('click', function() { comments.show(); });

After:

$('showComments').addEvent('click', function() { $('.articles > div.comments').show(); });

While this may improve initial page load, each time we click on showComments, we have to parse the DOM (Document Object Model) to find all the comments, and this is not a cheap operation. The key here is to test your own code and determine in each scenario which way is faster and which will keep your users the happiest. Just realize that on-demand operations may be more acceptable to users than having to wait for the page to load. You be the judge!

Code Caching
We discussed file caching before, but what about code caching? We need to determine which operations will benefit the most from caching. The two items I will focus on are loops and DOM interactions. Loops can be costly because you may be performing unnecessary actions within each iteration.

Before:

for (var i=0; i<$$('.toggle').length; i++) {
    var el = new Element('div.something', {
        html: $$('.toggle')[i].get('text')
    });
    el.inject($('content'));
}

After:

var i = 0,
    els = $$('.toggle'),
    length = els.length,
    container = new Element('div');
 
for (i=0; i<length; i++) {
    new Element('div.something', {
        html: els[i].get('text')
    }).inject(container);
}
 
container.inject($('content'));

The "Before" loop is inefficient because we are querying the DOM three separate times to get the elements we need (twice for .toggle elements and once for the #content element), and we are having to determine the length property of that collection of elements during each iteration. In our case, the length won't change, so we only need to find it one time. Finally, during each iteration, we add a new element to the #content div, and this causes a redraw of the page. DOM manipulation and redrawing can be expensive, so let's look at how the second method is so much more efficient.

We start by caching our DOM elements so we only have to pull them once and get the length property of those elements only once. We've also created an element which will be used to contain all our new elements. Since the container has not been added to the DOM yet, we can add and remove from it without having to worry about the expensive redraw of the page. Once all our elements have been added to the container, we then inject the container into the #content div. Since this is only happening once, we have significantly improved performance.

Selector Efficiency
The last optimization technique I'll review is selector efficiency. Selectors are used to grab specific elements from the document in order to interact with them in your code. It's very possible to write poor-performing selectors. Some selectors to avoid:

$$('*')

This will grab all the elements in your document. If you have a huge document, this could take a while. You better not be using this selector in a loop!

$$('[data-some-attribute]')

This selector is inefficient because it has to look for this data attribute within every element in the document. If there are thousands of elements and each has several data attributes, you should just go get some coffee.

$$('body .person:nth-child(3n+1) .category p span.title')

This selector is fairly complicated. The reason this can be inefficient is because it may have to unnecessarily inspect many elements to get to the one(s) it needs. Determine if you can simplify the selector by being more specific and using an id to get to elements or even consider slightly modifying your HTML so that it's easier to create efficient selectors.
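As a hypothetical fix, if the titles you care about all live inside one container you control, giving that container an id lets the selector engine narrow its search immediately:

// Instead of crawling the whole document with a complex selector, scope it:
// <div id="people"> ... <span class="title">...</span> ... </div>
var titles = $$('#people span.title');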

There are plenty of other techniques that will help improve the speed and efficiency of your JavaScript, but these are a good start and should help make you and your users happy. Remember that DOM interactions are slow and expensive, so do what you can to reduce chatting back and forth with the document. Check your loops and minify your code, and your users will surely return.

Happy coding!

-Philip

October 11, 2011

Working on the SoftLayer Dev Team

This post is somewhat of a continuation of a post I made here a little over three years ago: What It's Like to be a Data Center Technician. My career at SoftLayer has been a great journey. We have gone from four thousand customers at the time of my last post to over twenty-five thousand, and it's funny to look back at my previous post where I mentioned how SoftLayer Data Center Technicians can perform the job of three different departments in any given ticket ... Well, I managed to find another department where I have to include all of the previous jobs plus one!

Recently I took on a new position on the Development Support team. My job is to make sure our customers' and employees' interaction with development is a good one. As my previous post stated, working at SoftLayer in general can be pretty crazy, and the development team is no exception. We work on and release code frequently to keep up with our customers' and employees' demands, and that is where my team comes in.

We schedule and coordinate all of our portal code updates and perform front-line support for any development issues that can be addressed without the necessity for code changes. Our team will jump on and fix everything from the layout of your portal to why your bandwidth graphs aren't showing.

Our largest project as of late is a completely new portal (https://beta.softlayer.com/) for our customers. It is the culmination of everything our customers have requested in their management interface, and we really appreciate the feedback we've gotten in our forums, in tickets and when we've met customers in person. If you haven't taken the portal beta for a spin yet, take a few minutes to check it out!

SoftLayer Portal

The transition from exclusively providing customer support to supporting both customers and employees has been phenomenal. I've been able to address a lot of the issues I came across when I was a CSA, and the results have been everything I expected and more. SoftLayer is a well-oiled machine now, and with our global expansion, solid procedures and execution are absolutely necessary. Our customers expect flawless performance, and we strive to deliver it on a daily basis.

One of the old funny tag lines we used was, "Do it faster, Do it better, Do it in Private," and with our latest developments, we'd be remiss if we didn't add, "Do it Worldwide," in there somewhere. If there's anything I can do to help make your customer experience better from a dev standpoint, please let me know!

-Romeo

July 25, 2011

Under the Hood of 'The Cloud'

When we designed the CloudLayer Computing platform, our goal was to create an offering where customers would be able to customize and build cloud computing instances that specifically meet their needs: If you go to our site, you're even presented with an opportunity to "Build Your Own Cloud." The idea was to let users choose where they wanted their instance to reside as well as their own perfect mix of processor power, RAM and storage. Today, we're taking the BYOC mantra one step farther by unveiling the local disk storage option for CloudLayer computing instances!

Local Disk

For those of you familiar with the CloudLayer platform, you might already understand the value of a local disk storage option, but for the uninitiated, this news presents a perfect opportunity to talk about the dynamics of the cloud and how we approach the cloud around here.

As the resident "tech guy" in my social circle, I often find myself helping friends and family understand everything from why their printer isn't working to what value they can get from the latest and greatest buzzed-about technology. As you'd probably guess, the majority of the questions I've been getting recently revolve around 'the cloud' (thanks especially to huge marketing campaigns out of Redmond and Cupertino). That abstract term effectively conveys the intentional sentiment that users shouldn't have to worry about the mechanics of how the cloud works ... just that it works. The problem is that as the world of technology has pursued that sentiment, the generalization of the cloud has abstracted it to the point where this is how large companies are depicting the cloud:

Cloud

As it turns out, that image doesn't exactly elicit the "Aha! Now I get it!" epiphany of users actually understanding how clouds (in the technology sense) work. See how I pluralized "clouds" in that last sentence? 'The Cloud' at SoftLayer isn't the same as 'The Cloud' in Redmond or 'The Cloud' in Cupertino. They may all be similar in the sense that each cloud technology incorporates hardware abstraction, on-demand scalability and utility billing, but they're not created in the same way.

If only there were a cloud-specific Declaration of Independence ...

We hold these truths to be self-evident, that all clouds are not equal, that they are endowed by their creators with certain distinct characteristics, that among these are storage, processing power and the ability to serve content. That to secure these characteristics, information should be given to users, expressed clearly to meet the needs of the cloud's users;

The Ability to Serve Content
Let's unpack that Jeffersonian statement a little by looking at the distinct characteristics of every cloud, starting with the third ("the ability to serve content") and working backwards. Every cloud lives on hardware. The extent to which a given cloud relies on that hardware can vary, but at the end of the day, you – as a user – are not simply connecting to water droplets in the ether. I'll use SoftLayer's CloudLayer platform as a specific example of what a cloud actually looks like: We have racks of uniform servers – designated as part of our cloud infrastructure – installed in rows in our data centers. All of those servers are networked together, and we worked with our friends at Citrix to use the XenServer platform to tie all of those servers together and virtualize the resources (or more simply: to make each piece of hardware accessible independently of the rest of the physical server it might be built into). With that infrastructure as a foundation, ordering a cloud server on the CloudLayer platform simply involves reserving a small piece of that cloud where you can install your own operating system and manage it like an independent server or instance to serve your content.

Processing Power
Once you understand the hardware architecture upon which a cloud is built, the second distinct characteristic of every cloud ("processing power") is fairly logical: The more powerful the hardware used for a given cloud, the better processing performance you'll get in an instance using a piece of that hardware.

You can argue about what software uses the least resources in the process of virtualizing, but apples-to-apples, processing power is going to be determined by the power of the underlying hardware. Some providers try to obfuscate the types of servers/processors available to their cloud users (sometimes because they are using legacy hardware that they wouldn't be able to sell/rent otherwise), but because we know how important consistent power is to users, we guarantee that CloudLayer instances are based on 2.0GHz (or faster) processors.

Storage
We walked backward through the distinct characteristics included in my cloud-specific Declaration of Independence because of today's CloudLayer Computing storage announcement, but before I get into the details of that new option, let's talk about storage in general.

If the primary goal of a cloud platform is to give users the ability to scale instantly from 1 CPU of power to 16 CPUs of power, the underlying architecture has to be as flexible as possible. Let's say your cloud computing instance resides on a server with only 10 CPUs available, so when you upgrade to a 16-CPU instance, your instance will be moved to a server with enough available resources to meet your need. To make that kind of quick change possible, most cloud platforms are connected to a SAN (storage area network) or other storage device via a back-end network to the cloud servers. The biggest pro of having this setup is that upgrading and downgrading CPU and RAM for a given cloud instance is relatively easy, but it introduces a challenge: The data lives on another device that is connected via switches and cables and is being used by other customers as well. Because your data has to be moved to your server to be processed when you call it, it's a little slower than if a hard disk was sitting in the same server as the instance's processor and RAM. For that reason, many users don't feel comfortable moving to the cloud.

In response to the call for better-performing storage, there has been a push toward incorporating local disk storage for cloud computing instances. Because local disk storage is physically available to the CPU and RAM, the transfer of data is almost immediate, and I/O (input/output) rates are generally much higher. The obvious benefit of this setup is that the storage will perform much better for I/O-intensive applications, while the tradeoff is that the setup loses the inherent redundancy of having the data replicated across multiple drives in a SAN (which is almost like its own cloud ... but I won't confuse you with that right now).

The CloudLayer Computing platform has always been built to take advantage of the immediate scalability enabled by storing files in a network storage device. We heard from users who want to use the cloud for other applications that they wanted us to incorporate another option, so today we're happy to announce the availability of local disk storage for CloudLayer Computing! We're looking forward to seeing how our customers are going to incorporate cloud computing instances with local disk storage into their existing environments with dedicated servers and cloud computing instances using SAN storage.

If you have questions about whether the SAN or local disk storage option would fit your application best, click the Live Chat icon on SoftLayer.com and consult with one of our sales reps about the benefits and trade-offs of each.

We want you to know exactly what you're getting from SoftLayer, so we try to be as transparent as we can when rolling out new products. If you have any questions about CloudLayer or any of our other offerings, please let us know!

-@nday91

May 3, 2011

SoftLayer's Android Client Gets an Extreme Makeover

One of the things you expect when you merge two organizations in the same vertical space is for your talent pool to get deeper. SoftLayer had a seriously talented bunch of developers before the merger - I should know, I consider myself one of them - and as I was promised would be the case, after the merger, we were joined by an equally talented group of engineers from The Planet. Where we had two low-level developers, now we have four. Where we had a dozen guys with .NET experience, now we have twenty. It's better for us employees, and better for our customers too.

What I didn't expect as part of the merger was that our talent pool would get wider. No, I don't mean we now employ an army of body builders and Siamese twins. I mean as result of the merger, we ended up with an entirely new group of folks here unlike any SL previously had on the payroll. This new and exotic breed of folks - new and exotic at least from my perspective - are collectively known as "user experience" engineers.

I admit (and I suspect most software engineers will concur) when I develop something, it becomes my baby. Each software engineer has his or her own method for inciting that spark of genius ... I start out with some ideas on a yellow pad, refine them until I can whip up an actual spec, code some unit tests and wait to see if my baby takes its first step or falls flat on its digital face. Either way, over time with gentle nudging and TLC, eventually an application grows. And like any loving parent I'm certain that my application can do no wrong.

So when I was told a "usability study" would be done on one of my babies by the user experience team, you can imagine what went through my mind. After all, I was there when the first API call succeeded. I was the one who got up in the middle of the night when the application got cranky and decided to throw an unhandled exception. Who the heck are these user interface specialists and what do they know that I don't?

In retrospect, I couldn't have been more wrong. I am a professional coder with more than a decade of experience under my belt. But I'm often more interested in how I can squeeze a few more CPU cycles out of a subroutine than how much easier it would be for the user if I rearranged the GUI a little bit. The user interface review I received really got me thinking from a user's perspective and excited about the application in a way I hadn't been since the early days when I banged out those first few lines of code.

Two weeks ago, we released a new, radically different looking Android client. If you are a current user of the application, you've undoubtedly received an OTA update by now, and I hope you are as pleasantly surprised by the result as I am. For those of you with Android phones who have not installed the SoftLayer client, I encourage you to do so. You can get more info by visiting http://www.softlayer.com/resources/mobile-apps/.

Before I let you go, what kind of father would I be if I didn't take out my wallet and bore you to tears with pictures of my children? Without further ado, I present to you the latest and greatest Android Mobile Client:

SL Android App

SL Android App

SL Android App

SL Android App

-William

December 2, 2010

Once a Bug Killer, Now a Killer Setup

Not everyone enjoys or has the benefit of taking what they learn at work and applying it at home, but I consider myself lucky because the things I learn at work can often be very useful for hobbies in my own time. As an electronics and PC gaming fanatic, I always enjoy tips that increase the performance of my technological equipment. Common among PC gaming enthusiasts is the obsession with making their gaming rig excel in every aspect by upgrading the video card, RAM, processor, etc. Before working at SoftLayer, I had only considered buying better hardware to improve performance and never really looked into the advantages of different types of setups for a computer.

This new area of exploration for me started shortly after my first days at SoftLayer when I was introduced to RAID (Redundant Array of Inexpensive Disks) for our servers. In the past, I had heard mention of the term but never had any idea of what that entailed and was only familiar with our good ole bug killer brand Raid. You can imagine my excitement as I learned more about its intricacies and how the different types of RAID could benefit my computer’s performance.

Armed with this new knowledge, I was determined to reconfigure my gaming PC at home to reap the benefits. Upon looking at the different RAID setups, I decided to go with RAID 0 because I did not want to sacrifice storage space and my data was not critical enough to need the mirroring provided by RAID 1.

One thing led to another as I became occupied for a good amount of time with benchmarking drive performance in my old setup versus my new setup. In the end, I was happy to report a significant performance gain in what I now refer to as my “killer setup”. Applications would launch noticeably faster and even in games where videos were stored locally on hard drives, the cinematic scenes would come up faster than before.
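If you want to run a quick-and-dirty sequential test of your own, here's a hypothetical Linux example using dd (Windows users have plenty of GUI benchmarking tools that do the same job):

# Write a 1GB test file straight to disk (bypassing the page cache) and report throughput
dd if=/dev/zero of=raid0-test.bin bs=1M count=1024 oflag=direct
# Read it back the same way
dd if=raid0-test.bin of=/dev/null bs=1M iflag=direct
# Clean up
rm raid0-test.bin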

To add to the hype, a coworker was also building a new computer in anticipation of a new game called Final Fantasy XIV. It felt like a competition to exceed each other with better scores. I’m already planning ahead for future upgrades since this time around I had only used SATA drives. For my next upgrade I would love to run a RAID 0 with two SSD drives to see what kind of boost I would get.

So for business or pleasure, have you ever considered the benefits of setting up a RAID system?

-Danny
