Posts Tagged 'Coding'

March 28, 2012

SoftLayer Mobile on WP7 - Live Tiles and Notifications

In the past couple of months we've added some really cool Windows Phone 7.1 (Mango) features to the SoftLayer Mobile application, including Live Tiles and Notifications. While a basic Live Tile implementation is relatively easy, the cooler Live Tile functionality and Notifications require a fair amount of coding and architecture ... And we're all about doing things cooler.

Live Tiles are such a great feature of Windows Phone 7, largely because they give the developer much more control over the device's user experience than other mobile OSes do. In its simplest form, Live Tile functionality is just 'pinning' a Tile to the Start Menu with a deep link to a specific location within the application, so that when the Tile is clicked, the user is taken straight to that location in the app. This can save the user a lot of time navigating deep into an app when they know where they want to go. More advanced features of Live Tiles include programmatically giving the Tile a custom background image and displaying a notification message on the back of the Tile when it flips.

Adding a Live Tile

To add a Live Tile, a user simply clicks and holds the module they'd like to pin to the start menu. When the context menu appears, the user can select 'pin as tile,' and he or she will be taken to the Start page where the new Tile is displayed:

SoftLayer on Windows Phone 7

The Magic Behind Sending Notifications

We really wanted to be able to notify a user when a notable event happens on his or her account (new ticket is created/updated, when a bill is overdue, etc.), and Windows Phone 7 provides some pretty phenomenal functionality in that area ... I wouldn't be surprised if other big mobile OSes copy Windows Phone 7's notifications in the future. When it comes to implementing notifications in SoftLayer Mobile, we needed to handle a few things:

  1. Get a unique app+user channel URI from the Microsoft Push Notification Service
  2. Register the URI and channel name with the SoftLayer registration service (a WCF service we created)
  3. Store the URI, channel name and the user's account in a database
  4. Periodically poll for new tickets or updates (since we don't have a mechanism yet that can 'push' an alert when a notification event is triggered)
  5. Send the notification (whether it's a Toast or Tile notification) to the device via the unique URI and channel name (see the sketch after this list).
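
Our WCF service handles the actual delivery, but under the covers a push is just an HTTP POST to the device's unique channel URI. Here's a rough sketch of step 5 in PHP (purely illustrative: the channel URI and message below are made up, and the real sender lives in our WCF service):

<?php
// Rough sketch: POST a toast notification to a device's unique channel URI.
// The URI below is a made-up example of what MPNS hands back in step 1.
$channelUri = 'http://sn1.notify.live.net/throttledthirdparty/01.00/EXAMPLE';
 
$toast = '<?xml version="1.0" encoding="utf-8"?>'
       . '<wp:Notification xmlns:wp="WPNotification">'
       . '<wp:Toast>'
       . '<wp:Text1>SoftLayer</wp:Text1>'
       . '<wp:Text2>A ticket on your account was updated.</wp:Text2>'
       . '</wp:Toast>'
       . '</wp:Notification>';
 
$ch = curl_init($channelUri);
curl_setopt_array($ch, array(
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => $toast,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER     => array(
        'Content-Type: text/xml',
        'X-WindowsPhone-Target: toast', // 'token' targets the Live Tile instead
        'X-NotificationClass: 2'        // 2 = deliver the toast immediately
    )
));
curl_exec($ch);
curl_close($ch);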

I was going to include the architecture diagram here showing this relationship and process, but the designer sitting next to me told me that nobody wants to see that.

What do the Numbers on the Tiles Mean?

We wanted to make our Tiles show information that the user would find useful, so we send the account's total unread ticket count to the main app's Tile, and we display the account's unread ticket update count on the "Ticket" Tile we pinned to the Start screen:

SoftLayer on Windows Phone 7

Why is the Tile Flipping?

We also have the ability to have the Tiles flip over and show an image or text on the TileBack, so we use that to explain the number shown on the Tile (so you don't have to remember):

SoftLayer on Windows Phone 7

What is a Toast Notification?

A Toast Notification is a message that pops up on the screen for 10 seconds. If the user clicks on it, he or she is taken to the application; if the notification is not clicked, it disappears. Here is the Toast Notification that is sent when a ticket is updated, if the user subscribes to Toast Notifications:

SoftLayer on Windows Phone 7

How do I Enable Notifications in SoftLayer Mobile?

To enable Live Tiles, all you have to do is turn on the 'Use Push Notifications' option on the Settings view.

SoftLayer on Windows Phone 7

You'll be asked if you'd like to receive Toast Notifications, and if you click 'OK,' you'll start getting them:

SoftLayer on Windows Phone 7

We Love Feedback and Requests!

Now that you have Live Tiles & Notifications in SoftLayer Mobile for WP7 (and coming soon for iPhone & Android), what else would you like to see in the mobile clients?

-Erik

March 27, 2012

Tips and Tricks - How to Secure WordPress

As a hobby, I dabble in WordPress, so I thought I'd share a few security features I use to secure my WordPress blogs as soon as they're installed. Nothing in this blog will be earth-shattering, but because security is such a priority, I have no doubt that it will be useful to many of our customers. Often, the answer to the question, "How much security do I need on my site?" is simply, "More," so even if you have a solid foundation of security, you might learn a new trick or two that you can incorporate into your next (or current) WordPress site.

Move wp-config.php

The first thing I do is change the location of my wp-config.php. By default, it sits in the top-level WordPress directory, where it can be viewed and accessed through Apache, so I move it out of the web root. Because you're changing the default location of a pretty significant file, you need to tell WordPress how to find it in wp-load.php. Let's say my WordPress runs out of /webroot on my host ... I'd need to make a change around line 26:

if ( file_exists( ABSPATH . 'wp-config.php') ) {
 
        /** The config file resides in ABSPATH */
        require_once( ABSPATH . 'wp-config.php' );
 
} elseif ( file_exists( dirname(ABSPATH) . '/wp-config.php' ) && ! file_exists( dirname(ABSPATH) . '/wp-settings.php' ) ) {
 
        /** The config file resides one level above ABSPATH but is not part of another install*/
        require_once( dirname(ABSPATH) . '/wp-config.php' );

The code above is the default setup, and the code below is the version with my subtle update incorporated.

if ( file_exists( ABSPATH . '../wp-config.php') ) {
 
        /** The config file resides one level above ABSPATH */
        require_once( ABSPATH . '../wp-config.php' );
 
} elseif ( file_exists( dirname(ABSPATH) . '/wp-config.php' ) && ! file_exists( dirname(ABSPATH) . '/wp-settings.php' ) ) {
 
        /** The config file resides one level above ABSPATH but is not part of another install */
        require_once( dirname(ABSPATH) . '/wp-config.php' );

All we're doing is telling the application that the wp-config.php file is one directory higher. By making this simple change, you ensure that only the application can see your wp-config.php script.

Turn Down Access to /wp-admin

After I make that change, I want to turn down access to /wp-admin. I allow users to contribute on some of my blogs, but I don't want them to do so from /wp-admin; only users with admin rights should be able to access that panel. To limit access to /wp-admin, I recommend the plugin uCan Post. This plugin creates a page that allows users to write posts and submit them within your theme.

But won't a user just be able to navigate to http://site.com/wp-admin? Yes ... Until we add a simple function to our theme's functions.php file to limit that access. At the bottom of your functions.php file, add this:

############ Disable admin access for users ############

add_action('admin_init', 'no_more_dashboard');
function no_more_dashboard() {
    // Let admins through, and don't break front-end AJAX requests to admin-ajax.php
    if (!current_user_can('manage_options') && $_SERVER['PHP_SELF'] != '/wp-admin/admin-ajax.php') {
        wp_redirect(site_url());
        exit;
    }
}
 
###########################################################

Log in as a non-admin user, and you'll get redirected to the blog's home page if you try to access the admin panel. Voila!

Start Securing the WordPress Database

Before you go any further, you need to look at WordPress database security. This is the most important piece in my opinion, and it's not just because I'm a DBA. WordPress never needs ALL PRIVILEGES. The only permissions WordPress needs to function are ALTER, CREATE, CREATE TEMPORARY TABLES, DELETE, DROP, INDEX, INSERT, LOCK TABLES, SELECT and UPDATE.

If you run WordPress and MySQL on the same server, the permissions grant would look something like this:

GRANT ALTER, CREATE, CREATE TEMPORARY TABLES, DELETE, DROP, INDEX, INSERT, LOCK TABLES, SELECT, UPDATE ON <DATABASE>.* TO <USER>@'localhost' IDENTIFIED BY '<PASSWORD>';

If you have a separate database server, make sure the host of the webserver is allowed to connect to the database server:

GRANT ALTER, CREATE, CREATE TEMPORARY TABLES, DELETE, DROP, INDEX, INSERT, LOCK TABLES, SELECT, UPDATE ON <DATABASE>.* TO <USER>@'<ip of web server>' IDENTIFIED BY '<PASSWORD>';

The password you use should be random, and you should not need to change this. DO NOT USE THE SAME PASSWORD AS YOUR ADMIN ACCOUNT.
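
To double-check what the application account ended up with, you can ask MySQL directly (using a hypothetical 'wordpress' user):

SHOW GRANTS FOR 'wordpress'@'localhost';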

By taking those quick steps, we're able to go a long way to securing a default WordPress installation. There are other plugins out there that are great tools to enhance your blog's security, and once you've got the fundamental security updates in place, you might want to check some of them out. Login LockDown is designed to stop brute force login attempts, and Secure WordPress has some great additional features.

What else do you do to secure your WordPress sites?

-Lee

March 5, 2012

iptables Tips and Tricks - Not Locking Yourself Out

The iptables tool is one of the simplest, most powerful tools you can use to protect your server. We've covered port redirection, rule processing and troubleshooting in previous installments to this "Tips and Tricks" series, but what happens when iptables turns against you and locks you out of your own system?

Getting locked out of a production server can cost both time and money, so it's worth your time to avoid this. If you follow the correct procedures, you can safeguard yourself from being firewalled off of your server. Here are seven helpful tips to help you keep your sanity and prevent you from locking yourself out.

Tip 1: Keep a safe ruleset handy.

If you are starting with a working ruleset, or even if you are trying to troubleshoot an existing ruleset, take a backup of your iptables configuration before you ever start working on it.

iptables-save > /root/iptables-safe

Then if you do something that prevents your website from working, you can quickly restore it:

iptables-restore < /root/iptables-safe

Tip 2: Create a cron script that will reload to your safe ruleset every minute during testing.

This was pointed out to me by a friend who swears by this method. Just write a quick bash script and set a cron entry that reloads the safe ruleset every minute while you're testing. You'll have to test quickly, but it will keep you from getting locked out.
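
Here's a minimal sketch of that safety net, assuming you saved your known-good rules to /root/iptables-safe as in Tip 1. First, the script:

#!/bin/bash
# /root/iptables-reset.sh - reload the known-good ruleset
iptables-restore < /root/iptables-safe

Then the entry in root's crontab to run it every minute while you test (remove it when you're done):

* * * * * /root/iptables-reset.sh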

Tip 3: Have the IPMI KVM ready.

SoftLayer-pod servers* are equipped with some sort of remote access device, and most of them have a KVM console. You will want to have your VPN connection set up, connected and the KVM window open. You can't paste to and from the KVM, so SSH is typically easier to work with, but having the console ready will definitely cut down on the downtime if something does go wrong.

*This may not apply to servers that were originally provisioned under another company name.

Tip 4: Try to avoid generic rules.

The more criteria you specify in the rule, the less chance you will have of locking yourself out. I would liken this to a pie. A specific rule is a very thin slice of the pie.

iptables -A INPUT -p tcp --dport 22 -s 10.0.0.0/8 -d 123.123.123.123 -j DROP

But if you block port 22 from any to any, it's a very large slice.

iptables -A INPUT -p tcp --dport 22 -j DROP

There are plenty of ways that you can be more specific. For example, using "-i eth0" will limit the processing to a single NIC in your server. This way, it will not apply the rule to eth1.

Tip 5: Whitelist your IP address at the top of your ruleset.

This may make testing more difficult unless you have a secondary offsite test server, but this is a very effective method of not getting locked out.

iptables -I INPUT -s <your IP> -j ACCEPT

You need to put this as the FIRST rule in order for it to work properly ("-I" inserts it as the first rule, whereas "-A" appends it to the end of the list).
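
To confirm that the whitelist rule actually landed at the top, list the INPUT chain with rule positions:

iptables -nL INPUT --line-numbers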

Tip 6: Know and understand all of the rules in your current configuration.

Not making the mistake in the first place is half the battle. If you understand the inner workings behind your iptables ruleset, it will make your life easier. Draw a flow chart if you must.

Tip 7: Understand the way that iptables processes rules.

Remember, the rules start at the top of the chain and go down, unless specified otherwise. Crack open the iptables man page and learn about the options you are using.

-Mark

January 17, 2012

Web Development - HTML5 - Web Fonts

All but gone are the days of plain, static webpages flowered with horrible repeating neon backgrounds and covered with nauseating animated GIFs created by amateur designers that would make your mother cry and induce seizures in your grandpa. Needless to say, we have come a long way since Al Gore first "created the intarwebs" in the early '90s. For those of you born in this century, that's the 1990s ... Yes, the World Wide Web is still very new. Luckily for the seven billion people on this lovely planet, many advancements have been introduced into our web browsers that make our lives as designers and developers just a little bit more tolerable.

Welcome to the third installment in our Web Development series. If you're just joining us, the first posts in the series covered JavaScript Optimization and HTML5 Custom Data Attributes ... If you haven't read those yet, take a few minutes to catch up, then head back to this blog where we'll be looking at how custom web fonts can add a little spice to your already-fantastic website.

If you're like me, you've probably used the same three or four fonts on most sites you've designed in the past: Arial, Courier New, Trebuchet MS and Verdana. You know that pretty much all browsers will have support for these "core" fonts, so you never ventured beyond them because you wanted the experience to remain the same for everyone, no matter what browser a user was using to surf. If you were adventurous and wanted to throw in a little typographical deviation, you might have created a custom image of the text in whatever font Photoshop would allow, but those days are in the past (or at least they should be).

Why is using an image instead of plain text unfriendly?

  1. Lack of Flexibility - Creating an image is time-consuming. Even if you have really fast fingers and know your way around Photoshop, it will never be as fast as simply typing that text into your favorite editor. Also, you can't change the styles (font-size, color, text-decoration, etc.) of an image using CSS like you can with text.
  2. Lack of Accessibility – Not everyone is alike. Some of your readers or clients may have impairments that require screen readers or a really large font. Using an image – especially one that doesn't contain a good long description – prevents those users from getting the full experience. Also, some people use text-only browsers that don't display any images. Think about your whole audience!
  3. More to Download – Plain text doesn't require the same number of bytes as an image of that same text. By not having another image, you are saving on the amount of time it takes to load your page.

Now that we're on the same page about the downsides of the "old way" of doing things, let's look at some cool HTML5-powered methods for displaying custom fonts. Before we get started, we need to have some custom fonts to use. Google has a nice interface for downloading custom fonts (http://www.google.com/webfonts), and there are plenty of other sites that provide free and non-free fonts to suit your taste/needs. You can pick and choose which ones you'd like to use (remembering to always follow copyright guidelines), and once you've created and downloaded your collection of fonts, you'll need to set up your CSS to read them.

For simplicity, my file structure will be set up with the HTML and CSS files in the same root directory. I will have a fonts directory where I will keep all my custom fonts.

/fonts.html
/fonts.css
/styles.css
/fonts/MyCustomFont/MyCustomFont-Regular.ttf
/fonts/MyCustomFont/MyCustomFont-Bold.ttf
/fonts/...

My fonts.html file will include the two CSS files in the head section. The order in which you include the CSS files does not matter.

<link rel="stylesheet" type="text/css" href="fonts.css" />
<link rel="stylesheet" type="text/css" href="styles.css" />

The fonts.css file will include the definitions for all of our custom fonts. The styles.css file will be our main CSS file for our website. Defining our custom fonts (in fonts.css) is really simple:

@font-face {
    font-family: 'MyCustomFont';
    src: url('fonts/MyCustomFont/MyCustomFont-Regular.ttf') format('truetype');
}

It's almost too easy thanks to HTML5!

Let's break this down into its components to better understand what's going on here. The @font-face declaration will be ignored by older browsers that don't understand it, so this standards-compliant definition degrades nicely. The font-family descriptor is the name that you'll use to reference this font family in your other CSS file(s). The src descriptor contains the location of where your font is stored and the format of the font.

There are several things to note here. The quotes around MyCustomFont in the font-family descriptor are optional. If it were My Custom Font instead (in fonts.css and styles.css), it would still be successfully read. The quotes around the url portion are also optional. However, the quotes around the format portion are not optional. To keep things consistent, I have a habit of adding quotes around all of these items.

An alternative way to define the same font would be to leave off the format portion of the src descriptor. Browsers don't need the format portion if it's a standard font format (described below).

@font-face {
    font-family: 'MyCustomFont';
    src: url('fonts/MyCustomFont/MyCustomFont-Regular.ttf');
}

Like standard url inclusions in other CSS definitions, the URL is relative to the location of the definition file (fonts.css). The URL may also be an absolute location or point to a different website altogether. If you're using the Google web fonts site mentioned earlier (or a similar site), you may simply point the URL to the suggested location instead of downloading the actual font.

If you've dealt with web fonts before, you may already be familiar with the multiple formats: WOFF (Web Open Font Format, .woff), TrueType (.ttf), OpenType (.ttf, .otf), Embedded Open Type (.eot) and SVG Font (.svg, .svgz). I won't go into great detail here about these, but if you're interested in learning more, Google and W3C are great resources.
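
You can also list several formats in a single src descriptor, separated by commas, and the browser will work down the list until it hits a format it supports. A quick sketch, assuming you've exported the same font as both WOFF and TrueType:

@font-face {
    font-family: 'MyCustomFont';
    src: url('fonts/MyCustomFont/MyCustomFont-Regular.woff') format('woff'),
         url('fonts/MyCustomFont/MyCustomFont-Regular.ttf') format('truetype');
}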

It should be noted that all browsers are not alike (no shock there), and some may not render certain font formats correctly or at all. You can get around this by including multiple src descriptors in your @font-face declaration to try to support all the browsers.

@font-face {
    font-family: 'MyCustomFont';
    src: url('fonts/MyCustomFont/MyCustomFont-Regular.eot'); /* Old IE */
    src: url('fonts/MyCustomFont/MyCustomFont-Regular.ttf'); /* Cool browsers */
}

Now that we have our font definition setup, we have to include our new custom font in our styles.css. You've done this plenty of times:

h1, p {
    font-family: MyCustomFont, Arial;
}

There you go! If for some reason MyCustomFont is not understood, the browser will fall back to Arial. This degrades gracefully and is really simple to use. One thing to note is that even though your fonts.css file may define twenty custom fonts, only the fonts that are included and used in your styles.css file will be downloaded. This is very smart of the browser – it only downloads what it's going to use.

So now you have one more tool to add to your development box. As more users adopt newer, standards-compliant browsers, it's easier to give your site some spice without the headaches of creating unnecessary images. Go forth and impress your friends with your new web font knowledge!

Happy Coding!

-Philip

P.S. As a bonus, you can check out the in-line style declaration in the source of this post to see how "Happy Coding!" is coded to use the Monofett font family.

January 10, 2012

Web Development - HTML5 - Custom Data Attributes

I recently worked on a project that involved creating promotion codes for our clients. I wanted to make this tool as simple as possible to use and because this involved dealing with thousands of our products in dozens of categories with custom pricing for each of these products, I had to find a generic way to deal with client-side form validation. I didn't want to write custom JavaScript functions for each of the required inputs, so I decided to use custom data attributes.

Last month, we started a series focusing on web development tips and tricks with a post about JavaScript optimization. In this installment, we'll cover how to use HTML5 custom data attributes to assist you in validating forms.

Custom data attributes for elements are "[attributes] in no namespace whose name starts with the string 'data-', has at least one character after the hyphen, is XML-compatible, and contains no characters in the range U+0041 to U+005A (LATIN CAPITAL LETTER A to LATIN CAPITAL LETTER Z)." Thanks W3C. That definition is bookish, so let's break it down and look at some examples.

Valid:

<div data-name="Philip">Mr. Thompson is an okay guy.</div>
<a href="softlayer.com" data-company-name="SoftLayer" data-company-state="TX">SoftLayer</a>
<li data-color="blue">Smurfs</li>

Invalid:

// This attribute is not prefixed with 'data-'
    <h2 database-id="244">Food</h2>
 
// These 2 attributes contain capital letters in the attribute names
    <p data-firstName="Ashley" data-lastName="Thompson">...</p>
 
// This attribute does not have any valid characters following 'data-'
    <img src="/images/pizza.png" data-="Sausage" />

Now that you know what custom data attributes are, why would we use them? Custom attributes allow us to relate specific information to particular elements. This information is hidden from the end user, so we don't have to worry about the data cluttering screen space, and we don't have to create separate hidden elements whose only purpose is to hold custom data (which is just bad practice). This data can be used by a JavaScript programmer to many different ends. Among the most common use cases are manipulating elements, providing custom styles (using CSS) and performing form validation. In this post, we'll focus on form validation.

Let's start out with a simple form with two radio inputs. Since I like food, our labels will be food items!

<input type="radio" name="food" value="pizza" data-sl-required="food" data-sl-show=".pizza" /> Pizza
<input type="radio" name="food" value="sandwich" data-sl-required="food" data-sl-show="#sandwich" /> Sandwich
<div class="hidden required" data-sl-error-food="1">A food item must be selected</div>

Here we have two standard radio inputs with some custom data attributes and a hidden div (using CSS) that contains our error message. The first input has a name of food and a value of pizza – these will be used on the server side once our form is submitted. There are two custom data attributes for this input: data-sl-required and data-sl-show. I am defining the data attribute data-sl-required to indicate that this radio button is required in the form and one of them must be selected to pass validation. Note that I have prefixed required with sl- to namespace my data attribute. required is generic and I don't want to have any conflicts with any other attributes – especially ones written for other projects! The value of data-sl-required is food, which I have tied to the div with the attribute data-sl-error-food (more on this later).

The second custom attribute is data-sl-show with a selector of .pizza. (To review selectors, jump back to the JavaScript optimization post.) This data attribute will be used to show a hidden div that contains the class pizza when the radio button is clicked. The sandwich radio input has the same data attributes but with slightly different values.

Now we can move on to our HTML with the hidden text inputs:

<div class="hidden pizza">
    Pizza type: <input type="text" name="pizza" data-sl-required="pizza" data-sl-depends="{&quot;type&quot;:&quot;radio&quot;,&quot;name&quot;:&quot;food&quot;,&quot;value&quot;:&quot;pizza&quot;}" />
    <div class="hidden required" data-sl-error-pizza="1">The pizza type must not be empty</div>
</div>
 
<div class="hidden" id="sandwich">
    Sandwich: <input type="text" name="sandwich" data-sl-required="sandwich" data-sl-depends="{&quot;type&quot;:&quot;radio&quot;,&quot;name&quot;:&quot;food&quot;,&quot;value&quot;:&quot;sandwich&quot;}" />
    <div class="hidden required" data-sl-error-sandwich="1">The sandwich must not be empty</div>
</div>

These are hidden divs that contain text inputs that will be used to input more data depending on the radio input selected. Notice that the first outer div has a class of pizza and the second outer div has an id of sandwich. These values tie back to the data-sl-show selectors for the radio inputs. These text inputs also contain the data-sl-required attribute like the previous radio inputs. The data-sl-depends data attribute is a fun attribute that contains a JSON-encoded object that's been run through PHP's htmlentities() function. For readability, the data-sl-depends values contain:

{"type":"radio","name":"food","value":"pizza"}
{"type":"radio","name":"food","value":"sandwich"}

This first attribute says that our text input "depends" on the radio input with the name food and the value pizza: that radio must be visible and selected for the text input to be treated as required. You can just imagine the combinations you can create to make very custom functionality with very generic JavaScript.
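
If you're curious how that encoded value gets generated, here's a hypothetical server-side sketch using the PHP functions mentioned above:

<input type="text" name="pizza" data-sl-required="pizza"
    data-sl-depends="<?php echo htmlentities(json_encode(array('type' => 'radio', 'name' => 'food', 'value' => 'pizza'))); ?>" />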

Now we can examine our JavaScript to make sense of all these custom data attributes. Note that I'll be using Mootools' syntax, but the same can fairly easily be accomplished using jQuery or your favorite JavaScript framework. I'm going to start by creating a DataForm class. This will be generic enough so that you can use it in multiple forms and it's not tied to this specific instance. Reuse is good! To have it fit our needs, we're going to pass some options when instantiating it.

new DataForm({
    ...,
    dataAttributes: {
        required: 'data-sl-required',
        errorPrefix: 'data-sl-error',
        depends: 'data-sl-depends',
        show: 'data-sl-show'
    }
});

As you can see, I'm using the data attribute names from our form as the options – this will allow you to create your own data attribute names depending on your uses. We first need to make our hidden divs visible whenever our radio buttons are clicked – we'll use our initData() method for that.

initData: function() {
    var attrs = this.options.dataAttributes,
        divs = [];
 
    $$('input[type=radio]['+attrs.show+']').each(function(input) {
        var div = $$(input.get(attrs.show));
        divs.push(div);
        input.addEvent('change', function(e) {
            divs.each(function(div) { div.hide(); });
            div.show();
        });
    });
}

This method grabs all the radio inputs with the show attribute (data-sl-show) and adds an onchange event to each of them. When a radio input is checked, it first hides all the divs, and then shows the div that's associated with that radio input.

Great! Now we have our text inputs showing and hiding depending on which radio button is selected. Now onto the actual validation. First, we'll make sure our required radio inputs are checked:

$$('input[type=radio]['+attrs.required+']:not(:checked)').each(function(input) {
    var name = input.get('name');
    checkError(name, isRadioWithNameSelected(name));
});

This grabs all the unchecked radio inputs with the required attribute (data-sl-required) and checks to see if any radio with that same name is selected using the function isRadioWithNameSelected(). The checkError() function will show or hide the error div (data-sl-error-*) depending on whether the radio is checked. Don't worry - you'll see how these functions are implemented in the JSFiddle below.
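
For reference, here's a rough sketch of what those two helpers might look like (the working versions are in the JSFiddle):

// Is any radio input in this group currently checked?
function isRadioWithNameSelected(name) {
    return $$('input[type=radio][name="' + name + '"]:checked').length > 0;
}
 
// Show or hide the error div tied to a name, e.g. [data-sl-error-food]
function checkError(name, passed) {
    $$('[data-sl-error-' + name + ']').each(function(div) {
        passed ? div.hide() : div.show();
    });
}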

Now we can check our text inputs:

$$('input[type=text]['+attrs.required+']:visible').each(function(input) {
    var name = input.get('name'),
        depends = input.get(attrs.depends),
        value = input.get('value').trim();
 
    if (depends) {
        depends = JSON.decode(depends);
        switch (depends.type) {
            case 'radio':
                if (depends.name && depends.value) {
                    var radio = $$('input[type=radio][name="'+depends.name+'"][value="'+depends.value+'"]:checked');
                    if (!radio.length) {
                        checkError(input.get(attrs.required), true);
                        return;
                    }
                }
                break;
        }
    }
 
    checkError(name, value!='');
});

This obtains the required and visible text inputs and determines if they are empty or not. Here we look at our depends object to grab the associated radio inputs. If that radio is checked and the text input is empty, the error message appears. If that radio is not checked, it doesn't even evaluate that text input. The depends.type is in a switch statement to allow for easy expansion of types. There are other cases for evaluating relationships ... I'll let you come up with more for yourself.

That concludes our usage of custom data attributes in form validation. This really only touched upon the very tip of the iceberg. Data attributes allow you – the creative developer – to interact in a new, HTML-valid way with your web pages.

Check out the working example of the above code at jsfiddle.net. To read more on custom data attributes, see what Google has to say. If you want to see really cool functionality that uses data attributes plus so much more, check out Aaron Newton's Behavior module over at clientcide.com.

Happy coding!

-Philip

January 9, 2012

iptables Tips and Tricks – Troubleshooting Rulesets

One of the most time-consuming tasks with iptables is troubleshooting a problematic ruleset, and that will not change no matter how much experience you have with it. However, with the right mindset, this task becomes considerably easier.

If you missed my last installment about iptables rule processing, here's a crash course:

  1. The rules start at the top, and proceed down, one by one unless otherwise directed.
  2. A rule must match exactly.
  3. Once iptables has accepted, rejected, or dropped a packet, it will not process any further rules on it.

There are essentially two things that you will be troubleshooting with iptables ... Either it's not accepting traffic and it should be OR it's accepting traffic and it shouldn't be. If the server is intermittently blocking or accepting traffic, that may take some additional troubleshooting, and it may not even be related to iptables.

Keep in mind what you are looking for, and don't jump to any conclusions. Troubleshooting iptables takes patience and time, and there shouldn't be any guesswork involved. If you have a configuration of 800 rules, you should expect to need to look through every single rule until you find the rule that is causing your problems.

Before you begin troubleshooting, you first need to know some information about the traffic:

  1. What is the source IP address or range that is having difficulty connecting?
  2. What is the destination IP address or website IP?
  3. What is the port or port range affected, or what type of traffic is it (TCP, ICMP, etc.)?
  4. Is it supposed to be accepted or blocked?

Those bits of information should be all you need to begin troubleshooting a buggy ruleset, except in some rare cases that are outside the scope of this article.

Here are some things to keep in mind (especially if you did not program every rule by hand):

  • iptables has three built-in chains: INPUT – the traffic coming in to the server, OUTPUT – the traffic coming out of the server, and FORWARD – traffic that is not destined to or coming from the server (usually only used when iptables is acting as a firewall for other servers). You will start your troubleshooting at the top of one of these three chains, depending on the type of traffic.
  • The "target" is the action that is taken when the rule matches. This may be another custom chain, so if you see a rule with another chain as the target that matches exactly, be sure to step through every rule in that chain as well. In the following example, you will see the BLACKLIST2 sub-chain that applies to traffic on port 80. If traffic comes through on port 80, it will be diverted to this other chain.
  • The RETURN target indicates that you should return to the parent chain. If you see a rule that matches with a RETURN target, stop all your troubleshooting on the current chain, and return to the rule directly after the rule that referenced the custom chain.
  • If there are no matching rules, the chain policy is applied.
  • There may be rules in the "nat," "mangle" or "raw" tables that are blocking or diverting your traffic. Typically, all the rules will be in the "filter" table, but you might run into situations where this is not the case. Try running this to check: iptables -t mangle -nL ; iptables -t nat -nL ; iptables -t raw -nL
  • Be cognizant of the policy. If the policy is ACCEPT, all traffic that does not match a rule will be accepted. Conversely, if the policy is DROP or REJECT, all traffic that does not match a rule will be blocked.
  • My goal with this article is to introduce you to the algorithm by which you can troubleshoot a more complex ruleset. It is intentionally left simple, but you should still follow through even when the answer may be obvious.
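
As you step through a chain rule by rule, it helps to have the rules numbered so you don't lose your place. This prints every chain with each rule's position:

iptables -nL --line-numbers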

Here is the example ruleset I'll be using:

Chain INPUT (policy DROP)
target prot opt source destination
BLACKLIST2 tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:50
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:53
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:22
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:1010

Chain BLACKLIST2 (1 references)
target prot opt source destination
REJECT * -- 123.123.123.123 0.0.0.0/0
REJECT * -- 45.34.234.234 0.0.0.0/0
ACCEPT * -- 0.0.0.0/0 0.0.0.0/0

Here is the problem: Your server is accepting SSH traffic to anyone, and you wish to only allow SSH to your IP – 111.111.111.111. We know that this is inbound traffic, so this will affect the INPUT chain.

We are looking for:

source IP: any
destination IP: any
protocol: tcp
port: 22

Step 1: The first rule denotes any source IP and any destination IP on destination port 80. Since this is regarding port 22, this rule does not match, so we'll continue to the next rule. If the traffic here were on port 80, it would invoke the BLACKLIST2 sub-chain.
Step 2: The second rule denotes any source IP and any destination IP on destination port 50. Since this is regarding port 22, this rule does not match, so let's continue on.
Step 3: The third rule denotes any source IP and any destination IP on destination port 53. Since this is regarding port 22, this rule does not match, so let's continue on.
Step 4: The fourth rule denotes any source IP and any destination IP on destination port 22. Since this is regarding port 22, this rule matches exactly, and the ACCEPT target is applied to the traffic. We found the problem, and now we need to construct a solution. I will be showing you the Red Hat method of doing this.

Do this to save the running ruleset as a file:

iptables-save > current-iptables-rules

Then edit the current-iptables-rules file in your favorite editor, and find the rule that looks like this:

-A INPUT -p tcp --dport 22 -j ACCEPT

Then you can modify this to only apply to your IP address (the source, or "-s", IP address).

-A INPUT -p tcp -s 111.111.111.111 --dport 22 -j ACCEPT

Once you have this line, you will need to load the iptables configuration from this file for testing.

iptables-restore < current-iptables-rules

Don't directly edit the /etc/sysconfig/iptables file as this might lock you out of your server. It is good practice to test a configuration before saving to the system configuration files. This way, if you do get locked out, you can reboot your server and it will be working. The ruleset should look like this now:

Chain INPUT (policy DROP)
target prot opt source destination
BLACKLIST2 tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:50
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:53
ACCEPT tcp -- 111.111.111.111 0.0.0.0/0 tcp dpt:22
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:1010

Chain BLACKLIST2 (1 references)
target prot opt source destination
REJECT * -- 123.123.123.123 0.0.0.0/0
REJECT * -- 45.34.234.234 0.0.0.0/0
ACCEPT * -- 0.0.0.0/0 0.0.0.0/0

The policy of "DROP" will now block any other connection on port 22. Remember, the rule must match exactly, so the rule on port 22 now *ONLY* applies if the IP address is 111.111.111.111.

Once you have confirmed that the rule is behaving properly (be sure to test from another IP address to confirm that you are not able to connect), you can write the system configuration:

service iptables save

If this troubleshooting sounds boring and repetitive, you are right. However, this is the secret to solid iptables troubleshooting. As I said earlier, there is no guesswork involved. Just take it step by step, make sure the rule matches exactly, and follow it through till you find the rule that is causing the problem. This method may not be fast, but it's reliable. You'll look like an expert in no time.

-Mark

December 12, 2011

Web Development - JavaScript Optimization

So you have a fast website. You've optimized your database queries and you're using the most efficient page caching mechanisms. The users of your site are pretty satisfied, but you want just a little bit more. How can you push your site to the next level by making sure you've included all the optimizations you can to get the fastest speed possible?

Optimize your JavaScript.

You might not be concerned with optimizing your code because newer browsers have gotten significantly faster at processing JavaScript. But as with any other programming language, there are minor tweaks you can make that provide a significant performance gain. Let's take a look at a few of the ways you can be certain that you're getting the most out of your client-side JavaScript code.

JavaScript in External Files
The first optimization is to write your JavaScript in external files. By using external files, the browser is able to cache your code. This prevents users from needing to re-download your JavaScript during every page load and potentially during every AJAX request. When a browser visits any website, it checks the src attribute in the <script> tags and then determines whether it already has a copy of that file on the user's computer. If it does, it won't spend wasteful time re-downloading the exact same code. If you include your code directly in your HTML within <script> tags, that same code will be downloaded each time so that it can be processed. Save those precious bytes!

Minification
Now that you're putting your code in external files, your next goal is to reduce the amount of time it takes to initially download that code so that it can be cached by the browser. This is where minification comes into play. Minification (or minimization) is the process of removing all unnecessary characters from source code (without changing its functionality). Essentially this means you'll remove whitespace characters and comments.

Unless you like to torture yourself (and those who follow your work), you probably write code that's pretty readable. This includes splitting code up between multiple lines, indenting your code with spaces or tabs, and writing comments that explain some esoteric portion of code. All these items are not important to the JavaScript engine because it will just ignore them, but it still takes time to download these extra bytes that will never get used.

Luckily for you, you don't have to spend time creating the tool that will remove these unneeded characters. There are several good tools out there that will minify your code, and I recommend YUI Compressor. This tool can reduce your code by up to 60% (depending upon how efficiently you write your code). In addition to removing all the unnecessary characters, YUI Compressor will rewrite your code to use smaller variable names. If you have a local variable in a function called var myDescriptiveVariable, the compressor will rename it to var a. We just took our 21-character variable name down to 1 character! If this variable is used in 10 places, we've trimmed 200 characters with this one variable, and this happens for every local variable in your script! That's a lot of bytes saved.

If you're paying close attention, the key takeaway you'll notice is that using local variables (as opposed to global variables, which don't get minified) is one great way to reduce the size of your code after minification occurs.
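
Running the compressor itself is a one-liner. Here's a sketch with hypothetical file and version names:

java -jar yuicompressor-2.4.7.jar --type js -o myscript.min.js myscript.js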

Initial Page Load Operations
Now that your code has been minified, let's take a look at initial page load operations. Since some JavaScript operations are synchronous and will halt other browser processing, we want to make sure we don't slow down the page when the user is trying to perform an action. There are two things we can do to improve initial page loading performance. First, reduce the amount of code you actually invoke on page load (or when the DOM is ready for interaction using Mootools' domready event, jQuery's document ready event or your favorite framework's equivalent). Second, implement lazy-loading techniques. For example, instead of adding all your events to all the elements on the page initially, wait for user interaction or some other process to add the events. Let's look at a few specific examples so you can see what I mean. Note: the code samples are using Mootools syntax.

Before:

var comments = $$('.articles > div.comments');
$('showComments').addEvent('click', function() { comments.show(); });

After:

$('showComments').addEvent('click', function() { $$('.articles > div.comments').show(); });

While this may improve initial page load, each time we click on showComments, we have to parse the DOM (Document Object Model) to find all the comments, and this is not a cheap operation. The key here is to test your own code and determine for each scenario which way is faster and which will keep your users the happiest. Just realize that on-demand operations may be more acceptable to users than having to wait for the page to load. You be the judge!
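
When in doubt, measure. Most browsers with a developer console support simple timers, so a quick (admittedly crude) check of the click handler might look like this:

console.time('showComments');
$$('.articles > div.comments').show();
console.timeEnd('showComments'); // logs the elapsed milliseconds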

Code Caching
We discussed file caching before, but what about code caching? We need to determine which operations will benefit the most from caching. The two items I will focus on are loops and DOM interactions. Loops can be costly because you may be performing unnecessary actions within each iteration.

Before:

for (var i=0; i<$$('.toggle').length; i++) {
 var el = new Element('div.something', {
 html: $$('.toggle')[i].get('text')
 });
 el.inject($('content'));
}

After:

var i = 0,
 els = $$('.toggle'),
 length = els.length,
 container = new Element('div');
 
for (i=0; i<length; i++) {
 new Element('div.something', {
 html: els[i].get('text')
 }).inject(container);
}
 
container.inject($('content'));

The "Before" loop is inefficient because we are querying the DOM three separate times to get the elements we need (twice for .toggle elements and once for the #content element), and we are having to determine the length property of that collection of elements during each iteration. In our case, the length won't change, so we only need to find it one time. Finally, during each iteration, we add a new element to the #content div, and this causes a redraw of the page. DOM manipulation and redrawing can be expensive, so let's look at how the second method is so much more efficient.

We start by caching our DOM elements so we only have to pull them once and get the length property of those elements only once. We've also created an element which will be used to contain all our new elements. Since the container has not been added to the DOM yet, we can add and remove from it without having to worry about the expensive redraw of the page. Once all our elements have been added to the container, we then inject the container into the #content div. Since this is only happening once, we have significantly improved performance.

Selector Efficiency
The last optimization technique I'll review is selector efficiency. Selectors are used to grab specific elements from the document in order to interact with them in your code. It's very possible to write poor-performing selectors. Some selectors to avoid:

$$('*')

This will grab all the elements in your document. If you have a huge document, this could take a while. You better not be using this selector in a loop!

$$('[data-some-attribute]')

This selector is inefficient because it has to look for this data attribute within every element in the document. If there are thousands of elements and each has several data attributes, you should just go get some coffee.

$$('body .person:nth-child(3n+1) .category p span.title')

This selector is fairly complicated. The reason this can be inefficient is because it may have to unnecessarily inspect many elements to get to the one(s) it needs. Determine if you can simplify the selector by being more specific and using an id to get to elements or even consider slightly modifying your HTML so that it's easier to create efficient selectors.

There are plenty of other techniques that will help improve the speed and efficiency of your JavaScript, but these are a good start and should help make you and your users happy. Remember that DOM interactions are slow and expensive, so do what you can to reduce chatting back and forth with the document. Check your loops and minify your code, and your users will surely return.

Happy coding!

-Philip

December 8, 2011

UNIX Sysadmin Boot Camp: bash - Keyboard Shortcuts

On the support team, we're jumping in and out of shells constantly. At any time during my work day, I'll see at least four instances of PuTTY in my task bar, so one thing I learned quickly was that efficiency and accuracy in the shell ultimately make life easier for our customers and for us as well. Spending too much time rewriting paths, commands, VI navigation, and history cycling can really bring you to a crawl. So now that you have had some time to study bash and practice a little, I thought I'd share some of the keyboard shortcuts that help us work as effectively and as expediently as we do. I won't be able to cover all of the shortcuts, but these are the ones I use most:

Tab

[Tab] is one of the first keyboard shortcuts that most people learn, and it's ever-so-convenient. Let's say you just downloaded pckg54andahalf-5.2.17-v54-2-x86-686-Debian.tar.gz, but a quick listing of the directory shows you ALSO downloaded 5.1.11, 4.8.6 and 1.2.3 at some point in the past. What was that file name again? Fret not. You know you downloaded 5.2.something, so you just start with, say, pckg, and hit [Tab]. This autocompletes everything that it can match to a unique file name, so if there are no other files that start with "pckg," it will populate the whole file name (and this can occur at any point in a command).

In this case, we've got four different files that are similar:
pckg54andahalf-5.2.17-v54-2-x86-686-Debian.tar.gz
pckg54andahalf-5.1.11-v54-2-x86-686-Debian.tar.gz
pckg54andahalf-4.8.6-v54-2-x86-686-Debian.tar.gz
pckg54andahalf-1.2.3-v54-2-x86-686-Debian.tar.gz

So typing "pckg" and hitting [Tab] brings up:
pckg54andahalf-

NOW, what you could do, knowing what files are there already, is type "5.2" and hit [Tab] again to fill out the rest. However, if you didn't know what the potential matches were, you could double-tap [Tab]. This displays all matching file names with that string.

Another fun fact: This trick also works in Windows. ;)

CTRL+R

[CTRL+R] is a very underrated shortcut in my humble opinion. When you've been working in the shell for untold hours parsing logs, moving files and editing configs, your bash history can get pretty immense. Often you'll come across a situation where you want to reproduce a command or series of commands that were run regarding a specific file or circumstance. You could type "history" and pore through the commands line by line, but I propose something more efficient: a reverse search.

Example: I've just hopped on my system and discovered that my SVN server isn't doing what it's supposed to. I want to take a look at any SVN related commands that were executed from bash, so I can make sure there were no errors. I'd simply hit [CTRL+R], which would pull up the following prompt:

(reverse-i-search)`':

Typing "s" at this point would immediately return the first command with the letter "s" in it in the history ... Keep in mind that's not just starting with s, it's containing an s. Finishing that out to "svn" brings up any command executed with those letters in that order. Pressing [CTRL+R] again at this point will cycle through the commands one by one.

In the search, I find the command that was run incorrectly ... There was a typo in it. I can edit the command within the search prompt before hitting enter and committing it to the command prompt. Pretty handy, right? This can quickly become one of your most used shortcuts.

CTRL+W & CTRL+Y

This pair of shortcuts is the one I find myself using the most. [CTRL+W] will basically take the word before your cursor and "cut" it, just like you would with [CTRL+X] in Windows if you highlighted a word. A "word" doesn't really describe what it cuts in bash, though ... It uses whitespace as a delimiter, so if you have an ultra long file path that you'll probably be using multiple times down the road, you can [CTRL+W] that sucker and keep it stowed away.

Example: I'm typing nano /etc/httpd/conf/httpd.conf (Related: The redundancy of this path always irked me just a little).
Before hitting [ENTER] I tap [CTRL+W], which chops that path right back out and stores it to memory. Because I want to run that command right now as well, I hit [CTRL+Y] to paste it back into the line. When I'm done with that and I'm out referencing other logs or doing work on other files and need to come back to it, I can simply type "nano " and hit [CTRL+Y] to go right back into that file.

CTRL+C

For the sake of covering most of my bases, I want to make sure that [CTRL+C] is covered. Not only is it useful, but it's absolutely essential for standard shell usage. This little shortcut performs the most invaluable act of killing whatever process you were running at that point. This can go for most anything, aside from the programs that have their own interfaces and kill commands (vi, nano, etc). If you start something, there's a pretty good chance you're going to want to stop it eventually.

I should be clear that this will terminate a process unless that process is otherwise instructed to trap [CTRL+C] and perform a different function. If you're compiling something or running a database command, generally you won't want to use this shortcut unless you know what you're doing. But, when it comes to everyday usage such as running a "top" and then quitting, it's essential.

Repeating a Command

There are four simple ways you can easily repeat a command with a keyboard shortcut, so I thought I'd run through them here before wrapping up:

  1. The [UP] arrow will display the previously executed command.
  2. [CTRL+P] will do the exact same thing as the [UP] arrow.
  3. Typing "!!" and hitting [Enter] will execute the previous command. Note that this actually runs it. The previous two options only display the command, giving you the option to hit [ENTER].
  4. Typing "!-1" will do the same thing as "!!", though I want to point out how it does this: When you type "history", you see a numbered list of commands executed in the past -1 being the most recent. What "!-1" does is instructs the shell to execute (!) the first item on the history (-1). This same concept can be applied for any command in the history at all ... This can be useful for scripting.

Start Practicing

What it really comes down to is finding what works for you and what suits your work style. There are a number of other shortcuts that are definitely worthwhile to take a look at. There are plenty of cheat sheets on the internet available to print out while you're learning, and I'd highly recommend checking them out. Trust me on this: You'll never regret honing your mastery of bash shortcuts, particularly once you've seen the lightning speed at which you start flying through the command line. The tedium goes away, and the shell becomes a much more friendly, dare I say inviting, place to be.

Quick reference for these shortcuts:

  • [TAB] - Autocomplete to furthest point in a unique matching file name or path.
  • [CTRL+R] - Reverse search through your bash history
  • [CTRL+W] - Cut one "word" back, or until whitespace encountered.
  • [CTRL+Y] - Paste a previously cut string
  • [CTRL+P] - Display previously run command
  • [UP] - Display previously run command

-Ryan

October 26, 2011

MODX: Tech Partner Spotlight

This is a guest blog from the MODX team. MODX offers an intuitive, feature-rich, open source content management platform that can easily integrate with other applications as the heart of your Customer Experience Management solution.

Company Website: http://modx.com/
Tech Partners Marketplace: http://www.softlayer.com/marketplace/modx

Free your Website with MODX CMS

Just having a website or a blog is no longer a viable online strategy for smart businesses. Today's interconnected world requires engaging customers — from the first impression, to developing leads, educating, selling, empowering customer service and beyond. This key shift in online interaction is known as Customer Experience Management, or CXM.

For businesses to have success with CXM, they need an efficient way to connect all facets of their communications and information together with a modern and consistent look and feel, without long learning curves or frustrating user experiences. You don't want a Content Management System (CMS) that restricts your ability to meet brand standards, that lives in isolation from your other systems and data, or that fails to fulfill your business's needs.

MODX is a content management platform that gives you the creative freedom to build custom websites limited only by your imagination. It certainly can play the central role in managing your customer experience.

Freedom from Hassle & Frustration
The most productive tools are those that simply let you get your work done. To make life easy for content editors, MODX uses familiar concepts like a hierarchical tree – similar to the folders and files on your computer. This allows content editors to relate their content to the overall website structure. But, like everything else in MODX, you aren't limited to hierarchical content; you can easily employ taxonomy-, list- or category-based structures.

Similarly, editing documents should be easy. With MODX, anyone who can open a web browser and send email has the skillset to create and edit content. Most tasks are a matter of filling out simple form fields, accompanied by a sensible MS Word-like editor for your main content. Furthermore, site builders and developers are able to create custom fields for custom content types and custom data, allowing non-technical employees to work in an intuitive, tailored environment.

Total Creative Freedom
Your website is one of the most visible parts of your brand and you certainly don't want it limited by your CMS. MODX makes it possible to do anything that's on the modern web now — you don't have to wait for a year or hack the core to launch an HTML5 or mobile optimized site. MODX can do it all now, and even what's coming next. It outputs exactly and only what you or your site builder dictate.

MODX uses a brilliantly simple template engine that allows web designers to work with what they already know, like HTML, CSS and any JavaScript library they choose. MODX can even output things not typically associated with most content management platforms, like XML, JSON or even Comma Separated Value (CSV) files that automatically download to your desktop.

Freedom to Extend
MODX provides all the requisite tools for CMS, but it also functions as a fully capable web development platform upon which you can extend functionality, employ custom applications and do just about anything you can dream up. In fact, the "X" in MODX comes from the word "extensible". Whether you want to build a Member-only website, Client Extranet, Resort Booking and Reservations system or private Social Network, you can do it on MODX.

For developers, the fully documented object-oriented API and xPDO, MODX's database layer, provide all you need to build almost anything with MODX, even extending or overriding its core functionality. Critically, you can do all of this through the API and retain a painless upgrade path without hacking the core. The MODX API architecture provides all the flexibility you or your developer might need to make MODX your own without painting yourself into a corner.
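As a rough sketch of what that looks like in practice (assuming code running inside a MODX Revolution 2.x install, where the $modx object is available; modResource is the core page object, but the field values here are made up):

```php
<?php
// Sketch: working with MODX objects through the xPDO-based API.
// Assumes this runs inside MODX (e.g., as a snippet), so $modx exists.

// Fetch a single page by criteria; xPDO binds the values safely.
$resource = $modx->getObject('modResource', array('alias' => 'about-us'));
if ($resource) {
    echo $resource->get('pagetitle');
}

// Fetch all published pages as objects and update them through the API,
// rather than with hand-written SQL.
$published = $modx->getCollection('modResource', array('published' => true));
foreach ($published as $page) {
    $page->set('searchable', true);
    $page->save();
}
```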

Freedom from Bottlenecks
Modern web pages are made up of many component parts – site-wide headers and footers, navigation menus, articles, products and more. At some point, all these pieces need to be put together and delivered to the visitor as a single page, and users expect that page to load quickly or they'll leave your site.

To deliver pages fast, top-performing sites use server-side caching to take all those pieces and pre-process them for fast delivery to a browser. The problem with many CMS applications is that they rebuild pages from scratch every single time someone visits your site. That's fine if you only have a few visitors, but your site can bog down or even fail under moderate traffic. Under those conditions, being featured in an industry magazine, in the national media or on a popular TV show would be disastrous: your site could grind to a halt, costing you customers, damaging your reputation and making a terrible first impression.

MODX's native page caching delivers your site quickly by default, and MODX can use high-end caching like memcached to further improve performance under load. To handle millions of pageviews daily, you also need robust servers and an optimized environment ... That's where scaling across multiple servers and replication with SoftLayer works perfectly with MODX.
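As a sketch of how a developer might lean on that cache layer directly (the cache key and the "expensive" work are hypothetical; cacheManager->get() and set() are the stock MODX cache manager methods):

```php
<?php
// Sketch: caching the result of expensive work with MODX's cache manager.
// Assumes $modx is available (e.g., inside a snippet); the key name and
// the slow work below are hypothetical.

$buildStats = function () {
    // Stand-in for slow work: heavy queries, remote API calls, etc.
    return array('generatedAt' => time(), 'items' => 42);
};

$stats = $modx->cacheManager->get('expensive_stats');
if (!$stats) {
    $stats = $buildStats();
    // Store for an hour; later requests skip the slow path entirely.
    $modx->cacheManager->set('expensive_stats', $stats, 3600);
}
return json_encode($stats);
```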

Free Your Legacy Systems
Keeping your data, content and business information in disconnected silos is ineffective and costly. Tapping into existing systems, like an Active Directory or enterprise content repository, makes a huge difference in getting your work done headache-free: no data duplicated across systems, no significant extra integration work and no synchronization issues. A new website platform should increase your productivity and enable your employees, customers and everyone else surrounding your business to find what they need and to interact efficiently and effectively.

MODX works with the tools and technology that organizations already have in place. It can easily interact with external web services or data feeds and can drive other applications via RESTful web services.
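For instance, a small snippet can pull in an external JSON feed and hand the result to your templates. This is only a sketch, with a hypothetical endpoint and placeholder name:

```php
<?php
// Sketch: a MODX snippet consuming an external JSON web service.
// The endpoint URL and the placeholder name are hypothetical.

$response = @file_get_contents('https://api.example.com/products.json');
$products = $response ? json_decode($response, true) : null;

if (is_array($products)) {
    // Expose the data to templates as a placeholder, used as [[+productCount]].
    $modx->setPlaceholder('productCount', count($products));
}
return '';
```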

Security and Freedom to Rest Easy
Website security is a topic that rarely surfaces during the early stages of a web project and often isn't addressed until a site has been compromised.

A high-quality hosting environment like those available from SoftLayer is the foundation of website security, but your web CMS and its add-ons, plug-ins or modules shouldn't be a liability either. MODX is designed with security at its core to protect your valuable website from malicious attacks: every input is filtered, and every database query made through the API guards against SQL injection compromises. Most importantly, the development team rigorously and continuously audits MODX to keep it up to date and to patch any new issues that arise.
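To make the SQL injection point concrete, here's a sketch contrasting string-built SQL with queries made through the API (the username lookup is a hypothetical example; xPDO binds criteria values as parameters):

```php
<?php
// Sketch: why queries made through the MODX/xPDO API resist SQL injection.
// $input stands in for untrusted, user-supplied data.

$input = isset($_GET['username']) ? $_GET['username'] : '';

// Risky: string-built SQL lets crafted input rewrite the query.
// $result = $modx->query("SELECT * FROM modx_users WHERE username = '$input'");

// Safer: criteria passed to the API are bound as parameters by xPDO.
$user = $modx->getObject('modUser', array('username' => $input));

// Equivalent explicit form: xPDO wraps PDO, so prepared statements work too.
$stmt = $modx->prepare('SELECT id FROM ' . $modx->getTableName('modUser') . ' WHERE username = ?');
$stmt->execute(array($input));
```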

Freedom in the Community
With MODX and the MODX Community, you're not alone. There are hundreds of thousands of websites built on MODX, and we have a friendly, active and growing community of raving fans, over 37,000 strong, to whom you can look for assistance, support, education and camaraderie.

In fact, the MODX Community is one of our greatest assets.

They provide mentorship and assistance, and they help make the MODX software better by actively reporting issues and feature requests and by contributing improvements for the core team to integrate.

If you're not a site builder or developer but want your website powered by MODX, one of the best places to start is with a MODX Solution Partner. Our network of 90+ global Solution Partners lets you find the right-fit expertise for your project and, in many cases, work with someone local. Solution Partners are experts at MODX and know how to do things right.

Get Free
There really is a cure for the all-too-common restrictive, unintuitive and frustrating experience of putting content on the web. Get on the road to content management freedom with MODX. It's easy to start, since MODX Revolution itself is free to download and use.

Learn more at http://modx.com/.

-Jay Gilmore, MODX

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace. These Partners have built their businesses on the SoftLayer platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.
October 11, 2011

Building a True Real-Time Multiplayer Gaming Platform

Some of the most innovative developments on the Internet are coming from online game developers looking to push the boundaries of realism and interactivity. Developing an online gaming platform that can support a wide range of applications, including private chat, avatar chats, turn-based multiplayer games, first-person shooters, and MMORPGs, is no small feat.

Our high-speed global network significantly minimizes the reliability, access, latency, lag and bandwidth issues that commonly challenge online gaming. Once users begin to experience latency or reliability problems, they're gone and likely never to return. Our cloud, dedicated, and managed hosting solutions enable game developers to rapidly test, deploy and manage rich interactive media on a secure platform.

Consider the success of one of our partners — Electrotank Inc. They’ve been able to support as many as 6,500 concurrent users on just ONE server in a realistic simulation of a first-person shooter game, and up to 330,000 concurrent users for a turn-based multiplayer game. Talk about server density.

And this is just scratching the surface, because we're continuing to build our global footprint to reduce latency for users around the world. That means no awkward pauses and no jumping around, but rather a smooth, seamless, interactive online gaming experience. The combined efforts of SoftLayer's infrastructure and Electrotank's performant software have produced a high-performance networking platform that delivers a highly scalable, low-latency experience to both gamers and game developers.

Electrotank

You can read more about how Electrotank is leveraging SoftLayer’s unique network platform in today's press release or in the fantastic white paper they published with details about their load testing methodology and results.

We always like to hear our customers' opinions, so let us know what you think.

-@nday91
