Posts Tagged 'Coding'

March 5, 2012

iptables Tips and Tricks - Not Locking Yourself Out

The iptables tool is one of the simplest, most powerful tools you can use to protect your server. We've covered port redirection, rule processing and troubleshooting in previous installments to this "Tips and Tricks" series, but what happens when iptables turns against you and locks you out of your own system?

Getting locked out of a production server can cost both time and money, so it's worth the effort to avoid it. If you follow the correct procedures, you can safeguard yourself from being firewalled off of your own server. Here are seven tips to help you keep your sanity and avoid locking yourself out.

Tip 1: Keep a safe ruleset handy.

If you are starting with a working ruleset, or even if you are trying to troubleshoot an existing ruleset, take a backup of your iptables configuration before you ever start working on it.

iptables-save > /root/iptables-safe

Then if you do something that prevents your website from working, you can quickly restore it.

iptables-restore < /root/iptables-safe

Tip 2: Create a cron script that will reload to your safe ruleset every minute during testing.

This was pointed out to me by a friend who swears by this method. Just write a quick bash script and set a cron entry that will reload the rules back to the safe set every minute. You'll have to test quickly, but it will keep you from getting locked out.
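
As a minimal sketch (assuming the backup from Tip 1 lives at /root/iptables-safe), the script and crontab entry might look like this; remember to remove the cron entry once you're done testing:

#!/bin/bash
# /root/restore-safe-rules.sh - reload the known-good ruleset
iptables-restore < /root/iptables-safe

# crontab entry: run the script every minute while testing
* * * * * /root/restore-safe-rules.sh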

Tip 3: Have the IPMI KVM ready.

SoftLayer-pod servers* are equipped with some sort of remote access device, and most of them have a KVM console. You will want to have your VPN connection set up and connected, and the KVM window open. You can't paste to and from the KVM, so SSH is typically easier to work with, but having the console ready will definitely cut down on downtime if something does go wrong.

*This may not apply to servers that were originally provisioned under another company name.

Tip 4: Try to avoid generic rules.

The more criteria you specify in the rule, the less chance you will have of locking yourself out. I would liken this to a pie. A specific rule is a very thin slice of the pie.

iptables -A INPUT -p tcp --dport 22 -s 10.0.0.0/8 -d 123.123.123.123 -j DROP

But if you block port 22 from any to any, it's a very large slice.

iptables -A INPUT -p tcp --dport 22 -j DROP

There are plenty of ways that you can be more specific. For example, using "-i eth0" will limit the processing to a single NIC in your server. This way, it will not apply the rule to eth1.
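
For instance, here is a hedged tweak of the earlier example that confines the rule to eth0 (standard iptables syntax):

iptables -A INPUT -i eth0 -p tcp --dport 22 -s 10.0.0.0/8 -d 123.123.123.123 -j DROP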

Tip 5: Whitelist your IP address at the top of your ruleset.

This may make testing more difficult unless you have a secondary offsite test server, but this is a very effective method of not getting locked out.

iptables -I INPUT -s <your IP> -j ACCEPT

You need to put this as the FIRST rule in order for it to work properly ("-I" inserts it as the first rule, whereas "-A" appends it to the end of the list).

Tip 6: Know and understand all of the rules in your current configuration.

Not making the mistake in the first place is half the battle. If you understand the inner workings behind your iptables ruleset, it will make your life easier. Draw a flow chart if you must.

Tip 7: Understand the way that iptables processes rules.

Remember, the rules start at the top of the chain and go down, unless specified otherwise. Crack open the iptables man page and learn about the options you are using.

-Mark

January 17, 2012

Web Development - HTML5 - Web Fonts

All but gone are the days of plain, static webpages flowered with horrible repeating neon backgrounds and covered with nauseating animated GIFs created by amateur designers that would make your mother cry and induce seizures in your grandpa. Needless to say, we have come a long way since Al Gore first "created the intarwebs" in the early '90s. For those of you born in this century, that's the 1990s ... Yes, the World Wide Web is still very new. Luckily for the seven billion people on this lovely planet, many advancements have been introduced into our web browsers that make our lives as designers and developers just a little bit more tolerable.

Welcome to the third installment in our Web Development series. If you're just joining us, the first posts in the series covered JavaScript Optimization and HTML5 Custom Data Attributes ... If you haven't read those yet, take a few minutes to catch up and head back to this blog, where we'll be looking at how custom web fonts can add a little spice to your already-fantastic website.

If you're like me, you've probably used the same three or four fonts on most sites you've designed in the past: Arial, Courier New, Trebuchet MS and Verdana. You know that pretty much all browsers will have support for these "core" fonts, so you never ventured beyond them because you wanted the experience to remain the same for everyone, no matter what browser a user was using to surf. If you were adventurous and wanted to throw in a little typographical deviation, you might have created a custom image of the text in whatever font Photoshop would allow, but those days are in the past (or at least they should be).

Why is using an image instead of plain text unfriendly?

  1. Lack of Flexibility – Creating an image is time-consuming. Even if you have really fast fingers and know your way around Photoshop, it will never be as fast as simply typing that text into your favorite editor. Also, you can't change the styles (font-size, color, text-decoration, etc.) of an image using CSS like you can with text.
  2. Lack of Accessibility – Not everyone is alike. Some of your readers or clients may have impairments that require screen readers or a really large font. Using an image – especially one that doesn't contain a good long description – prevents those users from getting the full experience. Also, some people use text-only browsers that don't display any images. Think about your whole audience!
  3. More to Download – Plain text doesn't require the same number of bytes as an image of that same text. By not having another image, you are saving on the amount of time it takes to load your page.

Now that we're on the same page about the downsides of the "old way" of doing things, let's look at some cool HTML5-powered methods for displaying custom fonts. Before we get started, we need to have some custom fonts to use. Google has a nice interface for downloading custom fonts (http://www.google.com/webfonts), and there are plenty of other sites that provide free and non-free fonts that can suit your taste/needs. You can pick and choose which ones you'd like to use (remembering to always follow copyright guidelines), and once you've created and downloaded your collection of fonts, you'll need to set up your CSS to read them.

For simplicity, my file structure is set up with the HTML and CSS files in the same root directory. I will have a fonts directory where I will keep all my custom fonts.

/fonts.html
/fonts.css
/styles.css
/fonts/MyCustomFont/MyCustomFont-Regular.ttf
/fonts/MyCustomFont/MyCustomFont-Bold.ttf
/fonts/...

My fonts.html file will include the two CSS files in the head section. The order in which you include the CSS files does not matter.

<link rel="stylesheet" type="text/css" href="fonts.css" />
<link rel="stylesheet" type="text/css" href="styles.css" />

The fonts.css file will include the definitions for all of our custom fonts. The styles.css file will be our main CSS file for our website. Defining our custom fonts (in fonts.css) is really simple:

@font-face {
    font-family: 'MyCustomFont';
    src: url('fonts/MyCustomFont/MyCustomFont-Regular.ttf') format('truetype');
}

It's almost too easy thanks to HTML5!

Let's break this down into its components to better understand what's going on here. The @font-face declaration will be ignored by older browsers that don't understand it, so this standards-compliant definition degrades nicely. The font-family descriptor is the name that you'll use to reference this font family in your other CSS file(s). The src descriptor contains the location of where your font is stored and the format of the font.

There are several things to note here. The quotes around MyCustomFont in the font-family descriptor are optional. If it were My Custom Font instead (in fonts.css and styles.css), it would still be successfully read. The quotes around the url portion are also optional. However, the quotes around the format portion are not optional. To keep things consistent, I have a habit of adding quotes around all of these items.

An alternative way to define the same font would be to leave off the format portion of the src descriptor. Browsers don't need the format portion if it's a standard font format (described below).

@font-face {
    font-family: 'MyCustomFont';
    src: url('fonts/MyCustomFont/MyCustomFont-Regular.ttf');
}
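
The file structure above also includes MyCustomFont-Bold.ttf. As a hedged sketch (standard @font-face usage, with the file names from our assumed layout), you can map the bold face to the same family name so that bold text automatically uses the proper font file:

@font-face {
    font-family: 'MyCustomFont';
    src: url('fonts/MyCustomFont/MyCustomFont-Bold.ttf');
    font-weight: bold;
}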

Like standard url inclusions in other CSS definitions, the URL item is relative to the location of the definition file (fonts.css). The URL may also be an absolute location or point to a different website altogether. If using the Google web fonts site mentioned earlier (or similar site), you may simply point the URL to the location suggested instead of downloading the actual font.

If you've dealt with web fonts before, you may already be familiar with the multiple formats: WOFF (Web Open Font Format, .woff), TrueType (.ttf), OpenType (.ttf, .otf), Embedded Open Type (.eot) and SVG Font (.svg, .svgz). I won't go into great detail here about these, but if you're interested in learning more, Google and W3C are great resources.

It should be noted that not all browsers are alike (no shock there), and some may not render certain font formats correctly, or at all. You can get around this by including multiple src descriptors in your @font-face declaration to try to support all the browsers.

@font-face {
    font-family: 'MyCustomFont';
    src: url('fonts/MyCustomFont/MyCustomFont-Regular.eot'); /* Old IE */
    src: url('fonts/MyCustomFont/MyCustomFont-Regular.ttf'); /* Cool browsers */
}

Now that we have our font definition set up, we have to include our new custom font in our styles.css. You've done this plenty of times:

h1, p {
    font-family: MyCustomFont, Arial;
}

There you go! If for some reason MyCustomFont is not understood, the browser will fall back to Arial. This degrades gracefully and is really simple to use. One thing to note is that even though your fonts.css file may define twenty custom fonts, only the fonts that are included and used in your styles.css file will be downloaded. This is very smart of the browser – it only downloads what it's going to use.

So now you have one more tool to add to your development box. As more users adopt newer, standards-compliant browsers, it's easier to give your site some spice without the headaches of creating unnecessary images. Go forth and impress your friends with your new web font knowledge!

Happy Coding!

-Philip

P.S. As a bonus, you can check out the in-line style declaration in the source of this post to see how "Happy Coding!" is coded to use the Monofett font family.

January 10, 2012

Web Development - HTML5 - Custom Data Attributes

I recently worked on a project that involved creating promotion codes for our clients. I wanted to make this tool as simple as possible to use and because this involved dealing with thousands of our products in dozens of categories with custom pricing for each of these products, I had to find a generic way to deal with client-side form validation. I didn't want to write custom JavaScript functions for each of the required inputs, so I decided to use custom data attributes.

Last month, we started a series focusing on web development tips and tricks with a post about JavaScript optimization. In this installment, we'll cover how to use HTML5 custom data attributes to assist you in validating forms.

Custom data attributes for elements are "[attributes] in no namespace whose name starts with the string 'data-', has at least one character after the hyphen, is XML-compatible, and contains no characters in the range U+0041 to U+005A (LATIN CAPITAL LETTER A to LATIN CAPITAL LETTER Z)." Thanks W3C. That definition is bookish, so let's break it down and look at some examples.

Valid:

<div data-name="Philip">Mr. Thompson is an okay guy.</div>
<a href="softlayer.com" data-company-name="SoftLayer" data-company-state="TX">SoftLayer</a>
<li data-color="blue">Smurfs</li>

Invalid:

// This attribute is not prefixed with 'data-'
    <h2 database-id="244">Food</h2>
 
// These 2 attributes contain capital letters in the attribute names
    <p data-firstName="Ashley" data-lastName="Thompson">...</p>
 
// This attribute does not have any valid characters following 'data-'
    <img src="/images/pizza.png" data-="Sausage" />

Now that you know what custom data attributes are, why would we use them? Custom attributes allow us to relate specific information to particular elements. This information is hidden from the end user, so we don't have to worry about the data cluttering screen space, and we don't have to create separate hidden elements whose only purpose is to hold custom data (which is just bad practice). This data can be used by a JavaScript programmer to many different ends. Among the most common use cases are manipulating elements, providing custom styles (using CSS) and performing form validation. In this post, we'll focus on form validation.
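
Before we dive into the form, here's a quick sketch of how a script can read these attributes using the standard DOM API (the element and attribute names come from the valid examples above):

var link = document.querySelector('a[data-company-name]');

// getAttribute() works in any browser
var company = link.getAttribute('data-company-name'); // "SoftLayer"

// HTML5 browsers also expose a dataset map; hyphenated names become camelCase
var state = link.dataset.companyState; // "TX"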

Let's start out with a simple form with two radio inputs. Since I like food, our labels will be food items!

<input type="radio" name="food" value="pizza" data-sl-required="food" data-sl-show=".pizza" /> Pizza
<input type="radio" name="food" value="sandwich" data-sl-required="food" data-sl-show="#sandwich" /> Sandwich
<div class="hidden required" data-sl-error-food="1">A food item must be selected</div>

Here we have two standard radio inputs with some custom data attributes and a hidden div (using CSS) that contains our error message. The first input has a name of food and a value of pizza – these will be used on the server side once our form is submitted. There are two custom data attributes for this input: data-sl-required and data-sl-show. I am defining the data attribute data-sl-required to indicate that this radio button is required in the form and one of them must be selected to pass validation. Note that I have prefixed required with sl- to namespace my data attribute. required is generic and I don't want to have any conflicts with any other attributes – especially ones written for other projects! The value of data-sl-required is food, which I have tied to the div with the attribute data-sl-error-food (more on this later).

The second custom attribute is data-sl-show with a selector of .pizza. (To review selectors, jump back to the JavaScript optimization post.) This data attribute will be used to show a hidden div that contains the class pizza when the radio button is clicked. The sandwich radio input has the same data attributes but with slightly different values.

Now we can move on to our HTML with the hidden text inputs:

<div class="hidden pizza">
    Pizza type: <input type="text" name="pizza" data-sl-required="pizza" data-sl-depends="{&quot;type&quot;:&quot;radio&quot;,&quot;name&quot;:&quot;food&quot;,&quot;value&quot;:&quot;pizza&quot;}" />
    <div class="hidden required" data-sl-error-pizza="1">The pizza type must not be empty</div>
</div>
 
<div class="hidden" id="sandwich">
    Sandwich: <input type="text" name="sandwich" data-sl-required="sandwich" data-sl-depends="{&quot;type&quot;:&quot;radio&quot;,&quot;name&quot;:&quot;food&quot;,&quot;value&quot;:&quot;sandwich&quot;}" />
    <div class="hidden required" data-sl-error-sandwich="1">The sandwich must not be empty</div>
</div>

These are hidden divs that contain text inputs that will be used to input more data depending on the radio input selected. Notice that the first outer div has a class of pizza and the second outer div has an id of sandwich. These values tie back to the data-sl-show selectors for the radio inputs. These text inputs also contain the data-sl-required attribute like the previous radio inputs. The data-sl-depends data attribute is a fun attribute that contains a JSON-encoded object that's been run through PHP's htmlentities() function. For readability, the data-sl-depends values contain:

{"type":"radio","name":"food","value":"pizza"}
{"type":"radio","name":"food","value":"sandwich"}

The first attribute says that our pizza text input "depends" on the radio input with the name food and value pizza: that radio must be visible and selected for the text input to be treated as required. You can just imagine the combinations you can create to build very custom functionality with very generic JavaScript.
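
If you're curious how that escaped attribute value gets produced, here's a short sketch of the server side (assuming you're building the markup in PHP, as the htmlentities() mention implies):

<?php
$depends = htmlentities(json_encode(array('type' => 'radio', 'name' => 'food', 'value' => 'pizza')));
echo '<input type="text" name="pizza" data-sl-required="pizza" data-sl-depends="' . $depends . '" />';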

Now we can examine our JavaScript to make sense of all these custom data attributes. Note that I'll be using Mootools' syntax, but the same can fairly easily be accomplished using jQuery or your favorite JavaScript framework. I'm going to start by creating a DataForm class. This will be generic enough so that you can use it in multiple forms and it's not tied to this specific instance. Reuse is good! To have it fit our needs, we're going to pass some options when instantiating it.

new DataForm({
    ...,
    dataAttributes: {
        required: 'data-sl-required',
        errorPrefix: 'data-sl-error',
        depends: 'data-sl-depends',
        show: 'data-sl-show'
    }
});

As you can see, I'm using the data attribute names from our form as the options – this will allow you to create your own data attribute names depending on your uses. We first need to make our hidden divs visible whenever our radio buttons are clicked – we'll use our initData() method for that.

initData: function() {
    var attrs = this.options.dataAttributes,
        divs = [];
 
    $$('input[type=radio]['+attrs.show+']').each(function(input) {
        var div = $$(input.get(attrs.show));
        divs.push(div);
        input.addEvent('change', function(e) {
            divs.each(function(div) { div.hide(); });
            div.show();
        });
    });
}

This method grabs all the radio inputs with the show attribute (data-sl-show) and adds an onchange event to each of them. When a radio input is checked, it first hides all the divs, and then shows the div that's associated with that radio input.

Great! Now we have our text inputs showing and hiding depending on which radio button is selected. Now onto the actual validation. First, we'll make sure our required radio inputs are checked:

$$('input[type=radio]['+attrs.required+']:not(:checked)').each(function(input) {
    var name = input.get('name');
    checkError(name, isRadioWithNameSelected(name));
});

This grabs all the unchecked radio inputs with the required attribute (data-sl-required) and checks to see if any radio with that same name is selected using the function isRadioWithNameSelected(). The checkError() function will show or hide the error div (data-sl-error-*) depending if the radio is checked or not. Don't worry - you'll see how these functions are implemented in the JSFiddle below.
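
For reference, here's a hedged sketch of what those two helpers might look like in Mootools (my own version, not necessarily the JSFiddle's exact code):

// Returns true if any radio input with the given name is checked
function isRadioWithNameSelected(name) {
    return $$('input[type=radio][name="' + name + '"]:checked').length > 0;
}

// Shows or hides the matching data-sl-error-* div based on whether the check passed
function checkError(name, passed) {
    $$('[data-sl-error-' + name + ']').each(function(div) {
        passed ? div.hide() : div.show();
    });
}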

Now we can check our text inputs:

$$('input[type=text]['+attrs.required+']:visible').each(function(input) {
    var name = input.get('name'),
        depends = input.get(attrs.depends),
        value = input.get('value').trim();
 
    if (depends) {
        depends = JSON.decode(depends);
        switch (depends.type) {
            case 'radio':
                if (depends.name && depends.value) {
                    var radio = $$('input[type=radio][name="'+depends.name+'"][value="'+depends.value+'"]:checked');
                    if (!radio.length) {
                        checkError(input.get(attrs.required), true);
                        return;
                    }
                }
                break;
        }
    }
 
    checkError(name, value!='');
});

This obtains the required and visible text inputs and determines if they are empty or not. Here we look at our depends object to grab the associated radio inputs. If that radio is checked and the text input is empty, the error message appears. If that radio is not checked, it doesn't even evaluate that text input. The depends.type is in a switch statement to allow for easy expansion of types. There are other cases for evaluating relationships ... I'll let you come up with more for yourself.

That concludes our usage of custom data attributes in form validation. This really only touched upon the very tip of the iceberg. Data attributes allow you – the creative developer – to interact in a new, HTML-valid way with your web pages.

Check out the working example of the above code at jsfiddle.net. To read more on custom data attributes, see what Google has to say. If you want to see really cool functionality that uses data attributes plus so much more, check out Aaron Newton's Behavior module over at clientcide.com.

Happy coding!

-Philip

January 9, 2012

iptables Tips and Tricks – Troubleshooting Rulesets

One of the most time consuming tasks with iptables is troubleshooting a problematic ruleset. That will not change no matter how much experience you have with it. However, with the right mindset, this task becomes considerably easier.

If you missed my last installment about iptables rule processing, here's a crash course:

  1. The rules start at the top, and proceed down, one by one unless otherwise directed.
  2. A rule must match exactly.
  3. Once iptables has accepted, rejected, or dropped a packet, it will not process any further rules on it.

There are essentially two things that you will be troubleshooting with iptables ... Either it's not accepting traffic and it should be OR it's accepting traffic and it shouldn't be. If the server is intermittently blocking or accepting traffic, that may take some additional troubleshooting, and it may not even be related to iptables.

Keep in mind what you are looking for, and don't jump to any conclusions. Troubleshooting iptables takes patience and time, and there shouldn't be any guesswork involved. If you have a configuration of 800 rules, you should expect to need to look through every single rule until you find the rule that is causing your problems.

Before you begin troubleshooting, you first need to know some information about the traffic:

  1. What is the source IP address or range that is having difficulty connecting?
  2. What is the destination IP address or website IP?
  3. What is the port or port range affected, or what type of traffic is it (TCP, ICMP, etc.)?
  4. Is it supposed to be accepted or blocked?

Those bits of information should be all you need to begin troubleshooting a buggy ruleset, except in some rare cases that are outside the scope of this article.

Here are some things to keep in mind (especially if you did not program every rule by hand):

  • iptables has three built-in chains. These are for INPUT – the traffic coming in to the server, OUTPUT – the traffic coming out of the server, and FORWARD – traffic that is not destined to or coming from the server (usually only used when iptables is acting as a firewall for other servers). You will start your troubleshooting at the top of one of these three chains, depending on the type of traffic.
  • The "target" is the action that is taken when the rule matches. This may be another custom chain, so if you see a rule with another chain as the target that matches exactly, be sure to step through every rule in that chain as well. In the following example, you will see the BLACKLIST2 sub-chain that applies to traffic on port 80. If traffic comes through on port 80, it will be diverted to this other chain.
  • The RETURN target indicates that you should return to the parent chain. If you see a rule that matches with a RETURN target, stop all your troubleshooting on the current chain, and return to the rule directly after the rule that referenced the custom chain.
  • If there are no matching rules, the chain policy is applied.
  • There may be rules in the "nat," "mangle" or "raw" tables that are blocking or diverting your traffic. Typically, all the rules will be in the "filter" table, but you might run into situations where this is not the case. Try running this to check: iptables -t mangle -nL ; iptables -t nat -nL ; iptables -t raw -nL
  • Be cognizant of the policy. If the policy is ACCEPT, all traffic that does not match a rule will be accepted. Conversely, if the policy is DROP or REJECT, all traffic that does not match a rule will be blocked.
  • My goal with this article is to introduce you to the algorithm by which you can troubleshoot a more complex ruleset. It is intentionally left simple, but you should still follow through even when the answer may be obvious.
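
One more practical aid before we begin: listing the ruleset with rule numbers and packet/byte counters (standard iptables flags) shows you exactly which rules your traffic is hitting as you test:

iptables -nvL INPUT --line-numbers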

Here is the ruleset that I will be using as an example:

Chain INPUT (policy DROP)
target prot opt source destination
BLACKLIST2 tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:50
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:53
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:22
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:1010

Chain BLACKLIST2 (1 references)
target prot opt source destination
REJECT * -- 123.123.123.123 0.0.0.0/0
REJECT * -- 45.34.234.234 0.0.0.0/0
ACCEPT * -- 0.0.0.0/0 0.0.0.0/0

Here is the problem: Your server is accepting SSH traffic to anyone, and you wish to only allow SSH to your IP – 111.111.111.111. We know that this is inbound traffic, so this will affect the INPUT chain.

We are looking for:

source IP: any
destination IP: any
protocol: tcp
port: 22

Step 1: The first rule denotes any source IP and any destination IP on destination port 80. Since this is regarding port 22, this rule does not match, so we'll continue to the next rule. If the traffic here were on port 80, it would invoke the BLACKLIST2 sub-chain.
Step 2: The second rule denotes any source IP and any destination IP on destination port 50. Since this is regarding port 22, this rule does not match, so let's continue on.
Step 3: The third rule denotes any source IP and any destination IP on destination port 53. Since this is regarding port 22, this rule does not match, so let's continue on.
Step 4: The fourth rule denotes any source IP and any destination IP on destination port 22. Since this is regarding port 22, this rule matches exactly. The target ACCEPT is applied to the traffic. We found the problem, and now we need to construct a solution. I will be showing you the Red Hat method of doing this.

Do this to save the running ruleset as a file:

iptables-save > current-iptables-rules

Then edit the current-iptables-rules file in your favorite editor, and find the rule that looks like this:

-A INPUT -p tcp --dport 22 -j ACCEPT

Then you can modify this to only apply to your IP address (the source, or "-s", IP address).

-A INPUT -p tcp -s 111.111.111.111 --dport 22 -j ACCEPT

Once you have this line, you will need to load the iptables configuration from this file for testing.

iptables-restore < current-iptables-rules

Don't directly edit the /etc/sysconfig/iptables file as this might lock you out of your server. It is good practice to test a configuration before saving to the system configuration files. This way, if you do get locked out, you can reboot your server and it will be working. The ruleset should look like this now:

Chain INPUT (policy DROP)
target prot opt source destination
BLACKLIST2 tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:50
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:53
ACCEPT tcp -- 111.111.111.111 0.0.0.0/0 tcp dpt:22
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:1010

Chain BLACKLIST2 (1 references)
target prot opt source destination
REJECT * -- 123.123.123.123 0.0.0.0/0
REJECT * -- 45.34.234.234 0.0.0.0/0
ACCEPT * -- 0.0.0.0/0 0.0.0.0/0

The policy of "DROP" will now block any other connection on port 22. Remember, the rule must match exactly, so the rule on port 22 now *ONLY* applies if the IP address is 111.111.111.111.

Once you have confirmed that the rule is behaving properly (be sure to test from another IP address to confirm that you are not able to connect), you can write the system configuration:

service iptables save

If this troubleshooting sounds boring and repetitive, you are right. However, this is the secret to solid iptables troubleshooting. As I said earlier, there is no guesswork involved. Just take it step by step, make sure the rule matches exactly, and follow it through till you find the rule that is causing the problem. This method may not be fast, but it's reliable. You'll look like an expert in no time.

-Mark

December 12, 2011

Web Development - JavaScript Optimization

So you have a fast website. You've optimized your database queries and you're using the most efficient page caching mechanisms. The users of your site are pretty satisfied, but you want just a little bit more. How can you push your site to the next level by making sure you've included all the optimizations you can to get the fastest speed possible?

Optimize your JavaScript.

You might not be concerned with optimizing your code because newer browsers have gotten significantly faster at processing JavaScript. But like all other programming languages, there are minor tweaks you can make that provide a significant performance gain. Let's take a look at a few of the ways you can be certain that you're getting the most out of your client-side JavaScript code.

JavaScript in External Files
The first optimization is to write your JavaScript in external files. By using external files, the browser is able to cache your code. This prevents users from needing to re-download your JavaScript during every page load and potentially during every AJAX request. When a browser visits any website, it checks the src attribute in the <script> tags and then determines if it already has a copy of that file stored locally. If it does, it won't spend wasteful time re-downloading the exact same code. If you include your code directly in your HTML within <script> tags, the browser will download that same code each time so that it can be processed. Save those precious bytes!
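
For example, a typical external include looks like this (the file name here is arbitrary):

<script type="text/javascript" src="scripts.js"></script>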

Minification
Now that you're putting your code in external files, your next goal is to reduce the amount of time it takes to initially download that code so that it can be cached by the browser. This is where minification comes into play. Minification (or minimization) is the process of removing all unnecessary characters from source code (without changing its functionality). Essentially this means you'll remove whitespace characters and comments.

Unless you like to torture yourself (and those who follow your work), you probably write code that's pretty readable. This includes splitting code up between multiple lines, indenting your code with spaces or tabs, and writing comments that explain some esoteric portion of code. All these items are not important to the JavaScript engine because it will just ignore them, but it still takes time to download these extra bytes that will never get used.

Luckily for you, you don't have to spend time creating the tool that will remove these unneeded characters. There are several good tools out there that will minify your code, and I recommend YUI Compressor. This tool can reduce your code by up to 60% (depending upon how efficiently you write your code). In addition to removing all the unnecessary characters, YUI Compressor will rewrite your code to use smaller variable names. If you have a local variable in a function called var myDescriptiveVariable, the compressor will rename it to var a. We just took our 21-character variable name down to 1 character! If this variable is used in 10 places, we've trimmed 200 characters with this one variable, and this happens for every local variable in your script! That's a lot of bytes saved.

If you're paying close attention, the key takeaway you'll notice is that using local variables (as opposed to global variables, which don't get minified) is one great way to reduce the size of your code after minification occurs.
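
To make that concrete, here's a hedged before/after sketch; the minified line is representative of what a tool like YUI Compressor produces, not its exact output:

// Before: readable source with descriptive local variables
function getTotalPrice(items) {
    var totalPrice = 0;
    for (var itemIndex = 0; itemIndex < items.length; itemIndex++) {
        totalPrice += items[itemIndex].price;
    }
    return totalPrice;
}

// After: whitespace and comments stripped, locals renamed (the global function name survives)
function getTotalPrice(a){var b=0;for(var c=0;c<a.length;c++){b+=a[c].price}return b}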

Initial Page Load Operations
Now that your code has been minified, let's take a look at initial page load operations. Since some JavaScript operations are synchronous and will halt other browser processing, we want to make sure we don't slow down the page when the user is trying to perform an action. There are two things we can do to improve initial page loading performance. First, reduce the amount of code you actually invoke on page load (or when the DOM is ready for interaction using Mootools' domready event, jQuery's document ready event or your favorite framework's equivalent). Second, implement lazy-loading techniques. For example, instead of adding all your events to all the elements on the page initially, wait for user interaction or some other process to add the events. Let's look at a few specific examples so you can see what I mean. Note: the code samples are using Mootools syntax.

Before:

var comments = $('.articles > div.comments');
$('showComments').addEvent('click', function() { comments.show(); });

After:

$('showComments').addEvent('click', function() { $('.articles > div.comments').show(); });

While this may improve initial page load, each time we click on showComments, we have to parse the DOM (Document Object Model) to find all the comments, and this is not a cheap operation. The key here is to test your own code and determine in each scenario which way is faster and which will keep your users the happiest. Just realize that on-demand operations may be more acceptable to users than having to wait for the page to load. You be the judge!

Code Caching
We discussed file caching before, but what about code caching? We need to determine which operations will benefit the most from caching. The two items I will focus on are loops and DOM interactions. Loops can be costly because you may be performing unnecessary actions within each iteration.

Before:

for (var i=0; i<$$('.toggle').length; i++) {
 var el = new Element('div.something', {
 html: $$('.toggle')[i].get('text')
 });
 el.inject($('content'));
}

After:

var i = 0,
 els = $$('.toggle'),
 length = els.length,
 container = new Element('div');
 
for (i=0; i<length; i++) {
 new Element('div.something', {
 html: els[i].get('text')
 }).inject(container);
}
 
container.inject($('content'));

The "Before" loop is inefficient because we are querying the DOM three separate times to get the elements we need (twice for .toggle elements and once for the #content element), and we are having to determine the length property of that collection of elements during each iteration. In our case, the length won't change, so we only need to find it one time. Finally, during each iteration, we add a new element to the #content div, and this causes a redraw of the page. DOM manipulation and redrawing can be expensive, so let's look at how the second method is so much more efficient.

We start by caching our DOM elements so we only have to pull them once and get the length property of those elements only once. We've also created an element which will be used to contain all our new elements. Since the container has not been added to the DOM yet, we can add and remove from it without having to worry about the expensive redraw of the page. Once all our elements have been added to the container, we then inject the container into the #content div. Since this is only happening once, we have significantly improved performance.

Selector Efficiency
The last optimization technique I'll review is selector efficiency. Selectors are used to grab specific elements from the document in order to interact with them in your code. It's very possible to write poor-performing selectors. Some selectors to avoid:

$$('*')

This will grab all the elements in your document. If you have a huge document, this could take a while. You better not be using this selector in a loop!

$$('[data-some-attribute]')

This selector is inefficient because it has to look for this data attribute within every element in the document. If there are thousands of elements and each has several data attributes, you should just go get some coffee.

$$('body .person:nth-child(3n+1) .category p span.title')

This selector is fairly complicated. The reason this can be inefficient is because it may have to unnecessarily inspect many elements to get to the one(s) it needs. Determine if you can simplify the selector by being more specific and using an id to get to elements or even consider slightly modifying your HTML so that it's easier to create efficient selectors.

There are plenty of other techniques that will help improve the speed and efficiency of your JavaScript, but these are a good start and should help make you and your users happy. Remember that DOM interactions are slow and expensive, so do what you can to reduce chatting back and forth with the document. Check your loops and minify your code, and your users will surely return.

Happy coding!

-Philip

December 8, 2011

UNIX Sysadmin Boot Camp: bash - Keyboard Shortcuts

On the support team, we're jumping in and out of shells constantly. At any time during my work day, I'll see at least four instances of PuTTY in my task bar, so one thing I learned quickly was that efficiency and accuracy on the command line ultimately make life easier for our customers and for us as well. Spending too much time rewriting paths, commands, VI navigation and history cycling can really bring you to a crawl. So now that you have had some time to study bash and practice a little, I thought I'd share some of the keyboard shortcuts that help us work as effectively and as expediently as we do. I won't be able to cover all of the shortcuts, but these are the ones I use most:

Tab

[Tab] is one of the first keyboard shortcuts that most people learn, and it's ever-so-convenient. Let's say you just downloaded pckg54andahalf-5.2.17-v54-2-x86-686-Debian.tar.gz, but a quick listing of the directory shows you ALSO downloaded 5.1.11, 4.8.6 and 1.2.3 at some point in the past. What was that file name again? Fret not. You know you downloaded 5.2.something, so you just start with, say, pckg, and hit [Tab]. This autocompletes everything that it can match to a unique file name, so if there are no other files that start with "pckg," it will populate the whole file name (and this can occur at any point in a command).

In this case, we've got four different files that are similar:
pckg54andahalf-5.2.17-v54-2-x86-686-Debian.tar.gz
pckg54andahalf-5.1.11-v54-2-x86-686-Debian.tar.gz
pckg54andahalf-4.8.6-v54-2-x86-686-Debian.tar.gz
pckg54andahalf-1.2.3-v54-2-x86-686-Debian.tar.gz

So typing "pckg" and hitting [Tab] brings up:
pckg54andahalf-

NOW, what you could do, knowing what files are there already, is type "5.2" and hit [Tab] again to fill out the rest. However, if you didn't know what the potential matches were, you could double-tap [Tab]. This displays all matching file names with that string.

Another fun fact: This trick also works in Windows. ;)

CTRL+R

[CTRL+R] is a very underrated shortcut in my humble opinion. When you've been working in the shell for untold hours parsing logs, moving files and editing configs, your bash history can get pretty immense. Often you'll come across a situation where you want to reproduce a command or series of commands that were run regarding a specific file or circumstance. You could type "history" and pore through the commands line by line, but I propose something more efficient: a reverse search.

Example: I've just hopped on my system and discovered that my SVN server isn't doing what it's supposed to. I want to take a look at any SVN related commands that were executed from bash, so I can make sure there were no errors. I'd simply hit [CTRL+R], which would pull up the following prompt:

(reverse-i-search)`':

Typing "s" at this point would immediately return the first command with the letter "s" in it in the history ... Keep in mind that's not just starting with s, it's containing an s. Finishing that out to "svn" brings up any command executed with those letters in that order. Pressing [CTRL+R] again at this point will cycle through the commands one by one.

In the search, I find the command that was run incorrectly ... There was a typo in it. I can edit the command within the search prompt before hitting enter and committing it to the command prompt. Pretty handy, right? This can quickly become one of your most used shortcuts.

CTRL+W & CTRL+Y

This pair of shortcuts is the one I find myself using the most. [CTRL+W] will basically take the word before your cursor and "cut" it, just like you would with [CTRL+X] in Windows if you highlighted a word. A "word" doesn't really describe what it cuts in bash, though ... It uses whitespace as a delimiter, so if you have an ultra long file path that you'll probably be using multiple times down the road, you can [CTRL+W] that sucker and keep it stowed away.

Example: I'm typing nano /etc/httpd/conf/httpd.conf (Related: The redundancy of this path always irked me just a little).
Before hitting [ENTER] I tap [CTRL+W], which chops that path right back out and stores it to memory. Because I want to run that command right now as well, I hit [CTRL+Y] to paste it back into the line. When I'm done with that and I'm out referencing other logs or doing work on other files and need to come back to it, I can simply type "nano " and hit [CTRL+Y] to go right back into that file.

CTRL+C

For the sake of covering most of my bases, I want to make sure that [CTRL+C] is covered. Not only is it useful, but it's absolutely essential for standard shell usage. This little shortcut performs the most invaluable act of killing whatever process you were running at that point. This can go for most anything, aside from the programs that have their own interfaces and kill commands (vi, nano, etc). If you start something, there's a pretty good chance you're going to want to stop it eventually.

I should be clear that this will terminate a process unless that process is otherwise instructed to trap [CTRL+C] and perform a different function. If you're compiling something or running a database command, generally you won't want to use this shortcut unless you know what you're doing. But, when it comes to everyday usage such as running a "top" and then quitting, it's essential.

Repeating a Command

There are four simple ways you can easily repeat a command with a keyboard shortcut, so I thought I'd run through them here before wrapping up:

  1. The [UP] arrow will display the previously executed command.
  2. [CTRL+P] will do the exact same thing as the [UP] arrow.
  3. Typing "!!" and hitting [Enter] will execute the previous command. Note that this actually runs it. The previous two options only display the command, giving you the option to hit [ENTER].
  4. Typing "!-1" will do the same thing as "!!", though I want to point out how it does this: When you type "history", you see a numbered list of commands executed in the past -1 being the most recent. What "!-1" does is instructs the shell to execute (!) the first item on the history (-1). This same concept can be applied for any command in the history at all ... This can be useful for scripting.

Start Practicing

What it really comes down to is finding what works for you and what suits your work style. There are a number of other shortcuts that are definitely worthwhile to take a look at. There are plenty of cheat sheets on the internet available to print out while you're learning, and I'd highly recommend checking them out. Trust me on this: You'll never regret honing your mastery of bash shortcuts, particularly once you've seen the lightning speed at which you start flying through the command line. The tedium goes away, and the shell becomes a much more friendly, dare I say inviting, place to be.

Quick reference for these shortcuts:

  • [TAB] - Autocomplete to furthest point in a unique matching file name or path.
  • [CTRL+R] - Reverse search through your bash history
  • [CTRL+W] - Cut one "word" back, or until whitespace encountered.
  • [CTRL+Y] - Paste a previously cut string
  • [CTRL+P] - Display previously run command
  • [UP] - Display previously run command

-Ryan

October 26, 2011

MODX: Tech Partner Spotlight

This is a guest blog from the MODX team. MODX offers an intuitive, feature-rich, open source content management platform that can easily integrate with other applications as the heart of your Customer Experience Management solution.

Company Website: http://modx.com/
Tech Partners Marketplace: http://www.softlayer.com/marketplace/modx

Free your Website with MODX CMS

Just having a website or a blog is no longer a viable online strategy for smart businesses. Today's interconnected world requires engaging customers — from the first impression, to developing leads, educating, selling, empowering customer service and beyond. This key shift in online interaction is known as Customer Experience Management, or CXM.

For businesses to have success with CXM, they need an efficient way to connect all facets of their communications and information together with a modern and consistent look and feel, and without long learning curves or frustrating user experiences. You don't want a Content Management System (CMS) that restricts your ability to meet brand standards, that lives in isolation from your other systems and data, or that fails to fulfill your business's needs.

MODX is a content management platform that gives you the creative freedom to build custom websites limited only by your imagination. It certainly can play the central role in managing your customer experience.

Freedom from Hassle & Frustration
The most productive tools are those that simply allow you to get your work done. To make life easy for content editors, MODX uses familiar concepts like a hierarchical tree – similar to the folders and files on your computer. This allows content editors to relate their content to the overall website structure. But, like everything else in MODX, you aren't limited to hierarchical content and can easily employ taxonomy-, list- or category-based structures.

Similarly, editing documents should be easy. With MODX, anyone who can open a web browser and send email has the skillset to create and edit content in MODX. Most tasks are a matter of filling out simple form fields, accompanied by a sensible MS Word-like editor for your main content. Furthermore, site builders and developers are able to create custom fields for custom content types and custom data, allowing non-technical employees to work in an intuitive, tailored environment.

Total Creative Freedom
Your website is one of the most visible parts of your brand and you certainly don't want it limited by your CMS. MODX makes it possible to do anything that's on the modern web now — you don't have to wait for a year or hack the core to launch an HTML5 or mobile optimized site. MODX can do it all now, and even what's coming next. It outputs exactly and only what you or your site builder dictate.

MODX uses a brilliantly simple template engine that allows web designers to work with what they already know, like HTML, CSS and any JavaScript library they choose. MODX can even output things not typically associated with most content management platforms, like XML, JSON or even Comma Separated Value (CSV) files that automatically download to your desktop.

Freedom to Extend
MODX provides all the requisite tools for CMS, but it also functions as a fully capable web development platform upon which you can extend functionality, employ custom applications and do just about anything you can dream up. In fact, the "X" in MODX comes from the word "extensible". Whether you want to build a Member-only website, Client Extranet, Resort Booking and Reservations system or private Social Network, you can do it on MODX.

For developers, the fully-documented object-oriented API and xPDO, MODX's database layer, provide all you need to build almost anything with MODX, even extending or overriding its core functionality. Critically, you can do all this using the API and retain a painless upgrade path without hacking the core. The MODX API architecture provides all the flexibility you or your developer might need to make MODX your own without painting yourself into a corner.

Freedom from Bottlenecks
Modern web pages are made up of many component parts – site-wide headers and footers, navigation menus, articles, products and more. At some point, all these pieces need to be put together and delivered to the visitor as a single page that users expect to load quickly or they'll leave your site.

To deliver pages fast, top-performing sites use server-side caching to take all those pieces and pre-process them for fast delivery to a browser. The problem with many CMS applications is that they rebuild pages from scratch every single time someone visits your site. That's fine if you only have a few visitors, but your site can bog down or even fail under moderate traffic. In these circumstances, it would be disastrous if your website were featured in an industry magazine or website, national media or a popular TV show. Your site could literally grind to a halt, costing you customers, damaging your reputation and ultimately making a bad first impression.

MODX's native page caching delivers your site quickly by default. Additionally, MODX can use high-end caching like memcache to further improve performance under load. To handle millions of pageviews daily, you need robust servers and you need to optimize your environment ... That's where scaling across multiple servers with replication on SoftLayer works perfectly with MODX.

Free Your Legacy Systems
Keeping your data, content and business information in disconnected silos is ineffective and costly. Accessing existing systems, like an Active Directory or Enterprise Content repository, makes a huge difference in getting your work done headache-free. You don't have to worry about data duplication across systems, significant extra work to make everything play together or synchronization issues. A new website platform should increase your productivity and enable your employees, customers and everyone else surrounding your business to find what they need and to interact efficiently and effectively.

MODX works with the tools and technology that organizations already have in place. It can easily interact with external web services or data feeds and can drive other applications via RESTful web services.

Security and Freedom to Rest Easy
Website Security is a topic that rarely surfaces during the early stages of a web project and often never comes up until your site has been compromised.

A high-quality hosting environment like those from SoftLayer is the foundation of website security, and your web CMS and its add-ons, plug-ins or modules should not be a liability. MODX is designed with security at its core to protect your valuable website from malicious attacks. Every input is filtered, and every database query made through the API eliminates the possibility of SQL injection compromises. Most importantly, the development team rigorously and continuously audits MODX to make sure it's up to date, patching any new issues that may arise.

Freedom in the Community
With MODX and the MODX Community you're not alone. There are hundreds of thousands of websites built on MODX and we have a friendly, active and growing community of raving fans over 37,000 strong to whom you can look for assistance, support, education and camaraderie.

In fact, the MODX Community is one of our greatest assets.

They provide mentorship and assistance, and they help make MODX better through active reporting of issues and feature requests and by contributing improvements for integration by the core team.

If you're not a site builder or developer, but you want your website powered by MODX, one of the best places to start is with a MODX Solution Partner. Our network of 90+ global Solution Partners enables you to get the right-fit expertise for your project and in many cases work locally. Solution Partners are experts at MODX and know how to do things right.

Get Free
There really is a cure for the all too often restrictive, unintuitive and frustrating experience of putting content on the web. Get on the road to content management freedom with MODX. It's easy to start since MODX Revolution itself is free to download and use.

Learn more at http://modx.com/.

-Jay Gilmore, MODX

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace. These Partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.

October 11, 2011

Building a True Real-Time Multiplayer Gaming Platform

Some of the most innovative developments on the Internet are coming from online game developers looking to push the boundaries of realism and interactivity. Developing an online gaming platform that can support a wide range of applications, including private chat, avatar chats, turn-based multiplayer games, first-person shooters, and MMORPGs, is no small feat.

Our high-speed, global network significantly minimizes the reliability, access, latency, lag and bandwidth issues that commonly challenge online gaming. Once users begin to experience latency or reliability issues, they're gone and likely never to return. Our cloud, dedicated and managed hosting solutions enable game developers to rapidly test, deploy and manage rich interactive media on a secure platform.

Consider the success of one of our partners — Electrotank Inc. They’ve been able to support as many as 6,500 concurrent users on just ONE server in a realistic simulation of a first-person shooter game, and up to 330,000 concurrent users for a turn-based multiplayer game. Talk about server density.

This is just scratching the surface, because we're continuing to build our global footprint to reduce latency for users around the world. This means no awkward pauses or jumping around, but rather a smooth, seamless, interactive online gaming experience. The combined efforts of SoftLayer's infrastructure and Electrotank's performant software have produced a high-performance networking platform that delivers a highly scalable, low-latency experience to both gamers and game developers.

Electrotank

You can read more about how Electrotank is leveraging SoftLayer’s unique network platform in today's press release or in the fantastic white paper they published with details about their load testing methodology and results.

We always like to hear our customers' opinions, so let us know what you think.

-@nday91

October 4, 2011

An Introduction to Redis

I recently had the opportunity to get re-acquainted with Redis while evaluating solutions for a project on the Product Innovation team here at SoftLayer. I'd actually played with it a couple of times before, but this time it "clicked." Or my brain broke. Either way, I see a lot of potential for Redis now.

No one product is a perfect fit for all of your data storage needs, of course. There are such fundamental tradeoffs to be made in designing storage architectures that you should be immediately suspicious of any product that claims to fit every need.

The best solutions tend to be products that actually embrace these tradeoffs. Redis, for instance, has sacrificed a small amount of data durability in exchange for being awesome.

What is it?

Redis is a key/value store, but describing it that way is sort of like calling a helicopter a "vehicle." It's a technically correct description, but it leaves out some important stuff.

You can think of it like a sophisticated older brother of Memcached. It presents a flat keyspace, and you can set those keys to string values. Like Memcached, it can also perform remote atomic operations, like "incr" and "append." These are really handy, because you can modify remote data without fetching it, and you have an assurance that you're the only one performing that operation at that instant.

Redis takes this concept of remote commands on data and goes completely nuts with it. The database is aware of data structures like hashes, lists and sets in addition to simple string values. You can sort, union, intersect, slice and dice to your heart's content without fetching any data. Redis is a data structure server. You can treat it like remote memory, and this has an awesome immediate benefit for a programmer: your code and brain are already optimized for these data types.

But it's not just about making storage simpler. It's fast, too. Crazy fast. If you make intelligent use of its data structures, it's possible to serve a lot of traffic from relatively modest hardware. Redis 2.4 can easily handle ~50k list appends a second on my notebook. With batching, it can append 2 million items to a list on a remote host in about 1.28 seconds.
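
If you want to poke at those data structures yourself, here's a quick sketch using the stock redis-cli against a local server (default host and port assumed):

redis-cli SET greeting "hello"       # plain string key
redis-cli INCR pageviews             # atomic increment, no fetch required
redis-cli RPUSH mylist a b c         # append items to a list structure
redis-cli LRANGE mylist 0 -1         # read the whole list back
redis-cli SADD tags redis nosql      # sets, with union/intersect commands available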

It allows the remote, atomic and performant manipulation of data structures. It took me a little while to realize exactly how useful that is.

What's wrong with it?

Nothing. Move along.

OK, it's a little short on durability. Redis uses memory as its primary store and periodically flushes to disk. A common configuration is to do so every second.

That sounds pretty reasonable. If a server goes down, you could lose a second of data. Keep in mind, however, how many operations Redis can perform in a second. If you're in a high-volume environment, that could be a lot of data. It's not for your financial transactions.
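
For reference, here's a sketch of the redis.conf directives that control this tradeoff; the values shown are examples, not recommendations:

# Snapshotting (RDB): dump the dataset if at least 1 key changed in 900
# seconds, or 10 keys in 300 seconds.
save 900 1
save 300 10

# Append-only file (AOF): log every write, fsync roughly once per second.
# This is the "flush every second" configuration described above; a crash
# loses at most about a second of writes.
appendonly yes
appendfsync everysec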

It also offers relatively limited availability options. Currently, it only supports master/slave replication; clustering support is planned for an upcoming release. The planned clustering looks pretty powerful, but it will take some real-world testing to know its performance impact.

These limitations should be taken into consideration, and it's probably clear whether you're in a situation where the current tradeoffs aren't a good fit.

In my experience, a lot of developers seriously overestimate the consequences of their application losing small amounts of data. Also consider whether or not the chance of losing a second (or less) of data genuinely represents a bigger threat to your application than any other compromises you might have made.

More Information
You can check out the slightly aging docs or browse the impressively simple source. There are probably already bindings for your language of choice as well.

-Tim

August 23, 2011

SOAP API Application Development 101

Simple Object Access Protocol (SOAP) is built on server-to-server remote procedure calls over HTTP. The data is formatted as XML, which means well-formed data will be sent to and received from SoftLayer's API. This may take a little more time to set up than the REST API, but it can be more scalable as you programmatically interface with it. SOAP's ability to tunnel through existing protocols such as HTTP and its innate ability to work in an object-oriented structure make it an excellent choice for interaction with the SoftLayer API.

This post gets pretty technical and detailed, so it might not appeal to our entire audience. If you've always wondered how to get started with SOAP API development, this post might be a good jumping-off point.

Authentication
Before you start playing with the SoftLayer SOAP API, you will need to find your API authentication token. Go into your portal account and click the "Manage API Access" link on the API page under the Support tab. At the bottom of the page you'll see a drop-down menu that lets you "Generate a new API access key" for a user. After you select a user and click the "Generate API Key" button, you will see your username and your API key. Copy this API key; you'll need it to send commands to SoftLayer's API.

PHP
In PHP 5.0+ there are built-in classes for making SOAP calls, which lets us quickly create an object-oriented, server-side application for handling SOAP requests to SoftLayer's API. This tutorial focuses on PHP 5.1+ as the server-side language for making SOAP function calls. If you haven't already, you will need to enable the SOAP extension for PHP; the PHP manual has installation directions.
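
As a quick sanity check, you can confirm the extension is loaded and instantiate a client. The WSDL URL below follows the pattern of SoftLayer's v3 SOAP endpoint, but treat it as an assumption and verify it against the API documentation:

<?php
// Make sure the PHP SOAP extension is available before going any further.
if (!extension_loaded('soap')) {
    die("The PHP SOAP extension is not enabled.\n");
}

// Assumed endpoint pattern: http://api.softlayer.com/soap/v3/<Service>?wsdl
$client = new SoapClient(
    'http://api.softlayer.com/soap/v3/SoftLayer_Account?wsdl',
    array('exceptions' => true, 'trace' => 1) // trace lets you inspect raw XML
);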

Model View Controller

Model-View-Controller or MVC is a software architecture commonly used in web development. This architecture simply provides separation between a data abstraction layer (model), the business logic (controller), and the resulting output and user interface (view). Below, I will describe each part of our MVC "hello world" web application and dissect the code so that you can understand each line.

To keep this entry a little smaller, the code snippets I reference are posted on their own page: SOAP API Code Examples. Protip: Open the code snippet page in another window so you can seamlessly jump between this page and the code it's referencing.

Model
The first entry on the API Code Examples page is "The Call Class," a custom class for making basic SOAP calls to SoftLayer's API. This class represents our model: the SOAP API call. When building a model, you need to think about what properties the model has. For instance, a model of a person might have the properties first name, height, weight, etc. Once you have properties, you need to create methods that use them.

Methods are verbs; they describe what a model can do. Our "person" model might have the methods run, walk, stand, etc. Models need to be self-sustaining; that means we need to be able to set and get a property from multiple places without them getting jumbled up, so each model will have a "set" and "get" method for each of its properties. A model is a template for an object; when you store a model in a variable, you are instantiating an instance of that model, and the variable is the instantiated object. (A toy example of this pattern appears below.)
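
Here's that toy "person" model; it's a sketch for this discussion, not the actual Call class from the examples page:

<?php
// A minimal model: a property with a default value, sets/gets, and a verb.
class Person
{
    protected $_firstName = ''; // default, so a bare "new Person()" works

    public function setFirstName($name)
    {
        $this->_firstName = $name;
        return $this; // returning $this allows chained calls
    }

    public function getFirstName()
    {
        return $this->_firstName;
    }

    // An action: something the model can do.
    public function walk()
    {
        return $this->getFirstName() . " is walking.\n";
    }
}

// Instantiating the model produces an object.
$person = new Person();
$person->setFirstName('Kevin');
echo $person->walk(); // "Kevin is walking."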

  • Properties and Permissions
    Our model has these properties: username, password (apiKey), service, method, initialization parameters, the service's WSDL, SoftLayer's type namespace, the SOAP API client object, options for instantiating that client, and a response value. The SOAP API client object is built into PHP 5.1+ (take a look at the "PHP" section above), so our model will instantiate a SOAP API client object and use it to communicate with SoftLayer's SOAP API.

    Each of our methods and properties is declared with certain permissions (protected, private, or public); these determine whether outside functions or extended classes can access them. I "set" things using the "$this" variable; $this represents the immediate class that the method belongs to. I also use the arrow operator (->), which accesses a property or method (to the right of the arrow) belonging to $this (or anything to the left of the arrow). I gave as many of the properties default values as I could. This way, when we instantiate our model, we get a fully fleshed-out object without much work, which comes in handy if you are instantiating many different objects at once.

  • Methods
    I like to separate my methods into 4 different groups: Constructors, Actions, Sets, and Gets:
    • Sets and Gets
      Sets and Gets simply provide a place within the model to set and get the properties of that model. This is a standard of object-oriented programming and gives the model a good bit of scalability. Rather than accessing a property itself, always refer to the method that gets or sets it; this can prevent you from accidentally changing the value of a property when you are only trying to access it. Lines 99 to the end of our Call class are where the sets and gets are located.

    • Constructors
      Constructors are methods dedicated to setting options in the model; lines 23-62 of the Call model are our constructors. The beauty of these three functions is that they can be copied into any model to perform the same function. Just make sure you keep to the Zend coding standards.

      First, let's take a look at the __construct method on line 24. This is a special magic PHP method that always runs immediately when the model is instantiated. We don't want to actually process anything in this method, because when we want the default object we won't be passing any options to it, and unnecessary processing would slow response times. We pass the options in an array called $Setup. Notice that I am using type hinting and default parameters when declaring the function, so I don't have to pass anything to the model when instantiating it. If values were passed in the $Setup variable (which must be an array), then we run the "setOptions" method.

      Now take a look at the setOptions method on line 31. This method searches the model for a set method matching each option passed in the $Setup variable, using the built-in get_class_methods function. It then passes the value and name of that option to another magic method, the __set method.

      Finally, let's take a look at the __set and __get methods on lines 45 and 54. These methods create a kind of shorthand access to properties within the model; this is called overloading. Overloading allows the controller to access properties more quickly and efficiently.

    • Actions
      Actions are the traditional verbs that I mentioned earlier; they are the "run", "walk", "jump" and "climb" of our person model. We have two actions in our model: the Response action and the createHeaders action.

      The createHeaders action creates the SOAP headers that we will pass to the SoftLayer API; this is the most complicated method in the model. Understanding how SOAP is formed and how to get the correct output from PHP is the key to accessing SoftLayer's API. On line 77, you will see an array called Headers; it stores the headers we are about to make so that we can easily pass them along to the API client.

      First we will need to create the initial headers to communicate with SoftLayer’s API. This is what they should look like:

      <authenticate xsi:type="slt:authenticate" xmlns:slt="http://api.service.softlayer.com/soap/v3/SLTypes/">
          <username xsi:type="xsd:string">MY_USERNAME</username>
          <apiKey xsi:type="xsd:string">MY_API_ACCESS_KEY</apiKey>
      </authenticate>
      <SoftLayer_API_METHODInitParameters xsi:type="v3:SoftLayer_API_METHODInitParameters" >
          <id xsi:type="xsd:int">INIT_PARAMETER</id>
      </SoftLayer_API_METHODInitParameters>

      In order to build this, we will need a few saved properties from our instantiated object: our API username, API key, the service, initialization parameters, and the SoftLayer API type namespace. The API username and key will need to be set by the controller, or you can add yours to the model as defaults. I will store mine in a separate file and include it in the controller, but on a production server you might want to store this info in a database and create a "user" model.

      First, we instantiate SoapVar objects for each authentication node that we need. Then we store those SoapVar objects in an array and create a new SoapVar object for the "authenticate" node; the data for the "authenticate" node is that array, and the encoding is type SOAP_ENC_OBJECT. Understanding how to nest SoapVar objects is the key to creating well-formed SOAP in PHP (a rough sketch of this nesting appears after this list). Finally, we instantiate a new SoapHeader object and append it to the Headers array. The second header we create and add to the Headers array is for initialization parameters; these are needed to run certain methods within SoftLayer's API, and they essentially identify objects within your account. The final command in this method (__setSoapHeaders) is the SoapClient method that saves the headers into our SoapClient object. Now take a look at how I access that method: because I have stored the SoapClient object as a property of the current class, I can use the arrow operator to reach the client's methods through the $_client property of our class, or through the getClient() method of our class, which returns the client.

      The Response method is the action that actually contacts SoftLayer's API and sends our SOAP request. Take a look at how I tell PHP that the string stored in our $_method property is actually a method of our $_client property by adding parentheses to the end of the $Method variable on line 71.
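
Here is that SoapVar nesting in isolation, as promised above. This is a hedged sketch of the general technique rather than the actual Call class, and the endpoint URL is an assumption to verify against the API docs:

<?php
// SoftLayer's type namespace, as shown in the XML sample above.
$namespace = 'http://api.service.softlayer.com/soap/v3/SLTypes/';

// Inner nodes: plain string SoapVars for username and apiKey.
$username = new SoapVar('MY_USERNAME', XSD_STRING, null, null, 'username');
$apiKey   = new SoapVar('MY_API_ACCESS_KEY', XSD_STRING, null, null, 'apiKey');

// Outer node: wrap both in an "authenticate" object (SOAP_ENC_OBJECT).
$auth = new SoapVar(
    array('username' => $username, 'apiKey' => $apiKey),
    SOAP_ENC_OBJECT,
    'authenticate',
    $namespace
);

// Attach the header to the client (assumed endpoint pattern; see API docs).
$client = new SoapClient('http://api.softlayer.com/soap/v3/SoftLayer_Account?wsdl');
$client->__setSoapHeaders(array(new SoapHeader($namespace, 'authenticate', $auth)));

// Calling a method whose name lives in a variable, as the Response action does:
$Method = 'getObject';
$result = $client->$Method();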

View
The view is what the user sees and interacts with; this is where we present our information and create a basic layout for the web page. Take a look at "The View" section on SOAP API Code Examples. Here I create a basic web page layout, display output information from the controller, and create a form for sending requests to the controller. Notice that the view is a mixture of HTML and PHP, so make sure to name it view.php so the server knows to process the PHP before sending the page to the client.

Controller
The controller separates user interaction from business logic. It accepts information from the user and formats it for the model; it also receives information from the model and sends it to the view. Take a look at "The Controller" section on SOAP API Code Examples. On lines 6-11, I accept variables posted from the view and store them in an array to send to the model. I then instantiate the $Call object with the parameters specified in the $Setup array and, on line 17, store the response from the Response method as $Result for use by the view. (A rough sketch of this flow appears below.)
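
For illustration, here is a hedged sketch of that controller flow. The real code is on the SOAP API Code Examples page; the file, class and method names below follow this article's descriptions rather than the actual snippets:

<?php
// Hypothetical includes: Call.php holds the model described above, and
// credentials.php is an assumed file defining API_USERNAME / API_KEY.
require_once 'Call.php';
require_once 'credentials.php';

// Collect the user's input from the view's form into a setup array.
$Setup = array(
    'username' => API_USERNAME,
    'password' => API_KEY,
    'service'  => isset($_POST['service']) ? $_POST['service'] : 'SoftLayer_Account',
    'method'   => isset($_POST['method'])  ? $_POST['method']  : 'getObject',
);

// Instantiate the model with those options and store the response for the view.
$Call   = new Call($Setup);
$Result = $Call->response();

require 'view.php'; // the view reads $Result and renders the page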

Have Fun!
Although this tutorial covers many different things, it just opens up the basic utilities of SoftLayer's API. You should now have a working view for entering information and seeing what kind of data you receive. The first service and method you should try is the SoftLayer_Account service with the getObject method; this will return your account information. Then try the SoftLayer_Account service with the getHardware method; it will return all of the information for all of your servers. Take the IDs from those servers and try the SoftLayer_Hardware_Server service with the getObject method, using one of those IDs as the init parameter.

More examples to try: SoftLayer Account, SoftLayer DNS Domain, SoftLayer Hardware Server. Once you get the hang of it, try adding Object Masks and Result Limits to your model.


-Kevin
