Posts Tagged 'Application'

January 19, 2011

AJAX Without XML HTTP Requests

What is AJAX?

Asynchronous JavaScript and XML (AJAX) is what you use to create truly dynamic websites. AJAX is the bridge between the application and presentation layers, moving data from the end user to the host and back almost instantly. It dynamically changes the data displayed on the page without disrupting the end user or bogging down the client. Although the name is misleading, AJAX has become a blanket term for any process that changes the content of a web page without unnecessarily reloading other parts of the page.

What are XML HTTP requests?

Passing information from your server to your end user's browser is handled over HTTP in the form of HTML. The browser then takes that info and formats it in a way the end user can view easily. What if we want to change some of the data in the HTML without loading a whole new HTML document? That's where XML comes in. Your web page needs to tell the browser to ask the server for the XML; luckily, most browsers provide an object called XMLHttpRequest (older versions of Internet Explorer used an ActiveXObject instead). Once instantiated, it can fetch XML data from the server without reloading the page.
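For reference, the classic approach looks roughly like this in the browser. The endpoint name (data.xml) and the element id ('data') are hypothetical placeholders, not part of any real page:

```javascript
// Minimal sketch of fetching data with XMLHttpRequest.
// The URL and element id are assumptions for illustration only.
function requestXml(url, onLoaded) {
  var xhr = new XMLHttpRequest();
  xhr.onreadystatechange = function () {
    // readyState 4 means the response has fully arrived.
    if (xhr.readyState === 4 && xhr.status === 200) {
      onLoaded(xhr.responseXML || xhr.responseText);
    }
  };
  xhr.open('GET', url, true); // true = asynchronous
  xhr.send(null);
}

// Example: update part of the page without a full reload.
// requestXml('data.xml', function (data) {
//   document.getElementById('data').innerHTML = String(data);
// });
```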

Why shouldn't you use XML HTTP requests?

A long time ago, in a galaxy far, far away, Microsoft invented the XMLHttpRequest object for Microsoft Exchange Server 2000. As with all first-generation technologies, everyone wanted to use it, and different vendors implemented it differently. IE didn't even have native support until 2006, and there are still discrepancies among browsers in how the onreadystatechange event listener behaves. There is also an issue with cross-domain requests. When the internet was young, JavaScript hackers would steal users' identities by pulling information from secure websites and posting it to their own, stealing bank account numbers, credit cards, etc., so browsers locked requests to a single domain. Now that the internet has grown up a bit, people with large networks and many servers have found legitimate uses for sending data across domains, but it's still not generally possible with XML HTTP requests.

What's an Alternative?

Using JavaScript, you can create client-side scripts whose source is built with server-side scripts, passing variables in the URL. Picture a basic web page with local JavaScript, a few checkboxes for human interaction, and a table with some information that we want to change.

Looking at the three JavaScript functions: the first (clearTags) automatically clears the checkboxes on load; the second (check(box)) makes sure that only one box is checked at a time; the third (createScript) is the interesting one. It uses the createElement() function to create an external JavaScript reference, the source of which is written in PHP. I have provided a sample script below to explain what I mean. First, we get the variable from the URL using the $_GET superglobal. Then, we process the variable with a switch, though you might use this opportunity to grab info from a database or another program. Finally, we print code which the browser will execute as JavaScript.

<?php
//First we get the variable from the URL
$foo = $_GET['foo'];
//Here's the switch to process the variable
switch ($foo){
case 'foo' : print "var E=document.getElementById('data'); E.innerHTML='bar'; "; break;
case 'fooo' : print "var E=document.getElementById('data'); E.innerHTML='barr'; "; break;
case 'ffoo' : print "var E=document.getElementById('data'); E.innerHTML='baar'; "; break;
case 'ffooo' : print "var E=document.getElementById('data'); E.innerHTML='baarr'; "; break;
default : print "var E=document.getElementById('data'); E.innerHTML='unknown'; ";
}
?>
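The client-side half of the technique can be sketched as follows. The endpoint name (data.php) mirrors the idea of the PHP sample above, but the file name and parameter names are assumptions for illustration:

```javascript
// Build a query string so the PHP side can read each variable via $_GET.
function buildScriptUrl(endpoint, params) {
  var pairs = [];
  for (var key in params) {
    pairs.push(encodeURIComponent(key) + '=' + encodeURIComponent(params[key]));
  }
  return endpoint + '?' + pairs.join('&');
}

// Inject a <script> tag whose src points at the server-side script.
// The browser fetches the generated script and executes whatever
// JavaScript the server printed, updating the page in place.
function createScript(params) {
  var script = document.createElement('script');
  script.type = 'text/javascript';
  script.src = buildScriptUrl('data.php', params);
  document.getElementsByTagName('head')[0].appendChild(script);
}
```

Calling something like createScript({foo: 'foo'}) would then cause the server to print the JavaScript that swaps in the new table contents.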


December 17, 2010

Capacity Planning and the Cloud

Cloud computing is changing the landscape for technology projects and initiatives in many ways, but today I wanted to take a look at how cloud computing can help reduce risks when doing server capacity planning for a project.

Traditionally, server capacity planning has consisted of gathering application/database/processing specs, talking to the business about growth projections, and balancing initial cost vs. future capacity needs. One constant in many projects is that the initial processing needs are smaller than what is predicted for the future. However, there are many times that it doesn’t make sense to buy the full capacity needed to support the long term growth upfront.

Back In 1999, capacity planning was easy: Take your largest growth estimates, multiply by 3, then disregard the results and buy the biggest server you can find. Cost consciousness was not an issue (I remember having to track down several $500k servers because there was no justification or documentation needed to get a P.O. cut), but after 2000-2001, this all changed.

Capacity planning evolved into — and still largely is to this day — the act of balancing the initial project investment with the ability to incrementally scale to meet future growth plans.

There are two basic methodologies for scaling: vertical and horizontal. Vertical scaling is done by adding additional resources (CPU, RAM, hard drives, etc.) into an existing server to handle growth. Horizontal scaling is accomplished by adding more distinct servers to the processing mix. Capacity planning for an app that requires vertical scaling tends to carry more financial risk and is the focus of this post.

An example of vertical capacity planning: For a project that has 1,000 users in year one with growth projections of reaching 1,000,000 users in three years, a common vertical capacity planning methodology would be to buy a server that has the capacity to handle 1,000,000 users when fully loaded, while initially configuring the server with only a minimal base configuration. As usage grows, you'd pay an incremental cost to add more capacity to the server to support the increased resource demands.

When you use this approach, one of the main decisions comes down to how big you want your server to be when it is fully loaded. While the cost per additional CPU or RAM module is about the same across any given server family, the upfront cost of buying a server with greater scalability is substantially higher (i.e. the base configuration of a server scalable to 8 CPUs is more expensive than the base config of a server scalable to 4 CPUs).

To buy this additional potential capacity, you pay a premium. In our example above, if you determined a long-term need for a 4-CPU server, you'd be paying a "scalability premium" for the time you don't have all 4 CPUs installed. And even then, the success of your strategy depends on your growth predictions and the application's actual CPU/RAM consumption. Not only might you pay somewhere between 700% and 1600% over the base configuration, you increase your risk exposure if your capacity or growth numbers are off. If your capacity numbers were high or the business did not meet growth projections, you have spent more money than you should have. If you estimated low or the product surpassed expectations, you might have to buy an even bigger server, making your initial investment obsolete.
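To make the premium concrete, here is a quick back-of-the-envelope calculation. The dollar figures are invented for illustration and are not real vendor prices:

```javascript
// The "scalability premium": how much more the base config of a
// bigger, more scalable chassis costs compared to a smaller one,
// expressed as a percentage over the smaller base config.
function scalabilityPremiumPercent(scalableBasePrice, smallBasePrice) {
  return ((scalableBasePrice - smallBasePrice) / smallBasePrice) * 100;
}

// Hypothetical: a $40,000 base config on a chassis that scales to
// 8 CPUs vs. a $5,000 base config on a smaller chassis is a 700%
// premium, paid before a single extra CPU is ever installed.
console.log(scalabilityPremiumPercent(40000, 5000)); // 700
```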

The value of cloud computing is that it changes this scenario. Customers don't pay the scalability premium they would run into if they were buying their own servers and hosting them in-house. If you want to start small, you pay for a small cloud server with fewer resources. As you grow, you pay for the additional capacity you need. Another benefit of cloud computing for vertical capacity planning is that there is no prolonged turnaround time for ordering and installing new capacity. Your server is virtualized, so if you decide you need 8 more CPUs and 8GB of RAM in your cloud instance, you can schedule those upgrades with a few clicks.

While these are just a couple of the ways cloud computing can benefit vertical capacity planning and reduce risk, horizontal capacity planning with cloud computing is even easier. Think: Click a button to make a new copy of a cloud computing instance in a hurry ... but maybe that will be in another post.


September 16, 2010

I'm Your Blackberry!

Although there are many different brands of smart phones, I’m pleased that SoftLayer has chosen Blackberry as our essential form of operational communication. In the last two and a half years my Blackberry has become my right hand. Any information technology tasks I have needed to complete have been accomplished without much hassle. In both personal and work-related situations my Curve has proven it can do the jobs of many tools. Email alone has its crucial place in IT and is managed flawlessly through my phone. Notepad, calculator, browser, calendar, camera, SSH client, VNC client, SFTP client, organizer, and alarm clock are just a few of the names I could give this little six-shooter.

Reliable in battle as well! Just recently I was on hold during a very important phone call (important meaning I had just won Washington Redskins tickets on a radio station ;D ) when I started to get a warning about my Curve’s battery running low. I was worried because I had abused the phone that day and had no way of recharging where I was. My Blackberry held on for more than 30 minutes until it had to drop radio use. I got my tickets and was still able to call the NOC and let everyone know (in a high-pitched voice, that is). Looking back, I cannot think of a moment where I wished for another phone. My Curve will continue to stay holstered at my side. What type of smart phone do you prefer to use in everyday tasks?

Jonathan M.

SoftLayer Server Engineer in WDC

June 30, 2010

Does Everything I Need it to Do!

So for those of you who have been following SoftLayer’s recent push into the mobile application space, you might be aware that we recently released a native application for devices running Google’s Android operating system. As the principal software engineer of the application, one of the exciting parts of my job post launch is monitoring the number of times the application gets downloaded, the ratings it gets in the market place, and of course, reading the user submitted comments.

This morning when I came into work and pulled up Google’s Android Developer Console, I saw that we had just passed 100 downloads of the application. Not too shabby considering the formal press release has not yet gone out, so those 100 lucky Android owners who found the application heard about it via word of mouth, by following SoftLayer on Facebook, or by reading our forums.

As the developer of the application, even more thrilling than seeing the number of downloads was seeing that two users had rated the application: five out of five stars. And one of those users even left a comment. "Does everything I need it to do." That’s what the post said. Then I scrolled down to see which of our customers was so pleased with the initial feature set of the app.

What I found caused me to burst out laughing (and get a few strange looks from the guy who sits in the cube across from mine). The comment, "does everything I need it to do," was left by my eleven-year-old son. True, he does have an Android phone, and apparently it’s also true that he downloaded the app. What he doesn’t have is an account with SoftLayer, so the only thing the app can do for him is show him a title screen and direct him to the SoftLayer corporate website for help. Apparently that’s everything he needs it to do!

At any rate, while I am tickled to see my son being so supportive, I’d love to hear comments from users who need the application for something other than to show their friends at band camp that their dad has written a program that can be installed on a phone. While I’m admittedly biased, I think the app is pretty cool. Browsing tickets on the phone works particularly well, and checking bandwidth and rebooting servers on the go is pretty darn handy.

Alright, it’s back to work for me. I’m looking forward to hearing from all you Android owners out there though. Download the app. Tell us what you think. And most of all, let us know what you’d like to see in future releases. At SoftLayer, we are all about making things that make your life easier. Help us build an app that does everything YOU need it to do!

June 9, 2010

DNS from All Angles

Serving up content on the internet can be a tricky business. It isn’t just about running web or app servers in an efficient and reliable manner. One of the other critical factors is DNS. You have to understand and optimize how the name the content is advertised under gets translated to the IP address of the content. I don’t want to turn this into a DNS primer, but the two ends of the line of communication are the authoritative DNS server, controlled by the domain owner, which stores the official translation of the name to the number, and the resolving DNS server, which acts as a cache and is what the end user connects to directly. Both ends of the chain have their own idiosyncrasies which can affect how quickly and reliably your content gets delivered.

On the end-user side, I just read an article about how public DNS providers like OpenDNS and Google are breaking the internet. OK, maybe not breaking the internet, but the public DNS providers are confusing CDN location-based algorithms. The article is here: and I strongly recommend that both content providers and content consumers read it.

The summary is that some CDN algorithms use the IP address (and location) of the DNS server making the request, and if that DNS server is nowhere near the end user on the internet, the end user will be served content from farther away and will get that content more slowly than desired. The conclusion is that an end user should always use a DNS server located as close as possible network-wise; usually that ends up being a DNS server of the network provider.

That is good advice for the end user, but what about the content provider? Flip this around and come at DNS from the point of view of a content provider who doesn’t use a CDN: you want to make sure that when a DNS request is made, your authoritative DNS server gets the IP address back to the end user as quickly and reliably as possible.

SoftLayer has built out authoritative DNS farms in all our datacenters and network POPs and anycasted the IP addresses for the name servers. What that means is that SoftLayer customers – who get to use our DNS for free – can have their authoritative domain services hosted at all 10 points in North America, and through the routing optimization inherent in the internet, the name-to-number conversion for those domains will happen as close as possible to the end user and the results will be delivered as quickly as possible.

One very important goal of every content provider is to give the end user the best experience possible. Understanding how the internet works from the end-user side as well as the server side is critical. It doesn’t matter how good your content or app is if the end user has a poor experience.


June 4, 2010

The Conception and Design of the SoftLayer Mobile Client for iPhone

A few short months ago, SoftLayer began a new application initiative, the Mobile Client. Our overarching goal is straightforward: take the powerful capabilities of the SoftLayer web portal and put them in the palm of your hand. As is often the case, however, the things that are easiest to say are not so easily done.

The fundamental problem we face in designing the mobile portal is the sheer volume of functionality available. On the web, the SoftLayer portal keeps the customer in control of their server environment. To provide that level of control, the portal offers access to both a broad spectrum of information and a host of useful functionality. With the bar set that high, a mobile device with its comparatively sparse resources and small screen presents something of a challenge.

When computer scientists face a difficult problem, the first step is to narrow that problem down to a manageable size. There are some things you can do in the vast, open range of a browser’s web page that are simply impractical on the small screen of a mobile device. Moreover, there are tasks you would perform when sitting at your computer in the office that you would likely never need to do from a mobile device when you are on the go. These two criteria helped us set aside some of the functionality found in the web portal as not well suited for implementation on a mobile device.

Of course, a monkey wrench was thrown into this evaluation right in the middle of development. While we were working on the first version of the Mobile Client, Apple released the iPad. Suddenly things that would not have worked well on the small screen of a smart phone were practical for a mobile device. Unfortunately (since it happened in the middle of our development effort) we were unable to fully change our plans to incorporate the iPad, but it does offer an attractive avenue for future consideration.

In the end, what we decided was that the best way to focus our efforts, the best way to ensure that customers get the tools they need at their disposal as quickly as possible, was to make the customers a part of the design process. Our strategy would be to create a small application, one which could be developed quickly, and get it into our customers’ hands. From there we would let the customers help guide us to the additional functionality they desired most.

Working with the body of experience at hand, we narrowed down the functionality of the vast web portal to a small seed, a set of features that are absolutely crucial for our customers. We focused on that small set of core functionality and planned out an application that would both be an asset to our customers and meet our goal of putting it in their hands quickly. The result is the Mobile Client we offer today.

At SoftLayer we are committed to building complete access, control, security, and scalability into all of our portals. For the Mobile Client, however, we have intentionally started with a small, focused subset. As we grow the Mobile Client, we will do so in response to customer feedback to help ensure that the tool provides the functionality our customers need the most as soon as possible. The Mobile Client team invites you to try our application on your favorite mobile device and add your voice to helping it grow.

April 28, 2010

A Review of the Opera Mini for the iPhone

Opera Mini for the iPhone

Opera’s new mobile browser for the iPhone has finally been approved by Apple to be included on the App Store. Read the official announcement.

I’ve played around with the browser for the past 30 minutes. My impressions are as follows:


  • It’s a wicked fast mobile browser. No doubt about that. A definite improvement over the other browser options on the iPhone.
  • The Dashboard is a very welcome addition.
  • Zooming in and out of the web page to read different portions of it was something I didn’t like at first. After browsing a few pages, it grew on me. You can turn on “mobile view” in the settings to force the content to narrow to the view screen.
  • Opera’s version of tabbed browsing is remarkable!
  • Opera has great offline support through “Saved Pages”.


  • Bookmarks were a little difficult to find at first. They’re located under “Settings”, which seems like the wrong place in my opinion. Trivial, I know.
  • You can NOT set Opera Mini as the “default” browser, though this is more a failing of the iPhone OS than of the Opera browser itself.
  • Text heavy pages tend to have some text overlapping issues.
  • Unlike its PC brother, the Opera Mini does not pass the ACID 2 or ACID 3 tests.
    • On this note, Safari on the iPhone does pass both the ACID 2 and ACID 3 tests.
  • My overall impression of the new Opera Mini for the iPhone is good. For me, ease of use is a major clincher for mobile internet browsing and the Opera Mini hits the target.

March 24, 2010

Location, Location, Location

South by Southwest (“SXSW”) Interactive wrapped up last week, and one of the recurring themes was how location-based services (LBS) are changing the landscape of social media. When you port social media apps to the mobile phone, a world of LBSs opens up to you.

There are many use cases for LBS, many for social media, and the intersection of the two is even more interesting.

As seen with foursquare and Gowalla, bringing LBS into a social application that lets you add tips/comments to restaurants, bars, etc. instantly turns it into a quick way to see where the “hot” places currently are in your area. Adding game mechanics (like badges) only makes foursquare more addictive.

This is the new hotness: the intersection between location-based services and social media.

Is it any surprise that Twitter started supporting location-based tweets this week? They’re simply keeping up with the trend. I expect to see more location-appropriate contextual ads in mobile applications now. If you’re walking down 5th Street, and you’ve given your application access to GPS information, advertisers would love to be able to tell you to drop by their shop on your way to wherever you’re headed.

ShopSavvy, for instance, could push notifications to customers using that app letting them know where deals are in their proximity.

There are detractors. Plenty of people still want to keep their location private. If you’re an at-risk person (in an abusive relationship, for instance), you should think twice before turning on location-based services. More and more websites and applications these days are starting to set very “open” defaults rather than restrictive ones. As Danah Boyd recently said, we were once a people who kept information private and decided what to make public; now we increasingly make data about ourselves public by default, and it takes more effort to decide what to make private.

Edit: A day after I posted this, I found an article by Kevin Nakao which provides more detail on location-based services. It is a great read and can be found here.

January 15, 2010

API in Real Life

An API (application programming interface) is an interface that allows software programs to communicate with each other. The communication barrier between programs has become thinner as APIs have evolved over recent decades, just as our languages have over the years. At SoftLayer, we have plenty of opportunities to interact with many different APIs from various companies. Some of us work with a driver API, some work with SOAP, and some work with XML-RPC for certain projects. If you’re our customer, I bet you can easily imagine the number of APIs we use by looking at the products and services we offer. Not only are we a large API consumer, but we also provide a great number of APIs to our own customers. It seems that the interaction between software programs evolves just like our lives.

It’s hard to survive alone in this world. We are social beings, and we need others for interaction. A software program pretty much works the same way. There is no program that is a know-it-all or do-it-all. If there were one like that, I would not have a job. Software can expand its capabilities by working with other programs just like we, as humans, help each other. APIs act as a communication tool like our languages; and, by the way, there are many dialects too.

When a program starts to interact with another through an API, it can be compared to a marriage. They are stuck together. However, programs can marry many others. When two programs start to interact, one cannot change its API without the other knowing. It would be as if your wife started talking to you in Danish all of a sudden. Even a small change in an API can cause a very bad outcome. Imagine that your wife told you to throw your socks in the laundry basket and you have been following this rule for years. Can you imagine what would happen if you left your socks by the bed one day? No, it simply wouldn’t work. If you really need to change the rules, it’s time to consider a divorce, in other words, API version 2. As I mentioned, a program can have multiple partners, and you can’t expect them all to follow new rules at once. Your best bet is to write a version 2 and keep the original version around for old times’ sake. Trust me, people are very hesitant when it comes to changing their routine, including me. (Why should I touch a working program just because you updated YOUR API?)

Most APIs that I have used and seen are wonderful. I have seen APIs that work like a jack-of-all-trades, trying to do everything for me, but I didn’t like it. I would not like a BLT with onions, eggs, and mustard. I just wanted a BLT, period! I have also seen APIs that require too many prerequisite steps (invocations) to get a simple result. How many times must you get transferred until you finally get someone to help with your phone bill? Jeez!

OK, enough of these funny comparisons. I, a biased user, have listed below what I think makes a good API:

  • A good API should not change often. If change is inevitable, it should give you plenty of notice and allow backward compatibility.
  • A good API should explain why it couldn’t work instead of the infamous “Error: -1”.
  • A good API should have good documentation, so you’re not left scratching your head.
  • A good API is accessible by different platforms.
  • A good API should be stable.
  • A good API should be simple and comprehensive. It should do what it says it does and it should do it well. Prefer “powerOn()” over “powerOnWhenIdleAndStartServices()”.
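As a hypothetical sketch (the object names and signatures here are invented for illustration, not SoftLayer's actual API), two of the points above, versioning instead of breaking changes and descriptive errors instead of the infamous "Error: -1", might look like this:

```javascript
// Version 1 of a made-up API. Errors explain *why* a call failed.
var apiV1 = {
  powerOn: function (serverId) {
    if (typeof serverId !== 'number') {
      throw new Error('powerOn: serverId must be a number, got ' + typeof serverId);
    }
    return { serverId: serverId, state: 'on' };
  }
};

// A change in behavior goes into a new version; v1 keeps working
// unchanged for callers who have not migrated yet.
var apiV2 = Object.assign({}, apiV1, {
  powerOn: function (serverId, options) {
    var result = apiV1.powerOn(serverId);
    result.waitForBoot = !!(options && options.waitForBoot);
    return result;
  }
});
```

Old consumers keep calling apiV1.powerOn exactly as before, while new consumers opt into apiV2, which is the "divorce" from the marriage analogy without the broken dishes.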

A good API implies the readiness of communication with other programs and other companies. It will broaden opportunities for your programs and organization to work with others, just like a person with good communication skills has a better chance of fitting in our society.

January 12, 2010

SLXXXXX Twitter Log

8/24/2009 1:00PM – Just ordered 3 more servers from SL. Man I love how easy it is to order, and the provisioning time is incredible.

8/24/2009 11:45PM – Got the new servers setup; now I have redundancy for my app. G’nite.

9/04/2009 8:00AM – Suhweet, just passed 50K users for my app. Hitting the pool.

9/21/2009 6:42PM – Oops, app crashed, too many users. Recovering now. Thank goodness for monitoring alerts.

9/21/2009 8:13PM – Sorry all, app back up. SL CloudLayer really helped. Their portal makes it all easy.

9/22/2009 3:13AM – Ok stayed up late tonight and added new functionality to the app and added a new app server, geographic load balancing baby!

10/6/2009 2:45PM – Thanks for all the support on the app, keep the new ideas coming. 450K users and growing.

10/31/2009 5:50PM – Happy Halloween! 627K users. Thank you!!

11/14/2009 6:02AM – Getting close 989K users. Party at 1 Million. Just added 2 new front end servers in each DC, adding cloud storage now for Data replication/protection.

11/21/2009 7:31AM– It’s finally here 1 Mil. Party time! Isn’t ad revenue the greatest. The in game pay to play money is fun too. Thanks all!

12/10/2009 4:42PM – Still growing. I was alerted that one server crashed. No users affected. Technology is cool.

12/18/2009 9:16PM– ‘Bout to go silent for the Holidays. Hope you all have good ones. See you at 1.5 million when I return.

12/19/2009 7:00AM – Decided to add a couple more cloud instances for good measure. App is smoking fast.

12/31/2009 10:45PM – Monitoring just hit my phone, at party will check asap.

12/31/2009 11:00PM – Found a netbook at the party. App is crashed. Looking.

12/31/2009 11:07 PM – WT? All servers down, hard down. SL up and friend app good on SL network. Investigating, sorry for outage.

12/31/2009 11:10 PM – Hackers? Not sure all servers affected. Ping only. Had very secure. No problem before.

12/31/2009 11:29PM – Portal password got hacked. Intruders OS reloaded every server with RedHat, turned off all CCI.

1/04/2010 6:00AM – Happy New Year, mine sucked – app back – 5000 daily users. Sad day.

While the above is completely fictional, it could happen to just about anyone. Don’t let it happen to you. No matter how long and how secure you think your password is, there is someone out there who can crack it. Keeping a server secure is one thing, and most technical geniuses are very adept at doing just that. But with all the time and effort it takes to keep your servers secure, you might find that you have slipped in other areas. SoftLayer is here to help, VIP style.

The cutting-edge SoftLayer portal now has optional two-factor authentication support using VeriSign’s Identity Protection. First, what is two-factor authentication? It is defined as “something you know (a password) and something you HAVE (a PIN of sorts).” Here is how it works:

You buy a physical device in the form of a keychain token or a credit-card token; or, in this cool age of technology, you can simply get one of the free phone apps that do the same thing without the extra piece of equipment to carry. Once you get the device or app, you go to the portal, register the token’s unique ID, and attach it to a username on the account. The master user gets this for FREE, and if you want other users on your account to have this functionality, it is $3 per user per month. If the master user turns on this functionality, no one else will be allowed into the system without using two-factor authentication. Once this is set up, the user logs in with their “known” password and then also has to enter the “code” (the thing they have) from the token device or phone app to gain access. The code changes on a fast schedule, so this is extremely secure. This would have made the New Year’s celebration for the person above much more fun.

One last thing, since we partnered with VeriSign you can use the token device or phone app for different sites that use the VeriSign product. PayPal is one example. Here is a complete list.

Now that you know about it, and now that we offer it, don’t be the guy that doesn’t keep the portal secure and misses out on a Happy New Year!
