Author Archive: Daniel McAloon

December 14, 2009

‘Tis the Season to Get Things Done

It’s the holiday season, and that means everyone is getting busier. Beyond their existing responsibilities, millions of people are going shopping for gifts, decorating their houses, and navigating the bad weather. And on top of all that, many people take their time off during the holiday season!

With this kind of time crunch, it’s best for your business to lie low until after the new year, right? Not so! With all this buying, selling, and giving going on, there’s a lot of extra retail data to process. Plus, since it’s the end of the calendar year, many businesses have to get their finances in order too. On top of that, all these newly purchased electronic devices will soon be turned on and hooked up to the Internet, where they will almost surely put a new load on your servers.

Systems and network administrators need to be prepared for this influx of new traffic. Sometimes, this means purchasing new servers. However, it’s inefficient to buy the servers so far in advance when you don’t yet know what you will need. It’s best to wait until you’re sure you will need more servers and how many to order. At another hosting company, that would be a problem. People in our industry take the holidays off, too. Lowering the number of sales people and technicians and raising the number of new server requests would normally result in a disaster.

Luckily, SoftLayer does automatic provisioning. As soon as you order your server, it will be provisioned in two to four hours. Day or night, June 3rd or December 31st, if we have it, you can have control over it in two to four hours.

And therein lies the beauty of the SoftLayer system. You don’t have to wait for US to scale your business. If you need another server, get it. When it’s ready, it will automatically be added to your account’s private network and be available to you. You can even automate your server configuration and setup. Depending on the amount of data you need to transfer to a new server, you can have another server up and running your website less than 5 hours from the time you realized you needed it.

In fact, by using the SoftLayer API (and some clever configuration scripts on your servers) you can do live scaling on your website. Using the API, you can provision new servers exactly like the ones you already have. Once they’re available, a script can mirror the configurations from an existing machine to the new machine. Use the SoftLayer API once more to add the new servers to your load balancer rotation, and you’re in business! All without relying on any humans, even yourself! Treat yourself to some R&R this holiday season, while your website continues to get things done for you.
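The scale-out loop described above can be sketched in a few lines of Python. Everything below is hypothetical: order_server, clone_config, and add_to_load_balancer are stub functions standing in for the real SoftLayer API calls and configuration scripts, not actual client methods.

```python
def order_server(template_id: str) -> str:
    """Provision a new server matching an existing hardware template.
    (Hypothetical stub; in practice this would be a SoftLayer API order call.)"""
    return "server-42"

def clone_config(source: str, target: str) -> None:
    """Mirror configuration from an existing machine to the new one.
    (Hypothetical stub; in practice an rsync or config-management run.)"""
    print(f"mirroring {source} -> {target}")

def add_to_load_balancer(server: str) -> None:
    """Register the new server with the load balancer rotation.
    (Hypothetical stub for the API call mentioned in the post.)"""
    print(f"{server} added to rotation")

def scale_out(template_id: str, source_server: str) -> str:
    """The full no-humans-involved flow: order, mirror, rotate in."""
    new_server = order_server(template_id)
    clone_config(source_server, new_server)
    add_to_load_balancer(new_server)
    return new_server

scale_out("web-template-1", "server-1")
```

A monitoring script could call scale_out whenever load crosses a threshold, which is the hands-off scaling the paragraph describes.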

July 1, 2009

Pre-configuration and Upgrades

I recently bought a new computer for my wife. Being a developer, and a former hardware engineering student, I opted to buy the parts and assemble the machine myself. Actually assembling a computer these days doesn't take too long; it's the software that really gets you. Windows security updates, driver packs, incompatibilities, inconsistencies, broken websites, and just plain bad code plagued me for most of the night. The video card, in particular, has a “known issue” where it just “uh-oh” turns off the monitor when Windows starts. The issue was first reported in March of 2006 and has yet to be fixed.

This is why SoftLayer always tests and verifies the configurations we offer. We don't make the end user discover on their own that Debian doesn't work on Nehalems; we install it first to be sure. This is also why our order forms prevent customers from ordering pre-installed software that is incompatible with the rest of the order. We want to make sure that customers avoid the frustration of ordering things only to find out later that they don't work together.

The problem with desktop computers, especially for people who are particular about their configurations, is that you cannot buy a pre-configured machine where all the parts are exactly what you want. We attempted to get a computer from Dell and HP, but neither company would even display all the specifications we were interested in, never mind actually having the parts we desired. Usually pre-built systems skimp on important things like the motherboard or the power supply, giving you very little room to upgrade.

At SoftLayer, we don't cut corners on our systems, and we ensure that each customer can upgrade as high as they possibly can. Each machine type can support more RAM and hard drives than the default level, and we normally have spare machines handy at all levels so that once you outgrow the expansion capabilities of your current box, you can move to a new system type. If you're thinking of getting a dedicated server, but you're worried about the cost, visit the SoftLayer Outlet Store and start small. We have single-core Pentium Ds in the outlet store, and you can upgrade from there until you're running a 24-core Xeon system.

March 18, 2009

Code Performance Matters Again

With the advent of cloud computing, processing power is coming under the microscope more and more. Last year, you could just buy a 16-core system and be done with it, for the most part. If your code was a little inefficient, the load would run a little high, but there really wasn't a problem. For most developers, it's not like you're writing Digg and need to make sure you can handle a million page requests a day. So what if your site is a little inefficient, right?

Well think again. Now you're putting your site on "the cloud" that you've heard so much about. On the cloud, each processor cycle costs money. Google AppEngine charges by the CPU core hour, as does Mosso. The more wasted cycles in your code, the more it will cost to run it per operation. If your code uses a custom sorting function, and you went with bubble sort because "it was only 50 milliseconds slower than merge sort and I can't be bothered to write merge sort by hand" then be prepared for the added cost over a month's worth of page requests. Each second of extraneous CPU time at 50,000 page views per day costs 417 HOURS of CPU time per month.
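That 417-hour figure checks out with simple arithmetic; the one-extra-second-per-view overhead is the post's own hypothetical.

```python
# One extra second of CPU time per page view, at 50,000 page views
# per day, over a 30-day month.
extra_cpu_seconds_per_view = 1
views_per_day = 50_000
days_per_month = 30

wasted_seconds = extra_cpu_seconds_per_view * views_per_day * days_per_month
wasted_hours = wasted_seconds / 3600  # 1,500,000 s / 3600 ≈ 417 hours

print(f"{wasted_hours:.0f} CPU-hours wasted per month")
```

At per-CPU-hour cloud pricing, that waste turns directly into a line item on the monthly bill.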

Big-O notation hasn't really been important for the majority of programmers for the last 10 to 15 years or so. Loop unrolling, extra checks, junk variables floating around in your code, all of that stuff would just average out to "good enough" speeds once the final product was in place. Unless you're working on the Quake engine, any change that would shave off less than 200ms probably isn't worth the time it would take to re-engineer the code. Now, though, you have to think a lot harder about the cost of your inefficient code.
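To make the trade-off concrete, here is a rough Python sketch comparing a hand-rolled bubble sort against the language's built-in O(n log n) sort. Exact timings vary by machine, but the gap grows quickly with input size.

```python
import random
import time

def bubble_sort(items):
    """O(n^2) comparison sort -- fine for tiny lists, costly at scale."""
    items = list(items)
    n = len(items)
    for i in range(n):
        for j in range(n - i - 1):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

data = [random.random() for _ in range(2000)]

start = time.perf_counter()
bubble_sort(data)
bubble_time = time.perf_counter() - start

start = time.perf_counter()
sorted(data)  # built-in Timsort: O(n log n)
builtin_time = time.perf_counter() - start

print(f"bubble: {bubble_time:.4f}s, built-in: {builtin_time:.6f}s")
```

Multiply that per-request difference by every page view in a month and the "can't be bothered to write merge sort" shortcut stops looking free.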

Developers who have been used to having a near-infinite supply of open CPU cycles need to re-think their approach to programming large or complex systems. You've been paying for public bandwidth for a long time, and it's time to think about CPU in the same manner. You have a limited amount of "total CPU" that you can use per month before AppEngine's limits kick in and you begin getting charged for it. If you're using a different host, your bill will simply go up. You need to treat this sort of thing like you would bandwidth: minimize your access to the CPU just like you'd minimize access to the public internet, and keep your memory profiles low.

The problem with this approach is that the entire programming profession has been moving away from concentrating on individual CPU cycles. Helper classes, template libraries, enormous include files with rarely-used functions; they all contribute to the CPU and memory glut of the modern application. We, as an industry, are going to need to cut back on that. You see some strides toward this with the advent of dynamic include functions and libraries that wait to parse an include file until that object or function is actually used by the execution of the program for the first time. However, that's only the first step. If you're going to be living on the cloud, cutting down on the number of times you access your libraries isn't good enough. You need to cut down on the computational complexities of the libraries themselves. No more complex database queries to find a unique ID before you insert. No more custom hashing functions that take 300 cycles per character. No more rolling your own sorting functions. And certainly no more doing things in code that should be done in a database query.
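As a concrete example of the deferred-parsing libraries mentioned above, modern Python's standard library supports the pattern directly; the sketch below follows the LazyLoader recipe from the importlib documentation.

```python
import importlib.util
import sys

def lazy_import(name):
    """Return a module whose body only executes on first attribute access."""
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)
    return module

json = lazy_import("json")             # no parsing cost paid yet
print(json.dumps({"deferred": True}))  # first use triggers the real import
```

A program that only rarely touches a heavy library never pays that library's load cost on the requests that don't need it.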

Really good programmers are going to become more valuable than they already are once management realizes that they're paying for CPU cycles, not just "a server." When you can monetize your code efficiency, you'll have that much more leverage with managers and in job interviews. I wouldn't be surprised if, in the near future, an interviewer asked about cost algorithms as an analogy for efficiency. I also wouldn't be surprised if database strategy changed in the face of charging per CPU cycle. We've all (hopefully) been trying for third normal form on our databases, but JOINs take up a lot of CPU cycles. You may see websites in the near future that run off large denormalized tables that are updated every evening.
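The nightly-denormalization idea can be sketched with Python's built-in sqlite3 module. The tables and figures below are invented purely for illustration.

```python
import sqlite3

# Normalized schema: reads pay for a JOIN on every page view.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total INTEGER);
    INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO orders VALUES (1, 1, 10), (2, 1, 5), (3, 2, 20);
""")

join_rows = db.execute("""
    SELECT u.name, SUM(o.total)
    FROM users u JOIN orders o ON o.user_id = u.id
    GROUP BY u.id ORDER BY u.id
""").fetchall()

# Nightly batch: materialize the same answer into a flat table,
# so the daytime read path is a cheap single-table lookup.
db.executescript("""
    DROP TABLE IF EXISTS user_totals;
    CREATE TABLE user_totals AS
        SELECT u.name AS name, SUM(o.total) AS total
        FROM users u JOIN orders o ON o.user_id = u.id
        GROUP BY u.id;
""")
flat_rows = db.execute(
    "SELECT name, total FROM user_totals ORDER BY name"
).fetchall()

print(join_rows)  # [('alice', 15), ('bob', 20)]
print(flat_rows)  # same rows, no JOIN at read time
```

The data goes stale between rebuilds, which is exactly the trade the paragraph predicts sites will start making when every JOIN is a billable cycle.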

So take advantage of the cloud for your computing needs, but remember that it's an entirely different beast. Code efficiency is more important in these new times. Luckily, "web 2.0" has given us one good tool to decrease our CPU times. AJAX, combined with client-side JavaScript, allows a web developer to generate a web tool where the server does little more than fetch the proper data and return it. Searching, sorting, and paging can all be done on the client side given a well-designed application. By moving a lot of the "busy work" to the client, you can save a lot of CPU cycles on the server.

For all those application developers out there who don't have a client to execute code for them: you're just going to have to learn to write more efficiently, I guess. Sorry.


February 2, 2009

It’s OK to Let Go

There are a lot of companies that think they couldn’t possibly outsource their hosting needs to a third party. They make all kinds of excuses about why their particular organization cannot possibly move the servers more than 6ft from the sysadmin’s desk. I wanted to attempt to catalog the reasons most companies have, and explain why they’re just plain wrong.

We need direct access to the servers.

Why? So you can power cycle them in case they’re completely frozen? So you can re-install an OS on your own terms? So you can walk over to the rack and log in using a mouse and keyboard plugged directly into the machine? We can do all those things for you. Our power strip control and IPMI reboot can restart a server even if it’s completely locked up. Our standard KVM over IP means you can always have direct mouse and keyboard access to your system, and our automatic operating system installs mean you can switch from Windows to Unix at 4am on Christmas Eve and have your server ready to go before breakfast.

We need to be there in case something breaks.

Our datacenter techs will be there, 24/7, in case anything goes wrong. It’s infeasible to hire someone to sit in a server room with only 15 servers in it waiting for an alarm to go off. With the money you’re spending on 24/7 technicians to sit and do nothing, you could have multiple dedicated racks at SoftLayer with an entire team of specialists on the edges of their seats, waiting for something to go wrong so they can spring into action. In addition to just the human resources cost, you also have the spare parts cost. We have entire spare servers that we can use in the event of a complete and catastrophic meltdown. Some companies would have a hard time finding an extra SCSI controller or IPMI card; I doubt many medium-sized companies have the resources to keep spare machines handy.

It’s too expensive to outsource.

If this were true, this entire industry wouldn’t exist. I know it seems that the purchase price for your server is less over the course of a few years, when compared to the monthly rent of a similar SoftLayer server, but you’re forgetting the incidental charges. The amount of money you’re putting into your small datacenter every day in terms of cooling, electricity, and bandwidth has to add up. The cost of upgrades, repairs, and outages sneaks up on you as well. You also need to remember that you’re paying for the real estate your servers occupy. Some companies could fit upwards of 100 people in the space their servers are taking up. Figure out how much you pay per month per square foot of office space; I bet the results will shock you.
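A back-of-the-envelope sketch of that comparison. Every figure below is a made-up placeholder to swap your own costs into, not a real quote.

```python
# In-house costs per month (all values assumed for illustration).
office_rent_per_sqft = 30.0   # $/sq ft/month of office real estate
server_room_sqft = 400        # floor space the racks occupy
power_cooling = 1_200.0       # electricity and cooling
bandwidth = 900.0             # public bandwidth
staff_on_call = 4_000.0       # share of sysadmin/on-call salary

in_house_monthly = (office_rent_per_sqft * server_room_sqft
                    + power_cooling + bandwidth + staff_on_call)

# Hosted alternative: 15 dedicated servers at an assumed monthly rate.
hosted_monthly = 15 * 250.0

print(f"in-house: ${in_house_monthly:,.0f}/mo  hosted: ${hosted_monthly:,.0f}/mo")
```

The point is not the specific totals but which terms appear on the in-house side at all: rent, power, and staffing dwarf the sticker price of the hardware.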

You also have to put into the equation the cost of the firewalls, back-end networking, hardware monitoring, intrusion detection hardware, network storage, and all the other great features that come standard on SoftLayer servers. Not to mention the possibility of utilizing our CDN service, Load Balancers, virtual servers, transcoding services, and many more services we offer here. If you attempt to build yourself a world-class data center for just your servers, your costs will be far higher than if you had just let the experts handle it from the start.

We like having all the control.

Everyone likes control, which is why you rarely have to open a ticket to have work done on a SoftLayer machine. Unless your request involves a human being physically opening the case, most of what you want to do can be done through the portal. You can reconfigure any of our services through the portal. You can purchase and allocate additional IP addresses, and you can even purchase entire servers and add them to existing load balancers or virtual dedicated racks without contacting anyone. The control is still in your hands, it just reaches across the country.

Our data is too sensitive to be in a shared location.

The SoftLayer private network is just that, private. Not private as in “members only” but private as in “you and only you.” When a SoftLayer customer VPNs to the private network, he or she is actually logging in to a private set of network routes dedicated to their account. Only servers on their account are accessible from their VPN entry point. Their servers, likewise, can only see the other servers on that same account. Your servers can never get to the servers on another account through the private network. The only access between servers on different accounts is through the public internet, which is true regardless of where the servers are.

We’re too large for outsourcing.

Our CEO, Lance, may answer this with a simple “oh yeah? Bring it!” However, a more verbose rebuttal is probably needed. We have the infrastructure to handle whatever you can throw at us. We handled streaming video of the presidential inauguration, and we have tens of thousands of servers in multiple data centers in multiple cities. If you need 500 servers spread across the United States, place your order online and they will be ready within four hours.

We’re comfortable with the way things are.

You may be comfortable now, but are you sure you have every disaster plan covered? Why not let us worry about the hardware, power, network, bandwidth, cooling, spare parts, floor space, expandability, and availability requirements while you focus on keeping the software running and your data safe? Once you have your servers comfortably in our state-of-the-art datacenter, you can start thinking about global expansion. Why not put a web server in all three of our locations? You can use geographically-sensitive DNS or global load balancing to serve customers from the closest physical server, all while maintaining a virtual rack of servers across datacenters. All the benefits of keeping your racks in the next office can be yours, with the addition of all our services and geographic diversity.

Our system administrator won’t let us.

I’ve actually heard this more than once. System administrators don’t actually have mystical powers. They work for you, and they enjoy having enough money to pay their bills. They’ll survive the transition.

No matter what size your company is, we have the know-how and the equipment to give you the data center of your dreams. Your servers will be safe, secure, and isolated just like in a private data center, but you will also have access to all our additional features, as well as our highly skilled team of round-the-clock technicians ready to assist you at any time, day or night. Plus, you will probably get more service for less money, and free up significant floor space in your office. It’s a win-win scenario, and you should jump on it, especially in the current economy. Reducing your IT budget to a set monthly bill instead of a yearly or multi-year mega-account will make things easier to budget as well as justify. The outsourcing of IT these days is as common as the outsourcing of power or water a hundred years ago. IT has become a commodity, and all you have to do is call or go to our website and tell us how much of it you’d like.

January 16, 2009

Wizard Needs Food Badly... Wizard is About to Die.

Anyone familiar with arcade games from the 80s would recognize that line. It's from the arcade game Gauntlet, which was also featured on the NES and had a number of sequels on other systems. Wizard was always about to die, because Wizard was a big wimp. For those of you unfamiliar with Gauntlet in particular, it was one of the first games that allowed you to keep playing as long as you kept shoving your hard-earned quarters into the slot. In essence, Gauntlet was the first electronic subscription-based service. You keep playing as long as you keep paying. However, Gauntlet never seemed to keep a customer for longer than about 20 minutes. Maybe that's because your characters went through health like a Hummer goes through gasoline...but I digress.

Since Gauntlet, the subscription-based service model has really improved. SoftLayer employs it, as you know. However, unlike Gauntlet, we charge by the month. This is a key distinction. Whereas Gauntlet was always trying to kill you to get another quarter, SoftLayer focuses on protecting you. We do everything we can to make sure that your experience is a happy one, and that you are never "about to die." To that end we employ several awesome features to both improve your experience as a customer, and to ensure your servers' safety.

Some of the features that I think are the coolest are the ones that work passively in the background without user intervention. They just exist, sitting there and making SoftLayer the best place to have a server. For instance, our private network is always there, always on, ready for our customers to use. From secure server management to private file transfers and backup, the private network is one of our most awesome features, and everyone gets it immediately. It's always there, ready for you to use for your needs.

Our TippingPoint protection is another great feature that runs constantly in the background. If you are a current customer, you can look under "Public Network" for the "Network IDS" section to view the attacks on your servers in the last 24 hours. Scary stuff, huh? At the time of this writing, our TippingPoint servers have successfully blocked more than 200,000 attacks in the last 24 hours against our three data centers in Dallas, Seattle, and Washington DC. The internet is a dangerous place, but the SoftLayer network is insulated against port scanners, botnets, and other malicious activity.

There are other systems we have in place that customers never notice. We have redundant power supplies, multiple internet links, and 24/7 staff in place specifically so that the customer won't notice when something has gone awry. These systems, while very "active" from our perspective, run in the background as far as customers are concerned, constantly ensuring that the servers stay awake, cool, and dry.

In addition to these (and many more) passive features keeping you safe, there are also active features. We have firewalls, DDoS protection, and load balancers you can configure to make sure your servers stay up. If they go down, we have cross-datacenter backup solutions, monitoring servers, and portable IP addresses to make sure your downtime is as short as possible. Whatever could cause your server harm, we're trying to help you avoid it. In cases where it was unavoidable, we offer a multitude of solutions to minimize downtime and recover from disaster.

So if you're currently pumping quarters into something that's actively trying to kill you, why not save up and get a SoftLayer server for a month? We'll protect you.

January 1, 2009

Shake Your Money Maker

Ever since I installed a Firefox add-on that tells me the physical location of the servers behind the sites I visit on the web, I’ve begun noticing that there are a lot of servers in Los Angeles. Now, at first glance this makes a lot of sense. LA is a sprawling city with millions of people and an enormous telecommunications infrastructure. The real estate is cheap, compared to other cities of similar size, and it’s relatively centrally located as far as the population of the West Coast is concerned. Then I finally realized why it bothered me so much: earthquakes.

Most tech-savvy users will realize that shaking a computer hard drive during a read or a write could potentially damage your data or even ruin the entire drive. Certain laptop manufacturers (Apple and Lenovo come easily to mind) even have laptops that can sense when they’ve been dropped and park the hard drive heads to prevent this damage. Server hardware doesn’t have that luxury, as servers aren’t generally designed to shake, rattle, or roll.

However, what happens to those LA data centers during an earthquake? Presumably they use the industry-standard solid steel racks on a raised tile floor. I haven’t seen (nor did I come across during a short Google search) any data centers with spring-loaded raised floors. It’s also safe to assume there’s no padding on the servers themselves, as that would exacerbate an already difficult temperature problem all data centers face.

So what happens, do you think? I’ve never worked in a data center on a fault line, but I imagine that, when an earthquake hits, the servers shake just like the rest of the building. And I also imagine that some of those servers are performing reads or writes to their hard drives during that time. I wonder how much data is lost to earthquakes every year.

At SoftLayer we have intelligently located our main data center thousands of miles from the edge of our tectonic plate, leaving our Dallas customers safely unshaken. Yes, Dallas is at the bottom of Tornado Alley, but that’s where our second genius play comes in. We’ve chosen Dallas instead of Ft. Worth! For those of you not familiar with the DFW metroplex: due to the area’s geography, Ft. Worth receives the majority of the tornado attention for the area, leaving Dallas relatively unscathed.

Even SoftLayer’s coastal data centers are located far from active earthquake zones. Seattle gets far fewer earthquakes than the major West Coast cities to its south, and our data center in DC hasn’t been shaken in a millennium at least. So to all those companies that put their data in LA-based data centers: why shake your money maker, so to speak?

November 3, 2008


What if I asked you to guess the name of a video game that came out within the last 10 years, and has sold more copies than the Halo series, the Half-Life series AND the Metal Gear series? No, it’s not Guitar Hero or Rock Band, and it’s not Pokemon. It’s not even made by one of the “serious” game development companies. The game that I’m talking about is Bejeweled (published by PopCap), a simple online flash game that has garnered 25 million purchases and more than 350 million free downloads.

The secret to PopCap’s success lies in creating simple, easy-to-use games that the average person finds fun. They’ve built an entire market segment from the simple beginnings of Bejeweled, and now offer more than 50 games for sale, with even more in their free download section and almost a billion downloads between them. The “casual gaming” market is so large that the Nintendo Wii has been almost completely taken over by casual games.

But why has the industry taken off so much? Sure, casual games can be easy to make. I remember whipping up a version of Bejeweled in a VBA form that I built as an Excel macro so I could play it in my “business software” class in high school. The real secret is that these games are easy to pick up and play, and in that sense they’re far better than their competition for people who are busy, inexperienced, or just plain tired.

People these days have less and less free time, which means they have less time to learn the function of the right trigger in crouch mode, run mode, driving mode, flying mode, stealth mode, raspberry jam mode, etc. The instructions for Bejeweled (“Swap adjacent gems to align sets of 3 or more”) are almost as simple as the original Pong’s instructions (“Avoid missing ball for high score”).

That’s what we try to do here at SoftLayer. Our portal is specifically designed to be used by people who just don’t have the time or inclination to perform menial repetitive tasks manually. From configuring a load balancer to rebooting your servers to performing notoriously difficult SWIP requests, the portal handles it all for you. Of course, the task we’re trying to help you accomplish is a lot more complex than “avoid missing ball for high score,” but we try our best to make the process as easy as possible. Maybe with the time saved you can come up with a new business segment to send more server orders our way, but I’m betting you’ll be playing Bejeweled, or Peggle, or Zuma…


June 20, 2008

I Always Have a Backup Plan

It was the day of the big secret meeting. All my vice presidents were there except for the Unix system administrator. He was a strange man, always wearing that robe, with the long beard and long hair. He considered himself some sort of wizard, and after the conflict last month when we decided to switch all our servers over to SoftLayer, I really didn’t want him involved in the meeting I called today. You see, I called it so I could announce my plan to switch our servers over to Windows. My goal was really to get rid of him; he’s the only one who ever managed to thwart my plans.

Just as I finished that thought, he burst through the door, trailing a long ribbon of old-fashioned printer paper behind him. “How dare you have a systems meeting without me!” he intoned, dropping his stack of papers on the conference table in front of me. A quick glance at the stack told me that he had printed out operating statistics for every version of Unix and every version of Windows going back to 1985. I didn’t have time for this. Luckily, I always have a backup plan.

Turning away slightly, I quickly activated a program on my Blackberry. You see, yesterday I had written a few custom programs that utilize the SoftLayer API to control a variety of our services. Within moments, a confirmation had appeared on my screen. All of our web traffic had been redirected from our load balanced main servers to our tertiary backup server. In the middle of the work day, that meant it was only a matter of minutes before our bandwidth would be exceeded on that server. I allowed the sysadmin to begin his presentation, confident that he would barely get past the 8086 before disaster struck.

I was right! Within minutes, an email arrived notifying us that we were nearing the bandwidth cap on the hostname last_resort. Panicked, the sysadmin left the meeting. Quickly I summarized my plans to the other VPs, we all voted unanimously for Windows, and I retreated to my office. Shortly after sitting behind my desk, my door burst open. Framed in the light from the hallway, his long shadow washing over me, stood the sysadmin, slowly twirling his staff. “Do you think you can stop me with a simple change to our load balancer? I was configuring load balancers when you were still on dial-up! Now, you will listen, AOL user, and you will see why Unix is your only choice!” Of course, I had a backup plan for just such a situation.

I dove out the window next to my desk, landing nimbly next to my secretary’s bright pink LeBaron. I had made copies of all her keys months ago in order to utilize her unique vehicle for any necessary escapes. I quickly tapped out a text message to Michael in SoftLayer sales. We have a standing agreement that when he receives a message from me containing only the word DAWT, he is to send the best sale at his disposal to my sysadmin. As I drove past the front door of the building I saw him running toward the car. He pulled out his Blackberry in mid-stride and suddenly stopped dead. “Free double RAM AND double hard drives!? IMPOSSIBLE!” he screamed, and I managed to swerve around him and escape. As I drove away, I thought about my secretary. When she first started here, I had convinced her that if her car were ever stolen, the best plan of action would be to change the building security policies so that only my badge could open the doors. I hoped I didn’t need to make use of that plan, but the sysadmin has proved a worthy adversary.

Unbelievable! Even with my masterful backup plan, he was still following me. I saw his battered VW Bus merge into traffic behind me, his vulture-like shadow looming behind the wheel. I sped up until we were both racing down the road, weaving in and out of the other vehicles. Finally we passed a police car, and my next plan sprang into action. I knew that standard procedure was to radio in the vehicles you were pursuing, and I knew my friend Joe was on duty today. Joe knew that if he ever received a radio call about a businessman in a pink LeBaron being chased down the highway by a wizard in a VW Bus, he was to call off the police and park a fire truck at a certain intersection. You see, I had hired an actor to pretend to be a corporate psychiatrist, and learned that the sysadmin had an irrational fear of fire trucks. Why? Because it always pays to have a backup plan.

I angled toward the intersection and managed to squeeze past the truck just as it pulled up to block the street. I heard the squeal of tires as the sysadmin slammed on his brakes and reversed wildly behind me. Now that I was free, however, I couldn’t return to the office. Luckily I was prepared for just such an eventuality. As I drove to my next location, I quickly used my Blackberry to shut down one of our production web servers. I knew that it would be 20 minutes before the monitoring system would officially declare the server “down,” so I had time.

I made it to my secret office above the video arcade not long after. Before leaving the car I collected the grappling hook and rope from a secret compartment in the door, then went inside. I walked into the darkened room and immediately noticed something was wrong. My security system wasn’t beeping! The door slammed behind me and the sysadmin boomed out “NO PLAN CAN DEFEAT ME, MORTAL!”

“I’m ALWAYS prepared!” I shot back, and quickly glanced at my watch. It had been 19 minutes and 45 seconds since I shut down my server; the timing was perfect! The sysadmin walked toward me, twirling that staff. Just as he was about to reach me, his Blackberry beeped. Pausing to check, he let out a stream of curses and then lunged at me, but I had already rappelled down the side of the building and made my escape.

As soon as I reached the car, my Blackberry alerted me that the server I had shut down was back up. How!? The sysadmin must have his own API programs! I cringed as I activated my final backup plan: a program that constantly shut down all our servers. Let’s see him handle that! I took the direct route back to the office, past the still-idling fire truck. I threw Joe a wave, knowing that I’d owe him a big favor for this, and rocketed back to the office. I knew that he would be right behind me, but hopefully with all our servers offline he wouldn’t beat me to my destination. Also, once I made it into the building, the security system wouldn’t allow anyone in behind me. I would be safe!

I raced into the building, looking frantically around for the sysadmin, but he was nowhere to be seen. Finally! I had defeated him! I walked calmly to my office and opened the door, only to see HIM, climbing in through my window. I had forgotten to close it when I escaped this morning! I quickly opened the secret panel in the wall next to the door and put my finger on the red button.

“WAIT!” cried the sysadmin. “We need to put our differences behind us. Our plans have almost destroyed our servers!”

“What do you mean?” I demanded. “They’re fine!”

“No, they’re not,” he said in a sad voice. “You see, I always have a backup plan, and I knew that eventually someone would attempt to power off our machines, so I wrote a script to constantly turn the machines on!”

“B-but…” I stammered, “but I wrote a script to constantly turn them OFF!”

“I know,” he said, “and the constant power cycling has corrupted our database. We need to set aside this silly feud and fix it.”

“Don’t worry, dear end user,” I proudly proclaimed, “I always have a backup-”

It was right then I realized that in all my planning, I had never actually created any backups.


May 31, 2008

Response to On Site Development

On May 14th my buddy Shawn wrote On Site Development. Aside from the ambiguous title (I originally thought it was an article on web site development, rather than the more appropriate on-site development), there were a number of things that I felt could be expanded upon. I started by simply commenting on his post, but the comment hit half a page and I had to admit to myself that I was, in fact, writing an entirely new post.

Updating the computer systems in these restaurants is a question of scale. Sure, it seems cheap to update the software on the 6 computers in a local fast-food restaurant. However, a certain “largest fast-food chain in the world” has 31,000+ locations (according to Wikipedia). Now, I know how much I would charge to update greasy fast-food computers, and if you multiply that by 31,000, you get a whole lot of dollars. It just doesn’t scale well enough to make it worthwhile. The bottom line is that these companies run a cost-benefit analysis on every project, and the cost of re-doing the messed-up orders is apparently less than the cost of patching the software on a couple hundred thousand little cash registers and kitchen computers.
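That trade-off is easy to sketch with back-of-the-envelope arithmetic. Every dollar figure below is a made-up assumption purely for illustration; only the 31,000-location count comes from the paragraph above:

```python
# Back-of-the-envelope cost-benefit sketch. All dollar figures are invented
# assumptions for illustration; only the location count is from the post.
locations = 31_000
terminals_per_location = 6            # registers + kitchen computers (assumed)
visit_cost_per_terminal = 150.00      # assumed on-site technician cost

one_time_update_cost = locations * terminals_per_location * visit_cost_per_terminal

botched_orders_per_location_per_year = 40   # assumed
cost_per_botched_order = 3.00               # assumed

annual_error_cost = (locations
                     * botched_orders_per_location_per_year
                     * cost_per_botched_order)

print(f"On-site software update, one time:  ${one_time_update_cost:>12,.0f}")
print(f"Just eating the bad orders, yearly: ${annual_error_cost:>12,.0f}")
```

With these particular assumptions the patch costs about $27.9M up front while the errors cost about $3.7M per year, so the “do nothing” option survives several annual reviews before it stops making sense.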

It’s the same logic that led to Coke being sold for 5 cents for more than 60 years, spanning two world wars and the Great Depression without fluctuating in price. The vast majority of Coca-Cola during that period was sold from vending machines. These vending machines only accepted nickels, and once a nickel was inserted, a Coke came out. That’s it. Nothing digital, no multi-coin receptacles, just insert nickel…receive Coke. The cost of replacing 100,000 vending machines was far higher than the extra profit from raising the price of a Coke slightly. Only after World War II, when industrialization and the suburb were really taking off, did Coca-Cola start to phase out its existing vending machine line and replace it with machines capable of charging more than 5 cents per bottle.

Of course, we all know how Coke machines operate now. Computerized bill changers, many of them hooked up to the Internet, allow Coke to charge upwards of $3 for a 20oz beverage on a hot day at a theme park. Coke even attempted (in 2005) to fluctuate the price of a Coke based on local weather conditions. People would want a Coke more on a hot summer day, so why not charge more for it? (Because the public backlash was so severe that boycotts were suggested the very same day Coke announced the new plan, but that’s another story.)

The fast-food problem Shawn mentioned, as well as the vending machine problem, is why so many companies are moving onto the web. Online retail is exploding at a rate that can be described as a “barely controlled bubble.” To tie back in with my comments on the fast-food restaurant, this means that all your customers see the exact same website, generated by the exact same code. Want to change the way orders are displayed? Simply alter the order display page, and every customer in every country will see the new format from then on.

This doesn’t just apply to retail, however. Many companies are moving toward web-based internal pages. When I got my mortgage, the loan officer entered all my information into a web form on their intranet. This is brilliant, because it eliminates the cost of synchronizing the employees’ computers with the software, it removes the time needed for upgrades, and (most importantly) it means developers don’t have to come into the office at 4am to ensure that upgrades go smoothly before the start of the business day. So, any of you business owners out there who have had to deal with the nightmare of upgrading antiquated POS software on dozens, hundreds, or hundreds of thousands of computers: consider making everything a web site.

SoftLayer has geographically diverse data centers, so your stores can always log in to a nearby server to cut down on latency, and we allow for VPN access, distributed databases, and real-time backups, making a web-based solution preferable even to the hard-coded local systems that many stores use now.


March 14, 2008

From the Outside Looking In

Recently, as you know, SoftLayer released the new API version 3. We have all been working very hard on it, and we've been completely immersed in it for weeks (months, for some of us). This means that, for the developers, we've been living and breathing API code for quite some time now. The time came to release the API, and as many of you know, it was a smashing success. However, we were lacking in examples for its use. Sure, we all had examples coming out our ears since the customer portal itself uses the API, but those were written by the same developers that developed the API itself, and therefore were still written from an insider's perspective.

So a call went out for examples. Many people jumped on the list, offering to write examples in a variety of languages. I thought I would tackle writing an API usage example in Perl. Perl, for those of you unfamiliar, is an infamous programming language. Flexible, confusing, fantastic and horrifying, it is the very embodiment of both "quick and dirty" and "elegance." It is well loved and well loathed in equal measure by the programming community. Nevertheless, I have some experience with Perl, and I decided to give it a try.

I will attempt to describe my thought process as I developed the small applications (which you should be able to locate shortly in the SLDN documentation wiki) throughout the work day.

9am: "Wow, I really don't remember as much Perl as I thought. This may be difficult."
10am: "I need to install SOAP::Lite, that shouldn't be hard."
11am: "Where the heck are they hiding SOAP::Lite? There are articles about it everywhere, but I can't actually find it or get it installed!"
12pm: "Ok, got SOAP::Lite installed, and my first test application works perfectly! Things are going to be ok! Wait…what's all this about authentication headers?"
1pm: "What have I done to deserve this? Why can't I pass my user information through to the API?"
2pm: "Aha! Another developer just wandered by and pointed out that I've been misspelling 'authentication' for 2 hours! Back on track, baby!" (Side note: another "feature" of Perl is that it never complains when you use a variable that doesn't exist; it just assumes you meant a brand-new, empty one. You can tell it to complain with use strict and use warnings, but I forgot about that feature because I haven't used Perl in 4 years.)
3pm: I finally get example #1 working. It queries the API and shows a list of the hardware on your account.
3:30pm: Example #2 is working; it shows the details for a single server, including datacenter and operating system.
4pm: Combining examples #1 and #2, the third example shows all hardware on your account, plus the installed OS and datacenter, in a handy grid right on the command line. Success! I put Perl away, hopefully for another 4 years.
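For anyone curious what those examples boil down to, here is a minimal sketch, in Python rather than Perl, of the same account-hardware query. It targets the API's REST-style interface instead of SOAP::Lite; the URL pattern, Basic-auth scheme (username plus API key), and objectMask syntax follow SoftLayer's public documentation, but the credentials and the exact mask properties are placeholders, not working values:

```python
import base64

# Documented base-URL pattern for SoftLayer's REST-style interface.
API_BASE = "https://api.softlayer.com/rest/v3"

def build_request(username, api_key, service, method, mask=None):
    """Return the URL and headers for a SoftLayer REST call (no network I/O).

    SoftLayer's REST interface authenticates with HTTP Basic auth, where the
    "password" is the account's API key rather than the portal password.
    """
    url = f"{API_BASE}/{service}/{method}.json"
    if mask:
        url += f"?objectMask=mask[{mask}]"
    token = base64.b64encode(f"{username}:{api_key}".encode()).decode()
    return url, {"Authorization": f"Basic {token}"}

# Example #3 from the timeline: all account hardware, plus OS and datacenter.
url, headers = build_request(
    "example-user", "0123456789abcdef",
    "SoftLayer_Account", "getHardware",
    mask="hostname,datacenter.name,operatingSystem.softwareDescription.name",
)

# An actual call would then be a single GET, e.g. with the requests library:
#   rows = requests.get(url, headers=headers).json()
#   for hw in rows:
#       print(hw["hostname"], hw["datacenter"]["name"])
print(url)
```

The request-building step is kept separate from the network call so the URL and headers can be inspected (or unit-tested) without credentials or a live connection.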

The whole experience really gave me insight into how fantastically useful the API is, because I was seeing it from an outsider's perspective: I was confused about how everything worked, I was working with an unfamiliar language, and I was browsing through the API looking for anything that seemed "cool and/or useful." Getting a list of all my account's hardware to show up in a custom-built application, written as if I knew nothing about the API's internals, was a great feeling. It showed that the API is not only perfectly suited to the tasks we expected of it, but also approachable enough that even a novice developer could, with a little effort, build an application like mine. Expanding it to show more and more information, and seeing all the possibilities it opened up, made me realize just how useful this API is. It's not just something that a small percentage of our customers will use; it's something truly revolutionary, something every client can take advantage of. I'm assuming, of course, that every client has at least rudimentary skill in at least one programming language, but given the level of success everyone has had with our other offerings, I feel safe making that assumption.

If you have been thinking recently, "look at all the noise they've been making about this 'API' nonsense," I highly recommend dusting off an old programming book and at least giving it a look. Think of all the possibilities: all the custom reports you can make for yourself, all the data we have provided right at your fingertips to assemble in any way you wish. We try our best to make the portal useful to every customer, but we know that you can't please all the people all the time. With the API, though, we may do just that. If you're the kind of customer who is only interested in outbound bandwidth by domain, write an API script that displays just that! If you want to know the current number of connections and the CPU temperature of your load-balanced servers, get that data and show it! The possibilities are endless, and we're improving the API all the time.

