Author Archive: Nathan Day

May 5, 2010

Adjacent Synergies

The week of May 10, I’ll be heading off to San Francisco with a full complement of SoftLayer personnel to attend and present at Synergy (www.citrixsynergy.com), Citrix’s annual conference. We are heading out in force to deliver our message on the advantages of utilizing Infrastructure as a Service.

If you are familiar with SoftLayer, then you know our value proposition: we can provide network and compute infrastructure to our customers faster, better, and with less financial burden than doing it on your own. I’ll be making a presentation on Wednesday, May 12 highlighting the advantages of IaaS and examples of businesses getting more done more quickly for less by using a service provider like SoftLayer.

In addition, on Thursday the 13th, I’ll be discussing the managed vs. automated self-managed models of IaaS with Jon Greaves of Carpathia (http://www.carpathiahosting.com/blogs/carpathia-blog). It ought to be an interesting discussion that helps customers decide which model is right for them.

SoftLayer is a Gold Sponsor at the event, and we will have other members of management on site as well as members of the sales team discussing our services at our booth in the Solutions Expo.

I didn’t make up the phrase “Adjacent Synergies” but I think it counts as a double in buzzword bingo. I would have used “Synergistic Adjacencies” instead.

-@nday91

September 15, 2009

Managing Your Traffic in the Modern Era

Over the past 10 years, I’ve run or helped run all sizes of web sites and internet applications. I’ve seen everything from single-page brochure web sites to horizontally scaled interactive portals. And what I’ve learned is that it is all about the end-user experience.

I’m not a graphics specialist or a GUI designer. I just don’t have that in my DNA. I focus more on the technical side of things, working on better ways to deliver content to the user. And in the purely technical area, the best thing to do to improve the user experience is to improve the delivery speed to the user.

There are a lot of tools out there that can be used to speed up delivery. CDN, for example, is an awesome way to get static content to an end user and is very scalable. But what about scaling out the application itself?

Traditionally, a simple Layer-4 Load Balancer has been a staple component of scalable applications. This type of Load Balancing can provide capacity during traffic peaks as well as increase availability. The application runs on several servers and the load balancer uses some simple methods (least connections, round robin, etc.) to distribute the load. For a lot of applications this is sufficient to get content reliably and quickly to the end user. SoftLayer offers a relatively inexpensive load-balancing service for our customers that can provide this functionality.
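To make those "simple methods" concrete, here is a toy sketch in Python of the two distribution strategies mentioned above. The server names and connection counts are made up for illustration; a real load balancer does this in hardware or optimized software, not application code.

```python
import itertools

# Hypothetical backend pool; names are illustrative only.
servers = ["web1", "web2", "web3"]

# Round robin: hand out servers in a fixed rotation.
rr = itertools.cycle(servers)

def round_robin():
    return next(rr)

# Least connections: pick the server with the fewest active connections.
active = {"web1": 12, "web2": 4, "web3": 9}

def least_connections():
    return min(active, key=active.get)

print([round_robin() for _ in range(4)])  # ['web1', 'web2', 'web3', 'web1']
print(least_connections())                # 'web2'
```

Round robin is dead simple but blind to load; least connections adapts when some requests run longer than others, which is why many balancers default to it.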

There is another, more sophisticated, tool that can be used to manage internet application traffic: the “Application Delivery Controller” (obligatory Wikipedia link: http://en.wikipedia.org/wiki/Application_Delivery_Controller), or “Load Balancer on Steroids”. This class of traffic manager operates at Layer 7, the application layer. These devices can make decisions based on the actual content of the data packets, not just the source and destination.
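A toy sketch of what "decisions based on the actual content" looks like: unlike a Layer-4 balancer, which sees only addresses and ports, a Layer-7 device can look inside the HTTP request itself. The pool names and routing rules below are hypothetical, not any real ADC configuration.

```python
# Content-aware (Layer-7) routing sketch: inspect the HTTP request
# line and choose a backend pool based on what is being requested.
def route(request_line):
    method, path, _ = request_line.split(" ", 2)
    if path.startswith("/images/") or path.startswith("/static/"):
        return "static-cache-pool"   # static content goes to a caching tier
    if method == "POST":
        return "app-write-pool"      # writes go to the application tier
    return "app-read-pool"           # everything else

print(route("GET /images/logo.png HTTP/1.1"))  # static-cache-pool
print(route("POST /checkout HTTP/1.1"))        # app-write-pool
```

A Layer-4 device could never make the first distinction, because the URL path simply isn't visible to it.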

And an ADC can do more than load balance. It can act as a Web Application Firewall to protect your data. It can speed up your application using SSL Offloading, Content Caching, TCP Optimization, and more. This type of device is very smart and very configurable and will help in delivering the application to the end user.

At SoftLayer we have seen our customers achieve a lot of success with our Layer-4 Load Balancer product. But we are always looking for other tools to help our customers. We always have admired the advanced functionality in the appliance-based Application Delivery Controllers on the market. Finding a way to get this enterprise-grade technology to our customers in an affordable manner was problematic. When Citrix announced that they were going to create a version of their NetScaler product that didn’t require an appliance we were thrilled. With the announcement of the NetScaler VPX we finally thought we had found the right product that we could use to affordably provision this advanced technology on-demand to our customers.

SoftLayer is VERY excited to partner with Citrix to provide the NetScaler VPX Application Delivery Controller to our customers. Our customers can order a NetScaler VPX, and in a matter of minutes be managing the delivery of their online applications using one of the most sophisticated tools on the market. Citrix does a better job of promoting the product than I do, so here is the link to their site: http://citrix.com/English/ps2/products/product.asp?contentID=21679&ntref=hp_nav_US.

Remember, it’s all about the experience of the user at the other end of the wire. Find the right tools to manage that experience and you are most of the way there. Oh yeah, and find a good graphics designer too. That helps. So does good content.

-@nday91

February 19, 2009

Virtualized Datacenters

It shouldn’t be any surprise to people who know SoftLayer that we follow the "Virtual Datacenter" discussions quite closely. In fact, it is awesome to see people discussing what sounds a lot like what SoftLayer already is.

The concept of Virtual Datacenter is that you have all the power of a datacenter at your command without having to worry about the details of actually running a datacenter. Chad Sakac from EMC wrote an excellent post in his personal blog about the transformation to a Virtual Datacenter.

One of the points Chad makes is the abstraction of the physical infrastructure. Quoting Chad:

"Every Layer of the physical infrastructure (CPU, Memory, Network, Storage) need to be transparent. Transparency means 'invisible'. This implies a lot, and implies that the glue in the middle, like a general purpose OS, needs to provide the "API models" for those hardware elements to be transparent. "

I latched on to this point because that is what we have been building at SoftLayer for the last few years. We realize that the abstraction of the physical infrastructure not only means that end-users don’t need to know how to manage the physical infrastructure, but that the abstraction can make more efficient use of resources (= money!).

Let’s talk about the advantages of virtualized infrastructure. Without virtualization, provisioning a web-facing server on the network would involve obtaining rack space, a server, licensing and loading an OS, finding a switch port, physically connecting a cable or three, setting up the switch port (I hope you know IOS), getting IP Addresses (hopefully you don’t have to go get more from ARIN), and adding a firewall and/or load balancer (more procurement, cabling, and configuration). Adding storage could be just as complex – also involving procurement, racking, cabling, and configuration. This doesn’t sound very efficient. In fact, it sounds a lot like creating a “circular device that is capable of rotating on its axis, facilitating movement or transportation whilst supporting a load”. It's been done before and I'll bet it’s been done better by people other than you.

Using virtualized infrastructure you should be able to perform the task with a few clicks of a mouse or a few API calls and have the functionality you need set up in a few minutes instead of days, weeks, or months. No worrying about procurement, physical constraints, or learning the specifics of network and storage devices from different vendors. All you should have to focus on is the running of your particular application. You shouldn’t have to worry about configuring servers, networking, and storage any more than you should have to worry about chillers, HVAC, generators, and UPS batteries.
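To make the contrast concrete, here is a purely illustrative sketch: a made-up provisioning client (none of these names are the real SoftLayer API) showing the shape of "a few API calls" standing in for procurement, racking, cabling, and switch configuration.

```python
# Hypothetical provisioning client -- illustration only, not a real API.
class FakeProvisioner:
    def __init__(self):
        self.resources = []

    def order(self, kind, **spec):
        # In a real service this call would kick off automated
        # provisioning; here we just record what was requested.
        resource = {"kind": kind, **spec, "status": "online"}
        self.resources.append(resource)
        return resource

cloud = FakeProvisioner()
server = cloud.order("server", os="Linux", ram_gb=8)
lb = cloud.order("load_balancer", members=[server])
fw = cloud.order("firewall", protects=[server])

print([r["kind"] for r in cloud.resources])
# ['server', 'load_balancer', 'firewall']
```

Three calls, three pieces of web-facing infrastructure; every physical step in the previous paragraph is hidden behind the abstraction.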

-@nday91

August 28, 2008

The Speed of Light is Your Enemy

One of my favorite sites is highscalability.com. As someone with an engineering background, reading about the ways other people solve a variety of problems is really quite interesting.

A recent article talks about the impact of latency on web site viewers. It sounds like common sense that the slower a site is, the more viewers you lose, but what is amazing is that even a latency measured in milliseconds can cost a web site viewers.

The article focuses mainly on application specific solutions to latency, and briefly mentions how to deliver static content like images, videos, documents, etc. There are a couple ways to solve the static content delivery problem such as making your web server as efficient as you can. But that can only help so much. Physics - the speed of light - starts to be your enemy. If you are truly worried about shaving milliseconds off your content delivery time, you have to get your content closer to your viewers.

You can do this yourself by getting servers in datacenters in multiple sites in different geographic locations. This isn't the easiest solution for everyone but does have its advantages such as keeping you in absolute control of your content. The much easier option is to use a CDN (Content Delivery Network).

CDNs are getting more popular and the price is dropping rapidly. Akamai isn't the only game in town anymore and you don't have to pay dollars per GB of traffic or sign a contract with a large commit for a multi-year time frame. CDN traffic costs can be very competitive, costing only a few pennies more per GB compared with traffic costs from a shared or dedicated server. Plus, CDNs optimize their servers for delivering content quickly.

Just to throw some math into the discussion, let's see how long it takes a signal traveling at the speed of light to go from New York to San Francisco (4,125,910 meters / 299,792,458 meters per second = 13.7 milliseconds). That's 13.7 milliseconds one way; now double that for the request to go there and the response to return. Now we are up to 27.4 milliseconds. And that is assuming a straight shot with no routers slowing things down. Let's look at Melbourne to London (16,891,360 meters / 299,792,458 meters per second = 56.3 milliseconds). Now double that, throw in some router overhead, and you can see that the delays are starting to be noticeable.
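The back-of-the-envelope math above is easy to reproduce; this little sketch uses the same distances and the speed of light in a vacuum, so real-world numbers will be worse (fiber is slower than vacuum, and routers add overhead).

```python
# Round-trip time at the speed of light for the two routes above.
C = 299_792_458  # speed of light in a vacuum, meters per second

def round_trip_ms(distance_m):
    # One-way time, doubled for request plus response, in milliseconds.
    return 2 * distance_m / C * 1000

print(round(round_trip_ms(4_125_910), 1))   # New York -> San Francisco: 27.5
print(round(round_trip_ms(16_891_360), 1))  # Melbourne -> London: 112.7
```

Even the physics-imposed floor is tens of milliseconds between continents, which is exactly why moving content closer to the viewer is the only real fix.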

The moral of the story is that for most everybody, distributing static content geographically using a CDN is the right thing to do. That problem has been solved. The harder problem is how to get your application running as efficiently as possible. I'll leave that topic for another time.

-@nday91

August 25, 2008

Do You Know Where Your Nameserver Is?

Today we are getting back to the basics. Really simple stuff like how content gets served up on the internet. I'm going to keep things at a fairly high level, so don't flame me if I oversimplify things. I was trying to explain this to my Mom recently (Hi Mom!) and that inspired me to write this blog.

The first thing that has to happen is for the viewer to make a request by typing in a site name or clicking on a link in a web browser. That request usually has a text-based name as part of the request (like "www.softlayer.com"). Each name has a domain ("softlayer.com") and each domain has an authoritative nameserver to translate the name into a numerical address. That numerical address is used by the internet infrastructure to make sure the request gets to the right place. Phone numbers work the same way, so just think of an IP address (and domain name) serving the same purpose as a traditional phone number which defines the location of the “owner” of the number (at least in the landline world) based on country, region, and city.
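The name-to-number translation described above is exactly what a DNS lookup does, and you can watch it happen from any scripting language. Here is a minimal sketch in Python; it resolves "localhost" so it works without network access, but you could pass any hostname, such as "www.softlayer.com".

```python
import socket

# Ask the system resolver to translate a hostname into the numerical
# IP address that the internet infrastructure actually routes on.
addr = socket.gethostbyname("localhost")
print(addr)  # typically 127.0.0.1
```

Every web request your browser makes starts with this step, which is why a slow or unreachable nameserver hurts so much.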

If the nameserver for a name is slow or down, then the request will be delayed, or even worse, fail because the nameserver was not available to translate the name into an address. And if the translation fails, the viewer will not get the content he or she requested.

So, if you are running a website, you want your nameservers to be highly available and service the request as quickly as possible. Here is where I get to brag about SoftLayer a little. We provide nameserver service to our customers. Our customers can use our web portal or a sophisticated programming interface (the SoftLayer API) to manage the numerical addresses for their names. We have located nameservers at several locations and we keep the data synchronized between the sites. Our nameservers themselves have the same addresses using a technology called anycast (http://en.wikipedia.org/wiki/Anycast).

What all this means is that our customers get to have their name to number translation hosted at multiple sites. This results in faster translation times and in the case of a disaster at one site, the other nameservers will still be working.

In other words, SoftLayer has very cool nameservers.

-@nday91

November 19, 2007

A Feature Too Far

I just finished the best Software Project Management book I have ever read. It covered proper planning, requirements gathering, resource management, inter-organizational communication, and even discussed the immeasurable factor of individual effort. The book's title is 'A Bridge too Far' by Cornelius Ryan. The book is actually a historical account of "Operation Market-Garden" which was an attack by the Allied forces against Nazi Germany in World War II.

First let me say that I am not comparing Software Development to War. I do appreciate the difference between losing one's job and losing one's life. But as I was reading the book, it struck me that the job of a project manager preparing for, managing, and executing a large project is not unlike that of a General's planning staff preparing for a major offensive.

Operation Market-Garden was a combined ground and paratrooper attack into The Netherlands by the Allies a few months after the invasion of Normandy. Things seemed to be going well for the Allies in the months after D-Day and the Allied Generals became confident that they could launch a lightning strike that would end the war sooner rather than later. The operation seemed simple: airborne paratroopers would be dropped deep in Nazi territory and would capture key bridges along a route into The Netherlands. A ground offensive would quickly follow, using the bridges captured by the paratroopers to get almost all the way to Germany's borders. The short version of the story is that the ground offensive never caught up to the paratroopers and the offensive didn't succeed.

Reading the historical account, with the benefit of hindsight, it became obvious that the Allied Generals underestimated the difficulty of the task. The offensive's scope was too big for the resources on hand and perfect execution of all the individual engagements was required. The schedule the Generals developed was impossible to keep and schedule slips meant death for many of the soldiers. Communication between elements of the units involved was critical but did not occur. However, because of heroic actions of some individuals and personal sacrifice of many, the offensive almost succeeded.

In the early stages of a project, setting realistic goals and not putting on blinders as to the quantity and quality of your resources are key to a project's success. Going on the assumption that the 'development weather' will always be perfect, communications will always work, and all tasks will be completed on schedule is a recipe for disaster. And you can't always count on individual heroics to save a project.

I usually try to inject some levity into my posts, but not this one. 17,000 Allied soldiers, 13,000 German soldiers, and 10,000 civilians were killed, missing, or wounded as a result of this failed offensive.

-@nday91

September 13, 2007

Ultrasonic Wave Propagation Through Particulate Composites

That is a heck of a strange title for a hosting company blog post.

It was, however, a great title for a Master's thesis. Bear with me though and I'll put it together.

Once upon a time, I spent many a day (evening, night, whatever) in the basement of the Bright building at Texas A&M blasting ultrasonic waves at samples of composite materials and measuring the energy output on the other side. What we found was that if you hit the right frequency that made the little particles resonate, then a lot more energy was transmitted through the material1. But sending a lot of energy at the wrong frequency didn't do any good at all and most of the energy was absorbed. After a while, using the experimental data, we learned how to predict what frequencies transmitted the most energy.

Developing projects for a hosting company is pretty much the same. You can spend a lot of energy writing code and developing products, but if you don't produce something that resonates with the customer, no matter how much energy you put into it, you aren't going to get the results out of the other side. Having been in software development in the hosting industry for quite a while now, I have worked on projects that resonated with customers and unfortunately on a few that didn't. The trick is to collect enough data before you start, using a mix of experience and customer interaction to predict what will resonate and what won't.

See, I brought it all together and I get to tell myself that I still use my master's degree.

-@nday91

1I way oversimplified this. My apologies to Dr. V. Kinra.

June 14, 2007

KVM over IP or Sliced Bread?

I’m spoiled. Really, really spoiled. I have a test lab full of servers to play with about thirty paces away from my office. Most of them have KVM over IP on a daughtercard. When I need to jam an OS on a server or manage to lock myself out by screwing up a network config, do you think I stand up and take a short walk? Nope. I fire up the KVM/IP and take care of business from my comfy office chair.

Let’s see how old the audience is. Raise your hand if you ever had to yell into a phone telling a datacenter tech what to type.

“'S' as in Sam, 'H' as in Harry, 'O' as in Oscar, 'W' as in Wally, SPACE, 'D' as in David, 'E' as in Edward, 'V' as in Victor, 'I' as in Isabel, 'C' as in Charlie, 'E' as in Edward, ENTER” (extra credit to whoever can name the OS without using a search engine or reading ahead).

For some of you this is a recent event, but there will come a day when our IT generation can regale the youngsters with stories of “When I first started in IT, we didn’t have this fancy KVM stuff you kids have today…”.

KVM over IP isn’t exactly brand new. It has been around for a few years starting with external devices hanging off the back of the server. But it is becoming much more common to find daughtercards from your favorite motherboard manufacturer with this capability. The motherboard suppliers have already added other server control technologies like IPMI and iAMT to the motherboard. I wonder how long until KVM over IP makes the jump from the optional daughtercard to coming standard on the motherboard? I’ll bet we’ll see it before you can spell VMS.

-@nday91
