
April 6, 2009

Solid State Drives – In House Performance Stats

I love working at SoftLayer. I get to play with the newest hardware before anyone else. Intel, Adaptec, Supermicro… The list goes on. If they are going to release something new, we get to play with it first. I also like progression. Speed, size, performance, reliability; I like new products and technologies that make big jumps in these areas. I am always looking to push components and complete systems to the limits.

But alas, Thomas Norris stole my thunder! Check out his article “SSD: A Peek into the Future” for the complete skinny on the SSDs we use. I seem to be a bit too concise for a nice long blog anyway. But not to worry, I’ve got some nifty numbers that will blow the jam out of your toes!

Solid State Drives (SSDs) represent a large jump in drive performance, not to mention smaller physical size, lower power consumption, and lower heat output. The majority of drive activity is random read/write, and SSDs have drastically improved in this area compared to mechanical drives, which translates into a big overall performance gain.

This is a comparison of the Intel 32GB X25-E Extreme drive vs. other drives we carry. Note the massive jump in the random read/write speed of the SSD.

No more waiting on physical R/W heads to move around. How archaic!

[Chart: random read/write performance, Intel X25-E vs. our other drives]

Please note that no performance utility should be used to definitively judge a component or system. In the end, real-world usage is the final judge. But performance tests can give you a good idea of how a component or system compares to others.

Single-drive performance increases translate directly into big improvements for RAID configurations as well. I have compared two of our fastest SATA and SAS four-drive RAID 10 setups to a four-drive SSD RAID 10 using an Adaptec 5405 controller.

[Chart: four-drive RAID 10 performance, SATA vs. SAS vs. SSD]

The Adaptec 5405 RAID controller certainly plays a part in the performance increase, on top of the simple speed doubling from two drives being read simultaneously. (See my future blog on the basics of RAID levels, or check Wikipedia.)

Propeller heads, read on:

The numbers indicate a multiplied increase if you take the base drive speed (Cheetah – 11.7 MB/s / X25-E – 64.8 MB/s) and double it (the theoretical increase a RAID 10 would give): 23.4 MB/s and 129.6 MB/s respectively. Actual performance tests show 27.3 MB/s and 208.1 MB/s. That means the RAID card accounts for roughly 15% of the Cheetahs’ measured random read/write speed and a whopping 37% of the X25-E’s. Hooray for math!
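For the propeller heads who want to double-check me, here is a quick back-of-the-napkin script (Python, purely for illustration; the speeds come from the tests above, and rounding puts the output within a point of the figures quoted):

# Sanity-check the RAID 10 math above (figures from the tests in this post).
single = {"Cheetah 15K": 11.7, "Intel X25-E": 64.8}     # single-drive random R/W, MB/s
measured = {"Cheetah 15K": 27.3, "Intel X25-E": 208.1}  # four-drive RAID 10, MB/s

for drive, base in single.items():
    theoretical = base * 2  # RAID 10 reads two mirrors at once, so double it
    actual = measured[drive]
    # Share of the measured speed contributed by the controller beyond theory
    controller_share = (actual - theoretical) / actual
    print(f"{drive}: theory {theoretical:.1f} MB/s, measured {actual:.1f} MB/s, "
          f"controller contributes {controller_share:.0%}")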

Once again, these are all performance tests and a bit of math speculation. The only real measure of performance, IMO, is how it performs the job you need it to do.

April 2, 2009

We Need New Small Businesses

It is often said that small business is the backbone of our economy. According to the U.S. Small Business Administration, small business employs half of all private sector employees. Over the past decade, small business has produced between 60 and 80 percent of net new jobs. We need small businesses to prosper and lead us out of the economic mess in which we find ourselves.

I track growth in domain names every week. I think it indicates how quickly new small businesses are being formed. After all, what business can you think of today (large or small) that does not have some sort of web site? I can’t think of any. One of the items on any small business start-up checklist today is the web site. Hence, almost all of them register a domain name.

So what’s been happening with growth in domain names? Lately, it’s not too pretty.

[Chart: weekly growth in domain name registrations]

With all the talk lately about stimulating the economy, one of the best ways to do this would be to encourage the formation of new businesses.

Some would argue that we need to fix the credit market mess to help banks be able to lend to small business startups. This couldn’t be further from the truth. How many small businesses do you know that started with a commercial loan from a bank? I cynically say that banks do not want to loan to businesses until the business can survive without need of a bank, and that was true even before the credit crisis. This was certainly true in SoftLayer’s case – when the founders were preparing for launch in late 2005, there wasn’t a bank anywhere that would touch the SoftLayer business plan. What I’m saying is that the credit crisis isn’t that much of a barrier to small business startups. Passionate entrepreneurs will find a way to get going.

But all the passion to start one’s own business doesn’t go very far in the face of the real barriers to starting a business. One of the real barriers that an entrepreneur must overcome is tax issues. Do they begin as a sole proprietor? A partnership? An LLC? An “S” Corp? Should they incorporate? All of them have different tax implications. All of them have to deal with either income taxes at the personal level or corporate level. Some have to deal with self-employment taxes. Others must deal with 941 taxes. Then there are state and local tax issues, such as the margin tax if you’re in Texas. And don’t forget sales taxes and property taxes either.

One of the strategies that allowed the Internet to cement itself in our society during the 1990s was this: just let it develop without taxing it. Without that burden, the Internet took off like wildfire.

Ergo, if we’d like a bunch of new small businesses to get going, let’s ease up on the tax burden on new startups. This would cost the government hardly any money at all. Think about it – businesses that don’t yet exist do not pay any taxes. Workers that are not yet employed do not pay any taxes. Currently unemployed workers do not pay income taxes, except for a pittance on unemployment benefits. So allowing new businesses to form and employ workers and transact business “tax-free” for a defined start-up period would produce an EXPLOSION of small business startups.

How long should this tax-free period be? Per the SBA, if a new business survives 4 years, it has a great shot at surviving long term. So why not give all new business startups a tax holiday for four years as they establish themselves? Can you imagine how big the tax base would grow as these healthy, strong 4-year-old businesses begin paying taxes?

It seems that the biggest issue facing our new President and his administration is how to pay for all the things they’d like to do. Let me suggest that expanding the tax base is the best way to grow government revenues, as opposed to increasing the rates on the current tax base. Allowing a flood of new businesses to take root and grow our tax base may be the best way to fund our growing public budgets.

Naturally, SoftLayer would be more than happy to assist these new businesses with our enterprise-class data center outsourcing services so that they can focus on their business plan – not their IT overhead.

March 26, 2009

Use Caution when Outsourcing!

Outsource IT! I have been saying that for years now. But now I say: outsourcer beware! Really? How do you know if the company you are calling upon to keep your business up and running is safe and sound? Do they have certifications? Are they registered with the Better Business Bureau? Do they have scary fine print in the Terms of Service or User Agreement? Do you actually read those and understand them? How do you find out the answers to all the questions above? Do you go to trade shows? Do you read about companies on the hosting forum sites? Do you hear it from your friends? There are lots of ways to get that kind of information in today’s social internet jungle. Do you follow the company on Facebook, Twitter, MySpace, LinkedIn, or all of the above? Should you? So many questions…

I am going to assume that you think this blog is going to be about how SoftLayer is a reputable, PCI-compliant, SAS 70 certified datacenter with competent and caring employees who can put themselves in the customer’s shoes and understand the frustrations that can go along with outsourcing your datacenter needs. Nah, that would be too easy and not very much fun.

This blog is about mud. Yes, I said mud. I was driving down a county road in Texas recently, and we had had a bit of rain in the days leading up to my trip. If you aren’t from Texas, you need a quick definition of “county road”: a county road can be paved, gravel, or dirt topped, and it can be a great road or a horrible one; it just depends on the county it is in, the tax base, and the abilities of the crews hired by the county to maintain it. I was travelling down a very wet gravel-top county road, following along on my cell with GPS and Google Maps, about a mile from my destination. In what seemed the blink of an eye, the road surface went from wet gravel to dirt, and within about 10 feet my truck simply slid off the road into a nice 4-foot ditch filled with rain water. Looks harmless in the picture below, doesn’t it?

[Photo: the truck in the ditch]

It was a nice soft splash landing, but my city-slicker tires had no chance of getting me out of that ditch, even with 4X4 engaged. So when water started coming under the door into the cab of the truck, I knew it was going to be a bad hour or so. It was time to outsource. I called the ranch to see if they had anything that could pull me out, but they said I was in a pretty tough spot and didn’t think they could help. So I did what any techie would do: I googled mud towing in the closest town. Of course I picked the first place on the list and gave them a call. They said they had a mud recovery truck and would be out in about 45 minutes. Awesome, just 45 minutes! This was at 4:30PM, it was pretty cold and still raining, and the ditch was filling up even further with water. Outsourcer beware: I was expecting a “Mud Recovery Truck!” I had visions of monster trucks dancing in my head. Fail!

[Photo: the mud recovery truck]

Now I have to say that there weren’t ten forums about mud towing in Navarro County that I could visit, or customer references readily available, so I just had to take a leap of faith and trust in the skills of my saviors. And I have to give credit where credit is due: that truck really is a monster! It did things a Transformer would love to be able to do. It got stuck at least 30 times in the 5 hours it took them to get me out of the ditch. Yes, I said 5 hours. Did I mention that monster trucks can do very bad things to city 4X4s? Thank goodness I have an Echo to drive back and forth to work.

So I don’t want to leave you hanging, but my truck is in the shop now and I am still waiting on an estimate. Things I know are wrong: front right A-arm damage from forcibly pulling the truck over a stump in the ditch; alignment issues; check engine light on; cruise control doesn’t work anymore; passenger side back door pushed up about half an inch, including damage at the bottom from the same stump; muffler caved in and exhaust pipe dragging the ground; front bumper air dam ripped off and metal bumper bent outward (yea, you guessed it, the pesky stump again); and last but not least, I need an entire new jack assembly, because it is either broken or lost in the mud, or both (the result of attempting to jack the truck over the stump).

The moral of this blog: if you have the tools available to research the company you are going to outsource to, and they have references, be sure to use them. They might save you a $300 mud recovery bill and a $1000 deductible somewhere down the road.

March 23, 2009

Naked Servers

So, Fat Tuesday rolls around and all of the datacenter employees are treated to King cake. Little do I know that we are starting a new tradition up here at SoftLayer: blogs for the baby.

So, the day goes by and everything is normal when I decide to break myself off a chunk of this King cake… Eat a little, work a little, eat a little, work a little and BAM! There is this baby just hiding in my next bite. At that point I have to just jump up and proclaim my victory! YES! I’m the lucky one that found the baby! But wait, why is everyone asking me about some unknown blog? What is this that they speak of? Some sort of mixture of baby and log? That just does not sound right. This whole thing is too confusing for a guy that just buries his head in a server all day long.

So here I am, just wondering: what do I, of all people, write in a blog? I do not read blogs, follow micro-blogs, or even really participate in these newfangled social World Wide Web sites. Sure, I am on the mighty book of faces, but not really to do anything other than have an account after everyone bugged me for months. Am I just that out of date with this internet fad (which I think may just catch on)?

Oh well, I go on with my day of wondering and bewilderment. I work a few more tickets, make some customers happy, and overhear my boss talking about finally getting cable. Wow, I thought that I was out of date, but, just, WOW. He is complaining that you have to get everything as a bundle; no hand-picking the channels that you want. He doesn’t need the extra 900 channels of fluff. I’ve heard this story so many times, but then I begin to wonder the same thing.

Most industries have either catered to their customers’ exact needs or are moving in that direction, but not cable. What other industries force you to bundle? With most hosting providers, in order to get any type of deal, you must order your servers as part of some special bundle. If you want a cost-efficient server to host your home business on, you have to get the whole kit and caboodle – but not here at SoftLayer! I know that we have many customers that use other Managed Hosting providers that bundle their services with the cost of the server, which on the surface appears to be a great deal; but once they are up and going, they do not need any further off-site management of the server. The server is up and customized, and should anybody aside from their admin log in to it, it could cause major issues with their “optimization” techniques. These same people will often get a server with us for their dev work and find that we provide hardware on par with or better than their Managed Hosting provider, but we are not going to bundle in some “Management” fee.

We also offer iSCSI, NAS, EVault, CDN, Transcoding, Gigabit, Bandwidth Pooling, Vulnerability Assessment, PCI Scans, IPS (NIPS and HIPS), AntiVirus, Virtualization, OOB IPMI (SOL and KVM), an API for EVERY method in our Portal (oh, what some of our customers have been able to accomplish), Automated OS Reloads, Anycast DNS, Hardware Firewalls, Load Balancing, Global Load Balancing, Tipping Point, FREE Cross Connects, private VLANs, local mirrors/repositories for operating systems and other software, Customer to Customer Cross Connects... do I need to keep going? We have a ton of new offerings on the horizon!

March 18, 2009

Code Performance Matters Again

With the advent of cloud computing, processing power is coming under the microscope more and more. Last year, you could just buy a 16-core system and be done with it, for the most part. If your code was a little inefficient, the load would run a little high, but there really wasn't a problem. For most developers, it's not like you're writing Digg and need to make sure you can handle a million page requests a day. So what if your site is a little inefficient, right?

Well, think again. Now you're putting your site on "the cloud" that you've heard so much about. On the cloud, each processor cycle costs money. Google AppEngine charges by the CPU core hour, as does Mosso. The more wasted cycles in your code, the more it will cost to run it per operation. If your code uses a custom sorting function, and you went with bubble sort because "it was only 50 milliseconds slower than merge sort and I can't be bothered to write merge sort by hand," then be prepared for the added cost over a month's worth of page requests. Each second of extraneous CPU time at 50,000 page views per day costs 417 HOURS of CPU time per month.
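Don't take the 417 hours on faith; the napkin math is simple enough to script (Python, just to show the arithmetic):

# One wasted CPU second per page view, at the traffic level quoted above.
views_per_day = 50_000
extra_cpu_seconds = 1.0
days_per_month = 30

wasted_hours = views_per_day * extra_cpu_seconds * days_per_month / 3600
print(f"{wasted_hours:.0f} CPU hours wasted per month")  # -> 417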

Big-O notation hasn't really been important for the majority of programmers for the last 10 to 15 years or so. Loop unrolling, extra checks, junk variables floating around in your code, all of that stuff would just average out to "good enough" speeds once the final product was in place. Unless you're working on the Quake engine, any change that would shave off less than 200ms probably isn't worth the time it would take to re-engineer the code. Now, though, you have to think a lot harder about the cost of your inefficient code.

Developers who have been used to having a near-infinite supply of open CPU cycles need to re-think their approach to programming large or complex systems. You've been paying for public bandwidth for a long time, and it's time to think about CPU in the same manner. You have a limited amount of "total CPU" that you can use per month before AppEngine's limits kick in and you begin getting charged for it. If you're using a different host, your bill will simply go up. You need to treat this sort of thing like you would bandwidth. Minimize your access to the CPU just like you'd minimize access to the public internet, and keep your memory profiles low.

The problem with this approach is that the entire programming profession has been moving away from concentrating on individual CPU cycles. Helper classes, template libraries, enormous include files with rarely-used functions; they all contribute to the CPU and memory glut of the modern application. We, as an industry, are going to need to cut back on that. You see some strides toward this with the advent of dynamic include functions and libraries that wait to parse an include file until that object or function is actually used by the execution of the program for the first time. However, that's only the first step. If you're going to be living on the cloud, cutting down on the number of times you access your libraries isn't good enough. You need to cut down on the computational complexities of the libraries themselves. No more complex database queries to find a unique ID before you insert. No more custom hashing functions that take 300 cycles per character. No more rolling your own sorting functions. And certainly no more doing things in code that should be done in a database query.
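To put a number on the "no more rolling your own sorting functions" point, here is a minimal sketch (Python, just for illustration) pitting a hand-rolled bubble sort against the language's built-in sort. The exact timings depend on your hardware; the gap does not:

import random
import time

def bubble_sort(items):
    """The O(n^2) classic: fine for 10 items, brutal for 5,000."""
    items = list(items)
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

data = [random.random() for _ in range(5_000)]

start = time.perf_counter()
bubble_sort(data)
print(f"bubble sort:   {time.perf_counter() - start:.3f}s")

start = time.perf_counter()
sorted(data)  # built-in O(n log n) sort
print(f"built-in sort: {time.perf_counter() - start:.3f}s")

On the cloud, every run of the slow version is a line item on your bill.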

Really good programmers are going to become more valuable than they already are once management realizes that they're paying for CPU cycles, not just "a server." When you can monetize your code efficiency, you'll have that much more leverage with managers and in job interviews. I wouldn't be surprised if, in the near future, an interviewer asked about cost algorithms as an analogy for efficiency. I also wouldn't be surprised if database strategy changed in the face of charging per CPU cycle. We've all (hopefully) been trying for third normal form on our databases, but JOINs take up a lot of CPU cycles. You may see websites in the near future that run off large denormalized tables that are updated every evening.
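Here is a toy sketch of that denormalization idea using SQLite (Python's stdlib; the table and column names are made up for illustration). You pay for the JOIN once, off-peak, then serve cheap flat reads all day:

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INTEGER, title TEXT);
    INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO posts VALUES (10, 1, 'hello'), (11, 2, 'world');

    -- The nightly job: pay for the JOIN once, off-peak...
    CREATE TABLE posts_flat AS
        SELECT p.id, p.title, u.name AS author
        FROM posts p JOIN users u ON u.id = p.user_id;
""")

# ...then every page view during the day is a cheap flat read, no JOIN.
print(db.execute("SELECT title, author FROM posts_flat").fetchall())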

So take advantage of the cloud for your computing needs, but remember that it's an entirely different beast. Code efficiency is more important in these new times. Luckily, "web 2.0" has given us one good tool to decrease our CPU times. AJAX, combined with client-side JavaScript, allows a web developer to generate a web tool where the server does little more than fetch the proper data and return it. Searching, sorting, and paging can all be done on the client side given a well designed application. By moving a lot of the "busy work" to the client, you can save a lot of CPU cycles on the server.
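As a sketch of a server that "does little more than fetch the proper data and return it," here is a bare-bones data-only endpoint (Python's standard library; the records and port are invented for illustration). Searching, sorting, and paging would all live in the client-side JavaScript:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for whatever your real data fetch looks like.
RECORDS = [{"id": i, "name": f"item-{i}"} for i in range(100)]

class DataOnlyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # No server-side searching, sorting, or paging -- just the rows.
        body = json.dumps(RECORDS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), DataOnlyHandler).serve_forever()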

For all those application developers out there who don't have a client to execute some code for you: you're just going to have to learn to write more efficiently, I guess. Sorry.

-Daniel

March 13, 2009

SSD: A Peek Into the Future?

Remember back in the day when Beta and VHS tapes were "it" for movies, until those crazy discs called DVDs came out? Well, some of you may not remember those days *rolls eyes*. Anyway, for me at least, I couldn't imagine there being anything better until LCD TVs and Blu-ray became the norm. Now the question is: what could possibly be better than Blu-ray? The next step could be movies written onto a chip, something like a flash drive. After all, we've seen flash drives go from 64MB to an astounding 64GB, all packed into a small USB keychain, in no time.

You're thinking: alright, that's great and all, but what is SSD? It stands for Solid State Drive, and it could very well be the beginning of the end for disk drives.

Here at SoftLayer we offer the best SSDs you can find on the market: the Intel X25-E (the E stands for Extreme Edition). Unlike the X25-M, which is multi-level cell (MLC), the Extreme Edition is single-level cell (SLC). Usually you'd think that multi would be better than single. For capacity, yes; but performance-wise, SLC adds about 100MB/sec to the write speed. Both, however, have identical 250MB/sec sustained read speeds.

Okay, so how can SSDs help your server?

Where SSDs scream is I/O performance. According to benchmarks done by Tom's Hardware (TH) using IOMeter (database benchmarking), the X25-E pushes over 5,000 I/O operations per second, falling to ~3,000 depending on the queue depth. Comparing this to a 15.5K Seagate Cheetah is like comparing a cassette tape to a CD, as the Cheetah maxes out at ~500 I/O operations per second. TH ran several other benchmarks, including File Server, Web Server, and Workstation tests, yielding similar results and giving the X25-E a lead of about tenfold over the Cheetah. Access times are also no competition at 0.1ms vs. 5.7ms. The TH review is definitely worth the read if you are looking into such an upgrade and can be found at: http://www.tomshardware.com/reviews/intel-x25-e-ssd,2158.html

Going Green?

An SSD consumes only a fraction of the power that an HDD uses. According to Intel, the X25 uses only 0.06W at idle and peaks at 2.4W under load. Your typical hard disk drive uses on average ~7.5W at idle and ~10W under load.
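To put a ballpark on it, here is a quick script (the 50/50 idle/load split is my own assumption, purely for illustration):

# Rough annual power savings per drive, using the figures quoted above.
hdd_watts = (7.5 + 10) / 2     # typical HDD, averaging idle and load
ssd_watts = (0.06 + 2.4) / 2   # Intel X25, averaging idle and load
hours_per_year = 24 * 365

saved_kwh = (hdd_watts - ssd_watts) * hours_per_year / 1000
print(f"~{saved_kwh:.0f} kWh saved per drive per year")  # roughly 66 kWh

Multiply that across a datacenter full of drives, plus the cooling they no longer need, and "going green" starts to look like real money.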

As well as using less power, having no moving parts in turn means higher reliability and less noise. Less noise means we techs won't have problems hearing you on the phone over the ringing in our ears from working in the datacenter *lol*.

Taking SSD to the extreme …

Samsung recently released a video in which they strung together 24 Samsung 256GB MLC SSDs, resulting in 6TB of storage and 2GB/sec of throughput. For your viewing pleasure: http://www.youtube.com/watch?v=96dWOEa4Djs&fmt=22

Who knows what the future has in store for us; in ten years we may look at this video the way we look at this one: http://www.youtube.com/watch?v=TZb0avfQme8

One thing is for certain: quieter, faster, more reliable, and more energy-efficient datacenters.

March 11, 2009

Social Networking and Customer Relationships

Social networking websites are all the rage today. You are able to keep up with old friends, colleagues, and now even business contacts. I believe this is truly the coolest part about these types of sites. While it is fun to constantly know what your first boyfriend's sister's cousin is up to at any given time, it is also nice to be able to make a business relationship a little more personal.

I started working in the web hosting industry back in 2003. While there were social networking websites out there, they weren't a huge thing, and I certainly was not a member of one at the time. Therefore, my only way of communicating with my customers was via the regular methods: phone, chat, and email. I think I was successful at my job and did a good job of keeping up with customers. But now, I am friends with clients through everything from Facebook to LinkedIn to Twitter. LinkedIn is great because you are able to add referrals and comments on service, and really help expand one another's business. It is also hilarious to get to know the more personal side of my clients by looking at silly pictures of them and seeing their comments on Facebook. It definitely lightens the atmosphere and makes it even more enjoyable to work with one another when conducting business.

As you can see on SoftLayer's website, you can network with us as a company through Facebook, Twitter, LinkedIn, drop.io, and GitHub. So please come check us out for all of the latest updates on product offerings, improvements, and advancements that SoftLayer has to offer you! Oh, and I would be happy to be your personal friend too!

March 9, 2009

Spindle Theory

Spindle: the axis to which hard drive platters are attached.

I'm going to start this blog off by making a statement: 250 + 250 > 500. Some of you already know where this is going, and you're sitting there thinking 'well, duh'. Other readers probably think I've just failed basic math. If you're in the first group, scan through and see if I bring up something you haven't thought of. If you're in the second group, this article could be useful to you.

Let us suppose you are running a medium-popularity website. You have a forum, a database, and you serve a bunch of video or image data. You knew that for content plus backups you'd need about 500GB, so you have two hard drives, 2 x 250 GB. Content is on drive zero, and drive one is used for holding backups (you are routinely backing up, aren't you?). Being a good and noble system administrator, you've kept an eye on your resource usage over time, and lately you've noticed that your load average is going up.

Say you were to run a 'top' and get back something like this:

[Screenshot: top output]

First a couple of caveats:

1) For the astute in the class... yes, I've made up the numbers above. I don't have a distressed machine handy, but the made-up numbers come from what I've seen in actual cases.

2) A load of 15 may not be bad for your machine and workload. The point here is that if your load is normally 5 and lately it's been more like 15, then something is going on. It is all about knowing what is normal for your particular system.

So, what is top saying? It's saying that on average you've got 14 or 15 things going on and wanting to run. You'll notice from the swap line that the machine isn't particularly hitting swap space, so you're probably not having a RAM issue. Let's look closer at the CPU line.

Cpu(s): 10.3% us, 5.7% sy, 0.0% ni, 3.7% id, 80.3% wa, 0.0% hi, 0.0% si

10% user time, 5% system time... doesn't seem so bad; there's even a little idle time. But wait, what do we have here... 80% wa? Who is wa, and why is he hogging all the time? wa% is the percentage of time your system is spending waiting on an I/O request to finish. Frequently this is time spent waiting on your hard drive to deliver data. If the processor can't work on something right now (say, because it needs some data), that thing goes on the run stack. What you can end up with is a bunch of processes on the run stack because the CPU is waiting on the hard drive to cough up some data for each one. The hard drive is doing its best, but sometimes there is just too much going on.
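If you'd like to watch that wa number without staring at top, here is a small sketch that samples /proc/stat the same way top does (Python; Linux-only, field order per the proc man page):

import time

def cpu_times():
    with open("/proc/stat") as f:
        # First line: "cpu user nice system idle iowait irq softirq ..."
        return [int(x) for x in f.readline().split()[1:]]

before = cpu_times()
time.sleep(5)
after = cpu_times()

deltas = [b - a for a, b in zip(before, after)]
iowait_pct = 100 * deltas[4] / sum(deltas)  # field 5 is iowait
print(f"iowait over the last 5 seconds: {iowait_pct:.1f}%")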

So how do we dig down further? So glad you asked. Our next friend is going to be 'iostat'. The iostat command will show you which hard drive partitions are spending the most time processing requests. Normally when I run it, I'm not concerned with the 'right then' results or even the raw numbers... rather, I'm looking for the ongoing trend, so I'll run it as 'iostat 5', which tells it to re-poll every 5 seconds.

[Screenshot: iostat output]

(again I've had to fudge the numbers)

So if you were running 'iostat 5' and over time you were seeing the above formation, what this tells you is that disk0 (hda in the chart) is doing all the work. This makes sense... previously we discussed that your setup is disk0 for content and disk1 for backups. What we know so far is that disk0 is spinning its little heart out servicing requests while disk1 is sitting back at the beach, just watching and giggling. This is hardly fair.

How do you right this injustice? Well, that depends on your workloads and whether you want to (or can) go local or remote. The general idea is going to be to re-arrange your storage to spread the workload across multiple spindles, and you could see a nice gain in performance (make sure a thing and its backup are NOT on the same disk, though). The exact best ideas for your situation are going to be a case-by-case thing. If you have multiple busy websites, you might just start with putting the busiest one on disk1 and see how it goes. If you have only one really active site, then you have to look at how that workload is structured. Are there multiple parts of that site which could be spread between disks? The whole point of all this is that you want to eliminate bottlenecks on performance. A single hard drive can be just such a bottleneck.

If the workload is largely writes (most of the time files being created/uploaded), then you're looking at local solutions, such as making better use of the existing drives in the system or adding new drives. Depending on what you are doing, it might be possible to jump up to faster drives as well, say 15,000 RPM drives. I should mention RAID0 here. RAID0 is striping without parity: multiple physical drives are presented to the operating system as one single unit. As a file is written, parts of it end up on different drives, so you have multiple spindles going for each request. This can be wickedly fast. HOWEVER... it is also dangerous, because if you lose one drive you potentially lose the entire volume. Make no mistake, hard drives will fail, and they'll find the most irritating time to do it. If you want the speed of RAID0 but cannot afford the downtime when a drive fails and takes the whole volume with it, then you might look at RAID10, which is a RAID0 mirrored against another RAID0. This provides some fault tolerance against a failed drive.

If the workload is mostly reads, say you host images or videos, then you could do multiple drives in the system, or go with the "Big Dog" of adding spindles for read-only workloads: something like our CDNLayer product, where a global network of caches stores the image files and people get them from their nearby cache. This takes quite a bit of the I/O load off your system, and (subject to network routing goofiness) your visitors could be getting your content from a box that is hundreds or thousands of miles closer to them.

So, with apologies to my math teacher friend, there are times when 250 + 250 > 500 can be true. Until next time, have a good day and respect your spindles!

March 4, 2009

Web Site Optimization

There are many techniques to speed up web page load time, and this Yahoo Developer Network article sums up most of them. If you own a web site with thousands of daily visitors, some of these tweaks will help you provide a better experience on your web site.

I’ve been working on CDNLayer development since I started at SoftLayer. One day I wondered how much of a performance increase some of these techniques could bring. I chose to implement the techniques below.

  1. Using a CDN
  2. Combining multiple JS and CSS files into a single JS and a single CSS file
  3. Compressing JS and CSS
  4. Serving files from 2 different domains

I chose them because they are easy to implement, and CDN service has become very affordable nowadays. I copied the index page of SoftLayer.com and took 5 different steps to optimize the page. To make the page a bit larger, I added a JS and a CSS file to the index page, so the total file size was about 980 kilobytes.

  1. Step #1: “HTML + 2 JS + 2 CSS + images” served from my server
  2. Step #2: “HTML + 2 JS + 2 CSS + images” served from CDN
  3. Step #3: “HTML + 1 combined JS + 1 combined CSS + images” served from CDN
  4. Step #4: “HTML + 1 combined/compressed JS + 1 combined/compressed CSS + images” served from CDN
  5. Step #5: “HTML + 1 combined/compressed JS + 1 combined/compressed CSS” served from a single CDN host A + “images” from 2 different CDN hosts A and B

The page loads within 2 seconds in real life, so I disabled both the disk and memory cache in Firefox to exaggerate the result. I requested each version 10 times; here is the average page load time in seconds.

  1. Step #1: 11.976
  2. Step #2: 9.602
  3. Step #3: 9.626
  4. Step #4: 9.123
  5. Step #5: 8.72

First, my test site is not on a SoftLayer server; it is located somewhere in Pennsylvania. Second, using the CDN, which serves the files from the Dallas POP, gave me a good 2-second decrease. Third, combining the JS and CSS files did not give me any benefit. It was only 2 fewer trips to the server anyway, and I’m on a high-speed Internet connection, so I guessed this would not make much of a difference in my case. However, if I were on dial-up, fewer trips to the server, even if only 2 fewer, would help the page load time. Fourth, gzip compression reduced the content size and shortened the page load by 0.5 seconds. That doesn’t seem like a big benefit as far as page load time is concerned, but keep in mind that the compression decreased the page size by more than 100 kilobytes; if you have a large number of visitors, it will save you lots of bandwidth. Finally, serving files from 2 different domains can cut a significant amount of page load time. This is due to a limit within the browser itself: most browsers will download only 2 files from a given domain at a time, regardless of how fast your Internet connection is. So if you serve your files from multiple domains or subdomains, your visitors will be able to download more files simultaneously. If a visitor is using a high-speed connection, this trick will speed up the page load significantly.
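If you want to see what gzip would buy you before wiring it into your web server, here is a quick sketch (Python; the bundle file name is hypothetical):

import gzip
from pathlib import Path

path = Path("combined.js")  # your combined JS or CSS bundle
data = path.read_bytes()
compressed = gzip.compress(data)

print(f"original: {len(data) / 1024:.0f} KB")
print(f"gzipped:  {len(compressed) / 1024:.0f} KB")
print(f"saved:    {100 * (1 - len(compressed) / len(data)):.0f}%")

Text-heavy files like JS and CSS routinely shrink to a fraction of their size, which is where the 100-kilobyte reduction above came from.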

I hope some of these techniques can help your sites get prepared for a large number of visitors. Web site optimization techniques will not only reduce page load time but also extend your web site’s capacity. Who knows, your web site may get dugg tomorrow!

March 2, 2009

Start Your Own Stimulus Plan 2009

In case you have been living under a rock, a stimulus plan was passed a few weeks ago here in the United States. There have been numerous blogs, news articles, and videos posted on why it will and will not work. If you want to know more about it, you can get additional information from the government site http://www.recovery.gov. You may be relieved to know I am not here to get into some big rant on the merits and flaws of the plan.

What I do want is to voice my opinion on the whole issue and what you can do about it. I think this is a very pivotal time for the entire world, and sitting around waiting on the government is not in anyone’s best interest. At SoftLayer, we began referring customers to server management companies via our forums a few months after we opened. Since this has helped numerous companies listed in the forums, I think it is time to branch out and expand on it. I found a website not associated with SoftLayer called “Make a Referral Week”. I think it is best described on their website:

“As the talk of recession crowds the news and economic stimulus package debates rage in Washington DC, it’s time for small businesses to take the matter into their own hands. Therefore we hereby declare March 9-13, 2009 - Make a Referral Week.”

“Make a Referral Week is an entrepreneurial approach to stimulating the small business economy one referred business at a time. The goal for the week is to generate 1000 referred leads to 1000 deserving small businesses in an effort to highlight the impact of a simple action that could blossom into millions of dollars in new business. Small business is the lifeblood and job-creating engine of the economy and merits the positive attention so often saved for corporate bailout stories.”

I want to encourage you to participate in the event, and if you stumble across this post after the event, go ahead and try it out for a week anyway. It’s a great way for you to help fellow business owners, small and large, and get involved in making a difference.

Click here for more information and to make a difference:

[Link: Make a Referral Week]

