Posts Tagged 'Performance'

December 2, 2010

Once a Bug Killer, Now a Killer Setup

Not everyone enjoys or has the chance to take what they learn at work and apply it at home, but I consider myself lucky because the things I learn on the job often prove useful for my hobbies. As an electronics and PC gaming fanatic, I always enjoy tips that squeeze more performance out of my equipment. Common among PC gaming enthusiasts is the obsession with making their rig excel in every aspect by upgrading the video card, RAM, processor, and so on. Before working at SoftLayer, I had only ever considered buying better hardware to improve performance and never really looked into the advantages of different ways of setting up a computer.

This new area of exploration started shortly after my first days at SoftLayer, when I was introduced to RAID (Redundant Array of Inexpensive Disks) on our servers. I had heard the term mentioned in the past but never had any idea what it entailed; I was only familiar with our good ol' bug-killer brand, Raid. You can imagine my excitement as I learned more about its intricacies and how the different RAID levels could benefit my computer's performance.

Armed with this new knowledge, I was determined to reconfigure my gaming PC at home to reap the benefits. After looking at the different RAID levels, I decided to go with RAID 0 because I did not want to sacrifice storage space, and my data was not critical enough to need the mirroring that RAID 1 provides.

One thing led to another, and I spent a good amount of time benchmarking drive performance in my old setup versus my new one. In the end, I was happy to report a significant performance gain in what I now refer to as my “killer setup.” Applications launched noticeably faster, and even in games that store their cinematic videos locally on the hard drive, the cutscenes came up quicker than before.
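For the curious, here is a rough sketch of the kind of quick-and-dirty sequential-write test I ran, written as a small Node.js/TypeScript script. The file path and transfer size are placeholders, and a dedicated tool such as CrystalDiskMark or fio will give far more rigorous numbers.

```typescript
// Quick-and-dirty sequential write throughput test (a sketch, not a rigorous benchmark).
// Point `testFile` at the drive (or RAID array) you want to measure.
import { writeFileSync, unlinkSync } from "node:fs";

const testFile = "./raid-bench.tmp";           // placeholder path on the drive under test
const data = Buffer.alloc(256 * 1024 * 1024);  // 256 MiB of zeroes

const start = Date.now();
writeFileSync(testFile, data);                 // synchronous write of the whole buffer
const seconds = (Date.now() - start) / 1000;

unlinkSync(testFile);                          // clean up the temp file

console.log(`sequential write: ${(256 / seconds).toFixed(1)} MiB/s`);
// Note: OS write caching can inflate this figure; averaging several runs with
// larger files gets closer to the drive's real sustained throughput.
```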

To add to the hype, a coworker was also building a new computer in anticipation of a new game called Final Fantasy XIV, and it quickly turned into a competition to outdo each other's benchmark scores. I'm already planning ahead for future upgrades, since this time around I only used SATA drives. For my next upgrade I would love to run a RAID 0 with two SSDs to see what kind of boost I'd get.

So for business or pleasure, have you ever considered the benefits of setting up a RAID system?


November 22, 2010

Free is Just a Word for Nothing Left to Lose

Last week, Amazon Web Services unveiled the “AWS Free Usage Tier”. The idea is to encourage customers to experiment with the cloud, hopefully leading to a fee-based relationship sometime in the future. You can read about it here.

Free is always an interesting concept. Everybody loves free: free beer, free music, free love and now free cloud. The question begging to be answered is what, exactly, “free” means when we are talking about an Amazon cloud. In other words, is it an award-winning Cigar City Bourbon Barrel Aged Hunahpu's Imperial Stout or a PBR? There is little doubt that they are offering lots of stuff – storage, load balancing and so on – but it ought to come with a caveat that reads, “If you intend to do anything other than play with this, please think again.” The service offered is clearly not robust enough for much beyond experimentation. A company that plans on presenting an application via the cloud to internal or external customers must simply make other arrangements: limited RAM, no processor guarantees and no service promises make for a poor business decision.

So, is this really a bad offering? No, it's not, as long as everyone keeps a cool head and remembers what it is for: experimentation and education. On those terms it is a good offer. Amazon is effectively helping to seed the marketplace by providing a free platform that encourages a wider audience to dip their toes in the cloud. There is little doubt that some users will transition from this offer to a full-blown, fee-based service with Amazon, because they generally do a good job. The great thing is that as the market educates itself about the cloud, SoftLayer will benefit as well. We are very good at what we do, and it simply makes sense to have a SoftLayer discussion when a company gets serious about the cloud.


March 18, 2009

Code Performance Matters Again

With the advent of cloud computing, processing power is coming under the microscope more and more. Last year, you could just buy a 16-core system and be done with it, for the most part. If your code was a little inefficient, the load would run a little high, but there really wasn't a problem. Most developers aren't writing Digg and don't need to handle a million page requests a day. So what if your site is a little inefficient, right?

Well, think again. Now you're putting your site on "the cloud" that you've heard so much about, and on the cloud, each processor cycle costs money. Google App Engine charges by the CPU core hour, as does Mosso. The more wasted cycles in your code, the more it costs to run per operation. If your code uses a custom sorting function, and you went with bubble sort because "it was only 50 milliseconds slower than merge sort and I can't be bothered to write merge sort by hand," then be prepared for the added cost over a month's worth of page requests: each second of extraneous CPU time at 50,000 page views per day costs 417 HOURS of CPU time per month.
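To make the arithmetic explicit, here is a tiny sketch of that back-of-the-envelope calculation. The per-CPU-hour price is a made-up placeholder for illustration, not an actual App Engine or Mosso rate.

```typescript
// Back-of-the-envelope math behind the "417 hours" figure above.
// Assumptions (not from the original post): a 30-day month, 1 extra CPU-second
// per page view, and a hypothetical $0.10 per CPU-hour rate.

const pageViewsPerDay = 50_000;
const extraCpuSecondsPerView = 1;
const daysPerMonth = 30;

const extraCpuSecondsPerMonth = pageViewsPerDay * extraCpuSecondsPerView * daysPerMonth;
const extraCpuHoursPerMonth = extraCpuSecondsPerMonth / 3600;

console.log(`${extraCpuHoursPerMonth.toFixed(0)} wasted CPU-hours per month`); // ~417

// At the assumed $0.10 per CPU-hour, that's money spent on nothing.
console.log(`$${(extraCpuHoursPerMonth * 0.1).toFixed(2)} per month`);
```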

Big-O notation hasn't really been important for the majority of programmers for the last 10 to 15 years or so. Loop unrolling, extra checks, junk variables floating around in your code: all of that would just average out to "good enough" speeds once the final product was in place. Unless you're working on the Quake engine, any change that would shave off less than 200ms probably isn't worth the time it takes to re-engineer the code. Now, though, you have to think a lot harder about the cost of your inefficient code.

Developers who are used to having a near-infinite supply of open CPU cycles need to rethink their approach to programming large or complex systems. You've been paying for public bandwidth for a long time, and it's time to think about CPU in the same way. On App Engine, you have a limited amount of "total CPU" you can use per month before the limits kick in and you begin getting charged; if you're using a different host, your bill simply goes up. Treat CPU the way you treat bandwidth: minimize your use of it just as you'd minimize traffic to the public internet, and keep your memory profile low.

The problem with this approach is that the programming profession as a whole has been moving away from worrying about individual CPU cycles. Helper classes, template libraries, enormous include files full of rarely used functions: they all contribute to the CPU and memory glut of the modern application. We, as an industry, are going to need to cut back on that. You can see some strides in this direction with lazy include mechanisms and libraries that wait to parse an include file until the program actually uses that object or function for the first time. But that's only the first step. If you're going to live on the cloud, cutting down on the number of times you access your libraries isn't good enough; you need to cut down on the computational complexity of the libraries themselves. No more complex database queries to find a unique ID before you insert. No more custom hashing functions that take 300 cycles per character. No more rolling your own sorting functions. And certainly no more doing things in code that should be done in a database query.
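As a concrete (if contrived) illustration of the "no more rolling your own sorting functions" point, here is a quick TypeScript sketch comparing a hand-rolled bubble sort with the language's built-in sort; the array size is arbitrary.

```typescript
// Hand-rolled bubble sort is O(n^2); the built-in Array.prototype.sort is
// O(n log n) and has had far more optimization work poured into it.

function bubbleSort(input: number[]): number[] {
  const a = [...input];
  for (let i = 0; i < a.length; i++) {
    for (let j = 0; j < a.length - i - 1; j++) {
      if (a[j] > a[j + 1]) {
        [a[j], a[j + 1]] = [a[j + 1], a[j]]; // swap adjacent out-of-order elements
      }
    }
  }
  return a;
}

const data = Array.from({ length: 20_000 }, () => Math.random());

console.time("bubble sort");
bubbleSort(data);
console.timeEnd("bubble sort");

console.time("built-in sort");
[...data].sort((x, y) => x - y);
console.timeEnd("built-in sort");
```

Every one of those wasted comparisons is a cycle you now pay for by the hour.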

Really good programmers are going to become even more valuable once management realizes it is paying for CPU cycles, not just "a server." When you can put a dollar figure on your code's efficiency, you'll have that much more leverage with managers and in job interviews. I wouldn't be surprised if, in the near future, interviewers asked about the dollar cost of an algorithm as a stand-in for its efficiency. I also wouldn't be surprised if database strategy changed in the face of per-CPU-cycle billing. We've all (hopefully) been aiming for third normal form in our databases, but JOINs eat a lot of CPU cycles. You may start seeing websites that run off large denormalized tables rebuilt every evening.
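To make the normalized-versus-denormalized trade-off concrete, here is a toy sketch using in-memory arrays in place of real tables. The record shapes and the nightly-rebuild framing are illustrative assumptions, not a recommendation for any particular schema.

```typescript
// Toy model: "join" on every read vs. a precomputed denormalized copy.

interface User { id: number; name: string }
interface Order { id: number; userId: number; total: number }

const users: User[] = [{ id: 1, name: "Ada" }, { id: 2, name: "Linus" }];
const orders: Order[] = [
  { id: 10, userId: 1, total: 25 },
  { id: 11, userId: 2, total: 40 },
];

// Normalized read: pay the join cost on every request.
function ordersWithNames() {
  return orders.map(o => ({
    ...o,
    userName: users.find(u => u.id === o.userId)?.name ?? "unknown",
  }));
}

// Denormalized read: pay the join cost once (say, in a nightly job),
// then serve cheap lookups for the rest of the day.
const denormalized = ordersWithNames();

console.log(ordersWithNames()); // CPU spent on every request
console.log(denormalized);      // CPU spent once; reads are nearly free
```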

So take advantage of the cloud for your computing needs, but remember that it's an entirely different beast. Code efficiency matters more in these new times. Luckily, "web 2.0" has given us one good tool for cutting CPU time: AJAX, combined with client-side JavaScript, lets a web developer build an application where the server does little more than fetch the proper data and return it. Searching, sorting, and paging can all be done on the client side in a well-designed application. By moving a lot of the busy work to the client, you can save a lot of CPU cycles on the server.
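Here is a minimal sketch of that idea in TypeScript: the server endpoint only returns raw JSON, and the searching, sorting, and paging all happen in the browser. The `/api/products` URL and the `Product` shape are hypothetical.

```typescript
// The server just fetches and returns data; the client's CPU does the busy work.

interface Product { name: string; price: number }

async function loadProducts(): Promise<Product[]> {
  const response = await fetch("/api/products"); // hypothetical read-only endpoint
  return response.json();
}

function searchSortPage(items: Product[], term: string, page: number, pageSize = 10): Product[] {
  return items
    .filter(p => p.name.toLowerCase().includes(term.toLowerCase())) // search: client CPU
    .sort((a, b) => a.price - b.price)                              // sort: client CPU
    .slice(page * pageSize, (page + 1) * pageSize);                 // paging: client CPU
}

loadProducts().then(products => {
  console.table(searchSortPage(products, "widget", 0));
});
```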

As for all the application developers out there who don't have a client to execute code for them: you're just going to have to learn to write more efficiently, I guess. Sorry.

