Posts Tagged 'Cloud Computing'

February 1, 2012

Flex Images: Blur the Line Between Cloud and Dedicated

Our customers are not concerned with technology for technology's sake. Information technology should serve a purpose; it should function as an integral means to a desired end. Understandably, our customers are focused, first and foremost, on their application architecture and infrastructure. They want, and need, the freedom and flexibility to design their applications to their specifications.

Many companies leverage the cloud to take advantage of core features that enable robust, agile architectures. Elasticity (ability to quickly increase or decrease compute capacity) and flexibility (choice such as cores, memory and storage) combine to provide solutions that scale to meet the demands of modern applications.

Another widely used feature of cloud computing is image-based provisioning. Rapid provisioning of cloud resources is accomplished, in part, through the use of images. Imaging capability extends beyond the use of base images, allowing users to create customized images that preserve their software installs and configurations. The images persist in an image library, allowing users to launch new cloud instances based on their images.

But why should images only be applicable to virtualized cloud resources?

With that question in mind, we're excited to introduce SoftLayer Flex Images, a new capability that allows us to capture images of physical and virtual servers, store them all in one library, and rapidly deploy those images on either platform.

SoftLayer Flex Images

Physical servers now share the core features of virtual servers—elasticity and flexibility. With Flex Images, you can move seamlessly between physical and virtual environments as your needs change.

Let's say you're running into resource limits in a cloud server environment—your data-intensive server is I/O bound—and you want to move the instance to a more powerful dedicated server. Using Flex Images, you can create an image of your cloud server and, extending our I/O bound example, deploy it to a custom dedicated server with SSD drives.

Conversely, a dedicated environment can be quickly replicated on multiple cloud instances if you want the scaling capability of the cloud to meet increased demand. Maybe your web heads run on dedicated servers, but you're starting to see periods of usage that stress your servers. Create a Flex Image from your dedicated server and use it to deploy cloud instances to meet demand.
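Flex Images are available via SoftLayer's management portal and API, but the post doesn't show the actual calls, so here is a toy Python model of the workflow just described. Every class and method name here is illustrative only; none of it is the SoftLayer API.

```python
# Toy model of the Flex Images workflow: one library, either platform.
# Class and method names are illustrative, NOT the SoftLayer API.
class ImageLibrary:
    def __init__(self):
        self._images = {}  # one unified library for all captured images

    def capture(self, name, source_platform, payload):
        # Capture an image from either a "dedicated" or a "cloud" server.
        self._images[name] = {"from": source_platform, "payload": payload}

    def deploy(self, name, target_platform):
        # Deploy any stored image to either platform interchangeably.
        image = self._images[name]
        return {"platform": target_platform, "payload": image["payload"]}

lib = ImageLibrary()
# Capture the I/O bound cloud server from the example above...
lib.capture("web-head", "cloud", "configured LAMP stack")
# ...and redeploy it onto dedicated hardware with SSD drives.
server = lib.deploy("web-head", "dedicated")
print(server["platform"])  # → dedicated
```

The point of the sketch is the symmetry: capture and deploy take a platform as a parameter rather than living in separate virtual-only and physical-only code paths.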

Flex Image technology blurs the distinctions—and breaks down the walls—between virtual and physical computing environments.

We don't think of Flex Images as a new product. Instead—like our network, our portal, our automated platform, and our globe-spanning geographic diversity—Flex Image capability is a free resource for our customers (aside from the nominal standard cost of storing the Flex Images).

We think Flex Images not only represents great value but also provides a further example of how SoftLayer continually innovates to bring new capabilities and the highest possible level of customer control to our automated services platform.

To sum up, here are some of the key features and benefits of SoftLayer Flex Images:

  • Universal images that can be used interchangeably on dedicated or cloud systems
  • Unified image library for archiving, managing, sharing, and publishing images
  • Greater flexibility and higher scalability
  • Rapid provisioning of new dedicated and cloud environments
  • Available via SoftLayer's management portal and API

Flex Images are available now in public beta. We invite you to try them out, and, as always, we want to hear what you think.

-Marc

November 21, 2011

SLaying at Cloud Expo West 2011

A month ago, Summer talked about how SoftLayer defies the laws of physics by being in several different places at the same time. With a worldwide network and data center footprint, that's always going to be the case, but when we have several events going on in a given week, we're even more dispersed. As Summer mentioned in her Server Challenge blog this morning, she traveled east to New York City for ad:tech with a few SLayers, and I joined a team that headed west for Cloud Expo West in Santa Clara, California.

We set up shop on the expo floor and had the opportunity to meet with interesting and interested attendees between sessions. In addition to our exhibit hall presence, SoftLayer had three SLayers featured in presentations, and the response to each was phenomenal.

Our first presenter was none other than SoftLayer CTO Duke Skarda. His presentation, "Not Your Grandpa's Cloud," was about dedicated servers and whether cloud computing may be surpassing that "grandpa" of the hosting industry. Joined by RightScale CEO Michael Crandell, Duke also announced SoftLayer's new relationship with RightScale. If you didn't have a chance to join us, we have a treat for you ... You can download Duke's presentation from Sys-con!

Five minutes after Duke left the stage, SoftLayer Director of Product Innovation Marc Jones spoke to Cloud Expo attendees about "Building at Internet Scale in a Hosted Environment." His focus was on how businesses can use enabling technologies to design and architect Internet-scale solutions in a hosted environment. He shared trends from SoftLayer customers and partners, explained what SoftLayer believes Internet scale means from a technology perspective, and highlighted the products and services in the market that create a scalable solution.

On Day 3, SoftLayer Director of Corporate Analytics Francisco Romero presented a question to attendees: "How Smart is it to Build Your Own Cloud?" Given concerns around security, hardware, software and flexibility, is a business better off going with a hosted solution than building its own cloud infrastructure? Spoiler alert: He showed how the hosted environment was head and shoulders above the in-house environment in most cases.

All in all, Cloud Expo West was an exemplary tradeshow for SoftLayer ... Three fantastic speakers in two days driving traffic to our booth where we could share how SoftLayer has built our cloud and how our approach is part of a bigger effort to drive innovation in the world of hosting.

As Summer mentioned in her post, we want to see your smiling faces at our booths and in our presentations in the future, so bookmark the SoftLayer Event Calendar and start planning your trips to meet us in 2012!

-Natalie

September 30, 2010

What is a Cloud?

What is a Cloud? This seems like a simple question that would have a simple answer. If you ask it among your “techie” friends, you will find similar yet different definitions of what cloud computing actually is. I can say this because it recently happened to me, and it turned out to be a very interesting conversation. There is no single industry-accepted definition yet, so here is my take on what cloud computing is.

Cloud computing is accessing IT resources that are owned and operated by a third-party provider, such as SoftLayer, in one or more data centers. Cloud services feature on-demand provisioning (as fast as five minutes at SoftLayer) and pay-as-you-go billing with minimal upfront investment. It is a great way to deliver cost-effective computing power over the Internet: it minimizes capital expense and ties operating expense to actual use. That said, I do think many cloud offerings are really no more than common managed hosting being marketed as clouds.

Cloud services can be categorized into different models, such as Software as a Service (SaaS) and Infrastructure as a Service (IaaS). There are also two deployment models. A public cloud is a “multi-tenant” environment: the physical servers are shared among multiple customers of the cloud. The other deployment model is the private cloud, in which only one customer utilizes the physical server or servers.

That is my definition of “what is a cloud.” A wise man once told me that cloud computing is really nothing more than another pricing model and delivery model.

August 3, 2010

How Clouds Killed The PC

Most days, it seems that technology progresses all too slowly. It is a different feeling when you work with cutting-edge technology on a daily basis: deploying the first dual-network datacenter infrastructure, being entrenched in solutions for everything from CDN to iSCSI to DTS and more, testing the latest enterprise solutions from leading industry vendors long before money could buy them… it never really meant a whole lot to me; it was very much just, “How we roll,” as the gang would say.

But every so often, there is a day when a new technology catches my attention and reminds me why I got involved in the IT industry. Something that reminds me of the days spent tapping out QuickBasic 2.0 applications on my 18MHz 386 with its 16-color EGA monitor. Surprisingly, the rise of cloud computing did just that. There was a day, which some still remember, when the cost of localized hardware was significant enough that terminals ruled the world. Occasionally, you may still see one at a grocery checkout stand or being used in a retail stockroom to check inventory across locations. Early terminals were commonly thin clients, lacking a processor and non-volatile user storage and possessing only enough memory to display what was on the screen at any given time. As the cost of memory declined, fat clients gained some popularity, offering locally programmable memory. However, the concept was still the same: one host machine, usually a mainframe, serving applications over a distance to multiple (less capable) client machines.

Terminals were not destined to last, though. In a twist of irony, one of the innovations they helped to inspire, the microprocessor, combined with the falling price and increasing capacity of memory, eventually led to the decline of terminals. Left behind, in a cloud of dust, by hardware manufacturers’ race for speed and capacity, combined with advances in networking technology, the terminal became a historical relic, looked upon as a necessary stop-gap solution used in the days when hardware was just too darn expensive. It was at that time that the truly personal computer we know and love was born, and it has reigned supreme ever since. Then came the ARPANET, which gave way to the Information Super Highway, which gave way to the World Wide Web, which gave way to the internet we know today.

Mainframes gave way to servers. And today, I walk into a datacenter surrounded by servers boasting quad octo-core processors and Cloud Computing Instances, talking to customers who use their smartphones to remotely access their web hosts, and quietly thinking to myself, “Have things really changed?” How far off is the day when the benefits of remotely hosted applications outweigh the benefits of localized hardware? We sit at the start of a new era in which CCIs can be created in minutes, regularly imaged for data security, migrated and restored quickly in the event of hardware failure, and accessed from anywhere on a variety of client hardware and software; how much more would it take for us to return to the days of terminal PCs? As bandwidth continues to improve, purchase and operational costs per processing core continue to fall, people demand more and more ‘anywhere access,’ open source gains popularity, and the idea of renting freely upgraded applications becomes accepted outside of the IT community, who knows what the future might hold? In a future where the concept of parallel uplinks is no more foreign than parallel data transfer over CAT6 is to the layman today, I wonder if personal computers will be thought of as the necessary stop-gap solution used while we waited for bandwidth to catch up to usable processing power; nothing more than a dinosaur that gave way to the green movement and our need to be connected everywhere.

While I work on bringing my head out of the clouds, I remember why I am here. I am not here because technology’s past was all that fantastic, or because the present is all that glamorous, but because the future is still wide open. Whether or not clouds ever really kill the PC is anyone’s guess, and only time will tell. However, one thing is known: as companies continue to see the benefit of having their staff conduct business through a web-portal interface, as consumers continue trying to figure out what to do with the extra two or three of the four cores they have, and as the cost-to-performance ratio associated with remote resources continues to fall, we are steadily moving that way.

June 22, 2010

Fajitas, Chicken Wings, and Cloud Computing

Three of Lance Crosby’s favorite things are fajitas, chicken wings, and cloud computing. Believe it or not, there is a common thread between all three. See if you can figure it out.

First, let’s consider fajitas. What are they? Well, the only true fajita is beef outside skirt steak. Everything else is just grilled meat that you stuff in a tortilla. For many years, the outside skirt steak was a “throwaway” cut often given to vaqueros as part of their pay <http://en.wikipedia.org/wiki/Fajita> . I know a man who grew up in a family of migrant farm workers, and in his youth they would visit slaughterhouses to ask for free throwaway cuts. They often got fajitas.

Back in the ‘80s, the retail price of fajitas skyrocketed. Tex-mex restaurants suddenly made that cut of meat popular. Then, in 1988, a treaty with Japan allowed the Japanese to import American outside skirt steak without the usual 200% tariff. Thus, 90% of our outside skirt steak winds up in Japan. Bottom line, a previously unutilized throwaway cut of meat became a gold mine and boosted the utilization of a side of beef. Consequently, when you order fajitas today, you usually get some sort of substitute beef <http://www.dallasobserver.com/2009-06-18/restaurants/so-what-exactly-are-you-eating-when-you-order-fajitas-in-a-tex-mex-restaurant/1> , not true outside skirt steak.

Next, think about the lowly chicken wing. I just saw an ad for a local chicken wing place offering their “boneless” chicken wings for a special low price. These aren’t really wings. They are pure white tender boneless chicken breast strips – what you would think is the premium cut of a chicken. The fine print on the ad says that bone-in wings may NOT be substituted for this promotion. Huh? You can’t sub a worse cut of meat that’s mostly bone for a premium cut that’s all meat and no bone?

As it turns out, the demand for the formerly throwaway cut of chicken wings has driven up their price such that boneless breast strips yield a higher profit margin <http://www.abc3340.com/news/stories/0310/711570.html> than the bony wings. Once again, a formerly thrown away item becomes a gold mine and allows for higher utilization of the whole bird.

Finally, let’s add cloud computing to this puzzle. When dedicated servers are used, each often performs a single task: email server, web server, application server, database server, etc. Such servers frequently have a resource utilization rate of less than 20%, which means that 80% of the server’s processing power is thrown away.

Enter cloud computing. When done correctly, cloud computing increases the utilization rate of each individual server and turns the formerly thrown away processing power into a gold mine. This allows for more efficient capital investments and a higher return on assets.
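To put rough numbers on that, here is a short Python sketch. The 20% utilization figure comes from the post; the 70% consolidation target is an assumption chosen for illustration.

```python
import math

def servers_needed(workload_units, capacity_per_server, target_utilization):
    # Each server can safely carry capacity_per_server * target_utilization
    # units of work; round up, since servers come in whole numbers.
    return math.ceil(workload_units / (capacity_per_server * target_utilization))

# Ten dedicated servers running at 20% utilization carry only
# 10 * 0.20 = 2.0 servers' worth of real work.
actual_work = 10 * 0.20

# Consolidated onto cloud hosts driven to a 70% utilization target,
# that same workload fits on three machines instead of ten.
consolidated = servers_needed(actual_work, 1.0, 0.70)
print(consolidated)  # → 3
```

That is the "gold mine": the same workload on fewer machines means a smaller capital outlay per unit of useful work.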

So what’s the common thread between fajitas, chicken wings, and cloud computing? You’ve probably already figured it out. All three have taken something that previously was almost worthless and thrown away and turned it into something valuable and highly demanded by boosting utilization.

SoftLayer plans to take this to another level later this year when we release BYOC – Build Your Own Cloud™. You’ll then be able to tailor your processing power to exactly what you need: just select the amount of RAM, the number of processors, the storage space, and an operating system; choose hourly or monthly billing; and go. You don’t pay for resources you don’t need or use, and we have less unused processing capacity in our datacenters. It’s a win-win for our customers, our company, and the environment, since power and real estate will be used more efficiently.

February 25, 2010

When things get hectic, Cloud computing to the rescue!

Nothing’s worse than trying to use someone’s website when you absolutely need information right now and it’s unavailable. Last semester, when I was attempting to figure out where the heck my classes were located, the school’s website was crippled by the influx of new freshmen who were trying to do the same. Imagine over 20,000 people trying to access the site at the same time; no wonder it was rendered practically useless.

We’ve had customers face all sorts of hardships with their sites. Whether they’re featured on a popular TV show or they’ve seen an unprecedented rise in traffic due to sites such as www.digg.com and www.Slashdot.org (commonly referred to as the Digg effect, or being ‘slashdotted’ <http://en.wikipedia.org/wiki/Slashdot_effect>), it’s often difficult to get a new dedicated server online quickly enough to absorb the surge. Imagine that instead of tens of thousands of college students, it’s tens of thousands of dollars! Quite the predicament, right?

Not a problem though! Cloud computing to the rescue! CloudLayer computing instances can be rapidly deployed to provide additional resources whenever they are required. Even better, if you only anticipate a short burst, you can grab a few, use them while they’re needed, and then toss them, all while being billed only by the hour! With cloud computing, administrators can quickly react to changing situations. We offer several solutions in our bag of tricks, including Dedicated, Bare Metal Cloud, and CloudLayer computing. With proper planning and deployment, your site can be profitable regardless of the situation, whether that’s a popular product, a blog, or the first day of college.
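As a back-of-the-envelope illustration of why hourly billing suits short bursts, consider the arithmetic below. The hourly and monthly prices are made up for this sketch; they are not SoftLayer's actual rates.

```python
# Hypothetical prices for illustration only -- not actual SoftLayer rates.
HOURLY_RATE = 0.15         # $/hour for one on-demand cloud instance
MONTHLY_DEDICATED = 300.0  # $/month to keep one extra dedicated server

def burst_cost(instances, hours, rate=HOURLY_RATE):
    # Spin instances up for the spike, toss them after: pay only for use.
    return instances * hours * rate

# Five instances for a 48-hour traffic spike (say, a slashdotting):
spike = burst_cost(5, 48)
print(f"${spike:.2f} vs ${MONTHLY_DEDICATED:.2f}")  # → $36.00 vs $300.00
```

For a spike measured in hours or days, renting by the hour beats provisioning permanent capacity that sits idle the rest of the month.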

October 7, 2009

GAHAP Revisited. Otherwise titled “Credit Analysts, Statistics, and Common Sense”

From time to time, I have posted about my frustration with GAAP accounting and traditional credit analysis and how it is not friendly to the hosting business model. For a refresher, click here, here, here, here, and here. By GAHAP, I jokingly mean “generally accepted hosting accounting principles.”

Mike Jones came into my office after a frustrating phone call with a credit analyst. They had been trying to talk through collateral possibilities. He told me that the credit analyst had a problem because we carry hardly any accounts receivable; the credit analyst wants something he can collect in case of default. In GAAP (generally accepted accounting principles), accounts receivable is the total amount that you have billed your customers but have not yet collected from them. Common sense hint: the accounts receivable balance won’t pay your bills – those bills won’t get paid until you collect the cash.

SoftLayer includes this common sense in its business model. Rather than send out invoices and bug people to pay us later, we choose to have our customers pay us in advance of their use of products and services. Many other hosting companies do the same. There are many advantages to this: we save costs that we would incur collecting the cash, we reduce the amount of abusive accounts that would sign up for a few days of malicious activity and never pay us, and it helps facilitate the on-demand billing side of the cloud computing model.
Again, the disadvantage of this practice comes about when trying to educate a set-in-his-ways credit analyst about our business model. Here is the basic gist of a mythical conversation between a credit analyst and a hosting company:

Credit Analyst: “I see you don’t have any accounts receivable to speak of.”

Hosting Company: “I know! Isn’t that great?”

Credit Analyst: “But if you default, what can I collect?”

Hosting Company: “You’d simply continue to bill the customers for their continued business. Because our customer agreement is month-to-month, you just collect for their next month of service over the next 30 days, and you’ve essentially done the same as collecting receivables. In fact, that is far easier than collecting past-due receivables. We’d be happy to place the anticipated next month’s billing to our customers on the balance sheet in an accounts-receivable-type account, but GAAP does not allow this.”

Credit Analyst: “Oh my…you don’t have long term contracts? So all of your customers could leave at once? Isn’t that risky?”

Hosting Company: “We have several thousand customers who trust us with mission critical needs. They will not all leave at once. Our statistics show only a very low percentage of customers terminate services each month. Even through the depths of the recession, we had more new customers joining us than we had customers leaving.”

Credit Analyst: “But conceptually, they could all leave at once since they have no contracts.”

Hosting Company: “That is statistically impossible. The odds of that event are so low that it’s immeasurable. As I said, we provide mission critical services to our customers. To think that they will all no longer need these services simultaneously is paranoid. And if they did, would a contract keep them paying us? That’s doubtful. Let me ask you – do you lend to the electric company or the phone company?”

Credit Analyst: “Of course.”

Hosting Company: “Do their customers sign long term contracts?”

Credit Analyst: “Some do for special promotions. But for the most part – no.”

Hosting Company: “So why do you lend to them?”

Credit Analyst: “Why, the customers can’t live without electricity or phones. That’s a no brainer.”

Hosting Company: “It is exactly the same with our business. In this information age economy, our customers cannot live without the hosting services that we provide. You should look at us in a similar way that you look at a utility company.”

Credit Analyst: “But we classify your business as a technology company. Can’t you just have your customers sign contracts?”

Hosting Company: “Well, wouldn’t that conflict with the on-demand, measured billing aspects of cloud computing?”

Credit Analyst: “I guess there’s not much hope of you building up a sizeable accounts receivable balance then.”

Hosting Company: “It really makes no sense for us to do that.”

Credit Analyst: “We may not be able to do business with you. Do you have any real estate?”

Conclusion: Most credit analysts are so wrapped up in GAAP that they’ve forgotten the laws of statistics and many have even lost touch with common sense. Is it any wonder we’ve had a big banking crisis over the past couple of years?

April 14, 2009

EVA, Cloud Computing, and the Capex vs. Opex Debate

So far in 2009, there’s been a fair amount of discussion pro and con regarding the financial benefits (or lack thereof) of cloud computing. It’s very reminiscent of the whole “do-it-yourself” or “outsource it” debate. Blog posts like this and articles like this are samples of the recent debate.

One thing I have not yet seen or heard discussed regarding cloud computing is the concept of EVA, or Economic Value Added. Let me add at this point that EVA is a registered service mark of EVA Dimensions LLC and of Stern Stewart & Co. It is the concept of economic income instead of accounting income. SoftLayer subscribes to software from EVA Dimensions LLC. Get more info here.

For you to buy into the premise of this post, you’ll have to be sold on EVA as a valuable metric. Bottom line: EVA cleans up the distortions of GAAP and aligns all areas of the business so that more EVA is always better than less EVA. Most other metrics, when pushed to extremes, can actually harm a business, but not EVA. Yes, even bottom-line GAAP net income, when pushed to an extreme, can harm a business. (How that can happen is fodder for another blog post.) Several books have been written about EVA and its benefits, so there is too much to cover in this post. This is a good summary link, and for more info you can Google it on your own. And if you do Google it on your own, be warned that you may have to wade through links regarding Eva Longoria and/or Eva Mendes.

Part of the Cloud computing debate revolves around “capex vs. opex.” Specifically, this involves paying for IT infrastructure yourself using capital expenditures (“capex”) or employing Cloud computing and buying IT infrastructure with operating expenditures (“opex”). Geva Perry recently said, “There is no reason to think that there is a financial benefit to making an OpEx expense vs. CapEx expense. Period.” I disagree. When you look at this in terms of EVA, whether you use capex or opex can make a big difference in creating value for your business.

Let’s look at the effect of switching capex to opex on EVA. Coca-Cola is a company that employs EVA. Years ago, they decided to ship their beverage concentrate in single-use cardboard containers instead of reusable stainless steel. This made GAAP measures worse – profit and profit margins actually went down. But EVA went up by making the move from capex to opex. How can this be? Grab something caffeinated and check out some numbers here if you dare.

OK, that’s all fine. But how would shifting IT spending from capex to opex affect EVA? Glad you asked. Last summer, I modeled some full-fledged financials to illustrate financial benefits of outsourcing IT vs. doing it yourself. I’ve taken those and added the EVA calcs to them. Take another swig of caffeine and check them out here and here.

Assuming that EVA is a worthwhile metric (and I think it is), moving capex to opex is possibly a very good financial decision. Any questions? As always, your mileage may vary. Model carefully!

March 18, 2009

Code Performance Matters Again

With the advent of cloud computing, processing power is coming under the microscope more and more. Last year, you could just buy a 16-core system and be done with it, for the most part. If your code was a little inefficient, the load would run a little high, but there really wasn't a problem. Most developers aren't writing Digg and don't need to handle a million page requests a day. So what if your site is a little inefficient, right?

Well, think again. Now you're putting your site on "the cloud" that you've heard so much about. On the cloud, each processor cycle costs money. Google AppEngine charges by the CPU core hour, as does Mosso. The more wasted cycles in your code, the more it costs to run per operation. If your code uses a custom sorting function and you went with bubble sort because "it was only 50 milliseconds slower than merge sort and I can't be bothered to write merge sort by hand," then be prepared for the added cost over a month's worth of page requests. Each second of extraneous CPU time at 50,000 page views per day costs 417 HOURS of CPU time per month.
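That 417-hour figure checks out; here is the arithmetic as a small Python snippet:

```python
page_views_per_day = 50_000
wasted_seconds_per_view = 1.0  # one second of extraneous CPU per request
days_per_month = 30

# 50,000 views/day * 1 s * 30 days = 1,500,000 CPU-seconds ≈ 417 hours.
wasted_cpu_hours = (page_views_per_day * wasted_seconds_per_view
                    * days_per_month) / 3600
print(round(wasted_cpu_hours))  # → 417
```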

Big-O notation hasn't really been important for the majority of programmers for the last 10 to 15 years or so. Loop unrolling, extra checks, junk variables floating around in your code, all of that stuff would just average out to "good enough" speeds once the final product was in place. Unless you're working on the Quake engine, any change that would shave off less than 200ms probably isn't worth the time it would take to re-engineer the code. Now, though, you have to think a lot harder about the cost of your inefficient code.
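To see why Big-O matters again when every cycle is billed, here is an illustrative Python comparison that counts comparisons instead of milliseconds: bubble sort's O(n²) work dwarfs merge sort's O(n log n) even at a modest input size.

```python
def bubble_sort(items):
    # Classic O(n^2) bubble sort, instrumented to count comparisons.
    a = list(items)
    comparisons = 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons

def merge_sort(items):
    # Top-down O(n log n) merge sort, instrumented the same way.
    comparisons = 0

    def merge(left, right):
        nonlocal comparisons
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            comparisons += 1
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        return out + left[i:] + right[j:]

    def sort(a):
        if len(a) <= 1:
            return list(a)
        mid = len(a) // 2
        return merge(sort(a[:mid]), sort(a[mid:]))

    return sort(list(items)), comparisons

data = list(range(1000, 0, -1))  # reversed input: worst case for bubble sort
_, bubble_cmps = bubble_sort(data)
_, merge_cmps = merge_sort(data)
# bubble: exactly 499,500 comparisons; merge: under 10,000.
print(bubble_cmps, merge_cmps)
```

At 1,000 elements the gap is already a factor of roughly fifty; multiply that by 50,000 page views a day and the "only 50 milliseconds slower" shortcut becomes a visible line on the hosting bill.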

Developers who have been used to having a near-infinite supply of open CPU cycles need to re-think their approach to programming large or complex systems. You've been paying for public bandwidth for a long time, and it's time to think about CPU in the same manner. You have a limited amount of "total CPU" that you can use per month before AppEngine's limits kick in and you begin getting charged for it. If you're using a different host, your bill will simply go up. You need to treat CPU like you would bandwidth: minimize your access to the CPU just like you'd minimize access to the public internet, and keep your memory profile low.

The problem with this approach is that the entire programming profession has been moving away from concentrating on individual CPU cycles. Helper classes, template libraries, enormous include files full of rarely used functions: they all contribute to the CPU and memory glut of the modern application. We, as an industry, are going to need to cut back on that. You see some strides toward this in dynamic include functions and libraries that wait to parse an include file until that object or function is first used during execution. However, that's only the first step. If you're going to be living on the cloud, cutting down on the number of times you access your libraries isn't good enough. You need to cut down on the computational complexity of the libraries themselves. No more complex database queries to find a unique ID before you insert. No more custom hashing functions that take 300 cycles per character. No more rolling your own sorting functions. And certainly no more doing things in code that should be done in a database query.

Really good programmers are going to become more valuable than they already are once management realizes that they're paying for CPU cycles, not just "a server." When you can monetize your code efficiency, you'll have that much more leverage with managers and in job interviews. I wouldn't be surprised if, in the near future, an interviewer asked about cost algorithms as an analogy for efficiency. I also wouldn't be surprised if database strategy changed in the face of charging per CPU cycle. We've all (hopefully) been trying for third normal form on our databases, but JOINs take up a lot of CPU cycles. You may see websites in the near future that run off large denormalized tables that are updated every evening.
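The denormalization idea in the last sentence can be sketched with SQLite (the table names and data are invented for illustration): an evening batch job precomputes the JOIN into one flat table, so daytime page views pay only for a single-table read.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Normalized (third normal form): users and orders live in separate tables,
# so every read pays for a JOIN's CPU cycles.
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY,"
            " user_id INTEGER, total REAL)")
cur.execute("INSERT INTO users VALUES (1, 'Ada'), (2, 'Lin')")
cur.execute("INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 4.5), (3, 2, 20.0)")

joined = cur.execute(
    "SELECT u.name, SUM(o.total) FROM users u "
    "JOIN orders o ON o.user_id = u.id GROUP BY u.id ORDER BY u.id"
).fetchall()

# Denormalized: the nightly job flattens the JOIN into one wide table,
# turning the hot daytime query into a cheap single-table scan.
cur.execute("CREATE TABLE user_totals (name TEXT, total REAL)")
cur.executemany("INSERT INTO user_totals VALUES (?, ?)", joined)
flat = cur.execute("SELECT name, total FROM user_totals ORDER BY name").fetchall()

print(flat)  # → [('Ada', 14.5), ('Lin', 20.0)]
```

The trade-off is staleness: the flat table is only as fresh as the last batch run, which is exactly the "updated every evening" compromise described above.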

So take advantage of the cloud for your computing needs, but remember that it's an entirely different beast. Code efficiency is more important in these new times. Luckily, "web 2.0" has given us one good tool to decrease our CPU times. AJAX, combined with client-side JavaScript, allows a web developer to generate a web tool where the server does little more than fetch the proper data and return it. Searching, sorting, and paging can all be done on the client side given a well designed application. By moving a lot of the "busy work" to the client, you can save a lot of CPU cycles on the server.

For all those application developers out there who don't have a client to execute some code for you: you're just going to have to learn to write more efficiently, I guess. Sorry.

-Daniel

June 14, 2008

In Memory of Dawn

Dawn was the best friend I’ve ever had, except for my little sister. Just yesterday I got home only to find out that Dawn had died silently in the night. No amount of resuscitation could bring her back. Needless to say, I was quite sad.

Dawn was my computer.*

The funny part of it all was just how much of my time involves a computer. I watch TV and movies on my computer, I play games on my computer, I do my banking on my computer, I pay all my bills on my computer, I schedule my non-computer time on my computer, and I use my computer as a jukebox.

In other words, I was completely lost. What made it worse, however, was that I had had yesterday scheduled to pay my bills. But where was my list of bills?

If you guessed “Dawn had all your bills”, then you are right.

What about paper bills? I’ve got the Internet and a computer! So, in most cases I’ve canceled paper bills. All paper bills I get are shredded forthwith. So I had no paper backup of bills.

Well, I made do. I kicked my roommate off his computer (a technique involving making annoying noises while he tries to concentrate playing Call of Duty 4) and used it to pay what bills I could remember. I kept track of the bills I was paying by entering them into a Google Document.

That’s when it hit me! Why wasn’t my bill spreadsheet on Google Documents? Along with my bill list? Along with all the other documents I work on every day? Cloud Computing For The Win! As soon as I get my next computer up and running (and I figure out a new naming algorithm) I’m going to put all my vital files on Google Docs. This ties in well with Justin Scott’s post; the key to not having your data disappear during a disaster is to have a backup copy. You want backups out there, far away from your potential point of failure. (I did have backups… but they’re all on CDs that I didn’t want to have to sort through to find just one file. And had the disaster been, say, a flood, I would have had no backups.)

Google Docs is a great example of Cloud Computing: Putting both the program and the file being worked on “in the cloud.” Having built internal applications for a few people, I would make the same recommendation: Since many business apps are moving to PHP anyway (thanks for the reminder, Daniel!), you might as well move the application AND the data out of the building and onto a secure server. And as Mr. Scott** mentioned, SoftLayer ALREADY has geographic diversity as well as a private network that will allow you to link your application and data servers together in real time through all datacenters… for free. Along with the added bonus of being able to access your application from any computer… should yours meet up with Misty, May, and Dawn at the Great Datacenter in the Sky.

-Zoey

* I had a system of naming my computers after the female protagonists from the Pokemon series. Dawn, however, is the last of that series…

** I’ve decided that since Justin is an Engineer, calling him Mr. Scott is funny.
