Posts Tagged 'Cloud Computing'

August 17, 2012

SoftLayer Private Clouds - Provisioning Speed

SoftLayer Private Clouds are officially live, and that means you can now order and provision your very own private cloud infrastructure on Citrix CloudPlatform quickly and easily. Chief Scientist Nathan Day introduced private clouds on the blog when the offering was announced at Cloud Expo East, and CTO Duke Skarda followed up with an explanation of the architecture powering SoftLayer Private Clouds. The most amazing claim: You can order a private cloud infrastructure and spin up its first virtual machines in a matter of hours rather than days, weeks or months.

If you've ever looked at building your own private cloud in the past, the "days, weeks or months" timeline isn't very surprising — you have to get the hardware provisioned, the software installed and the network configured ... and it all has to work together. Hearing that SoftLayer Private Clouds can be provisioned in "hours" probably seems too good to be true to administrators who have tried building a private cloud in the past, so I thought I'd put it to the test by ordering a private cloud and documenting the experience.

At 9:30am, I walked over to Phil Jackson's desk and asked him if he would be interested in helping me out with the project. By 9:35am, I had him convinced (proof), and the clock was started.

When we started the order process, part of our work was already done for us:

SoftLayer Private Clouds

To guarantee peak performance of the CloudPlatform management server, SoftLayer selected the hardware for us: A single processor quad core Xeon 5620 server with 6GB RAM, GigE, and two 2.0TB SATA II HDDs in RAID1. With the management server selected, our only task was choosing our host server and where we wanted the first zone (host server and management server) to be installed:

SoftLayer Private Clouds

For our host server, we opted for a dual processor quad core Xeon 5504 with the default specs, and we decided to spin it up in DAL05. We added (and justified) a block of 16 secondary IP addresses for our first zone, and we submitted the order. The time: 9:38am.

At this point, it would be easy for us to game the system to shave off a few minutes from the provisioning process by manually approving the order we just placed (since we have access to the order queue), but we stayed true to the experiment and let it be approved as it normally would be. We didn't have to wait long:

SoftLayer Private Clouds

At 9:42am, our order was approved, and the pressure was on. How long would it take before we were able to log into the CloudStack portal to create a virtual machine? I'd walked over to Phil's desk 12 minutes ago, and we still had to get two physical servers online and configured to work with each other on CloudPlatform. Luckily, the automated provisioning process took on the brunt of that pressure.

Both server orders were sent to the data center, and the provisioning system selected two pieces of hardware that best matched what we needed. Our exact configurations weren't available, so an SBT in the data center was dispatched to make the appropriate hardware changes to meet our needs, and the automated system kicked into high gear. IP addresses were assigned to the management and host servers, and we were able to monitor each server's progress in the customer portal. The hardware was tested and prepared for OS install, and when it was ready, the base operating systems were loaded: CentOS 6 on the management server and Citrix XenServer 6 on the host server. After CentOS 6 finished provisioning on the management server, CloudStack was installed. Then we got an email:

SoftLayer Private Clouds

At 11:24am, less than two hours from when I walked over to Phil's desk, we had two servers online and configured with CloudStack, and we were ready to provision our first virtual machines in our private cloud environment.
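As a side note for the API-inclined: the provisioning progress we watched in the portal can also be polled programmatically. Here's a minimal sketch against the SoftLayer API using the Python client (not the exact script we ran; the credentials and the object mask properties are illustrative assumptions):

```python
# Minimal sketch: poll provisioning status for the servers on an account
# through the SoftLayer API. Credentials and mask properties are placeholders.
import time

import SoftLayer  # pip install softlayer

client = SoftLayer.create_client_from_env(username="SL_USERNAME", api_key="SL_API_KEY")
MASK = "mask[id,hostname,provisionDate,lastTransaction[transactionStatus[name]]]"

while True:
    servers = client['Account'].getHardware(mask=MASK)
    for server in servers:
        if server.get('provisionDate'):
            state = 'provisioned'
        else:
            last = server.get('lastTransaction') or {}
            state = last.get('transactionStatus', {}).get('name', 'pending')
        print(server['hostname'], state)
    if servers and all(s.get('provisionDate') for s in servers):
        break
    time.sleep(60)  # check again in a minute
```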

We logged into CloudStack and added our first instance:

SoftLayer Private Clouds

We configured our new instance in a few clicks and hit "Launch VM" at 11:38am. The VM came online just over three minutes later (11:42am):

SoftLayer Private Clouds
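If you'd rather script that step than click through the UI, CloudPlatform exposes the standard CloudStack API, and the same launch maps to a deployVirtualMachine call. Here's a hedged sketch; the endpoint, keys and the offering/template/zone IDs are placeholders you'd pull from your own management server:

```python
# Hedged sketch: launch an instance via the CloudStack API instead of the portal.
# Endpoint, keys and IDs are placeholders; signing follows the documented
# CloudStack HMAC-SHA1 scheme.
import base64
import hashlib
import hmac
import urllib.parse
import urllib.request

ENDPOINT = "https://your-management-server/client/api"  # placeholder
API_KEY = "YOUR_API_KEY"
SECRET_KEY = "YOUR_SECRET_KEY"

def signed_query(params):
    """Build a CloudStack query string signed with HMAC-SHA1 over the sorted params."""
    params = dict(params, apiKey=API_KEY, response="json")
    query = "&".join(
        f"{key}={urllib.parse.quote(str(value), safe='')}"
        for key, value in sorted(params.items())
    )
    digest = hmac.new(SECRET_KEY.encode(), query.lower().encode(), hashlib.sha1).digest()
    return query + "&signature=" + urllib.parse.quote(base64.b64encode(digest).decode())

params = {
    "command": "deployVirtualMachine",
    "serviceofferingid": "SERVICE_OFFERING_ID",  # placeholders; look these up with
    "templateid": "TEMPLATE_ID",                 # listServiceOfferings, listTemplates
    "zoneid": "ZONE_ID",                         # and listZones
}
with urllib.request.urlopen(ENDPOINT + "?" + signed_query(params)) as response:
    print(response.read().decode())  # returns a job ID to poll with queryAsyncJobResult
```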

I went from "walking to Phil's desk" to having a multi-server private cloud infrastructure running a VM in exactly two hours and twelve minutes. For fun, I created a second VM on the host server, and it was provisioned in 31.7 seconds. It's safe to say the claim that a SoftLayer Private Cloud can be provisioned in "hours" has officially been confirmed, but we thought it would be fun to add one more wrinkle to the system: What if we wanted to add another host server in a different data center?

From the "Hardware" tab in the SoftLayer portal, we selected "Add Zone" to from the "Actions" in the "Private Clouds" section, and we chose a host server with four portable IP addresses in WDC01. The zone was created, and the host server went through the same hardware provisioning process that our initial deployment went through, and our new host server was online in < 2 hours. We jumped into CloudStack, and the new zone was created with our host server ready to provision VMs in Washington, D.C.

Given how quickly the instances were spinning up in the first zone, we timed a few in the second zone ... The first instance was online in about 4 minutes, and the second was running in 26.8 seconds.

SoftLayer Private Clouds

By the time I went out for a late lunch at 1:30pm, we'd spun up a new private cloud infrastructure with geographically dispersed zones that launched new cloud instances in under 30 seconds. Not bad.

Don't take my word for it, though ... Order a SoftLayer Private Cloud and see for yourself.

-@khazard

June 28, 2012

Never Break Up with Your Data Again

Wouldn't it be nice if you could keep the parts of a relationship that you like and "move on" from the parts you don't? You'd never have to go through the awkward "getting to know each other" phase where you accidentally order food the other person is allergic to, and you'd never have to experience a break up. As it is, we're faced with a bit of a paradox: Relationships are a lot of work, and "Breaking up is hard to do."

I could tell you story after story about the break ups I experienced in my youth. From the Ghostbuster-jumpsuited boyfriend I had in kindergarten who stole my heart (and my barrettes), to the day I had to take my had-to-have "My Little Pony" thermos lunchbox to another table at lunch after a dramatic recess exchange, to the middle school boyfriend who took me to see Titanic in the theater four times (yes, you read that correctly), my early "romantic" relationships didn't pan out in the "happily ever after" way I'd hoped they would. Whether the result of an unwelcome kiss under the monkey bars or a move to a different school (which might as well have been on Mars), I had to break up with each of the boys.

Why are you reading about my lost loves on the SoftLayer Blog? Simple: Relationships with IT environments — specifically applications and data — are not much different from romantic relationships. You might want to cut ties with a high maintenance piece of equipment that you've been with for years because its behavior is getting erratic, and it doesn't look like it'll survive forever. Maybe you've outgrown what your existing infrastructure can provide for you, and you need to move along. Perhaps you just want some space and need to take a break from a project for six months.

If you feel like telling your infrastructure, "It's not you, it's me," what are your options? Undo all of your hard work, schedule maintenance and stay up in the dead of a weeknight to migrate, back up and restore all of your data locally?

When I talk to SoftLayer customers, I get to be a relationship therapist. Because we've come out with some pretty innovative tools, we can help our customers avoid ever having to break up with their data again. Two of the coolest "infrastructure relationship"-saving releases: Flex Images (currently in public beta) and portable storage volumes for cloud computing instances (CCIs).

With Flex Images, customers using RedHat, CentOS or Windows systems can create and move server images between physical and virtual environments to seamlessly transition from one platform to the other. With about three clicks, a customer-created image is quickly and uniformly delivered to a new dedicated or cloud server. The idea behind Flex Images is to blur the line between physical and virtual environments so that if you feel the need to break up with one of the two, the other is able to take you in.

Portable storage volumes (PSVs) are secondary CCI volumes that can be added onto any public or private CCI. Users can detach a PSV from any CCI and have it persist in the cloud, unattached to any compute resource, for as long as necessary. When that storage volume is needed again, it can be re-attached as secondary storage on any other CCI across all of SoftLayer's facilities. The best relationship parallel would be "baggage," but that's got a negative connotation, so we'll have to come up with something else to call it ... "preparedness."

We want to help you avoid break ups and provide you easy channels to make up with your old infrastructure if you have a change of heart. The result is an infrastructure that's much easier to manage, more fluid and less dramatic.

Now if I can only figure out a way to make Flex Images and portable storage volumes available for real-life relationships .... I'd make millions! :-)

-Arielle

June 6, 2012

Today's Technology "Game Changers": IPv6 and Cloud

"Game Changers" in technology force a decision: Adapt or die. When repeating rifles gained popularity in the late 1800s, a business of manufacturing muzzle-loading or breech-loading rifles would have needed to find a way to produce a repeating rifle or it would have lost most (if not all) of it's business to Winchester. If a fresh-faced independent musician is hitting it big on the coffee shop scene in 2012, she probably won't be selling out arenas any time soon if she refuses to make her music available digitally. Just ask any of the old-timers in the print media industry ... "Game Changers" in technology can be disastrous for an established business in an established industry.

That's pretty intimidating ... Even for tech businesses.

Shifts in technology don't have to be as drastic and obvious as a "printed newspaper v. social news site" comparison for them to be disruptive. Even subtle advances can wind up making or breaking a business. In fact, many of today's biggest and most successful tech companies are scrambling to adapt to two simple "game changers" that seem terribly significant:

  • IPv6
  • "The Cloud"

IPv6

A quick search of the SoftLayer Blog reminds me that Lance first brought up the importance of IPv6 adoption in October 2007:

ARIN has publicly announced the need to shift to IPv6, and numerous articles have outlined the D-Day for IPv4 space. Most experts agree it's coming fast and that it will occur sometime in 2010 at the current pace (that's about two years for those counting). IPv6 brings enough IP space for an infinite number of users along with improved security features and several other operational efficiencies that will make it very popular. The problem lies in getting from IPv4 to IPv6.

When IPv4 exhaustion was just a blip on the horizon, many businesses probably thought, "Oh, I'll get around to it when I need to. It's not a problem yet." When IANA exhausted the IPv4 pool, they probably started picking up the phone and calling providers to ask what plans they had in place. When some of the Internet's biggest websites completed a trial transition to IPv6 on World IPv6 Day last year, those businesses started feeling the urgency. With today's World IPv6 Launch, they know something has to be done.

World IPv6 Launch Day

Regardless of how conservative providers get with IPv4 space, the 4,294,967,296 IPv4 addresses in existence will not last much longer. Soon, users will be accessing an IPv6 Internet, and IPv4-only websites will lose their opportunity to reach those users. That's a "game changer."
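For a sense of scale, here's the back-of-envelope arithmetic (purely illustrative) comparing the 32-bit IPv4 space with IPv6's 128-bit space:

```python
# Address-space arithmetic: IPv4 is a 32-bit space, IPv6 is a 128-bit space.
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128
print(f"IPv4 addresses: {ipv4_total:,}")    # 4,294,967,296
print(f"IPv6 addresses: {ipv6_total:.3e}")  # ~3.403e+38
print(f"IPv6 addresses per IPv4 address: 2**96 = {ipv6_total // ipv4_total:.3e}")
```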

"The Cloud"

The other "game changer" many tech businesses are struggling with these days is the move toward "the cloud." There are a two interesting perspectives in this transition: 1) The challenge many businesses face when choosing whether to adopt cloud computing, and 2) The challenges for businesses that find themselves severing as an integral (sometimes unintentional) part of "the cloud." You've probably seen hundreds of blog posts and articles about the first, so I'll share a little insight on the second.

When you hear all of the hype about cloud computing and cloud storage offering a hardware-agnostic Utopia of scalable, reliable power, it's easy to forget that the building blocks of a cloud infrastructure usually come from vendors that provided traditional hosting resources. When a computing instance is abstracted from a hardware device, it opens up huge variations in usage. It's possible to have dozens of public cloud instances using a single server's multi-proc, multi-core resources at a given time. If a vendor prices a piece of software on a "per server" basis, how does it define a "server" when its users are in the cloud? It can be argued that a cloud computing instance with a single core of power is a "server," and on the flip side, it's just as easy to define a "server" as the hardware object on which many cloud instances may run. I don't know that there's an easy way to answer that question, but what I do know is that applying "what used to work" to "what's happening now" isn't the right answer.

The hardware and software providers in the cloud space who are able to come up with new approaches unencumbered by the urge to continue "the way we've always done it" are going to be the ones that thrive when technology "game changers" emerge, and the providers who dig their heels in the dirt or try to put a square peg into a round hole will get the short end of the "adapt or die" stick.

We've tried to innovate and take a fresh look at every opportunity that has come our way, and we do our best to build relationships with agile companies that we see following suit.

I guess a better way to position the decision at the beginning of this post would be to add a little tweak: "Innovate, adapt or die." How you approach technology "game changers" will define your business's success.

-@gkdog

April 24, 2012

RightScale + SoftLayer: The Power of Cloud Automation

SoftLayer's goal is to provide unparalleled value to the customers who entrust their business-critical computing to us — whether via dedicated hosting, managed hosting, cloud computing or a hybrid environment of all three. We provide the best platform on the market, delivering convenience, ease of use, compelling return on investment (ROI), significant competitive advantage, and consistency in a world where the only real constant seems to be change.

That value proposition is one of the biggest driving forces behind our partnership with RightScale. We're cloud computing soul mates.

RightScale

RightScale understands the power of automation, and as a result, they've created a cloud management platform that they like to say delivers "abstraction with complete customization." RightScale customers can easily deploy and manage applications across public, private and hybrid cloud environments, unencumbered by the underlying details. They are free to run efficient, scalable, highly available applications with visibility into and control over their computing resources, all in one place.

As you know, SoftLayer is fueled by automation as well, and it's one of our primary differentiators. We're able to deliver a phenomenal customer experience because every aspect of our platform is fully and seamlessly automated to accelerate provisioning, mitigate human error and provide customers with access and features that our competitors can only dream of. Our customers get simple and total control over an ever-expanding number of back-end services and functions through our easy-to-use Customer Portal and via an open, robust API.

The compatibility between SoftLayer and RightScale is probably pretty clear already, but if you needed another point to ponder, you can ruminate on the fact that we both share expertise and focus across a number of vertical markets. The official announcement of the SoftLayer and RightScale partnership will be particularly noteworthy and interesting in the Internet-based business and online gaming market segments.

It didn't take long to find an amazing customer success story that demonstrated the value of the new SoftLayer-RightScale partnership. Broken Bulb Game Studios — the developer of social games such as My Town, Braaains, Ninja Warz and Miscrits — is already harnessing the combined feature sets made possible by our partnership with RightScale to simplify its deployment process and scale to meet its customers' expectations as its games find audiences and growing favor on Facebook. Don't take our word for it, though ... Check out the Broken Bulb quote in today's press release announcing the partnership.

Broken Bulb Game Studios

Broken Bulb and other developers of social games recognize the importance of getting concepts to market at breakneck speed. They also understand the critical importance of intelligently managing IT resources throughout a game's life cycle. What they want is fully automated control over computing resources so that they can be allocated dynamically and profitably in immediate response to market signals, and they're not alone.

Game developers of all sorts — and companies in a growing number of vertical markets — will need and want the same fundamental computing-infrastructure agility.

Our partnership with RightScale is only beginning. You're going to see some crazy innovation happening now that our cloud computing mad scientists are all working together.

-Marc

February 1, 2012

Flex Images: Blur the Line Between Cloud and Dedicated

Our customers are not concerned with technology for technology's sake. Information technology should serve a purpose; it should function as an integral means to a desired end. Understandably, our customers are focused, first and foremost, on their application architecture and infrastructure. They want, and need, the freedom and flexibility to design their applications to their specifications.

Many companies leverage the cloud to take advantage of core features that enable robust, agile architectures. Elasticity (the ability to quickly increase or decrease compute capacity) and flexibility (choice of resources such as cores, memory and storage) combine to provide solutions that scale to meet the demands of modern applications.

Another widely used feature of cloud computing is image-based provisioning. Rapid provisioning of cloud resources is accomplished, in part, through the use of images. Imaging capability extends beyond the use of base images, allowing users to create customized images that preserve their software installs and configurations. The images persist in an image library, allowing users to launch new cloud instances based on their images.
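To make that concrete, here's a hedged sketch of what launching a new cloud instance from a saved image looks like through the SoftLayer API with the Python client; the hostname, domain, datacenter and image GUID are placeholders, not values from a real deployment:

```python
# Illustrative sketch: boot a new cloud instance from a saved image template.
# All identifying values below are placeholders.
import SoftLayer  # pip install softlayer

client = SoftLayer.create_client_from_env(username="SL_USERNAME", api_key="SL_API_KEY")

guest = client['Virtual_Guest'].createObject({
    'hostname': 'web01',
    'domain': 'example.com',
    'startCpus': 2,
    'maxMemory': 4096,          # MB
    'hourlyBillingFlag': True,
    'localDiskFlag': True,
    'datacenter': {'name': 'dal05'},
    # The saved image from the library, referenced by its global identifier:
    'blockDeviceTemplateGroup': {'globalIdentifier': 'YOUR-IMAGE-TEMPLATE-GUID'},
})
print(guest['id'], guest.get('fullyQualifiedDomainName'))
```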

But why should images only be applicable to virtualized cloud resources?

Toward that end, we're excited to introduce SoftLayer Flex Images, a new capability that allows us to capture images of physical and virtual servers, store them all in one library, and rapidly deploy those images on either platform.

SoftLayer Flex Images

Physical servers now share the core features of virtual servers—elasticity and flexibility. With Flex Images, you can move seamlessly between the two environments as your needs change.

Let's say you're running into resource limits in a cloud server environment—your data-intensive server is I/O bound—and you want to move the instance to a more powerful dedicated server. Using Flex Images, you can create an image of your cloud server and, extending our I/O bound example, deploy it to a custom dedicated server with SSD drives.

Conversely, a dedicated environment can be quickly replicated on multiple cloud instances if you want the scaling capability of the cloud to meet increased demand. Maybe your web heads run on dedicated servers, but you're starting to see periods of usage that stress your servers. Create a Flex Image from your dedicated server and use it to deploy cloud instances to meet demand.

Flex Image technology blurs the distinctions—and breaks down the walls—between virtual and physical computing environments.

We don't think of Flex Images as a new product. Instead—like our network, our portal, our automated platform, and our globe-spanning geographic diversity—Flex Image capability is a free resource for our customers (with the exception of the standard nominal cost of storing the Flex Images).

We think Flex Images not only represents great value, but also provides a further example of how SoftLayer innovates continually to bring new capabilities and the highest possible level of customer control to our automated services platform.

To sum up, here are some of the key features and benefits of SoftLayer Flex Images:

  • Universal images that can be used interchangeably on dedicated or cloud systems
  • Unified image library for archiving, managing, sharing, and publishing images
  • Greater flexibility and higher scalability
  • Rapid provisioning of new dedicated and cloud environments
  • Available via SoftLayer's management portal and API

Flex Images are available now in public beta. We invite you to try them out, and, as always, we want to hear what you think.

-Marc

November 21, 2011

SLaying at Cloud Expo West 2011

A month ago, Summer talked about how SoftLayer defies the laws of physics by being in several different places at the same time. With a worldwide network and data center footprint, that's always going to be the case, but when we have several events going on in a given week, we're even more dispersed. As Summer mentioned in her Server Challenge blog this morning, she traveled east to New York City for ad:tech with a few SLayers, and I joined a team that headed west for Cloud Expo West in Santa Clara, California.

We set up shop on the expo floor and had the opportunity to meet with interesting and interested attendees between sessions. In addition to our exhibit hall presence, SoftLayer had three SLayers featured in presentations, and the response to each was phenomenal.

Our first presenter was none other than SoftLayer CTO Duke Skarda. His presentation, "Not Your Grandpa's Cloud," was about dedicated servers and whether cloud computing may be surpassing that "grandpa" of the hosting industry. Joined by RightScale CEO Michael Crandell, Duke also announced SoftLayer's new relationship with RightScale. If you didn't have a chance to join us, we have a treat for you ... You can download Duke's presentation from Sys-con!

Five minutes after Duke left the stage, SoftLayer Director of Product Innovation Marc Jones spoke to Cloud Expo attendees about "Building at Internet Scale in a Hosted Environment." His focus was how businesses can approach the enabling technologies, design and architecture of Internet-scale solutions in a hosted environment. He shared trends from SoftLayer customers and partners, explained what SoftLayer believes Internet scale is from a technology perspective, and described the products and services in the market that create a scalable solution.

On Day 3, SoftLayer Director of Corporate Analytics Francisco Romero presented a question to attendees: "How Smart is it to Build Your Own Cloud?" With concerns around security, hardware, software and flexibility, is a business better off going with a hosted solution rather than building its own cloud infrastructure? Spoiler alert: He showed how the hosted environment was head-and-shoulders above the in-house environment in most cases.

All in all, Cloud Expo West was an exemplary tradeshow for SoftLayer ... Three fantastic speakers in two days driving traffic to our booth where we could share how SoftLayer has built our cloud and how our approach is part of a bigger effort to drive innovation in the world of hosting.

As Summer mentioned in her post, we want to see your smiling faces at our booths and in our presentations in the future, so bookmark the SoftLayer Event Calendar and start planning your trips to meet us in 2012!

-Natalie

September 30, 2010

What is a Cloud?

What is a Cloud? This seems like a simple question that would have a simple answer. If you ask it among your “techie” friends, you will find similar yet different definitions of what cloud computing actually is. I can say this because it recently happened to me, and it turned out to be a very interesting conversation. There is no single industry-accepted definition yet, so here is my take on what cloud computing is.

Cloud computing is accessing IT resources that are owned and operated by a third-party provider such as SoftLayer in one or more of its data centers. Clouds feature on-demand provisioning (as fast as 5 minutes at SoftLayer) and pay-as-you-go billing with minimal upfront investment. It is a great way to deliver cost-effective computing power over the Internet: it minimizes capital expense and ties operating expense to actual use. I do think that many cloud offerings are really no more than your common managed hosting being marketed as clouds.

Cloud services can be categorized into different models such as Software as a Service (SaaS) and Infrastructure as a Service (IaaS). There are also two types of deployment models. You can have a public cloud, which is a “multi-tenant” environment where the physical servers are shared among multiple customers of the cloud. The other type of deployment is the private cloud, where only one customer utilizes the physical server or servers.

That is my definition of “what is a cloud.” A wise man once told me that cloud computing is really nothing more than another pricing model and delivery model.

August 3, 2010

How Clouds Killed The PC

Most days, it seems that technology progresses all too slowly. It is a different feeling when you work with cutting-edge technology on a daily basis: deploying the first dual-network data center infrastructure, being entrenched in solutions for everything from CDN to iSCSI to DTS and more, testing the latest enterprise solutions from leading industry vendors long before money could buy them… it never really meant a whole lot to me; it was very much just “how we roll,” as the gang would say.

But every so often, there is a day when a new technology catches my attention and reminds me why I got involved in the IT industry. Something that reminds me of the days spent tapping out QuickBasic 2.0 applications on my 18MHz 386 with a 16-color EGA monitor. Surprisingly, the rise of cloud computing did just that. There was a day some still remember when the cost of localized hardware was significant enough that terminals ruled the world. Occasionally, you may still see one at a grocery checkout stand or being used in a retail stockroom to check inventory across locations. Early terminals were commonly thin clients lacking a processor and non-volatile user storage, possessing only enough memory to display what was on the screen at any given time. As the cost of memory declined, fat clients offering locally programmable memory gained some popularity. However, the concept was still the same: one host machine, usually a mainframe, serving applications over a distance to multiple (less capable) client machines.

Terminals were not destined to last, though. In a twist of irony, one of the innovations that they helped to inspire, the microprocessor, combined with the falling price and increased capacity of memory, eventually led to the decline of terminals. Left behind, in a cloud of dust, by hardware manufacturers’ race for speed and capacity combined with advances in networking technology, the terminal became a historical relic looked upon as a necessary stop-gap solution used in the days when hardware was just too darn expensive. It was at that time that the truly personal computer we know and love was born, and it has reigned supreme ever since. Then came the ARPANET, which gave way to the Information Superhighway, which gave way to the World Wide Web, which gave way to the Internet we know today.

Mainframes gave way to servers. And today, I walk into a data center surrounded by servers boasting quad octo-core processors and Cloud Computing Instances, talking to customers who use their smartphones to remotely access their web hosts, and quietly thinking to myself, “Have things really changed?” How far off is the day when the benefits of remotely hosted applications outweigh the benefits of localized hardware? When we sit at the start of a new era where CCIs can be created in minutes, regularly imaged for data security, migrated and restored quickly in the event of hardware failure, and accessed from anywhere and from a variety of client hardware and software implementations, how much more would it take for us to return to the days of terminal PCs? As bandwidth continues to improve, purchase and operational costs per processing core continue to fall, people demand more and more ‘anywhere access’, open source gains popularity and the idea of renting freely upgraded applications becomes accepted outside of the IT community, who knows what the future might hold. In a future where the concept of parallel uplinks may be no more foreign than that of parallel data transfer over CAT6 is to the layman, I wonder if personal computers will be thought of as the necessary stop-gap solution used while we waited for bandwidth to catch up to usable processing power; nothing more than a dinosaur that gave way to the green movement and our need to be connected everywhere.

While I work on bringing my head out of the clouds, I remember why I am here. I am not here because technology’s past was all that fantastic, or because the present is all that glamorous, but because the future is still wide open. Whether or not clouds ever really kill the PC is anyone’s guess, and only time will tell. However, one thing is currently known: as companies continue to see the benefit of having their staff conduct business through a web-portal interface, consumers continue trying to figure out what they are going to do with the extra two or three of the four cores they have, and the cost-to-performance ratio associated with remote resources continues to fall, we are steadily moving that way.

June 22, 2010

Fajitas, Chicken Wings, and Cloud Computing

Three of Lance Crosby’s favorite things are fajitas, chicken wings, and cloud computing. Believe it or not, there is a common thread between all three. See if you can figure it out.

First, let’s consider fajitas. What are they? Well, the only true fajita is beef outside skirt steak. Everything else is just grilled meat that you stuff in a tortilla. For many years, the outside skirt steak was a “throwaway” cut often given to vaqueros as part of their pay <http://en.wikipedia.org/wiki/Fajita> . I know a man who grew up in a family of migrant farm workers, and in his youth they would visit slaughterhouses to ask for free throwaway cuts. They often got fajitas.

Back in the ‘80s, the retail price of fajitas skyrocketed. Tex-mex restaurants suddenly made that cut of meat popular. Then, in 1988, a treaty with Japan allowed the Japanese to import American outside skirt steak without the usual 200% tariff. Thus, 90% of our outside skirt steak winds up in Japan. Bottom line, a previously unutilized throwaway cut of meat became a gold mine and boosted the utilization of a side of beef. Consequently, when you order fajitas today, you usually get some sort of substitute beef <http://www.dallasobserver.com/2009-06-18/restaurants/so-what-exactly-are-you-eating-when-you-order-fajitas-in-a-tex-mex-restaurant/1> , not true outside skirt steak.

Next, think about the lowly chicken wing. I just saw an ad for a local chicken wing place offering their “boneless” chicken wings for a special low price. These aren’t really wings. They are pure white tender boneless chicken breast strips – what you would think is the premium cut of a chicken. The fine print on the ad says that bone-in wings may NOT be substituted for this promotion. Huh? You can’t sub a worse cut of meat that’s mostly bone for a premium cut that’s all meat and no bone?

As it turns out, the demand for the formerly throwaway cut of chicken wings has driven up their price such that boneless breast strips yield a higher profit margin <http://www.abc3340.com/news/stories/0310/711570.html> than the bony wings. Once again, a formerly thrown away item becomes a gold mine and allows for higher utilization of the whole bird.

Finally, let’s add in cloud computing to this puzzle. When dedicated servers are used, they each often perform a single task, whether it’s an email server, a web server, an application server, a database server, etc. Such servers frequently have a resource utilization rate of less than 20%, which means that 80% of the server’s processing power is thrown away.

Enter cloud computing. When done correctly, cloud computing increases the utilization rate of each individual server and turns the formerly thrown away processing power into a gold mine. This allows for more efficient capital investments and a higher return on assets.
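To put rough, purely illustrative numbers on that utilization argument:

```python
# Illustrative utilization math: ten single-purpose dedicated servers at 20%
# average utilization carry about two servers' worth of real work.
dedicated_servers = 10
avg_utilization = 0.20

useful_capacity = dedicated_servers * avg_utilization      # 2.0 servers' worth of load
idle_capacity = dedicated_servers * (1 - avg_utilization)  # 8.0 servers' worth thrown away
consolidated_hosts = 3                                     # consolidate, leaving headroom

print(f"Useful work: {useful_capacity:.1f} servers' worth")
print(f"Thrown away: {idle_capacity:.1f} servers' worth")
print(f"Same load on {consolidated_hosts} hosts: "
      f"{useful_capacity / consolidated_hosts:.0%} average utilization each")
```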

So what’s the common thread between fajitas, chicken wings, and cloud computing? You’ve probably already figured it out. All three have taken something that previously was almost worthless and thrown away and turned it into something valuable and highly demanded by boosting utilization.

SoftLayer plans to take this to another level later this year when we release BYOC – Build Your Own Cloud™. You’ll then be able to tailor your processing power to exactly what you need. Just select the amount of RAM, the number of processors, the storage space and an operating system, choose hourly or monthly billing, and go. You don’t pay for resources you don’t need or use, and we have less unused processing capacity in our datacenters. It’s a win-win for our customers, our company and the environment, since power and real estate will be used more efficiently.

February 25, 2010

When things get hectic, Cloud computing to the rescue!

Nothing’s worse than trying to use someone’s website when you absolutely need information right now and it’s unavailable. Last semester, when I was attempting to figure out where the heck my classes were located, the school’s website was crippled by the influx of new freshmen who were trying to do the same. Imagine over 20,000 people trying to access the site at the same time; because of this, the site is rendered practically useless.

We’ve had customers face all sorts of hardships with their sites. Whether they’re featured on a popular TV show or they’ve seen an unprecedented rise in traffic due to such sites as www.digg.com and www.Slashdot.org (commonly referred to as the Digg effect, or being ‘slashdotted’ <http://en.wikipedia.org/wiki/Slashdot_effect>), it’s often difficult to get a new dedicated server online quickly enough to mitigate this effect. Imagine that instead of tens of thousands of college students, it’s tens of thousands of dollars! Quite the predicament, right?

Not a problem, though! Cloud computing to the rescue! CloudLayer computing instances can be rapidly deployed to provide additional resources should they be required. Even better, if you only anticipate a short burst, you can grab a few, use them while they’re needed, and then toss them, all while being billed only by the hour! With cloud computing, administrators can quickly react to changing situations. We offer several solutions in our bag of tricks, including Dedicated, Bare Metal Cloud, and CloudLayer computing. With proper planning and deployment, your site can be profitable regardless of the situation, whether it’s a popular product, a blog or the first day of college.
