Cloud Posts

September 30, 2013

The Economics of Cloud Computing: If It Seems Too Good to Be True, It Probably Is

One of the hosts of a popular Sirius XM radio talk show was recently in the market to lease a car, and a few weeks ago, he shared an interesting story. In his research, he came across an offer that seemed "too good to be true": Lease a new Nissan Sentra with no money due at signing on a 24-month lease for $59 per month. The car would be as "base" as a base model could be, but a reliable car that can be driven safely from Point A to Point B doesn't need fancy "upgrades" like power windows or an automatic transmission. Is it possible to lease a new car for zero down and $59 per month? What's the catch?

After sifting through all of the paperwork, the host admitted the offer was technically legitimate: He could lease a new Nissan Sentra for $0 down and $59 per month for two years. Unfortunately, he also found that "lease" is just about the extent of what he could do with it for $59 per month. The fine print revealed that the yearly mileage allowance was 0 (zero) — he'd pay a significant per-mile rate for every mile he drove the car.

Let's say the mileage on the Sentra was charged at $0.15 per mile and that the car would be driven a very conservative 5,000 miles per year. At the end of the two-year lease, the 10,000 miles on the car would amount to a $1,500 mileage charge. Breaking that cost out across the 24 months of the lease, the effective monthly payment would be around $121, roughly twice the $59/mo advertised lease price. Even for a car that would be used sparingly, the numbers didn't add up, so the host wound up leasing a nicer car (that included a non-zero mileage allowance) for the same monthly cost.

The "zero-down, $59/mo" Sentra lease would be a fantastic deal for a person who wants the peace of mind of having a car available for emergency situations only, but for drivers who put the national average of 15,000 miles per year, the economic benefit of such a low lease rate is completely nullified by the mileage cost. If you were in the market to lease a new car, would you choose that Sentra deal?

At this point, you might be wondering why this story found its way onto the SoftLayer Blog, and if that's the case, you haven't spotted the connection yet: Most cloud computing providers sell cloud servers like that car lease.

The "on demand" and "pay for what you use" aspects of cloud computing make it easy for providers to offer cloud servers exclusively as short-term utilities: "Use this cloud server for a couple of days (or hours) and return it to us. We'll just charge you for what you use." From a buyer's perspective, this approach is easy to justify because it limits the possibility of excess capacity — paying for something you're not using. While that structure is effective (and inexpensive) for customers who sporadically spin up virtual server instances and turn them down quickly, for the average customer looking to host a website or application that won't be turned off in a given month, it's a different story.

Instead of discussing the costs in theoretical terms, let's look at a real world example: One of our competitors offers an entry-level Linux cloud server for just over $15 per month (based on a 730-hour month). When you compare that offer to SoftLayer's least expensive monthly virtual server instance (@ $50/mo), you might think, "OMG! SoftLayer is more than three times as expensive!"

But then you remember that you actually want to use your server.

You see, like the "zero down, $59/mo" car lease that doesn't include any mileage, the $15/mo cloud server doesn't include any bandwidth. As soon as you "drive your server off the lot" and start using it, that "fantastic" rate starts becoming less and less fantastic. In this case, outbound bandwidth for this competitor's cloud server starts at $0.12/GB and is applied to the server's first outbound gigabyte (and every subsequent gigabyte in that month). If your server sends 300GB of data outbound every month, you pay $36 in bandwidth charges (for a combined monthly total of $51). If your server uses 1TB of outbound bandwidth in a given month, you end up paying $135 for that "$15/mo" server.
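
To make that math easy to check, here's a quick back-of-the-envelope sketch in Python using the rates quoted in this post (the competitor's $15/mo base rate and $0.12/GB outbound charge, plus SoftLayer's $50/mo instance with 1TB included and the $0.10/GB overage rate noted in the footnote below). The billing rules are simplified for illustration:

    # Back-of-the-envelope monthly cost comparison using the rates in this post.
    COMPETITOR_BASE = 15.00       # $/mo for the entry-level cloud server
    COMPETITOR_PER_GB = 0.12      # $/GB, applied to every outbound gigabyte
    SOFTLAYER_BASE = 50.00        # $/mo, includes 1TB (1,000GB) outbound
    SOFTLAYER_INCLUDED_GB = 1000
    SOFTLAYER_OVERAGE_PER_GB = 0.10

    def competitor_monthly(outbound_gb):
        return COMPETITOR_BASE + COMPETITOR_PER_GB * outbound_gb

    def softlayer_monthly(outbound_gb):
        overage_gb = max(0, outbound_gb - SOFTLAYER_INCLUDED_GB)
        return SOFTLAYER_BASE + SOFTLAYER_OVERAGE_PER_GB * overage_gb

    for gb in (0, 300, 1000, 1300):
        print(f"{gb:>5}GB out: competitor ${competitor_monthly(gb):6.2f}"
              f" | SoftLayer ${softlayer_monthly(gb):6.2f}")

Run those numbers and you'll see the crossover arrives quickly: At 300GB of outbound traffic, the two offerings are already at price parity, and every gigabyte beyond that widens the gap.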

Cloud servers at SoftLayer are designed to be "driven." Every monthly virtual server instance from SoftLayer includes 1TB of outbound bandwidth at no additional cost, so if your cloud server sends 1TB of outbound bandwidth, your total charge for the month is $50. The "$15/mo v. $50/mo" comparison becomes "$135/mo v. $50/mo" when we realize that these cloud servers don't just sit in the garage. This illustration shows how the costs compare between the two offerings with monthly bandwidth usage up to 1.3TB*:

Cloud Cost v Bandwidth

*The graphic extends to 1.3TB to show how SoftLayer's $0.10/GB charge for bandwidth over the initial 1TB allotment compares with the competitor's $0.12/GB charge.

Most cloud hosting providers sell these "zero down, $59/mo car leases" and encourage you to window-shop for the lowest monthly price based on the number of cores, RAM and disk space. You find the lowest price and mentally justify the cost-per-GB bandwidth charge you receive at the end of the month because you know that you're getting value from the traffic that used that bandwidth. But you'd be better off getting a more powerful server that includes a bandwidth allotment.

As a buyer, it's important that you make your buying decisions based on your specific use case. Are you going to spin up and spin down instances throughout the month, or are you looking for a cloud server that is going to stay online the entire month? From there, you should estimate your bandwidth usage to get an idea of the actual monthly cost you can expect for a given cloud server. If you don't expect to use 300GB of outbound bandwidth in a given month, your usage might be best suited for that competitor's offering. But then again, it's probably worth mentioning that SoftLayer's base virtual server instance has twice the RAM, more disk space and higher-throughput network connections than the competitor's offering we compared against. Oh yeah, and all those other cloud differentiators.

-@khazard

July 29, 2013

A Brief History of Cloud Computing

Believe it or not, "cloud computing" concepts date back to the 1950s, when large-scale mainframes were made available to schools and corporations. The mainframe's colossal hardware infrastructure was installed in what could literally be called a "server room" (since the room would generally only be able to hold a single mainframe), and multiple users were able to access the mainframe via "dumb terminals" – stations whose sole function was to facilitate access to the mainframe. Due to the cost of buying and maintaining mainframes, an organization wouldn't be able to afford a mainframe for each user, so it became common practice to allow multiple users to share access to the same data storage layer and CPU power from any station. By enabling shared mainframe access, an organization would get a better return on its investment in this sophisticated piece of technology.

Mainframe Computer

A couple of decades later, in the 1970s, IBM released an operating system called VM that allowed admins on its System/370 mainframe systems to run multiple virtual systems, or "Virtual Machines" (VMs), on a single physical node. The VM operating system took the 1950s application of shared mainframe access to the next level by allowing multiple distinct compute environments to live in the same physical environment. Most of the basic functions of any virtualization software you see nowadays can be traced back to this early VM OS: Every VM could run custom or guest operating systems that had their "own" memory, CPU and hard drives along with CD-ROMs, keyboards and networking, despite the fact that all of those resources would be shared. "Virtualization" became a technology driver, and it became a huge catalyst for some of the biggest evolutions in communications and computing.

Mainframe Computer

In the 1990s, telecommunications companies that had historically only offered single dedicated point-to-point data connections started offering virtualized private network connections with the same service quality as their dedicated services at a reduced cost. Rather than building out physical infrastructure to allow for more users to have their own connections, telcos were able to provide users with shared access to the same physical infrastructure. This change allowed the telcos to shift traffic as necessary to achieve better network balance and more control over bandwidth usage. Meanwhile, virtualization for PC-based systems started in earnest, and as the Internet became more accessible, the next logical step was to take virtualization online.

If you were in the market to buy servers ten or twenty years ago, you know that the costs of physical hardware, while not at the same level as the mainframes of the 1950s, were pretty outrageous. As more and more people expressed demand to get online, the costs had to come out of the stratosphere, and one of the ways that was made possible was by ... you guessed it ... virtualization. Servers were virtualized into shared hosting environments, Virtual Private Servers and Virtual Dedicated Servers using the same types of functionality provided by the VM OS in the 1970s. As an example of what that looked like in practice, let's say your company required 13 physical systems to run its sites and applications. With virtualization, those 13 distinct systems could be split up between two physical nodes. Obviously, this kind of environment saves on infrastructure costs and minimizes the amount of actual hardware needed to meet your company's needs.

Virtualization

As the costs of server hardware slowly came down, more users were able to purchase their own dedicated servers, and they started running into a different kind of problem: One server isn't enough to provide the resources I need. The market shifted from a belief that "these servers are expensive, let's split them up" to "these servers are cheap, let's figure out how to combine them." Because of that shift, the most basic understanding of "cloud computing" was born online. By installing and configuring a piece of software called a hypervisor across multiple physical nodes, a system could present all of the environment's resources as though they were in a single physical node. To help visualize that environment, technologists used terms like "utility computing" and "cloud computing," since the sum of the parts seemed to become a nebulous blob of computing resources that you could then segment out as needed (like telcos did in the 90s). In these cloud computing environments, it became easy to add resources to the "cloud": Just add another server to the rack and configure it to become part of the bigger system.

Clouds

As technologies and hypervisors got better at reliably sharing and delivering resources, many enterprising companies decided to start carving up the bigger environment to bring the cloud's benefits to users who don't happen to have an abundance of physical servers available to create their own cloud computing infrastructure. Those users could order "cloud computing instances" (also known as "cloud servers") by requesting the resources they need from the larger pool of available cloud resources, and because the servers are already online, the process of "powering up" a new instance or server is almost instantaneous. Because little overhead is involved for the owner of the cloud computing environment when a new instance is ordered or cancelled (since it's all handled by the cloud's software), management of the environment is much easier. Most companies today operate with this idea of "the cloud" as the current definition, but SoftLayer isn't "most companies."

SoftLayer took the idea of a cloud computing environment and pulled it back one more step: Instead of installing software on a cluster of machines to allow for users to grab pieces, we built a platform that could automate all of the manual aspects of bringing a server online without a hypervisor on the server. We call this platform "IMS." What hypervisors and virtualization do for a group of servers, IMS does for an entire data center. As a result, you can order a bare metal server with all of the resources you need and without any unnecessary software installed, and that server will be delivered to you in a matter of hours. Without a hypervisor layer between your operating system and the bare metal hardware, your servers perform better. Because we automate almost everything in our data centers, you're able to spin up load balancers and firewalls and storage devices on demand and turn them off when you're done with them. Other providers have cloud-enabled servers. We have cloud-enabled data centers.

SoftLayer Pod

IBM and SoftLayer are leading the drive toward wider adoption of innovative cloud services, and we have ambitious goals for the future. If you think we've come a long way from the mainframes of the 1950s, you ain't seen nothin' yet.

-James

April 30, 2013

Big Data at SoftLayer: Riak

Big data is only getting bigger. Late last year, SoftLayer teamed up with 10Gen to launch a high-performance MongoDB solution, and since then, many of our customers have been clamoring for us to support other big data platforms in the same way. By automating the provisioning process of a complex big data environment on bare metal infrastructure, we made life a lot easier for developers who demanded performance and on-demand scalability for their big data applications, and it's clear that our simple formula produced amazing results. As Marc mentioned when he started breaking down big data database models, document-oriented databases like MongoDB are phenomenal for certain use-cases, and in other situations, a key-value store might be a better fit. With that in mind, we called up our friends at Basho and started building a high-performance architecture specifically for Riak ... And I'm excited to announce that we're launching it today!

Riak is an open source, distributed database platform based on the principles enumerated in Amazon's Dynamo paper. It uses a simple key/value model for object storage, and it was architected for high availability, fault tolerance, operational simplicity and scalability. A Riak cluster is composed of multiple nodes that are all connected, all communicating and sharing data automatically. If one node were to fail, the other nodes would automatically share the data that the failed node was storing and processing until the node is back up and running or a new node is added. See the diagram below for a simple illustration of how adding a node to a cluster works within Riak.

Riak Nodes
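
To give you a feel for that simple key/value model, here's a minimal sketch using Basho's open source Python client for Riak. The host, port, bucket and key names are illustrative assumptions for a local node, not part of any SoftLayer configuration:

    # Minimal key/value usage with the open source Riak Python client.
    # Host, port, bucket and key below are illustrative assumptions.
    import riak

    client = riak.RiakClient(protocol='pbc', host='127.0.0.1', pb_port=8087)
    bucket = client.bucket('users')

    # Store an object under a key ...
    obj = bucket.new('user:1001', data={'name': 'Ada', 'plan': 'enterprise'})
    obj.store()

    # ... and fetch it back by the same key.
    fetched = bucket.get('user:1001')
    print(fetched.data)    # {'name': 'Ada', 'plan': 'enterprise'}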

We will support both the open source and the Enterprise versions of Riak. The open source version is a great place to start. It has all of the database functionality of Riak Enterprise, but it is limited to a single cluster. The Enterprise version supports replication between clusters across data centers, giving you lots of architectural options. You can use replication to build highly available, live-live failover applications. You can also use it to distribute your application's data across regions, giving you a global platform that you can update anywhere in the world and know that those modifications will be available anywhere else. Riak Enterprise customers also receive 24×7 coverage, both from SoftLayer and Basho. This includes SoftLayer's one-hour guaranteed response for Severity 1 hardware issues and unlimited support available via our secure web portal, email and phone.

The business case for this flexibility is that nodes can be added or removed easily as your requirements change. You can opt for a single-data center environment with a few nodes, or you can broaden your architecture to a multi-data center deployment with a 40-node cluster. While these capabilities are inherent in Riak, they can be complicated to build and configure, so we spent countless hours working with Basho to streamline Riak deployment on the SoftLayer platform. The fruit of that labor can be found in our Riak Solution Designer:

Riak Solution Designer

The server configurations and packages in the Riak Solution Designer have been selected to deliver the performance, availability and stability that our customers expect from their bare metal and virtual cloud infrastructure at SoftLayer. With a few quick clicks, you can order a fully configured Riak environment, and it'll be provisioned and online for you in two to four hours. And everything you order is on a month-to-month contract.

Thanks to the hard work done by the SoftLayer development group and Basho's team, we're proud to be the first in the marketplace to offer a turn-key Riak solution on bare metal infrastructure. You don't need to sacrifice performance and agility for simplicity.

For more information, visit SoftLayer.com/Riak or contact our sales team.

-Duke

December 31, 2012

FatCloud: Tech Partner Spotlight

We invite each of our featured SoftLayer Tech Marketplace Partners to contribute a guest post to the SoftLayer Blog, and this week, we're happy to welcome Ian Miller, CEO of FatCloud. FatCloud is a cloud-enabled application platform that allows enterprises to build, deploy and manage next-generation .NET applications.

'The Cloud' and Agility

As the CEO of a cloud-enabled application platform for the .NET community, I get the same basic question all the time: "What is the cloud?" I'm a consumer of cloud services and a supplier of software that helps customers take advantage of the cloud, so my answer to that question has evolved over the years, and I've come to realize that the cloud is fundamentally about agility. The growth, evolution and adoption of cloud technology have been fueled by businesses that don't want to worry about infrastructure and need to pivot or scale quickly as their needs change.

Because FatCloud is a consumer of cloud infrastructure from SoftLayer, we are much more nimble than we'd be if we had to worry about building data centers, provisioning hardware, patching software and doing all the other time-consuming tasks that are involved in managing a server farm. My team can focus on building innovative software with confidence that the infrastructure will be ready for us on-demand when we need it. That peace of mind also happens to be one of the biggest reasons developers turn to FatCloud ... They don't want to worry about configuring the fundamental components of the platform under their applications.

Fat Cloud

Our customers trust FatCloud's software platform to help them build and scale their .NET applications more efficiently. To do this, we provide a Core Foundation of .NET WCF services that effectively provides the "plumbing" for .NET cloud computing, and we offer premium features like a distributed NoSQL database, work queue, file storage/management system, content caching and an easy-to-use administration tool that simplifies managing the cloud for our customers. FatCloud makes developing for hundreds of servers as easy as developing for one, and to prove it, we offer a free 3-node developer edition so that potential customers can see for themselves.

FatCloud Offering

The agility of the cloud has the clearest value for a company like ours. In one heavy-duty testing month, we needed 75 additional servers online, and after that testing was over, we needed the elasticity to scale that infrastructure back down. We're able to adjust our server footprint as we balance our computing needs and work within budget constraints. Ten years ago, that would have been overwhelmingly expensive (if not impossible). Today, we're able to do it economically and in real-time. SoftLayer is helping keep FatCloud agile, and FatCloud passes that agility on to our customers.

Companies developing custom software for the cloud, mobile or web using .NET want a reliable foundation to build from, and they want to be able to bring their applications to market faster. With FatCloud, those developers can complete their projects in about half the time it would take them if they were to develop conventionally, and that speed can be a huge competitive differentiator.

The expensive "scale up" approach of buying and upgrading powerful machines for something like SQL Server is out-of-date now. The new kid in town is the "scale out" approach of using low-cost servers to expand infrastructure horizontally. You'll never run into those "scale up" hardware limitations, and you can build a dynamic, scalable and elastic application much more economically. You can be agile.

If you have questions about how FatCloud and SoftLayer make cloud-enabled .NET development easier, send us an email: sales@fatcloud.com. Our team is always happy to share the easy (and free) steps you can take to start taking advantage of the agility the cloud provides.

-Ian Miller, CEO of FatCloud

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace. These partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New partners will be added to the Marketplace each month, so stay tuned for many more to come.

October 8, 2012

Don't Let Your Success Bring You Down

Last week, I got an email from a huge technology conference about their new website, exciting new speaker lineup and the availability of early-bird tickets. I clicked on a link from that email, and I found that their fancy new website was down. After giving up on getting my early-bird discount, I surfed over to Facebook, and I noticed a post from one of my favorite blogs, Dutch Cowboys, about another company's interesting new product release. I clicked the link to check out the product, and THAT site was down, too. It's painfully common for some of the world's most popular sites and applications to buckle under the strain of their own success ... Just think back to when Diablo III was launched: Demand crushed their servers on release day, and the gamers who waited patiently to get online with their copy turned to the world of social media to express their visceral anger about not being able to play the game.

The question everyone asks is why this kind of thing still happens. To a certain extent, the reality is that most entrepreneurs don't know what they don't know. I spoke with a woman who was going to be featured on BBC's Dragons' Den, and she said that the traffic from the show's viewers crippled most (if not all) of the businesses that were presented on the program. She needed to safeguard against that happening to her site, and she didn't know how to do that.

Fortunately, it's pretty easy to keep sites and applications online with on-demand infrastructure and auto-scaling tools. Unfortunately, most business owners don't know how easy it is, so they don't take advantage of the resources available to them. Preparing a website, game or application for its own success doesn't have to be expensive or time consuming. With pay-for-what-you-use pricing and "off the shelf" cloud management solutions, traffic-caused outages do NOT have to happen.

First impressions are extremely valuable, and if I weren't really interested in that conference or the new product Dutch Cowboys blogged about, I'd probably never go back to those sites. Most Internet visitors would not. I cringe to think about the potential customers lost.

Businesses spend a lot of time and energy on user experience and design, but they don't think to devote the same level of energy to their infrastructure. In the '90s, sites crashing or slowing was somewhat acceptable since the interwebs were exploding beyond the available infrastructure's capabilities. Now, there's no excuse.

If you're launching a new site, product or application, how do you get started?

The first thing you need to do is understand what resources you need and where the potential bottlenecks are when hundreds, thousands or even millions of people want to see what you're launching. You don't need to invest in infrastructure to accommodate all of that traffic, but you need to know how you can add that infrastructure when you need it.

One of the easiest ways to prepare for your own success without getting bogged down by the bits and bytes is to take advantage of resources from some of our technology partners (and friends). If you have a PHP, Ruby on Rails or Node.js application, Engine Yard will help you deploy and manage a specialized hosting environment. When you need a little more flexibility, RightScale's cloud management product lets you easily manage your environment in "a single integrated solution for extreme efficiency, speed and control." If your biggest concern is your database's performance and scalability, Cloudant has an excellent cloud database management service.

Invest a little time in getting ready for your success, and you won't need to play catch-up when that success comes to you. Given how easy it is to prepare and protect your hosting environment these days, outages should go the way of the 8-track player.

-@jpwisler

October 2, 2012

A Catalyst for Success: MODX Cloud

SoftLayer has a passion for social media, online gaming and mobile application developers. We were in "startup mode" just a few years ago, so we know how much work it takes to transform ideas into a commercially viable enterprise, and we want to be the platform on which all of those passionate people build their business. To that end, we set out to find ways we could help the next generation of web-savvy entrepreneurs and digital pioneers.

About a year ago, we kicked off a huge effort to give back to the startup community. We jumped headfirst into the world of startups, incubators, accelerators, angel investors, venture capitalists and private equity firms. This was our new ecosystem. We started to make connections with the likes of TechStars and MassChallenge, and we quickly became a preferred hosting environment for their participants' most promising and ambitious ideas. This ambitious undertaking evolved into our Catalyst Program.

When it came to getting involved, we knew we could give back from an infrastructure perspective. We decided to extend a $1,000/mo hosting credit to each Catalyst company for one full year, and the response was phenomenal. That was just the beginning, though. Beyond the servers, storage and networking, we wanted to be a resource to the entrepreneurs and developers who could learn from our experience, so we committed to mentoring and making ourselves available to answer any and all questions. That's not just lip service ... We pledged access to our entire executive team, and we made engineering resources available for problem-solving technical challenges. We're in a position to broker introductions and provide office space, so we didn't want to pass up that opportunity.

One of the superstars and soon-to-be graduates of Catalyst is MODX, and they have an incredible story. MODX has become a leading web content management platform (#4 open source PHP CMS globally) by providing designers, developers, content creators and Unix nerds with all the tools they need to manage, build, protect and scale a web site.

Back in December 2011, the MODX team entered the program as a small company coming out of the open source world, trying to figure out how to monetize and come up with a viable commercial offering. Just over 10 months later, the company has grown to 14+ employees with a new flagship product ready to launch later this month: MODX Cloud. This new cloud-hosting platform, built on SoftLayer's infrastructure, levels the playing field by allowing users to scale and reach everyone with just a few clicks of a mouse, without needing to worry about IT administration or back-end servers. Everything associated with managing a web site is fully automated with single-click functionality, so designers and small agencies can compete globally.

MODX Cloud

We're proud of what the MODX team has accomplished in such a short period of time, and I would like to think that SoftLayer played a significant role in getting them there. The MODX tag line is "Creative Freedom," and that might be why they were drawn to the Catalyst Program. We want to "liberate" entrepreneurs from distractions and allow them to focus on developing their products – you know, the part of the business that they are most passionate about.

I can't wait to see what comes out of Catalyst next ... We're always looking to recruit innovative, passionate and creative startups who'd love to have SoftLayer as a partner, so if you have a business that fits the bill, let us help!

-@gkdog

September 24, 2012

Cloud Computing is not a 'Thing' ... It's a way of Doing Things.

I like to think that we are beyond 'defining' cloud, but what I find in reality is that we still argue over basics. I have conversations in which people still delineate things like "hosting" from "cloud computing" based on degrees of single-tenancy. Now I'm a stickler for definitions just like the next pedantic software-religious guy, but when it comes to arguing minutiae about cloud computing, it's easy to lose the forest for the trees. Instead of discussing underlying infrastructure and comparing hypervisors, we'll look at two well-cited definitions of cloud computing that may help us unify our understanding of the model.

I use the word "model" intentionally there because it's important to note that cloud computing is not a "thing" or a "product." It's a way of doing business. It's an operations model that is changing the fundamental economics of writing and deploying software applications. It's not about a strict definition of some underlying service provider architecture or whether multi-tenancy is at the data center edge, the server or the core. It's about enabling new technology to be tested and fail or succeed in blazing calendar time and being able to support super-fast growth and scale with little planning. Let's try to keep that in mind as we look at how NIST and Gartner define cloud computing.

The National Institute of Standards and Technology (NIST) is a government organization that develops standards, guidelines and minimum requirements as needed by industry or government programs. Given the confusion in the marketplace, there's a huge "need" for a simple, consistent definition of cloud computing, so NIST had a pretty high profile topic on its hands. Their resulting Cloud Computing Definition describes five essential characteristics of cloud computing, three service models, and four deployment models. Let's table the service models and deployment models for now and look at the five essential characteristics of cloud computing. I'll summarize them here; follow the link if you want more context or detail on these points:

  • On-Demand Self Service: A user can automatically provision compute without human interaction.
  • Broad Network Access: Capabilities are available over the network.
  • Resource Pooling: Computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned.
  • Rapid Elasticity: Capabilities can be elastically provisioned and released.
  • Measured Service: Resource usage can be monitored, controlled and reported.

The characteristics NIST uses to define cloud computing are pretty straightforward, but they are still a little ambiguous: How quickly does an environment have to be provisioned for it to be considered "on-demand?" If "broad network access" could just mean "connected to the Internet," why include that as a characteristic? When it comes to "measured service," how granular does the resource monitoring and control need to be for something to be considered "cloud computing?" A year? A minute? These characteristics cast a broad net, and we can build on that foundation as we set out to create a more focused definition.

For our next stop, let's look at Gartner's view: "A style of computing in which scalable and elastic IT-enabled capabilities are delivered as a service using Internet infrastructure." From a philosophical perspective, I love their use of "style" when talking about cloud computing. Little differentiates the underlying IT capabilities of cloud computing from other types of computing, so when looking at cloud computing, we really just see a variation on how those capabilities are being leveraged. It's important to note that Gartner's definition includes "elastic" alongside "scalable" ... Cloud computing gets the most press for being able to scale remarkably, but the flip-side of that expansion is that it also needs to contract on-demand.

All of this describes a way of deploying compute power that is completely different from the way we've done it in the decades that we've been writing software. It used to take months to get funding and order the hardware to deploy an application. That's a lot of time and risk that startups and enterprises alike can erase from their business plans.

How do we wrap all of those characteristics up into a unified definition of cloud computing? The way I look at it, cloud computing is an operations model that yields seemingly unlimited compute power when you need it. It enables (scalable and elastic) capacity as you need it, and that capacity's pricing is based on consumption. That doesn't mean a provider should charge by the compute cycle, generator fan RPM or some other arcane measurement of usage ... It means that a customer should understand the resources that are being invoiced, and he/she should have the power to change those resources as needed. A cloud computing environment has to have self-service provisioning that doesn't require manual intervention from the provider, and I'd even push that requirement a little further: A cloud computing environment should have API accessibility so a customer doesn't even have to manually intervene in the provisioning process (the customer's app could use automated logic and API calls to scale infrastructure up or down based on resource usage).
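
To make that last point concrete, here's a rough sketch of what that automated logic could look like in Python. The client object and its methods are hypothetical stand-ins for whatever API your provider exposes (not a real SoftLayer interface), and the thresholds are arbitrary:

    # Hypothetical autoscaling loop. The client and its methods are
    # illustrative stand-ins for a provider's API, not a real interface.
    import time

    SCALE_UP_AT = 0.80      # add capacity above 80% average CPU
    SCALE_DOWN_AT = 0.20    # release capacity below 20% average CPU
    CHECK_INTERVAL = 60     # seconds between checks

    def autoscale(client, group):
        while True:
            usage = client.cpu_utilization(group)       # "measured service"
            if usage > SCALE_UP_AT:
                client.add_instance(group)              # elastic expansion
            elif usage < SCALE_DOWN_AT and client.instance_count(group) > 1:
                client.remove_instance(group)           # elastic contraction
            time.sleep(CHECK_INTERVAL)

With logic like that in place, capacity tracks demand without a human in the loop, which is exactly the "rapid elasticity" and "measured service" NIST describes.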

I had the opportunity to speak at Cloud Connect Chicago, and I shared SoftLayer's approach to cloud computing and how it has evolved into a few distinct products that speak directly to our customers' needs:

The session was about 45 minutes, so the video above has been slimmed down a bit for easier consumption. If you're interested in seeing the full session and getting into a little more detail, we've uploaded an uncut version here.

-Duke

August 17, 2012

SoftLayer Private Clouds - Provisioning Speed

SoftLayer Private Clouds are officially live, and that means you can now order and provision your very own private cloud infrastructure on Citrix CloudPlatform quickly and easily. Chief Scientist Nathan Day introduced private clouds on the blog when the offering was announced at Cloud Expo East, and CTO Duke Skarda followed up with an explanation of the architecture powering SoftLayer Private Clouds. The most amazing claim: You can order a private cloud infrastructure and spin up its first virtual machines in a matter of hours rather than days, weeks or months.

If you've ever looked at building your own private cloud in the past, the "days, weeks or months" timeline isn't very surprising — you have to get the hardware provisioned, the software installed and the network configured ... and it all has to work together. Hearing that SoftLayer Private Clouds can be provisioned in "hours" probably seems too good to be true to administrators who have tried building a private cloud in the past, so I thought I'd put it to the test by ordering a private cloud and documenting the experience.

At 9:30am, I walked over to Phil Jackson's desk and asked him if he would be interested in helping me out with the project. By 9:35am, I had him convinced (proof), and the clock was started.

When we started the order process, part of our work was already done for us:

SoftLayer Private Clouds

To guarantee peak performance of the CloudPlatform management server, SoftLayer selected the hardware for us: A single processor quad core Xeon 5620 server with 6GB RAM, GigE, and two 2.0TB SATA II HDDs in RAID1. With the management server selected, our only task was choosing our host server and where we wanted the first zone (host server and management server) to be installed:

SoftLayer Private Clouds

For our host server, we opted for a dual processor quad core Xeon 5504 with the default specs, and we decided to spin it up in DAL05. We added (and justified) a block of 16 secondary IP addresses for our first zone, and we submitted the order. The time: 9:38am.

At this point, it would be easy for us to game the system to shave off a few minutes from the provisioning process by manually approving the order we just placed (since we have access to the order queue), but we stayed true to the experiment and let it be approved as it normally would be. We didn't have to wait long:

SoftLayer Private Clouds

At 9:42am, our order was approved, and the pressure was on. How long would it take before we were able to log into the CloudStack portal to create a virtual machine? I'd walked over to Phil's desk 12 minutes ago, and we still had to get two physical servers online and configured to work with each other on CloudPlatform. Luckily, the automated provisioning process took on the brunt of that pressure.

Both server orders were sent to the data center, and the provisioning system selected two pieces of hardware that best matched what we needed. Our exact configurations weren't available, so an SBT (server build technician) in the data center was dispatched to make the appropriate hardware changes to meet our needs, and the automated system kicked into high gear. IP addresses were assigned to the management and host servers, and we were able to monitor each server's progress in the customer portal. The hardware was tested and prepared for OS install, and when it was ready, the base operating systems were loaded — CentOS 6 on the management server and Citrix XenServer 6 on the host server. After CentOS 6 finished provisioning on the management server, CloudStack was installed. Then we got an email:

SoftLayer Private Clouds

At 11:24am, less than two hours from when I walked over to Phil's desk, we had two servers online and configured with CloudStack, and we were ready to provision our first virtual machines in our private cloud environment.

We logged into CloudStack and added our first instance:

SoftLayer Private Clouds

We configured our new instance in a few clicks, and we clicked "Launch VM" at 11:38am. It came online in just over 3 minutes (11:42am):

SoftLayer Private Clouds

I got from "walking to Phil's desk" to having a multi-server private cloud infrastructure running a VM in exactly two hours and twelve minutes. For fun, I created a second VM on the host server, and it was provisioned in 31.7 seconds. It's safe to say that the claim that SoftLayer takes "hours" to provision a private cloud has officially been confirmed, but we thought it would be fun to add one more wrinkle to the system: What if we wanted to add another host server in a different data center?

From the "Hardware" tab in the SoftLayer portal, we selected "Add Zone" to from the "Actions" in the "Private Clouds" section, and we chose a host server with four portable IP addresses in WDC01. The zone was created, and the host server went through the same hardware provisioning process that our initial deployment went through, and our new host server was online in < 2 hours. We jumped into CloudStack, and the new zone was created with our host server ready to provision VMs in Washington, D.C.

Given how quickly the instances were spinning up in the first zone, we timed a few in the second zone ... The first instance was online in about 4 minutes, and the second was running in 26.8 seconds.

SoftLayer Private Clouds

By the time I went out for a late lunch at 1:30pm, we'd spun up a new private cloud infrastructure with geographically dispersed zones that launched new cloud instances in under 30 seconds. Not bad.

Don't take my word for it, though ... Order a SoftLayer Private Cloud and see for yourself.

-@khazard

June 13, 2012

SoftLayer Private Clouds - A Cloud to Call Your Own

Those of us who've been in this industry for years have seen computing evolve pretty significantly, especially recently. We started with dedicated servers running a single operating system, and we were floored by innovations that allowed dedicated servers to run a hypervisor with many operating systems. The next big leap brought virtual machine "cloud" instances into the spotlight ... And the resulting marketing shenanigans have been a blessing and a curse. On the positive side, the approachable "cloud" term is a lot easier to talk about with a nontechnical audience, but on the negative side, we see uninformative TV commercials that leverage cloud as a marketing term, and we see products that further obfuscate what cloud technology actually means:

Cloud Phone?

To make sure we're all on the same page, as we continue to talk about "cloud," our definition is pretty straightforward:

  • It's an operations model.
  • It provides capacity on demand.
  • It offers consumption-based pricing.
  • It features self-service provisioning.
  • It can be accessed and managed via an API.

Understanding those characteristics, when you hear about cloud in the hosting industry, you're usually hearing about cloud computing instances in a public cloud environment. An instance in a public cloud is one of many instances operating on a shared cloud infrastructure alongside other similar instances that aren't managed by you. Your data is still secure, and you can still get good performance in a public cloud environment, but you're not managing the cloud infrastructure on which your instance resides ... You're using a piece of a cloud.

What we announced at Cloud Expo East is the next step in the evolution of technology in our industry ... We're providing a turnkey, on-demand way for our customers to provision their own Private Clouds with Citrix CloudPlatform, powered by Apache CloudStack.

You don't get a piece of the cloud. You have your own cloud, provisioned in a matter of hours on a month-to-month contract.

For those who have looked into building a private cloud for their business in the past, it's probably worth reiterating: With SoftLayer and CloudStack, you can have a geographically distributed, secure, private cloud environment provisioned in a matter of hours (not months). Given the complexity of a private cloud environment — involving a management server, private cloud zones, host servers and object storage — this is no small feat.

SoftLayer Private Clouds

Those unbelievable provisioning times are only part of the story ... When that cloud infrastructure is deployed quickly, it's fully integrated into the SoftLayer platform, so it leverages our global private network alongside your existing bare metal, dedicated and virtual servers. Want to add public cloud instances to your private cloud as web heads? You'll log into one portal or use a singular API to have that done in an instant.

Your own cloud infrastructure, fully integrated into SoftLayer's global infrastructure. If you're chomping at the bit to try it out for yourself, email us at privateclouds@softlayer.com, and we'll get you on the "early access" list.

Before I sign off, I want to be sure to thank everyone at SoftLayer and Citrix who worked so hard to make SoftLayer Private Clouds such an amazing new addition to our platform.

-@nday91

June 6, 2012

Today's Technology "Game Changers": IPv6 and Cloud

"Game Changers" in technology force a decision: Adapt or die. When repeating rifles gained popularity in the late 1800s, a business of manufacturing muzzle-loading or breech-loading rifles would have needed to find a way to produce a repeating rifle or it would have lost most (if not all) of it's business to Winchester. If a fresh-faced independent musician is hitting it big on the coffee shop scene in 2012, she probably won't be selling out arenas any time soon if she refuses to make her music available digitally. Just ask any of the old-timers in the print media industry ... "Game Changers" in technology can be disastrous for an established business in an established industry.

That's pretty intimidating ... Even for tech businesses.

Shifts in technology don't have to be as drastic and obvious as a "printed newspaper v. social news site" comparison for them to be disruptive. Even subtle advances can wind up making or breaking a business. In fact, many of today's biggest and most successful tech companies are scrambling to adapt to two simple "game changers" that might not seem terribly significant at first glance:

  • IPv6
  • "The Cloud"

IPv6

A quick search of the SoftLayer Blog reminds me that Lance first brought up the importance of IPv6 adoption in October 2007:

ARIN has publicly announced the need to shift to IPv6, and numerous articles have outlined the D-Day for IPv4 space. Most experts agree it's coming fast and that it will occur sometime in 2010 at the current pace (that's about two years for those counting). IPv6 brings enough IP space for an infinite number of users along with improved security features and several other operational efficiencies that will make it very popular. The problem lies in getting from IPv4 to IPv6.

When IPv4 exhaustion was just a blip on the horizon, many businesses probably thought, "Oh, I'll get around to it when I need to. It's not a problem yet." When IANA exhausted the IPv4 pool, they probably started picking up the phone and calling providers to ask what plans they had in place. When some of the Internet's biggest websites completed a trial transition to IPv6 on World IPv6 Day last year, those businesses started feeling the urgency. With today's World IPv6 Launch, they know something has to be done.

World IPv6 Launch Day

Regardless of how conservative providers get with IPv4 space, the 4,294,967,296 IPv4 addresses in existence will not last much longer. Soon, users will be accessing an IPv6 Internet, and IPv4-only websites will lose their opportunity to reach those users. That's a "game changer."
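
For a sense of that scale, here's a quick check using Python's standard ipaddress module (the module is stock Python 3; only the formatting choices are mine):

    # Compare the sizes of the IPv4 and IPv6 address spaces (Python 3 stdlib).
    import ipaddress

    v4 = ipaddress.ip_network('0.0.0.0/0')   # all of IPv4
    v6 = ipaddress.ip_network('::/0')        # all of IPv6
    print(f"IPv4: {v4.num_addresses:,} addresses")     # 4,294,967,296
    print(f"IPv6: {v6.num_addresses:.2e} addresses")   # ~3.40e+38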

"The Cloud"

The other "game changer" many tech businesses are struggling with these days is the move toward "the cloud." There are a two interesting perspectives in this transition: 1) The challenge many businesses face when choosing whether to adopt cloud computing, and 2) The challenges for businesses that find themselves severing as an integral (sometimes unintentional) part of "the cloud." You've probably seen hundreds of blog posts and articles about the first, so I'll share a little insight on the second.

When you hear all of the hype about cloud computing and cloud storage offering a hardware-agnostic Utopia of scalable, reliable power, it's easy to forget that the building blocks of a cloud infrastructure usually come from vendors that provide traditional hosting resources. When a computing instance is abstracted from a hardware device, it opens up huge variations in usage. It's possible to have dozens of public cloud instances using a single server's multi-proc, multi-core resources at a given time. If a vendor prices a piece of software on a "per server" basis, how does it define a "server" when its users are in the cloud? It can be argued that a cloud computing instance with a single core of power is a "server," and on the flip-side, it's easy to define a "server" as the hardware object on which many cloud instances may run. I don't know that there's an easy way to answer that question, but what I do know is that applying "what used to work" to "what's happening now" isn't the right answer.

The hardware and software providers in the cloud space who are able to come up with new approaches unencumbered by the urge to continue "the way we've always done it" are going to be the ones that thrive when technology "game changers" emerge, and the providers who dig their heels in the dirt or try to put a square peg into a round hole will get the short end of the "adapt or die" stick.

We've tried to innovate and take a fresh look at every opportunity that has come our way, and we do our best to build relationships with agile companies that we see following suit.

I guess a better way to position the decision at the beginning of this post would be to add a little tweak: "Innovate, adapt or die." How you approach technology "game changers" will define your business's success.

-@gkdog
