Posts Tagged 'Cloud Computing'

September 30, 2013

The Economics of Cloud Computing: If It Seems Too Good to Be True, It Probably Is

One of the hosts of a popular Sirius XM radio talk show was recently in the market to lease a car, and a few weeks ago, he shared an interesting story. In his research, he came across an offer that seemed "too good to be true": Lease a new Nissan Sentra with no money due at signing on a 24-month lease for $59 per month. The car would be as "base" as a base model could be, but a reliable car that can be driven safely from Point A to Point B doesn't need fancy "upgrades" like power windows or an automatic transmission. Is it possible to lease a new car for zero down and $59 per month? What's the catch?

After sifting through all of the paperwork, the host admitted the offer was technically legitimate: He could lease a new Nissan Sentra for $0 down and $59 per month for two years. Unfortunately, he also found that "lease" is just about the extent of what he could do with it for $59 per month. The fine print revealed that the yearly mileage allowance was 0 (zero) — he'd pay a significant per-mile rate for every mile he drove the car.

Let's say the mileage on the Sentra was charged at $0.15 per mile and that the car would be driven a very conservative 5,000 miles per year. At the end of the two-year lease, the 10,000 miles on the car would amount to a $1,500 mileage charge. Breaking that cost out across the 24 months of the lease, the effective monthly payment would be around $121, more than twice the advertised $59/mo lease price. Even for a car that would be used sparingly, the numbers didn't add up, so the host wound up leasing a nicer car (with a non-zero mileage allowance) for the same monthly cost.

The "zero-down, $59/mo" Sentra lease would be a fantastic deal for a person who wants the peace of mind of having a car available for emergency situations only, but for drivers who put the national average of 15,000 miles per year, the economic benefit of such a low lease rate is completely nullified by the mileage cost. If you were in the market to lease a new car, would you choose that Sentra deal?

At this point, you might be wondering why this story found its way onto the SoftLayer Blog. If you are, it's probably because you don't see the connection yet: Most cloud computing providers sell cloud servers like that car lease.

The "on demand" and "pay for what you use" aspects of cloud computing make it easy for providers to offer cloud servers exclusively as short-term utilities: "Use this cloud server for a couple of days (or hours) and return it to us. We'll just charge you for what you use." From a buyer's perspective, this approach is easy to justify because it limits the possibility of excess capacity — paying for something you're not using. While that structure is effective (and inexpensive) for customers who sporadically spin up virtual server instances and turn them down quickly, for the average customer looking to host a website or application that won't be turned off in a given month, it's a different story.

Instead of discussing the costs in theoretical terms, let's look at a real world example: One of our competitors offers an entry-level Linux cloud server for just over $15 per month (based on a 730-hour month). When you compare that offer to SoftLayer's least expensive monthly virtual server instance (@ $50/mo), you might think, "OMG! SoftLayer is more than three times as expensive!"

But then you remember that you actually want to use your server.

You see, like the "zero down, $59/mo" car lease that doesn't include any mileage, the $15/mo cloud server doesn't include any bandwidth. As soon as you "drive your server off the lot" and start using it, that "fantastic" rate starts becoming less and less fantastic. In this case, outbound bandwidth for this competitor's cloud server starts at $0.12/GB and is applied to the server's first outbound gigabyte (and every subsequent gigabyte in that month). If your server sends 300GB of data outbound every month, you pay $36 in bandwidth charges (for a combined monthly total of $51). If your server uses 1TB of outbound bandwidth in a given month, you end up paying $135 for that "$15/mo" server.

Cloud servers at SoftLayer are designed to be "driven." Every monthly virtual server instance from SoftLayer includes 1TB of outbound bandwidth at no additional cost, so if your cloud server sends 1TB of outbound bandwidth, your total charge for the month is $50. The "$15/mo v. $50/mo" comparison becomes "$135/mo v. $50/mo" when we realize that these cloud servers don't just sit in the garage. This illustration shows how the costs compare between the two offerings with monthly bandwidth usage up to 1.3TB*:

Cloud Cost v Bandwidth

*The graphic extends to 1.3TB to show how SoftLayer's $0.10/GB charge for bandwidth over the initial 1TB allotment compares with the competitor's $0.12/GB charge.
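If you'd rather check the math than eyeball the chart, the comparison reduces to a few lines of Python. This is just a sketch of the rates cited above (a $15/mo base with $0.12/GB on all outbound traffic versus a $50/mo base with 1TB included and $0.10/GB after that); both providers' actual invoices may include other line items:

def competitor_cost(outbound_gb):
    # $15/mo base price; every outbound GB is billed at $0.12
    return 15.00 + 0.12 * outbound_gb

def softlayer_cost(outbound_gb):
    # $50/mo base price; the first 1TB (1,000GB) outbound is included,
    # and overage is billed at $0.10/GB
    overage_gb = max(0, outbound_gb - 1000)
    return 50.00 + 0.10 * overage_gb

for gb in (0, 300, 1000, 1300):
    print(gb, competitor_cost(gb), softlayer_cost(gb))

# 0     15.0   50.0
# 300   51.0   50.0
# 1000  135.0  50.0
# 1300  171.0  80.0

The break-even point is (50 - 15) / 0.12, or roughly 292GB of outbound traffic per month. Beyond that, the "$15/mo" server is the more expensive one.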

Most cloud hosting providers sell these "zero down, $59/mo car leases" and encourage you to window-shop for the lowest monthly price based on number of cores, RAM and disk space. You find the lowest price and mentally justify the cost-per-GB bandwidth charge you receive at the end of the month because you know that you're getting value from the traffic that used that bandwidth. But you'd be better off getting a more powerful server that includes a bandwidth allotment.

As a buyer, it's important that you make your buying decisions based on your specific use case. Are you going to spin up and spin down instances throughout the month, or are you looking for a cloud server that is going to stay online the entire month? From there, you should estimate your bandwidth usage to get an idea of the actual monthly cost you can expect for a given cloud server. If you don't expect to use 300GB of outbound bandwidth in a given month, your usage might be best suited for that competitor's offering. But then again, it's probably worth mentioning that SoftLayer's base virtual server instance has twice the RAM, more disk space and higher-throughput network connections than the competitor's offering we compared against. Oh yeah, and all those other cloud differentiators.

-@khazard

July 29, 2013

A Brief History of Cloud Computing

Believe it or not, "cloud computing" concepts date back to the 1950s when large-scale mainframes were made available to schools and corporations. The mainframe's colossal hardware infrastructure was installed in what could literally be called a "server room" (since the room would generally only be able to hold a single mainframe), and multiple users were able to access the mainframe via "dumb terminals" – stations whose sole function was to facilitate access to the mainframes. Due to the cost of buying and maintaining mainframes, an organization couldn't afford a mainframe for each user, so it became common practice to allow multiple users to share access to the same data storage layer and CPU power from any station. By enabling shared mainframe access, an organization would get a better return on its investment in this sophisticated piece of technology.

Mainframe Computer

A couple of decades later, in the 1970s, IBM released an operating system called VM that allowed admins on its System/370 mainframe systems to have multiple virtual systems, or "Virtual Machines" (VMs), on a single physical node. The VM operating system took the 1950s application of shared mainframe access to the next level by allowing multiple distinct compute environments to live in the same physical environment. Most of the basic functions of any virtualization software that you see nowadays can be traced back to this early VM OS: Every VM could run custom or guest operating systems that had their "own" memory, CPU, and hard drives along with CD-ROMs, keyboards and networking, despite the fact that all of those resources would be shared. "Virtualization" became a technology driver, and it became a huge catalyst for some of the biggest evolutions in communications and computing.

Mainframe Computer

In the 1990s, telecommunications companies that had historically only offered single dedicated point-to-point data connections started offering virtualized private network connections with the same service quality as their dedicated services at a reduced cost. Rather than building out physical infrastructure to allow more users to have their own connections, telcos were able to provide users with shared access to the same physical infrastructure. This change allowed the telcos to shift traffic as necessary to achieve better network balance and more control over bandwidth usage. Meanwhile, virtualization for PC-based systems started in earnest, and as the Internet became more accessible, the next logical step was to take virtualization online.

If you were in the market to buy servers ten or twenty years ago, you know that the costs of physical hardware, while not at the same level as the mainframes of the 1950s, were pretty outrageous. As more and more people expressed demand to get online, the costs had to come out of the stratosphere, and one of the ways that was made possible was by ... you guessed it ... virtualization. Servers were virtualized into shared hosting environments, Virtual Private Servers, and Virtual Dedicated Servers using the same types of functionality provided by the VM OS in the 1970s. As an example of what that looked like in practice, let's say your company required 13 physical systems to run your sites and applications. With virtualization, you could take those 13 distinct systems and split them up between two physical nodes. Obviously, this kind of environment saves on infrastructure costs and minimizes the amount of actual hardware you would need to meet your company's needs.

Virtualization

As the costs of server hardware slowly came down, more users were able to purchase their own dedicated servers, and they started running into a different kind of problem: One server isn't enough to provide the resources I need. The market shifted from a belief that "these servers are expensive, let's split them up" to "these servers are cheap, let's figure out how to combine them." Because of that shift, the most basic understanding of "cloud computing" was born online. By installing and configuring a piece of software called a hypervisor across multiple physical nodes, a system could present all of the environment's resources as though they belonged to a single physical node. To help visualize that environment, technologists used terms like "utility computing" and "cloud computing," since the sum of the parts seemed to become a nebulous blob of computing resources that you could then segment out as needed (like telcos did in the 90s). In these cloud computing environments, it became easy to add resources to the "cloud": Just add another server to the rack and configure it to become part of the bigger system.

Clouds

As technologies and hypervisors got better at reliably sharing and delivering resources, many enterprising companies decided to start carving up the bigger environment to make the cloud's benefits available to users who don't happen to have an abundance of physical servers of their own. Those users could order "cloud computing instances" (also known as "cloud servers") by requesting the resources they need from the larger pool of available cloud resources, and because the servers are already online, the process of "powering up" a new instance or server is almost instantaneous. Because little overhead is involved for the owner of the cloud computing environment when a new instance is ordered or cancelled (since it's all handled by the cloud's software), management of the environment is much easier. Most companies today operate with this idea of "the cloud" as the current definition, but SoftLayer isn't "most companies."

SoftLayer took the idea of a cloud computing environment and pulled it back one more step: Instead of installing software on a cluster of machines to allow users to grab pieces of it, we built a platform that could automate all of the manual aspects of bringing a server online without a hypervisor on the server. We call this platform "IMS." What hypervisors and virtualization do for a group of servers, IMS does for an entire data center. As a result, you can order a bare metal server with all of the resources you need and without any unnecessary software installed, and that server will be delivered to you in a matter of hours. Without a hypervisor layer between your operating system and the bare metal hardware, your servers perform better. Because we automate almost everything in our data centers, you're able to spin up load balancers and firewalls and storage devices on demand and turn them off when you're done with them. Other providers have cloud-enabled servers. We have cloud-enabled data centers.

SoftLayer Pod

IBM and SoftLayer are leading the drive toward wider adoption of innovative cloud services, and we have ambitious goals for the future. If you think we've come a long way from the mainframes of the 1950s, you ain't seen nothin' yet.

-James

July 16, 2013

Riak Performance Analysis: Bare Metal v. Virtual

In December, I posted a MongoDB performance analysis that showed the quantitative benefits of using bare metal servers for MongoDB workloads. It should come as no surprise that in the wake of SoftLayer's Riak launch, we've got some similar data to share about running Riak on bare metal.

To run this test, we started by creating five-node clusters with Riak 1.3.1 on SoftLayer bare metal servers and on a popular competitor's public cloud instances. For the SoftLayer environment, we created these clusters using the Riak Solution Designer, so the nodes were all provisioned, configured and clustered for us automatically when we ordered them. For the public cloud virtual instance Riak cluster, each node was provisioned individually using a Riak image template and manually configured into a cluster after all of the nodes had come online. To optimize for Riak performance, I made a few tweaks at the OS level of our servers (running 64-bit CentOS):

noatime
nodiratime
barrier=0
data=writeback
ulimit -n 65536

The common noatime and nodiratime settings eliminate the need for writes during reads, which helps performance and reduces disk wear. The barrier and writeback settings are a little less common and may not be what you'd normally set. Although those settings present a very slight risk of data loss on disk failure, remember that the Riak solution is deployed in five-node rings with data redundantly available across multiple nodes in the ring. With that in mind, and considering that each node is also deployed with a RAID10 storage array, the minor risk of data loss on the failure of a single disk would have no impact on the entire data set (there are plenty of redundant copies of that data available). Given the minor risk involved, the performance increases of those two settings justify their use.
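For reference, here's roughly what those tweaks look like in practice on a CentOS system. This is a sketch that assumes an ext4 data volume on /dev/sda3 mounted at /var/lib/riak; your device, filesystem and mount point will differ:

# /etc/fstab: mount the Riak data volume with the performance options above
/dev/sda3   /var/lib/riak   ext4   noatime,nodiratime,barrier=0,data=writeback   0 0

# /etc/security/limits.conf: raise the open-file limit for the riak user
riak   soft   nofile   65536
riak   hard   nofile   65536

After editing /etc/fstab, the volume has to be remounted (or the server rebooted) for the new mount options to take effect.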

With all of the nodes tweaked and configured into clusters, we set up Basho's test harness — Basho Bench — to remotely simulate load on the deployments. Basho Bench allows you to create a configurable test plan for a Riak cluster by configuring a number of workers to utilize a driver type to generate load. It comes packaged as an Erlang application with a config file example that you can alter to create the specifics for the concurrency, data set size, and duration of your tests. The results can be viewed as CSV data, and there is an optional graphics package that allows you to generate the graphs that I am posting in this blog. A simplified graphic of our test environment would look like this:

Riak Test Environment

The following Basho Bench config is what we used for our testing:

{mode, max}.
{duration, 120}.
{concurrent, 8}.
{driver, basho_bench_driver_riakc_pb}.
{key_generator,{int_to_bin,{uniform_int,1000000}}}.
{value_generator,{exponential_bin,4098,50000}}.
{riakc_pb_ips, [{10,60,68,9},{10,40,117,89},{10,80,64,4},{10,80,64,8},{10,60,68,7}]}.
{riakc_pb_replies, 2}.
{operations, [{get, 10},{put, 1}]}.

To spell it out a little more simply:

Tests Performed

Data Set: 400GB
10:1 Query-to-Update Operations
8 Concurrent Client Connections
Test Duration: 2 Hours

You may notice that in the test cases that use SoftLayer "Medium" servers, the virtual provider nodes are running 26 virtual compute units against our dual-proc, hex-core servers (12 cores total). In testing with Riak, memory is more important to the operations than CPU resources, so we provisioned the virtual instances to align with the 36GB of memory in each of the "Medium" SoftLayer servers. In the public cloud environment, the higher level of RAM was restricted to packages with higher CPU counts, so while the CPU counts differ, the RAM amounts are as close to even as we could make them.

One final "housekeeping" note before we dive into the results: The graphs below are pulled directly from the optional graphics package that displays Basho Bench results. You'll notice that the scale on the left-hand side of graphs differs dramatically between the two environments, so a cursory look at the results might not tell the whole story. Click any of the graphs below for a larger version. At the end of each test case, we'll share a few observations about the operations per second and latency results from each test. When we talk about latency in the "key observation" sections, we'll talk about the 99th percentile line — 99% of the results had latency below this line. More simply you could say, "This is the highest latency we saw on this platform in this test." The primary reason we're focusing on this line is because it's much easier to read on the graphs than the mean/median lines in the bottom graphs.

Riak Test 1: "Small" Bare Metal 5-Node Cluster vs Virtual 5-Node Cluster

Servers

SoftLayer Small Riak Server Node
  • Single 4-core Intel 1270 CPU
  • 64-bit CentOS
  • 8GB RAM
  • 4 x 500GB SATAII – RAID10
  • 1Gb Bonded Network

Virtual Provider Node
  • 4 Virtual Compute Units
  • 64-bit CentOS
  • 7.5GB RAM
  • 4 x 500GB Network Storage – RAID10
  • 1Gb Network

Results

Riak Performance Analysis

Riak Performance Analysis

Key Observations

The SoftLayer environment showed much more consistency in operations per second, with an average throughput around 450 Op/sec. The virtual environment's throughput varied significantly, between about 50 operations per second and more than 600 operations per second, with the trend line fluctuating between about 220 Op/sec and 350 Op/sec.

Comparing the latency of get and put requests, the 99th percentile of results in the SoftLayer environment stayed around 50ms for gets and under 200ms for puts, while the same metric for the virtual environment hovered around 800ms for gets and 4000ms for puts. The scale of the graphs is drastically different, so if you aren't looking closely, you might not see how significantly the performance varies between the two.

Riak Test 2: "Medium" Bare Metal 5-Node Cluster vs Virtual 5-Node Cluster

Servers

SoftLayer Medium Riak Server Node
  • Dual 6-core Intel 5670 CPUs
  • 64-bit CentOS
  • 36GB RAM
  • 4 x 300GB 15K SAS – RAID10
  • 1Gb Network – Bonded

Virtual Provider Node
  • 26 Virtual Compute Units
  • 64-bit CentOS
  • 30GB RAM
  • 4 x 300GB Network Storage
  • 1Gb Network

Results

Riak Performance Analysis

Riak Performance Analysis

Key Observations

Similar to the results of Test 1, the throughput numbers from the bare metal environment are more consistent (and are consistently higher) than the throughput results from the virtual instance environment. The SoftLayer environment performed between 1500 and 1750 operations per second on average while the virtual provider environment averaged around 1200 operations per second throughout the test.

The latency of get and put requests in Test 2 also paints a similar picture to Test 1. The 99th percentile of results in the SoftLayer environment stayed below 50ms for gets and under 400ms for puts, while the same metric for the virtual environment averaged about 250ms for gets and over 1000ms for puts. Latency in a big data application can be a killer, so the results from the virtual provider might be setting off alarm bells in your head.

Riak Test 3: "Medium" Bare Metal 5-Node Cluster vs Virtual 5-Node Cluster

Servers

SoftLayer Medium Riak Server Node
  • Dual 6-core Intel 5670 CPUs
  • 64-bit CentOS
  • 36GB RAM
  • 4 x 128GB SSD – RAID10
  • 1Gb Network – Bonded

Virtual Provider Node
  • 26 Virtual Compute Units
  • 64-bit CentOS
  • 30GB RAM
  • 4 x 300GB Network Storage
  • 1Gb Network

Results

Riak Performance Analysis

Riak Performance Analysis

Key Observations

In Test 3, we're using the same specs for our virtual provider nodes, so the results for the virtual node environment are the same in Test 3 as they are in Test 2. In this test, the SoftLayer environment substitutes SSDs for the 15K SAS drives used in Test 2, and the throughput numbers show the impact of that improved I/O. The average throughput of the bare metal environment with SSDs is between 1750 and 2000 operations per second. Those numbers are slightly higher than the SoftLayer environment in Test 2, further distancing the bare metal results from the virtual provider results.

The latency of gets for the SoftLayer environment is very difficult to see in this graph because the latency was so low throughout the test. The 99th percentile of puts in the SoftLayer environment settled between 500ms and 625ms, which was a little higher than the bare metal results from Test 2 but still well below the latency from the virtual environment.

Summary

The results show that — similar to the majority of data-centric applications we have tested — Riak delivers more consistent, better-performing, lower-latency results when deployed on bare metal instead of on a cluster of public cloud instances. The stark differences in the consistency of the results and in latency are noteworthy for developers looking to host their big data applications. We compared the 99th percentile of latency, but the mean/median results are worth checking out as well. Look at the mean and median results from the SoftLayer SSD node environment: For gets, the mean latency was 2.5ms and the median was somewhere around 1ms. For puts, the mean was between 7.5ms and 11ms and the median was around 5ms. Those kinds of results are almost unbelievable (and that's why I've shared everything involved in completing this test so that you can try it yourself and see that there's no funny business going on).

It's commonly understood that local single-tenant resources like bare metal will always perform better than network storage resources, but putting some concrete numbers on paper shows just how big the difference in performance really is. Virtualizing on multi-tenant solutions with network-attached storage often introduces latency issues, and performance will vary significantly depending on host load. These results may seem obvious, but sometimes the promise of quick and easy deployments on public cloud environments can lure even the sanest and most rational developer. Some applications are suited for public cloud, but big data isn't one of them: When you have data-centric apps that require extreme I/O traffic to your storage medium, nothing beats local high-performance resources.

-Harold

September 24, 2012

Cloud Computing is not a 'Thing' ... It's a way of Doing Things.

I like to think that we are beyond 'defining' cloud, but what I find in reality is that we still argue over basics. I have conversations in which people still delineate things like "hosting" from "cloud computing" based on degrees of single-tenancy. Now I'm a stickler for definitions just like the next pedantic software-religious guy, but when it comes to arguing minutiae about cloud computing, it's easy to lose the forest for the trees. Instead of discussing underlying infrastructure and comparing hypervisors, we'll look at two well-cited definitions of cloud computing that may help us unify our understanding of the model.

I use the word "model" intentionally there because it's important to note that cloud computing is not a "thing" or a "product." It's a way of doing business. It's an operations model that is changing the fundamental economics of writing and deploying software applications. It's not about a strict definition of some underlying service provider architecture or whether multi-tenancy is at the data center edge, the server or the core. It's about enabling new technology to be tested, to fail or succeed, in blazing calendar time, and about being able to support super-fast growth and scale with little planning. Let's try to keep that in mind as we look at how NIST and Gartner define cloud computing.

The National Institute of Standards and Technology (NIST) is a government organization that develops standards, guidelines and minimum requirements as needed by industry or government programs. Given the confusion in the marketplace, there's a huge "need" for a simple, consistent definition of cloud computing, so NIST had a pretty high profile topic on its hands. Their resulting Cloud Computing Definition describes five essential characteristics of cloud computing, three service models, and four deployment models. Let's table the service models and deployment models for now and look at the five essential characteristics of cloud computing. I'll summarize them here; follow the link if you want more context or detail on these points:

  • On-Demand Self Service: A user can automatically provision compute without human interaction.
  • Broad Network Access: Capabilities are available over the network.
  • Resource Pooling: Computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned.
  • Rapid Elasticity: Capabilities can be elastically provisioned and released.
  • Measured Service: Resource usage can be monitored, controlled and reported.

The characteristics NIST uses to define cloud computing are pretty straightforward, but they are still a little ambiguous: How quickly does an environment have to be provisioned for it to be considered "on-demand?" If "broad network access" could just mean "connected to the Internet," why include that as a characteristic? When it comes to "measured service," how granular does the resource monitoring and control need to be for something to be considered "cloud computing?" A year? A minute? These characteristics cast a broad net, and we can build on that foundation as we set out to create a more focused definition.

For our next stop, let's look at Gartner's view: "A style of computing in which scalable and elastic IT-enabled capabilities are delivered as a service using Internet infrastructure." From a philosophical perspective, I love their use of "style" when talking about cloud computing. Little differentiates the underlying IT capabilities of cloud computing from other types of computing, so when looking at cloud computing, we really just see a variation on how those capabilities are being leveraged. It's important to note that Gartner's definition includes "elastic" alongside "scalable" ... Cloud computing gets the most press for being able to scale remarkably, but the flip-side of that expansion is that it also needs to contract on-demand.

All of this describes a way of deploying compute power that is completely different from the way we've done it in the decades that we've been writing software. It used to take months to get funding and order the hardware to deploy an application. That's a lot of time and risk that startups and enterprises alike can erase from their business plans.

How do we wrap all of those characteristics up into a unified definition of cloud computing? The way I look at it, cloud computing is an operations model that yields seemingly unlimited compute power when you need it. It enables (scalable and elastic) capacity as you need it, and that capacity's pricing is based on consumption. That doesn't mean a provider should charge by the compute cycle, generator fan RPM or some other arcane measurement of usage ... It means that a customer should understand the resources that are being invoiced, and he/she should have the power to change those resources as needed. A cloud computing environment has to have self-service provisioning that doesn't require manual intervention from the provider, and I'd even push that requirement a little further: A cloud computing environment should have API accessibility so a customer doesn't even have to manually intervene in the provisioning process (the customer's app could use automated logic and API calls to scale infrastructure up or down based on resource usage).
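To make that last point concrete, here's a rough sketch of the kind of automated logic an app could run against a provider's API. The client object and its method names are hypothetical placeholders, not any real provider's API:

import time

POLL_INTERVAL = 300  # seconds between resource-usage checks

def autoscale(cloud):
    # 'cloud' is a hypothetical API client; the calls below stand in for
    # whatever provisioning and monitoring calls a provider exposes
    while True:
        usage = cloud.average_cpu_utilization()       # measured service
        if usage > 0.80:
            cloud.provision_instance()                # elastic scale-up
        elif usage < 0.20 and cloud.instance_count() > 1:
            cloud.deprovision_instance()              # elastic scale-down
        time.sleep(POLL_INTERVAL)

No human touches the provisioning process; the application watches its own resource usage and grows or shrinks its infrastructure accordingly.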

I had the opportunity to speak at Cloud Connect Chicago, and I shared SoftLayer's approach to cloud computing and how it has evolved into a few distinct products that speak directly to our customers' needs:

The session was about 45 minutes, so the video above has been slimmed down a bit for easier consumption. If you're interested in seeing the full session and getting into a little more detail, we've uploaded an un-cut version here.

-Duke

September 12, 2012

How Can I Use SoftLayer Message Queue?

One of the biggest challenges developers run into when coding large, scalable systems is automating batch processes and distributing workloads to optimize compute resource usage. More simply, intra-application and inter-system communications tend to become a bottleneck that affects the user experience, and there is no easy way to get around it. Well ... There *was* no easy way around it.

Meet SoftLayer Message Queue.

As the name would suggest, Message Queue allows you to create one or more "queues" or containers which contain "messages" — strings of text that you can assign attributes to. The queues pass along messages in first-in-first-out order, and in doing so, they allow for parallel processing of high-volume workflows.

That all sounds pretty complex and "out there," but you might be surprised to learn that you're probably using a form of message queuing right now. Message queuing allows for discrete threads or applications to share information with one another without needing to be directly integrated or even operating concurrently. That functionality is at the heart of many of the most common operating systems and applications on the market.

What does it mean in a cloud computing context? Well, Message Queue facilitates more efficient interaction between different pieces of your application or independent software systems. The easiest way to demonstrate how that happens is by sharing a quick example:

Creating a Video-Sharing Site

Let's say we have a mobile application providing the ability to upload video content to our website: sharevideoswith.phil. The problem we have is that our webserver and CMS can only share videos in a specific format from a specific location on a CDN. Transcoding the videos on the mobile device before they're uploaded proves to be far too taxing, what with all of the games left to complete from the last Humble Bundle release. Having the videos transcoded on our webserver would require a lot of time/funds/patience/knowledge, and we don't want to add transcoding app servers to our deployment, so we're faced with a conundrum. A conundrum that's pretty easily answered with Message Queue and SoftLayer's (free) video transcoding service.

What We Need

  • Our Video Site
  • The SoftLayer API Transcoding Service
  • SoftLayer Object Storage
    • A "New Videos" Container
    • A "Transcoded Videos" Container with CDN Enabled
  • SoftLayer Message Queue
    • "New Videos" Queue
    • "Transcoding Jobs" Queue

The Process

  1. Your user uploads the video to sharevideoswith.phil. Your web app creates a page for the video and populates the content with a "processing" message.
  2. The web application saves the video file into the "New Videos" container on object storage.
  3. When the video is saved into that container, the web application creates a new message in the "New Videos" message queue with the video file name as the body.
  4. From here, we have two worker functions. These workers work independently of each other and can be run at any comfortable interval via cron or any scheduling agent:
Worker One: Looks for messages in the "New Videos" message queue. If a message is found, Worker One transfers the video file to the SoftLayer Transcoding Service, starts the transcoding process and creates a message in the "Transcoding Jobs" message queue with the Job ID of the newly created transcoding job. Worker One then deletes the originating message from the "New Videos" message queue to prevent the process from happening again the next time Worker One runs.

Worker Two: Looks for messages in the "Transcoding Jobs" queue. If a message is found, Worker Two checks whether the transcoding job is complete. If not, it does nothing with the message, and that message is placed back into the queue for the next Worker Two run to pick up and check. When Worker Two finds a completed job, the newly transcoded video is pushed to the "Transcoded Videos" container on object storage, and Worker Two updates the page our web app created for the video to display an embedded media player using the CDN location of our transcoded video on object storage.

Each step in the process is handled by an independent component. This allows us to scale or substitute each piece as necessary without needing to refactor the other portions. As long as each piece receives and sends the expected message, its colleague components will keep doing their jobs.
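In code, each worker boils down to the same small loop. This is a sketch of the pattern only; the queue, transcoder and storage clients below are hypothetical stand-ins for the actual Message Queue, Transcoding Service and Object Storage APIs:

def worker_one(queue, transcoder, storage):
    # Move newly uploaded videos into the transcoding pipeline
    for message in queue.fetch("new_videos"):
        video_file = message.body
        job_id = transcoder.start_job(storage.url("new_videos", video_file))
        queue.push("transcoding_jobs", body=job_id)
        queue.delete("new_videos", message)   # don't process this video twice

def worker_two(queue, transcoder, storage, web_app):
    # Publish completed transcoding jobs and update the video's page
    for message in queue.fetch("transcoding_jobs"):
        job_id = message.body
        if not transcoder.is_complete(job_id):
            continue   # leave the message in the queue for the next run
        video_url = storage.save("transcoded_videos", transcoder.result(job_id))
        web_app.update_video_page(job_id, video_url)
        queue.delete("transcoding_jobs", message)

Run each worker from cron at whatever interval suits your traffic. Because neither worker knows anything about the other beyond the messages themselves, either one can be scaled, replaced or rewritten independently.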

Video transcoding is a simple use-case that shows some of the capabilities of Message Queue. If you check out the Message Queue page on our website, you can see a few other examples — from online banking to real-time stock, score and weather services.

Message Queue leverages Cloudant as the highly scalable, low-latency data layer for storing and distributing messages, and SoftLayer customers get their first 100,000 messages free every month (with additional messages priced at $0.01 for every 10,000).
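As a back-of-the-envelope example of that pricing (assuming billing rounds up to the next 10,000-message block):

def monthly_message_cost(messages):
    # First 100,000 messages each month are free;
    # $0.01 per 10,000 messages after that
    billable = max(0, messages - 100000)
    blocks = (billable + 9999) // 10000   # round up to a full block
    return blocks / 100.0                 # each block costs one cent

print(monthly_message_cost(100000))    # 0.0
print(monthly_message_cost(1000000))   # 0.9, i.e. 90 cents

Even a million messages a month costs less than a dollar.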

What are you waiting for? Go get started with Message Queue!

-Phil (@SoftLayerDevs)

August 17, 2012

SoftLayer Private Clouds - Provisioning Speed

SoftLayer Private Clouds are officially live, and that means you can now order and provision your very own private cloud infrastructure on Citrix CloudPlatform quickly and easily. Chief Scientist Nathan Day introduced private clouds on the blog when the offering was announced at Cloud Expo East, and CTO Duke Skarda followed up with an explanation of the architecture powering SoftLayer Private Clouds. The most amazing claim: You can order a private cloud infrastructure and spin up its first virtual machines in a matter of hours rather than days, weeks or months.

If you've ever looked at building your own private cloud in the past, the "days, weeks or months" timeline isn't very surprising — you have to get the hardware provisioned, the software installed and the network configured ... and it all has to work together. Hearing that SoftLayer Private Clouds can be provisioned in "hours" probably seems too good to be true to administrators who have tried building a private cloud in the past, so I thought I'd put it to the test by ordering a private cloud and documenting the experience.

At 9:30am, I walked over to Phil Jackson's desk and asked him if he would be interested in helping me out with the project. By 9:35am, I had him convinced (proof), and the clock was started.

When we started the order process, part of our work was already done for us:

SoftLayer Private Clouds

To guarantee peak performance of the CloudPlatform management server, SoftLayer selected the hardware for us: A single processor quad core Xeon 5620 server with 6GB RAM, GigE, and two 2.0TB SATA II HDDs in RAID1. With the management server selected, our only task was choosing our host server and where we wanted the first zone (host server and management server) to be installed:

SoftLayer Private Clouds

For our host server, we opted for a dual processor quad core Xeon 5504 with the default specs, and we decided to spin it up in DAL05. We added (and justified) a block of 16 secondary IP addresses for our first zone, and we submitted the order. The time: 9:38am.

At this point, it would be easy for us to game the system to shave off a few minutes from the provisioning process by manually approving the order we just placed (since we have access to the order queue), but we stayed true to the experiment and let it be approved as it normally would be. We didn't have to wait long:

SoftLayer Private Clouds

At 9:42am, our order was approved, and the pressure was on. How long would it take before we were able to log into the CloudStack portal to create a virtual machine? I'd walked over to Phil's desk 12 minutes ago, and we still had to get two physical servers online and configured to work with each other on CloudPlatform. Luckily, the automated provisioning process took on the brunt of that pressure.

Both server orders were sent to the data center, and the provisioning system selected two pieces of hardware that best matched what we needed. Our exact configurations weren't available, so an SBT (server build technician) in the data center was dispatched to make the appropriate hardware changes to meet our needs, and the automated system kicked into high gear. IP addresses were assigned to the management and host servers, and we were able to monitor each server's progress in the customer portal. The hardware was tested and prepared for OS install, and when it was ready, the base operating systems were loaded — CentOS 6 on the management server and Citrix XenServer 6 on the host server. After CentOS 6 finished provisioning on the management server, CloudStack was installed. Then we got an email:

SoftLayer Private Clouds

At 11:24am, less than two hours from when I walked over to Phil's desk, we had two servers online and configured with CloudStack, and we were ready to provision our first virtual machines in our private cloud environment.

We logged into CloudStack and added our first instance:

SoftLayer Private Clouds

We configured our new instance in a few clicks, and we clicked "Launch VM" at 11:38am. It came online in just over 3 minutes (11:42am):

SoftLayer Private Clouds

I got from "walking to Phil's desk" to having a multi-server private cloud infrastructure running a VM in exactly two hours and twelve minutes. For fun, I created a second VM on the host server, and it was provisioned in 31.7 seconds. It's safe to say that the claim that SoftLayer takes "hours" to provision a private cloud has officially been confirmed, but we thought it would be fun to add one more wrinkle to the system: What if we wanted to add another host server in a different data center?

From the "Hardware" tab in the SoftLayer portal, we selected "Add Zone" to from the "Actions" in the "Private Clouds" section, and we chose a host server with four portable IP addresses in WDC01. The zone was created, and the host server went through the same hardware provisioning process that our initial deployment went through, and our new host server was online in < 2 hours. We jumped into CloudStack, and the new zone was created with our host server ready to provision VMs in Washington, D.C.

Given how quick the instances were spinning up in the first zone, we timed a few in the second zone ... The first instance was online in about 4 minutes, and the second was running in 26.8 seconds.

SoftLayer Private Clouds

By the time I went out for a late lunch at 1:30pm, we'd spun up a new private cloud infrastructure with geographically dispersed zones that launched new cloud instances in under 30 seconds. Not bad.

Don't take my word for it, though ... Order a SoftLayer Private Cloud and see for yourself.

-@khazard

June 28, 2012

Never Break Up with Your Data Again

Wouldn't it be nice if you could keep the parts of a relationship that you like and "move on" from the parts you don't? You'd never have to go through the awkward "getting to know each other" phase where you accidentally order food the other person is allergic to, and you'd never have to experience a break up. As it is, we're faced with a bit of a paradox: Relationships are a lot of work, and "Breaking up is hard to do."

I could tell you story after story about the break ups I experienced in my youth. From the Ghostbuster-jumpsuited boyfriend I had in kindergarten who stole my heart (and my barrettes) until it was time to take my had-to-have "My Little Pony" thermos lunchbox to another table at lunch after a dramatic recess exchange, to the middle school boyfriend who took me to see Titanic in the theater four times (yes, you read that correctly), my early "romantic" relationships didn't pan out in the "happily ever after" way I'd hoped they would. Whether the result of an unwelcome kiss under the monkey bars or a move to a different school (which might as well have been on Mars), I had to break up with each of the boys.

Why are you reading about my lost loves on the SoftLayer Blog? Simple: Relationships with IT environments — specifically applications and data — are not much different from romantic relationships. You might want to cut ties with a high maintenance piece of equipment that you've been with for years because its behavior is getting erratic, and it doesn't look like it'll survive forever. Maybe you've outgrown what your existing infrastructure can provide for you, and you need to move along. Perhaps you just want some space and need to take a break from a project for six months.

If you feel like telling your infrastructure, "It's not you, it's me," what are your options? Undo all of your hard work, schedule maintenance and stay up in the dead of a weeknight to migrate, backup and restore all of your data locally?

When I talk to SoftLayer customers, I get to be a relationship therapist. Because we've come out with some pretty innovative tools, we can help our customers avoid ever having to break up with their data again. Two of the coolest "infrastructure relationship"-saving releases: Flex Images (currently in public beta) and portable storage volumes for cloud computing instances (CCIs).

With Flex Images, customers using RedHat, CentOS or Windows systems can create and move server images between physical and virtual environments to seamlessly transition from one platform to the other. With about three clicks, a customer-created image is quickly and uniformly delivered to a new dedicated or cloud server. The idea behind Flex Images is to blur the line between physical and virtual environments so that if you feel the need to break up with one of the two, the other is able to take you in.

Portable storage volumes (PSVs) are secondary CCI volumes that can be added onto any public or private CCI. Users can detach a PSV from any CCI and have it persist in the cloud, unattached to any compute resource, for as long as necessary. When that storage volume is needed again, it can be re-attached as secondary storage on any other CCI across all of SoftLayer's facilities. The best relationship parallel would be "baggage," but that's got a negative connotation, so we'll have to come up with something else to call it ... "preparedness."

We want to help you avoid break ups and provide you easy channels to make up with your old infrastructure if you have a change of heart. The result is an infrastructure that's much easier to manage, more fluid and less dramatic.

Now if I can only figure out a way to make Flex Images and portable storage volumes available for real-life relationships .... I'd make millions! :-)

-Arielle

June 6, 2012

Today's Technology "Game Changers": IPv6 and Cloud

"Game Changers" in technology force a decision: Adapt or die. When repeating rifles gained popularity in the late 1800s, a business of manufacturing muzzle-loading or breech-loading rifles would have needed to find a way to produce a repeating rifle or it would have lost most (if not all) of it's business to Winchester. If a fresh-faced independent musician is hitting it big on the coffee shop scene in 2012, she probably won't be selling out arenas any time soon if she refuses to make her music available digitally. Just ask any of the old-timers in the print media industry ... "Game Changers" in technology can be disastrous for an established business in an established industry.

That's pretty intimidating ... Even for tech businesses.

Shifts in technology don't have to be as drastic and obvious as a "printed newspaper v. social news site" comparison for them to be disruptive. Even subtle advances can wind up making or breaking a business. In fact, many of today's biggest and most successful tech companies are scrambling to adapt to two simple "game changers" that might not seem terribly significant at first glance:

  • IPv6
  • "The Cloud"

IPv6

A quick search of the SoftLayer Blog reminds me that Lance first brought up the importance of IPv6 adoption in October 2007:

ARIN has publically announced the need to shift to IPv6 and numerous articles have outlined the D-Day for IPv4 space. Most experts agree, its coming fast and that it will occur sometime in 2010 at the current pace (that's about two years for those counting). IPv6 brings enough IP space for an infinite number of users along with improved security features and several other operational efficiencies that will make it very popular. The problem lies between getting from IPv4 to IPv6.

When IPv4 exhaustion was just a blip on the horizon, many businesses probably thought, "Oh, I'll get around to it when I need to. It's not a problem yet." When IANA exhausted the IPv4 pool, they probably started picking up the phone and calling providers to ask what plans they had in place. When some of the Internet's biggest websites completed a trial transition to IPv6 on World IPv6 Day last year, those businesses started feeling the urgency. With today's World IPv6 Launch, they know something has to be done.

World IPv6 Launch Day

Regardless of how conservative providers get with IPv4 space, the 4,294,967,296 IPv4 addresses in existence will not last much longer. Soon, users will be accessing an IPv6 Internet, and IPv4-only websites will lose their opportunity to reach those users. That's a "game changer."

"The Cloud"

The other "game changer" many tech businesses are struggling with these days is the move toward "the cloud." There are a two interesting perspectives in this transition: 1) The challenge many businesses face when choosing whether to adopt cloud computing, and 2) The challenges for businesses that find themselves severing as an integral (sometimes unintentional) part of "the cloud." You've probably seen hundreds of blog posts and articles about the first, so I'll share a little insight on the second.

When you hear all of the hype about cloud computing and cloud storage offering a hardware-agnostic Utopia of scalable, reliable power, it's easy to forget that the building blocks of a cloud infrastructure usually come from vendors that provide traditional hosting resources. When a computing instance is abstracted from a hardware device, it opens up huge variations in usage. It's possible to have dozens of public cloud instances using a single server's multi-proc, multi-core resources at a given time. If a vendor prices a piece of software on a "per server" basis, how does it define a "server" when its users are in the cloud? It can be argued that a cloud computing instance with a single core of power is a "server," and on the flip side, it's easy to define a "server" as the hardware object on which many cloud instances may run. I don't know that there's an easy way to answer that question, but what I do know is that applying "what used to work" to "what's happening now" isn't the right answer.

The hardware and software providers in the cloud space who are able to come up with new approaches unencumbered by the urge to continue "the way we've always done it" are going to be the ones that thrive when technology "game changers" emerge, and the providers who dig their heels in the dirt or try to put a square peg into a round hole will get the short end of the "adapt or die" stick.

We've tried to innovate and take a fresh look at every opportunity that has come our way, and we do our best to build relationships with agile companies that we see following suit.

I guess a better way to position the decision at the beginning of this post would be to add a little tweak: "Innovate, adapt or die." How you approach technology "game changers" will define your business's success.

-@gkdog

April 24, 2012

RightScale + SoftLayer: The Power of Cloud Automation

SoftLayer's goal is to provide unparalleled value to the customers who entrust their business-critical computing to us — whether via dedicated hosting, managed hosting, cloud computing or a hybrid environment of all three. We provide the best platform on the market, delivering convenience, ease of use, compelling return on investment (ROI), significant competitive advantage, and consistency in a world where the only real constant seems to be change.

That value proposition is one of the biggest driving forces behind our partnership with RightScale. We're cloud computing soul mates.

RightScale

RightScale understands the power of automation, and as a result, they've created a cloud management platform that they like to say delivers "abstraction with complete customization." RightScale customers can easily deploy and manage applications across public, private and hybrid cloud environments, unencumbered by the underlying details. They are free to run efficient, scalable, highly available applications with visibility into, and control over, their computing resources in one place.

As you know, SoftLayer is fueled by automation as well, and it's one of our primary differentiators. We're able to deliver a phenomenal customer experience because every aspect of our platform is fully and seamlessly automated to accelerate provisioning, mitigate human error and provide customers with access and features that our competitors can only dream of. Our customers get simple and total control over an ever-expanding number of back-end services and functions through our easy-to-use Customer Portal and via an open, robust API.

The compatibility between SoftLayer and RightScale is probably pretty clear already, but if you needed another point to ponder, you can ruminate on the fact that we both share expertise and focus across a number of vertical markets. The official announcement of the SoftLayer and RightScale partnership will be particularly noteworthy and interesting in the Internet-based business and online gaming market segments.

It didn't take long to find an amazing customer success story that demonstrated the value of the new SoftLayer-RightScale partnership. Broken Bulb Game Studios — the developer of social games such as My Town, Braaains, Ninja Warz and Miscrits — is already harnessing the combined feature sets made possible by our partnership with RightScale to simplify its deployment process and scale to meet its customers' expectations as its games find audiences and growing favor on Facebook. Don't take our word for it, though ... Check out the Broken Bulb quote in today's press release announcing the partnership.

Broken Bulb Game Studios

Broken Bulb and other developers of social games recognize the importance of getting concepts to market at breakneck speed. They also understand the critical importance of intelligently managing IT resources throughout a game's life cycle. What they want is fully automated control over computing resources so that they can be allocated dynamically and profitably in immediate response to market signals, and they're not alone.

Game developers of all sorts — and companies in a growing number of vertical markets — will need and want the same fundamental computing-infrastructure agility.

Our partnership with RightScale is only beginning. You're going to see some crazy innovation happening now that our cloud computing mad scientists are all working together.

-Marc

February 1, 2012

Flex Images: Blur the Line Between Cloud and Dedicated

Our customers are not concerned with technology for technology's sake. Information technology should serve a purpose; it should function as an integral means to a desired end. Understandably, our customers are focused, first and foremost, on their application architecture and infrastructure. They want, and need, the freedom and flexibility to design their applications to their specifications.

Many companies leverage the cloud to take advantage of core features that enable robust, agile architectures. Elasticity (ability to quickly increase or decrease compute capacity) and flexibility (choice such as cores, memory and storage) combine to provide solutions that scale to meet the demands of modern applications.

Another widely used feature of cloud computing is image-based provisioning. Rapid provisioning of cloud resources is accomplished, in part, through the use of images. Imaging capability extends beyond the use of base images, allowing users to create customized images that preserve their software installs and configurations. The images persist in an image library, allowing users to launch new cloud instances based on their images.

But why should images only be applicable to virtualized cloud resources?

With that question in mind, we're excited to introduce SoftLayer Flex Images, a new capability that allows us to capture images of physical and virtual servers, store them all in one library, and rapidly deploy those images on either platform.

SoftLayer Flex Images

Physical servers now share the core features of virtual servers—elasticity and flexibility. With Flex Images, you can move seamlessly between physical and virtual environments as your needs change.

Let's say you're running into resource limits in a cloud server environment—your data-intensive server is I/O bound—and you want to move the instance to a more powerful dedicated server. Using Flex Images, you can create an image of your cloud server and, extending our I/O bound example, deploy it to a custom dedicated server with SSD drives.

Conversely, a dedicated environment can be quickly replicated on multiple cloud instances if you want the scaling capability of the cloud to meet increased demand. Maybe your web heads run on dedicated servers, but you're starting to see periods of usage that stress your servers. Create a Flex Image from your dedicated server and use it to deploy cloud instances to meet demand.

Flex Image technology blurs the distinctions—and breaks down the walls—between virtual and physical computing environments.

We don't think of Flex Images as a new product. Instead—like our network, our portal, our automated platform, and our globe-spanning geographic diversity—Flex Image capability is a free resource for our customers (with the exception of nominal costs for storing the Flex Images).

We think Flex Images not only represents great value, but also provides a further example of how SoftLayer innovates continually to bring new capabilities and the highest possible level of customer control to our automated services platform.

To sum up, here are some of the key features and benefits of SoftLayer Flex Images:

  • Universal images that can be used interchangeably on dedicated or cloud systems
  • Unified image library for archiving, managing, sharing, and publishing images
  • Greater flexibility and higher scalability
  • Rapid provisioning of new dedicated and cloud environments
  • Available via SoftLayer's management portal and API

Flex Images are available now in public beta. We invite you to try them out, and, as always, we want to hear what you think.

-Marc
