Cloud Posts

March 7, 2014

Why the Cloud Scares Traditional IT

My background is "traditional IT." I've been architecting and promoting enterprise virtualization solutions since 2002, and over the past few years, public and hybrid cloud solutions have become a serious topic of discussion ... and in many cases, contention. The customers who gasped with excitement when VMware rolled out a new feature for their on-premises virtualized environments would dismiss any recommendations of taking a public cloud or a hybrid cloud approach. Off-premises cloud environments were surrounded by marketing hype, and the IT departments considering them had legitimate concerns, especially around security and compliance.

I completely understood their concerns, and until recently, I often agreed with them. The cloud model is intimidating. If you've had control over every aspect of your IT environment for a few decades, you don't want to give up access to your infrastructure, much less have to trust another company to protect your business-critical information. But now, I think about those concerns as the start of a conversation about cloud, rather than a "no-go" zone. The cloud is different, but a company's approach to it should still be the same.

What do I mean by that? Enterprise developers and engineers still have to serve as architects to determine the functional and operational requirements for their services. In that process, they need to assess the suitability of a given platform for the computing workload and the company's business objectives and core competencies. Unfortunately, many IT decision-makers don't consider the bigger business context, and they choose to build their own "public" IaaS offerings to accommodate internal workloads, and in many cases, their own external clients.

This approach might make sense for service providers, integrators and telcos because infrastructure resources are core components of their businesses, but I've seen the same thing happen at financial institutions, rental companies, and even an airline. Over time, internal IT departments carved out infrastructure-services revenue streams that are totally unrelated to the company's core business. The success of enterprise virtualization often empowered IT departments through cost savings and automation, making the promise of delivering public cloud "in-house" a natural extension and a seemingly attractive proposition. An in-house cloud built this way can often be made to satisfy information security and compliance requirements, but is it money well spent?

Instead of spending hundreds of thousands or millions of dollars in capital to build out (often commoditized) infrastructure, these businesses could be investing those resources in developing and marketing their core business areas. To give you an example of how a traditional IT task is performed in the cloud, I can share my experience from when I first accessed my SoftLayer account: I deployed a physical ESX host alongside a virtual compute instance, fully pre-configured with OS and vCenter, and I connected it via VPN to my existing (on-prem) vCenter environment. In the old model, that process would have probably taken a couple of days to complete, and I got it done in 3 hours.

Now more than ever, it is the responsibility of the core business line to validate internal IT strategies and evaluate alternatives. Public cloud is not always the right answer for all workloads, but driven by the rapidly evolving maturity and proliferation of IaaS, PaaS and SaaS offerings, most organizations will see significant benefits from it. Ultimately, the best way to understand the potential value is just to give it a try.

-Andy

Andreas Groth is an IBM worldwide channel solutions architect, focusing primarily on SoftLayer. Follow him on Twitter: @andreasgroth

February 6, 2014

Building a Bridge to the OpenStack API

OpenStack is experiencing explosive growth in the cloud market. With more than 200 companies contributing code to the source and new installations coming online every day, OpenStack is pushing hard to become a global standard for cloud computing. Dozens of useful tools and software products have been developed using the OpenStack API, so a growing community of administrators, developers and IT organizations have access to easy-to-use, powerful cloud resources. This kind of OpenStack integration is great for users on a full OpenStack cloud, but it introduces a challenge to providers and users on other cloud platforms: Should we consider deploying or moving to an OpenStack environment to take advantage of these tools?

If a cloud provider spends years developing a unique platform with a proprietary API, implementing native support for the OpenStack API or deploying a full OpenStack solution may be cost-prohibitive, even with significant customer and market demand. The provider can either bite the bullet to implement OpenStack compatibility, hope that a third-party library like libcloud or fog is updated to support its API, or choose to go it alone and develop an ecosystem of products around its own API.

Introducing Jumpgate

When we were faced with this situation at SoftLayer, we chose a fourth option: make the process of creating an OpenStack-compatible API simpler and more modular. That's how Jumpgate was born. Jumpgate is middleware that acts as a compatibility layer between the OpenStack API and a provider's proprietary API. Externally, it exposes endpoints that adhere to OpenStack's published and accepted API specification, which it then translates into the provider's API using a series of drivers. Think of it as a mechanism to enable passing from one realm/space into another, like the jumpgates featured in science fiction works.


How Jumpgate Works
Let's take a look at a high-level example: When you want to create a new virtual instance on OpenStack, you might use the Horizon dashboard or the Nova command line client. When you issue the request, the tool first makes a REST call to a Keystone endpoint for authentication, which returns an authorization token. The client then makes another REST call to a Nova endpoint, which manages the computing instances, to create the actual virtual instance. Nova may then make calls to other tools within the cluster for networking (Quantum), image information (Glance), block storage (Cinder), or more. Your client may also send requests directly to some of these endpoints to query for status updates, information about available resources, and so on.

With Jumpgate, your tool first hits the Jumpgate middleware, which exposes a Keystone endpoint. Jumpgate takes the request, breaks it apart into its relevant pieces, then loads up your provider's appropriate API driver. Next, Jumpgate reformats your request into a form that the driver supports and sends it to the provider's API endpoint. Once the response comes back, Jumpgate again uses the driver to break apart the proprietary API response, reformats it into an OpenStack compatible JSON payload, and sends it back to your client. The result is that you interact with an OpenStack-compatible API, and your cloud provider processes those interactions on their own backend infrastructure.
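To make the translation concrete, here's a minimal sketch of the kind of request/response mapping a Jumpgate driver performs. The field names on the proprietary side (`hostname`, `preset`, `guest_id`) are hypothetical, and the real project defines its own driver interface; this only illustrates the pattern.

```python
# Illustrative sketch of Jumpgate-style payload translation.
# The proprietary-side field names below are invented for this example.

def openstack_to_provider(body):
    """Map an OpenStack-style 'create server' payload to a
    hypothetical proprietary API request."""
    server = body["server"]
    return {
        "hostname": server["name"],
        "image_id": server["imageRef"],
        "preset": server["flavorRef"],
    }

def provider_to_openstack(resp):
    """Map a hypothetical proprietary response back into an
    OpenStack-compatible JSON payload."""
    return {
        "server": {
            "id": str(resp["guest_id"]),
            "status": "BUILD" if resp["provisioning"] else "ACTIVE",
        }
    }
```

The client on one side and the provider's backend on the other never see each other's formats; the driver owns both mappings.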

Internally, Jumpgate is a lightweight middleware built in Python using the Falcon framework. It provides endpoints for nearly every documented OpenStack API call and allows drivers to attach handlers to these endpoints. This modular approach allows providers to implement only the endpoints that are of the highest importance, rolling out OpenStack API compatibility in stages rather than in one monumental effort. Since it sits alongside the provider's existing API, Jumpgate provides a new API interface without risking the stability already provided by the existing API. It's a value-add service that increases customer satisfaction without a huge increase in cost. Once a full implementation is finished, a provider with a proprietary cloud platform can benefit from and offer all the tools that are developed to work with the OpenStack API.
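The staged-rollout idea can be sketched in a few lines of plain Python. This is an illustration of the pattern rather than Jumpgate's actual driver API: a driver attaches handlers only for the endpoints it supports, and anything unimplemented answers 501 Not Implemented.

```python
# Sketch of a modular endpoint registry (not Jumpgate's real interface):
# drivers register handlers per endpoint; missing endpoints return 501.

class EndpointRegistry:
    def __init__(self):
        self._handlers = {}

    def attach(self, endpoint, handler):
        """A driver calls this for each endpoint it chooses to support."""
        self._handlers[endpoint] = handler

    def dispatch(self, endpoint, request):
        """Route a request; unimplemented endpoints get a 501 response."""
        handler = self._handlers.get(endpoint)
        if handler is None:
            return 501, {"error": "endpoint not implemented yet"}
        return 200, handler(request)

registry = EndpointRegistry()
# Stage one: only server creation is wired up.
registry.attach("POST /servers", lambda req: {"server": {"id": "1"}})
```

A provider can ship with just the high-value endpoints attached and fill in the rest over subsequent releases without touching the routing layer.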

Jumpgate allows providers to test the proper OpenStack compatibility of their drivers by leveraging the OpenStack Tempest test suite. With these tests, developers run the full suite of calls used by OpenStack itself, highlighting edge cases or gaps in functionality. We've even included a helper script that allows Tempest to only run a subset of tests rather than the entire suite to assist with a staged rollout.

Current Development
Jumpgate is currently in an early alpha stage. We've built the compatibility framework itself and started on the SoftLayer drivers as a reference. So far, we've implemented key endpoints within Nova (computing instances), Keystone (identification and authorization), and Glance (image management) to get most of the basic functionality within Horizon (the web dashboard) working. We've heard that several groups outside SoftLayer are successfully using Jumpgate to drive products like Trove and Heat directly on SoftLayer, which is exciting and shows that we're well beyond the "proof of concept" stage. That being said, there's still a lot of work to be done.

We chose to develop Jumpgate in the open with a tool set that would be familiar to developers working with OpenStack. We're excited to debut this project for the broader OpenStack community, and we're accepting pull requests if you're interested in contributing. Making more clouds compatible with the OpenStack API is important and shouldn’t be an individual undertaking. If you're interested in learning more or contributing, head over to our in-flight project page on GitHub: SoftLayer Jumpgate. There, you'll find everything you need to get started along with the updates to our repository. We encourage everyone to contribute code or drivers ... or even just open issues with feature requests. The more community involvement we get, the better.

-Nathan

February 3, 2014

Risk Management: 5 Tips for Managing Risk in the Cloud

Security breaches have made front-page news in recent months. With stories about Target, Neiman Marcus, Yahoo! and GoDaddy in the headlines, the importance of good information security practices is becoming harder and harder to ignore, even for smaller businesses. Moving your business into the cloud offers a plethora of benefits, but it also introduces new risks, such as multi-tenancy, so it's important to be able to identify and properly manage those risks.

1. Know the Security Your Provider Offers
While some SaaS providers may have security baked in, most IaaS providers (including SoftLayer) leave much of the logical security responsibility for a customer's systems to the customer. For the security measures that an infrastructure provider does handle, the provider should be able to deliver documentation attesting to those controls. We perform an annual SOC 2 audit, so we can attest to the status of our security and availability controls as a service organization, and our customers can use controls from our report as part of their own compliance requirements. Knowing a provider's security controls (and seeing proof of them) gives business owners and Chief Information Security Officers (CISOs) peace of mind that they can properly plan their control activities to better prevent or respond to a breach.

2. Use the Cloud to Distribute and Replicate Your Presence
The incredible scalability and geographical distribution of operating in the cloud can yield some surprising payoff. Experts in the security industry are leveraging the cloud to reduce their patch cycles to days, not weeks or months. Most cloud providers have multiple sites so that you can spread your presence nationally, or even globally. With this kind of infrastructure footprint, businesses can replicate failover systems and accommodate regional demand across multiple facilities with minimal incremental investment (and with nearly identical security controls).

3. Go Back to the Basics
Configuration management. Asset management. Separation of duties. Strong passwords. Many organizations get so distracted by the big picture of their security measures that they fail to get these basics right. Take advantage of your provider's tools to assist in the "mundane" tasks that are vitally important to your business's overall security posture. For example, you can use image templates or post-provisioning scripts to deploy a standard baseline configuration to your systems, then track them down to the specific server room. You'll know what hardware is in your server at all times, and if you're using SoftLayer, you can even drill down to the serial numbers of your hard drives.

4. Have Sound Incident Response Plans
The industry is becoming increasingly cognizant of the fact that it’s not a matter of if, but when a security threat will present itself. Even with exceedingly high levels of baked-in security, most of the recent breaches resulted from a compromised employee. Be prepared to respond to security incidents with confidence. While you may be physically distanced from your systems, you should be able to meet defined Recovery Time Objectives (RTOs) for your services.

5. Maintain Constant Contact with Your Cloud Provider
Things happen. No amount of planning can completely halt every incident, whether it be a natural disaster or a determined attacker. Know that your hosting provider has your back when things take an unexpected turn.

With proper planning and good practice, the cloud isn't as risky and frightening as most think. If you're interested in learning a little more about the best practices around security in the cloud, check out the Cloud Security Alliance (CSA). The CSA provides a wealth of knowledge to assist business owners and security professionals alike. Build on the strengths, compensate for the weaknesses, and you and your CISO will be able to sleep at night (and maybe even sneak in a beer after work).

-Matt

January 31, 2014

Simplified OpenStack Deployment on SoftLayer

"What is SoftLayer doing with OpenStack?" I can't even begin to count the number of times I've been asked that question over the last few years. In response, I'll usually explain how we've built our object storage platform on top of OpenStack Swift, or I'll give a few examples of how our customers have used SoftLayer infrastructure to build and scale their own OpenStack environments. Our virtual and bare metal cloud servers provide a powerful and flexible foundation for any OpenStack deployment, and our unique three-tiered network integrates perfectly with OpenStack's Compute and Network node architecture, so it's high time we make it easier to build an OpenStack environment on SoftLayer infrastructure.

To streamline and simplify OpenStack deployment for the open source community, we've published Opscode Chef recipes for both OpenStack Grizzly and OpenStack Havana on GitHub: SoftLayer Chef-Openstack. With Chef and SoftLayer, your own OpenStack cloud is a cookbook away. These recipes were designed with growth and scalability in mind. Let's take a deeper look into what exactly that means.

OpenStack has adopted a three-node design whereby a controller, compute, and network node make up its architecture:

OpenStack Architecture on SoftLayer

Looking more closely at any one node reveals the services it provides. Scaling the infrastructure beyond a few dozen nodes using this model could create bottlenecks in services such as your block store (OpenStack Cinder) and image store (OpenStack Glance), since they are traditionally located on the controller node. Infrastructure requirements also change from service to service. For example, OpenStack Neutron, the networking service, does not need much disk I/O, while the Cinder storage service might rely heavily on a node's hard disk. Our cookbook allows you to choose how and where to deploy the services, and it even lets you break apart the MySQL backend to further improve platform performance.

Quick Start: Local Demo Environment

To make it easy to get started, we've created a rapid prototype and sandbox script for use with Vagrant and VirtualBox. With Vagrant, you can easily spin up a demo environment of Chef Server and OpenStack in about 15 minutes on a moderately powerful laptop or desktop. Check it out here. This demo environment is an all-in-one installation of our Chef OpenStack deployment. It also installs a basic Chef server as a sandbox to help you see how the SoftLayer recipes were deployed.

Creating a Custom OpenStack Deployment

The three-node OpenStack model works well at small scale and meets the needs of many consumers; however, control and customizability are the tenets of the SoftLayer OpenStack Chef cookbook's design. In our model, you have full control over the configuration and location of eleven different components in your deployed environment:

Our Chef recipes take care of populating the configuration files with the necessary information, so you won't have to. When deploying, you merely add the role for the matching service to a hardware or virtual server node, and Chef will deploy the service to it with all of the configuration done automatically, including adding multiple Neutron, Nova, and Cinder nodes. This approach allows you to tailor each service to the hardware it will be deployed on: you might put your Neutron node on a server with 10-gigabit network interfaces and configure your Cinder node with RAID 1+0 15K SAS drives.
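As an illustration, assigning a service to a node boils down to putting a role on that node's run list. The role and recipe names below are hypothetical placeholders, not the cookbook's actual names; check the project's README on GitHub for the eleven roles it really defines.

```
{
  "name": "os-network-node",
  "description": "Hypothetical role that pins the Neutron networking service to a dedicated node",
  "run_list": [
    "recipe[openstack-network::server]"
  ]
}
```

Swapping which nodes carry which roles is how you break services like Cinder or the MySQL backend out onto their own hardware.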

OpenStack is a fast growing project for the implementation of IaaS in public and private clouds, but its deployment and configuration can be overwhelming. We created this cookbook to make the process of deploying a full OpenStack environment on SoftLayer quick and straightforward. With the simple configuration of eleven Chef roles, your OpenStack cloud can be deployed onto as few as one node and scaled up to hundreds (or thousands).

To follow this project, visit SoftLayer on GitHub. Check out some of our other projects on GitHub, and let us know if you need any help or want to contribute.

-@marcalanjones

January 29, 2014

Get Your Pulse Racing

What will the future bring for SoftLayer and IBM? Over the past six months, you've probably asked that question more than a few times, and the answer you got may have been incomplete. You know that IBM is supercharging SoftLayer expansion and that our platform will be the foundation for IBM's most popular enterprise cloud products and services, but you've really only seen a glimpse of the big picture. At IBM Pulse, you'll get a much better view.

SoftLayer is no stranger to conferences and events. Last year alone, we were involved in around 70 different trade shows, and that number doesn't include the dozens of meetups, events, and parties we participated in without an official booth presence. It's pretty safe to say that Pulse is more important to us than any of the shows we've attended in the past. Why? Because Pulse is the first major conference where SoftLayer will be in the spotlight.

As a major component in IBM's cloud strategy, it's safe to assume that every attendee at IBM's "Premier Cloud Conference" will hear all about SoftLayer's platform and capabilities. We'll have the Server Challenge on the expo hall floor, we're going to play a huge part in connecting with developers at dev@Pulse, a number of SLayers are slated to lead technical sessions, and Wednesday's general session will be presented by our CEO, Lance Crosby.

If you're interested in what's next for IBM in the cloud, join us at Pulse 2014. SoftLayer customers are eligible for a significant discount on registration for the full conference, so if you need details on how to sign up, leave a comment on this blog or contact a SoftLayer sales rep, and we'll make sure you get all the information you need. To make it easier for first-time attendees to experience Pulse, IBM offers a special Pulse Peek pass that will get you into the general sessions and expo hall for free!

If you're a developer, we need to see you at dev@Pulse. Happening in parallel with the main Pulse show, dev@Pulse is focused on helping attendees design, develop, and deploy the next generation of cloud-based systems and applications. In addition to the lightning talks, hands-on labs, free certification testing, and code jam competition, you'll get to try out the Oculus Rift, meet a ton of brilliant people, and party with Elvis Costello and Fall Out Boy. The cost? A whopping $0.

Whether you're chairman of the board or a front-line application developer, you'll get a lot out of IBM Pulse. What happens in Vegas ... could change the way you do business. (Note: The parties, however, will stay in Vegas.)

-@khazard

October 24, 2013

Why Hybrid? Why Now?

As off-premise cloud computing adoption continues to grow in a non-linear fashion, a growing number of businesses running in-house IT environments are debating whether they should get on board as well. If you've been part of any of those conversations, you've tried to balance the hype with the most significant questions for your business: "How do we know if our company is ready to try cloud resources? And if we're ready, how do we actually get started?"

Your company is cloud-ready as soon as you understand and accept the ramifications of remote resources and scaling in the cloud model, and it doesn't have to be an "all-in" decision. If you need certain pieces of your infrastructure to reside in-house, you can start evaluating the cloud with workloads that don't have to be hosted internally. The traditional IT term for this approach is "hybrid," but that term might cause confusion these days.

In the simplest sense, a hybrid model is one in which a workload is handled by two or more heterogeneous elements. In the traditional IT sense, those heterogeneous elements are two distinct operating environments (on-prem and off-prem). In SoftLayer's world, a hybrid environment leverages different heterogeneous elements: bare metal and virtual server instances, delivered in the cloud.

Figure 1: Traditional Hybrid - On-Premise to Cloud (Through VPN, SSL or Open Communications)


Figure 2: SoftLayer's Hybrid - Dedicated + Virtual


Because SoftLayer's "hybrid" and traditional IT's "hybrid" are so different, it's easy to understand the confusion in the marketplace: If a hybrid environment is generally understood to involve the connection of on-premise infrastructure to cloud resources, SoftLayer's definition seems contrarian. In practice, though, the two uses of the term are more similar than they first appear. In a traditional hosting environment, most businesses think in terms of bare metal (dedicated) servers, and when those businesses move "to the cloud," they're generally thinking in terms of virtualized server instances. So SoftLayer's definition of a hybrid environment is very consistent with the market definition ... It's just all hosted off-premise.

The ability to have dedicated resources intermixed with virtual resources means that workloads from on-premise hypervisors that require native or near-native performance can be moved immediately. And because those workloads don't have to be powered by in-house servers, a company's IT infrastructure moves from a CapEx model to an OpEx model. In the past, adopting infrastructure as a service (IaaS) involved shoehorning workloads into whichever virtual resource closest matched an existing environment, but those days are gone. Now, on-premise resources can be replicated (and upgraded) on demand in a single off-premise environment, leveraging a mix of virtual and dedicated resources.

SoftLayer's environment simplifies the process for businesses looking to move IT infrastructure off-premise. Those businesses can start by leveraging virtual server instances in a cloud environment while maintaining the in-house resources for certain workloads, and when those in-house resources reach the end of their usable life (or need an upgrade), the businesses can shift those workloads onto bare metal servers in the same cloud environment as their virtual server instances.

The real-world applications are pretty obvious: Your company is considering moving part of a workload to cloud in order to handle peak season loads at the end of the year. You've contemplated transitioning parts of your environment to the cloud, but you've convinced yourself that shared resource pools are too inefficient and full of noisy neighbor problems, so you'd never be able to move your core infrastructure to the same environment. Furthering the dilemma, you have to capitalize on the assets you already have that are still of use to the company.

You finally have the flexibility to transition your environment gradually to a scalable, flexible cloud without sacrificing performance. While the initial setup phases for a hybrid environment may seem arduous, Rome wasn't built in a day, so you shouldn't feel pressure to rush the construction of your IT environment. Here are a few key points to consider when adopting a hybrid model that will make life easier:

  • Keep it simple. Don't overcomplicate your environment. Keep networks, topologies and methodologies simple, and they'll be much more manageable and scalable.
  • Keep it secure. Simple, robust security principles will reduce your deployment timeframe and reduce attack points.
  • Keep it sane. Hybrid mixes the best of both worlds, so choose the best assets to move over. "Best" does not necessarily mean the "easiest" or "cheapest" workload, but it doesn't exclude those workloads either.

With this in mind, you're ready to take on a hybrid approach for your infrastructure. There's no certification for when your company finally becomes a "cloud company." The moment you start leveraging off-premise resources, you've got a hybrid environment, and you can adjust your mix of on-premise, off-premise, virtual and bare metal resources as your business needs change and evolve.

-Jeff Klink

Jeff Klink is a senior technical staff member (STSM) with IBM Canada.

September 30, 2013

The Economics of Cloud Computing: If It Seems Too Good to Be True, It Probably Is

One of the hosts of a popular Sirius XM radio talk show was recently in the market to lease a car, and a few weeks ago, he shared an interesting story. In his research, he came across an offer that seemed "too good to be true": Lease a new Nissan Sentra with no money due at signing on a 24-month lease for $59 per month. The car would be as "base" as a base model could be, but a reliable car that can be driven safely from Point A to Point B doesn't need fancy "upgrades" like power windows or an automatic transmission. Is it possible to lease a new car for zero down and $59 per month? What's the catch?

After sifting through all of the paperwork, the host admitted the offer was technically legitimate: He could lease a new Nissan Sentra for $0 down and $59 per month for two years. Unfortunately, he also found that "lease" is just about the extent of what he could do with it for $59 per month. The fine print revealed that the yearly mileage allowance was 0 (zero) — he'd pay a significant per-mile rate for every mile he drove the car.

Let's say the mileage on the Sentra was charged at $0.15 per mile and that the car would be driven a very-conservative 5,000 miles per year. At the end of the two-year lease, the 10,000 miles on the car would amount to a $1,500 mileage charge. Breaking that cost out across the 24 months of the lease, the effective monthly payment would be around $121, twice the $59/mo advertised lease price. Even for a car that would be used sparingly, the numbers didn't add up, so the host wound up leasing a nicer car (that included a non-zero mileage allowance) for the same monthly cost.
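The arithmetic above is easy to verify; here it is as a quick Python calculation, using the figures from the story:

```python
# Effective monthly cost of the "zero down, $59/mo" lease once the
# per-mile charge is factored in.
advertised_monthly = 59.00
months = 24
per_mile = 0.15
miles = 5000 * 2          # a conservative 5,000 miles/year over two years

mileage_charge = per_mile * miles                     # $1,500 total
effective_monthly = advertised_monthly + mileage_charge / months
print(round(effective_monthly, 2))                    # → 121.5
```

The headline price roughly doubles before the car leaves the lot.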

The "zero-down, $59/mo" Sentra lease would be a fantastic deal for a person who wants the peace of mind of having a car available for emergency situations only, but for drivers who drive the national average of 15,000 miles per year, the economic benefit of such a low lease rate is completely nullified by the mileage cost. If you were in the market to lease a new car, would you choose that Sentra deal?

At this point, you might be wondering why this story found its way onto the SoftLayer Blog. If you're asking that question, you might not see the connection yet: Most cloud computing providers sell cloud servers like that car lease.

The "on demand" and "pay for what you use" aspects of cloud computing make it easy for providers to offer cloud servers exclusively as short-term utilities: "Use this cloud server for a couple of days (or hours) and return it to us. We'll just charge you for what you use." From a buyer's perspective, this approach is easy to justify because it limits the possibility of excess capacity — paying for something you're not using. While that structure is effective (and inexpensive) for customers who sporadically spin up virtual server instances and turn them down quickly, for the average customer looking to host a website or application that won't be turned off in a given month, it's a different story.

Instead of discussing the costs in theoretical terms, let's look at a real world example: One of our competitors offers an entry-level Linux cloud server for just over $15 per month (based on a 730-hour month). When you compare that offer to SoftLayer's least expensive monthly virtual server instance (@ $50/mo), you might think, "OMG! SoftLayer is more than three times as expensive!"

But then you remember that you actually want to use your server.

You see, like the "zero down, $59/mo" car lease that doesn't include any mileage, the $15/mo cloud server doesn't include any bandwidth. As soon as you "drive your server off the lot" and start using it, that "fantastic" rate starts becoming less and less fantastic. In this case, outbound bandwidth for this competitor's cloud server starts at $0.12/GB and is applied to the server's first outbound gigabyte (and every subsequent gigabyte in that month). If your server sends 300GB of data outbound every month, you pay $36 in bandwidth charges (for a combined monthly total of $51). If your server uses 1TB of outbound bandwidth in a given month, you end up paying $135 for that "$15/mo" server.

Cloud servers at SoftLayer are designed to be "driven." Every monthly virtual server instance from SoftLayer includes 1TB of outbound bandwidth at no additional cost, so if your cloud server sends 1TB of outbound bandwidth, your total charge for the month is $50. The "$15/mo v. $50/mo" comparison becomes "$135/mo v. $50/mo" when we realize that these cloud servers don't just sit in the garage. This illustration shows how the costs compare between the two offerings with monthly bandwidth usage up to 1.3TB*:

Cloud Cost v Bandwidth

*The graphic extends to 1.3TB to show how SoftLayer's $0.10/GB charge for bandwidth over the initial 1TB allotment compares with the competitor's $0.12/GB charge.
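For readers who want to plug in their own numbers, the same comparison can be written as a short Python sketch, using the prices quoted above and treating 1 TB as 1,000 GB (as the article's $135 figure implies):

```python
# All-in monthly cost of each offering as outbound bandwidth grows,
# using the prices quoted in this post.

def competitor_cost(gb_out):
    # ~$15/mo base, plus $0.12/GB from the very first outbound gigabyte
    return 15.00 + 0.12 * gb_out

def softlayer_cost(gb_out):
    # $50/mo base includes 1 TB (1,000 GB); $0.10/GB beyond the allotment
    overage = max(0, gb_out - 1000)
    return 50.00 + 0.10 * overage

# At 300 GB the competitor already costs about the same ($51 vs. $50),
# and at 1 TB the gap is $135 vs. $50.
for gb in (0, 300, 1000):
    print(gb, competitor_cost(gb), softlayer_cost(gb))
```

The crossover point, where the "cheap" server stops being cheap, arrives well before most production workloads exhaust a terabyte.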

Most cloud hosting providers sell these "zero down, $59/mo car leases" and encourage you to window-shop for the lowest monthly price based on number of cores, RAM and disk space. You find the lowest price and mentally justify the cost-per-GB bandwidth charge you receive at the end of the month because you know that you're getting value from the traffic that used that bandwidth. But you'd be better off getting a more powerful server that includes a bandwidth allotment.

As a buyer, it's important that you make your buying decisions based on your specific use case. Are you going to spin up and spin down instances throughout the month, or are you looking for a cloud server that is going to stay online the entire month? From there, you should estimate your bandwidth usage to get an idea of the actual monthly cost you can expect for a given cloud server. If you don't expect to use 300GB of outbound bandwidth in a given month, your usage might be best suited for that competitor's offering. But then again, it's probably worth mentioning that SoftLayer's base virtual server instance has twice the RAM, more disk space and higher-throughput network connections than the competitor's offering we compared against. Oh yeah, and all those other cloud differentiators.

-@khazard

July 29, 2013

A Brief History of Cloud Computing

Believe it or not, "cloud computing" concepts date back to the 1950s when large-scale mainframes were made available to schools and corporations. The mainframe's colossal hardware infrastructure was installed in what could literally be called a "server room" (since the room would generally only be able to hold a single mainframe), and multiple users were able to access the mainframe via "dumb terminals" – stations whose sole function was to facilitate access to the mainframes. Due to the cost of buying and maintaining mainframes, an organization wouldn't be able to afford a mainframe for each user, so it became practice to allow multiple users to share access to the same data storage layer and CPU power from any station. By enabling shared mainframe access, an organization would get a better return on its investment in this sophisticated piece of technology.

Mainframe Computer

A couple decades later in the 1970s, IBM released an operating system called VM that allowed admins on their System/370 mainframe systems to have multiple virtual systems, or "Virtual Machines" (VMs) on a single physical node. The VM operating system took the 1950s application of shared access of a mainframe to the next level by allowing multiple distinct compute environments to live in the same physical environment. Most of the basic functions of any virtualization software that you see nowadays can be traced back to this early VM OS: Every VM could run custom operating systems or guest operating systems that had their "own" memory, CPU, and hard drives along with CD-ROMs, keyboards and networking, despite the fact that all of those resources would be shared. "Virtualization" became a technology driver, and it became a huge catalyst for some of the biggest evolutions in communications and computing.

Mainframe Computer

In the 1990s, telecommunications companies that had historically offered only dedicated point-to-point data connections started offering virtualized private network connections with the same service quality as their dedicated services at a reduced cost. Rather than building out physical infrastructure to give each new user a dedicated connection, telcos could provide users with shared access to the same physical infrastructure. This change allowed the telcos to shift traffic as necessary for better network balance and more control over bandwidth usage. Meanwhile, virtualization for PC-based systems started in earnest, and as the Internet became more accessible, the next logical step was to take virtualization online.

If you were in the market to buy servers ten or twenty years ago, you know that the costs of physical hardware, while not at the same level as the mainframes of the 1950s, were pretty outrageous. As more and more people expressed demand to get online, the costs had to come out of the stratosphere, and one of the ways that was made possible was by ... you guessed it ... virtualization. Servers were virtualized into shared hosting environments, Virtual Private Servers, and Virtual Dedicated Servers using the same types of functionality provided by the VM operating system in the 1970s. As an example of what that looked like in practice, let's say your company required 13 physical systems to run its sites and applications. With virtualization, you could take those 13 distinct systems and split them up between two physical nodes. Obviously, this kind of environment saves on infrastructure costs and minimizes the amount of actual hardware needed to meet your company's needs.

Virtualization

As the costs of server hardware slowly came down, more users were able to purchase their own dedicated servers, and they started running into a different kind of problem: One server isn't enough to provide the resources I need. The market shifted from a belief that "these servers are expensive, let's split them up" to "these servers are cheap, let's figure out how to combine them." Because of that shift, the most basic understanding of "cloud computing" was born online. By installing and configuring a piece of software called a hypervisor across multiple physical nodes, a system could present all of the environment's resources as though they were in a single physical node. To help visualize that environment, technologists used terms like "utility computing" and "cloud computing," since the sum of the parts seemed to become a nebulous blob of computing resources that you could then segment out as needed (like the telcos did in the 90s). In these cloud computing environments, it became easy to add resources to the "cloud": Just add another server to the rack and configure it to become part of the bigger system.

Clouds

As technologies and hypervisors got better at reliably sharing and delivering resources, many enterprising companies decided to start carving up the bigger environment to offer the cloud's benefits to users who don't happen to have an abundance of physical servers available to create their own cloud computing infrastructure. Those users could order "cloud computing instances" (also known as "cloud servers") by selecting the resources they need from the larger pool of available cloud resources, and because the servers are already online, the process of "powering up" a new instance or server is almost instantaneous. Because little overhead is involved for the owner of the cloud computing environment when a new instance is ordered or cancelled (it's all handled by the cloud's software), management of the environment is much easier. Most companies today operate with this idea of "the cloud" as the current definition, but SoftLayer isn't "most companies."

SoftLayer took the idea of a cloud computing environment and pulled it back one more step: Instead of installing software on a cluster of machines to allow for users to grab pieces, we built a platform that could automate all of the manual aspects of bringing a server online without a hypervisor on the server. We call this platform "IMS." What hypervisors and virtualization do for a group of servers, IMS does for an entire data center. As a result, you can order a bare metal server with all of the resources you need and without any unnecessary software installed, and that server will be delivered to you in a matter of hours. Without a hypervisor layer between your operating system and the bare metal hardware, your servers perform better. Because we automate almost everything in our data centers, you're able to spin up load balancers and firewalls and storage devices on demand and turn them off when you're done with them. Other providers have cloud-enabled servers. We have cloud-enabled data centers.

SoftLayer Pod

IBM and SoftLayer are leading the drive toward wider adoption of innovative cloud services, and we have ambitious goals for the future. If you think we've come a long way from the mainframes of the 1950s, you ain't seen nothin' yet.

-James

April 30, 2013

Big Data at SoftLayer: Riak

Big data is only getting bigger. Late last year, SoftLayer teamed up with 10Gen to launch a high-performance MongoDB solution, and since then, many of our customers have been clamoring for us to support other big data platforms in the same way. By automating the provisioning process of a complex big data environment on bare metal infrastructure, we made life a lot easier for developers who demanded performance and on-demand scalability for their big data applications, and it's clear that our simple formula produced amazing results. As Marc mentioned when he started breaking down big data database models, document-oriented databases like MongoDB are phenomenal for certain use-cases, and in other situations, a key-value store might be a better fit. With that in mind, we called up our friends at Basho and started building a high-performance architecture specifically for Riak ... And I'm excited to announce that we're launching it today!

Riak is an open source, distributed database platform based on the principles enumerated in Amazon's Dynamo paper. It uses a simple key/value model for object storage, and it was architected for high availability, fault tolerance, operational simplicity and scalability. A Riak cluster is composed of multiple nodes that are all connected, communicating and sharing data automatically. If one node fails, the other nodes automatically take over the data that the failed node was storing and processing until that node is back up and running or a new node is added. See the diagram below for a simple illustration of how adding a node to a cluster works within Riak.

Riak Nodes

We will support both the open source and the Enterprise versions of Riak. The open source version is a great place to start. It has all of the database functionality of Riak Enterprise, but it is limited to a single cluster. The Enterprise version supports replication between clusters across data centers, giving you lots of architectural options. You can use replication to build highly available, live-live failover applications. You can also use it to distribute your application's data across regions, giving you a global platform that you can update anywhere in the world and know that those modifications will be available anywhere else. Riak Enterprise customers also receive 24×7 coverage, both from SoftLayer and Basho. This includes SoftLayer's one-hour guaranteed response for Severity 1 hardware issues and unlimited support available via our secure web portal, email and phone.

The business case for this flexibility is simple: as your requirements change, nodes can easily be added or removed. You can opt for a single-data-center environment with a few nodes, or you can broaden your architecture to a multi-data-center deployment with a 40-node cluster. While these capabilities are inherent in Riak, they can be complicated to build and configure, so we spent countless hours working with Basho to streamline Riak deployment on the SoftLayer platform. The fruit of that labor can be found in our Riak Solution Designer:

Riak Solution Designer

The server configurations and packages in the Riak Solution Designer have been selected to deliver the performance, availability and stability that our customers expect from their bare metal and virtual cloud infrastructure at SoftLayer. With a few quick clicks, you can order a fully configured Riak environment, and it'll be provisioned and online for you in two to four hours. And everything you order is on a month-to-month contract.

Thanks to the hard work done by the SoftLayer development group and Basho's team, we're proud to be the first in the marketplace to offer a turn-key Riak solution on bare metal infrastructure. You don't need to sacrifice performance and agility for simplicity.

For more information, visit SoftLayer.com/Riak or contact our sales team.

-Duke

December 31, 2012

FatCloud: Tech Partner Spotlight

We invite each of our featured SoftLayer Tech Marketplace Partners to contribute a guest post to the SoftLayer Blog, and this week, we're happy to welcome Ian Miller, CEO of FatCloud. FatCloud is a cloud-enabled application platform that allows enterprises to build, deploy and manage next-generation .NET applications.

'The Cloud' and Agility

As the CEO of a cloud-enabled application platform for the .NET community, I get the same basic question all the time: "What is the cloud?" I'm a consumer of cloud services and a supplier of software that helps customers take advantage of the cloud, so my answer to that question has evolved over the years, and I've come to realize that the cloud is fundamentally about agility. The growth, evolution and adoption of cloud technology have been fueled by businesses that don't want to worry about infrastructure and need to pivot or scale quickly as their needs change.

Because FatCloud is a consumer of cloud infrastructure from SoftLayer, we are much more nimble than we'd be if we had to worry about building data centers, provisioning hardware, patching software and doing all the other time-consuming tasks that are involved in managing a server farm. My team can focus on building innovative software with confidence that the infrastructure will be ready for us on-demand when we need it. That peace of mind also happens to be one of the biggest reasons developers turn to FatCloud ... They don't want to worry about configuring the fundamental components of the platform under their applications.

Fat Cloud

Our customers trust FatCloud's software platform to help them build and scale their .NET applications more efficiently. To do this, we provide a Core Foundation of .NET WCF services that effectively provides the "plumbing" for .NET cloud computing, and we offer premium features like a distributed NoSQL database, work queue, file storage/management system, content caching and an easy-to-use administration tool that simplifies managing the cloud for our customers. FatCloud makes developing for hundreds of servers as easy as developing for one, and to prove it, we offer a free 3-node developer edition so that potential customers can see for themselves.

FatCloud Offering

The agility of the cloud has the clearest value for a company like ours. In one heavy-duty testing month, we needed 75 additional servers online, and after that testing was over, we needed the elasticity to scale that infrastructure back down. We're able to adjust our server footprint as we balance our computing needs and work within budget constraints. Ten years ago, that would have been overwhelmingly expensive (if not impossible). Today, we're able to do it economically and in real time. SoftLayer is helping keep FatCloud agile, and FatCloud passes that agility on to our customers.

Companies developing custom software for the cloud, mobile or web using .NET want a reliable foundation to build from, and they want to be able to bring their applications to market faster. With FatCloud, those developers can complete their projects in about half the time it would take them if they were to develop conventionally, and that speed can be a huge competitive differentiator.

The expensive "scale up" approach of buying and upgrading powerful machines for something like SQL Server is out-of-date now. The new kid in town is the "scale out" approach of using low-cost servers to expand infrastructure horizontally. You'll never run into those "scale up" hardware limitations, and you can build a dynamic, scalable and elastic application much more economically. You can be agile.

If you have questions about how FatCloud and SoftLayer make cloud-enabled .NET development easier, send us an email: sales@fatcloud.com. Our team is always happy to share the easy (and free) steps you can take to start taking advantage of the agility the cloud provides.

-Ian Miller, CEO of FatCloud

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace. These partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New partners will be added to the Marketplace each month, so stay tuned for many more to come.