Author Archive: Duke Skarda

April 30, 2013

Big Data at SoftLayer: Riak

Big data is only getting bigger. Late last year, SoftLayer teamed up with 10gen to launch a high-performance MongoDB solution, and since then, many of our customers have been clamoring for us to support other big data platforms in the same way. By automating the provisioning process for a complex big data environment on bare metal infrastructure, we made life a lot easier for developers who demanded performance and on-demand scalability for their big data applications, and it's clear that our simple formula produced amazing results. As Marc mentioned when he started breaking down big data database models, document-oriented databases like MongoDB are phenomenal for certain use cases, and in other situations, a key-value store might be a better fit. With that in mind, we called up our friends at Basho and started building a high-performance architecture specifically for Riak ... And I'm excited to announce that we're launching it today!

Riak is an open source, distributed database platform built on the principles laid out in Amazon's Dynamo paper. It uses a simple key/value model for object storage, and it was architected for high availability, fault tolerance, operational simplicity and scalability. A Riak cluster is composed of multiple nodes that are all connected, all communicating and sharing data automatically. If a node fails, the other nodes automatically take over the data that the failed node was storing and processing until it comes back online or a new node is added. See the diagram below for a simple illustration of how adding a node to a cluster works within Riak.

Riak Nodes
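To give a feel for the key/value model itself, here's a minimal sketch using Basho's Python client for Riak. The connection details, bucket and key names are placeholder assumptions, and constructor arguments vary a bit between client versions, so treat this as illustrative rather than copy-paste ready.

# A minimal, illustrative sketch of Riak's key/value model using Basho's
# Python client. Host, bucket and key names are placeholders; constructor
# arguments vary between client versions.
import riak

# Connect to any node in the cluster -- every node can serve requests.
client = riak.RiakClient(host='10.0.0.10', pb_port=8087, protocol='pbc')

# Buckets act as namespaces for keys.
sessions = client.bucket('sessions')

# Store a JSON-serializable value under a key ...
obj = sessions.new('user:42', data={'cart': ['sku-123'], 'last_seen': '2013-04-30'})
obj.store()

# ... and fetch it back by key.
fetched = sessions.get('user:42')
print(fetched.data)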

We will support both the open source and the Enterprise versions of Riak. The open source version is a great place to start: It has all of the database functionality of Riak Enterprise, but it is limited to a single cluster. The Enterprise version adds replication between clusters across data centers, giving you lots of architectural options. You can use replication to build highly available, live-live failover applications. You can also use it to distribute your application's data across regions, giving you a global platform that you can update from anywhere in the world and know that those changes will be available everywhere else. Riak Enterprise customers also receive 24×7 coverage from both SoftLayer and Basho. This includes SoftLayer's one-hour guaranteed response for Severity 1 hardware issues and unlimited support via our secure web portal, email and phone.

The business case for this flexibility is simple: As your requirements change, you can easily add or remove nodes to scale up or down. You can opt for a single-data center environment with a few nodes, or you can broaden your architecture to a multi-data center deployment with a 40-node cluster. While these capabilities are inherent in Riak, they can be complicated to build and configure, so we spent countless hours working with Basho to streamline Riak deployment on the SoftLayer platform. The fruit of that labor can be found in our Riak Solution Designer:

Riak Solution Designer

The server configurations and packages in the Riak Solution Designer have been selected to deliver the performance, availability and stability that our customers expect from their bare metal and virtual cloud infrastructure at SoftLayer. With a few quick clicks, you can order a fully configured Riak environment, and it'll be provisioned and online for you in two to four hours. And everything you order is on a month-to-month contract.

Thanks to the hard work done by the SoftLayer development group and Basho's team, we're proud to be the first in the marketplace to offer a turn-key Riak solution on bare metal infrastructure. You don't need to sacrifice performance and agility for simplicity.

For more information, visit SoftLayer.com/Riak or contact our sales team.

-Duke

December 4, 2012

Big Data at SoftLayer: MongoDB

In one day, Facebook's databases ingest more than 500 terabytes of data, Twitter processes 500 million Tweets and Tumblr users publish more than 75 million posts. With such an unprecedented volume of information, developers face significant challenges when it comes to building an application's architecture and choosing its infrastructure. As a result, demand has exploded for "big data" solutions — resources that make it possible to process, store, analyze, search and deliver data from large, complex data sets. In light of that demand, SoftLayer has been working in strategic partnership with 10gen — the creators of MongoDB — to develop a high-performance, on-demand, big data solution. Today, we're excited to announce the launch of specialized MongoDB servers at SoftLayer.

If you've configured an infrastructure to accommodate big data, you know how much of a pain it can be: You choose your hardware, you configure it for a NoSQL workload, you install an open source NoSQL project that you think will meet your needs, and you keep tweaking your environment to optimize its performance. Assuming you have the resources (and patience) to get everything running efficiently, you'll wind up with the horizontally scalable database infrastructure you need to handle the volume of content you and your users create and consume. SoftLayer and 10gen are making that process a whole lot easier.

Our new MongoDB solutions take the time and guesswork out of configuring a big data environment. We give you an easy-to-use system for designing and ordering everything you need. You can start with a single server or roll out multiple servers in a single replica set across multiple data centers, and in under two hours, an optimized MongoDB environment is provisioned and ready to be used. I stress that it's an "optimized" environment because that's been our key focus. We collaborated with 10gen engineers on hardware and software configurations that provide the most robust performance for MongoDB, and we incorporated many of their MongoDB best practices. The resulting "engineered servers" are big data powerhouses:

MongoDB Configs

From each engineered server base configuration, you can customize your MongoDB server to meet your application's needs, and as you choose your upgrades from the base configuration, you'll see the thresholds at which you should consider upgrading other components. As your data set's size and the number of indexes in your database increase, you'll need additional RAM, CPU and storage resources, but you won't need them in the same proportions — certain components become bottlenecks before others. Sure, you could upgrade all of the components in a given database server at the same rate, but if, say, you upgrade everything when you only need more RAM, you'd be adding (and paying for) unnecessary CPU and storage capacity.

Using our new Solution Designer, it's very easy to graphically design a complex multi-site replica set. Once you finalize your locations and server configurations, you'll click "Order," and our automated provisioning system will kick into high gear. It deploys your server hardware, installs CentOS (with OS optimizations to provide MongoDB performance enhancements), installs MongoDB, installs MMS (MongoDB Monitoring Service) and configures the network connection on each server to cluster it with the other servers in your environment. A process that may have taken days of work and months of tweaking is completed in less than four hours. And because everything is standardized and automated, you run much less risk of human error.
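Once the environment is handed over, connecting to it looks like connecting to any other MongoDB replica set. Here's a minimal sketch using the pymongo driver; the hostnames, replica set name and database/collection names are placeholders for whatever your provisioned servers are actually called.

# A minimal sketch of connecting to a provisioned replica set with pymongo.
# Hostnames, the replica set name and the database/collection names are
# placeholders for your own environment.
from pymongo import MongoClient

# List any members of the replica set; the driver discovers the rest and
# routes writes to the current primary automatically.
client = MongoClient(
    'mongodb://mongo01.example.com,mongo02.example.com,mongo03.example.com'
    '/?replicaSet=rs0'
)

db = client.analytics
db.events.insert_one({'type': 'pageview', 'path': '/blog/mongodb'})
print(db.events.find_one({'type': 'pageview'}))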

MongoDB Configs

One of the other massive benefits of working so closely with 10gen is that we've been able to integrate 10gen's MongoDB Cloud Subscriptions into our offering. Customers who opt for a MongoDB Cloud Subscription get additional MongoDB features (like SSL and SNMP support) and support direct from the MongoDB authority. As an added bonus, since the 10gen team has an intimate understanding of the SoftLayer environment, they'll be able to provide even better support to SoftLayer customers!

You shouldn't have to sacrifice agility for performance, and you shouldn't have to sacrifice performance for agility. Most of the "big data" offerings in the market today are built on virtual servers that can be provisioned quickly but offer meager performance levels relative to running the same database on bare metal infrastructure. To get the performance benefits of dedicated hardware, many users have chosen to build, roll out and tweak their own configurations. With our MongoDB offering, you get the on-demand availability and flexibility of a cloud infrastructure with the raw power and full control of dedicated hardware.

If you've been toying with the idea of rolling out your own big data infrastructure, life just got a lot better for you.

-Duke

September 24, 2012

Cloud Computing is not a 'Thing' ... It's a way of Doing Things.

I like to think that we are beyond 'defining' cloud, but what I find in reality is that we still argue over basics. I have conversations in which people still delineate things like "hosting" from "cloud computing" based on degrees of single-tenancy. Now I'm a stickler for definitions just like the next pedantic software-religious guy, but when it comes to arguing minutiae about cloud computing, it's easy to lose the forest for the trees. Instead of discussing underlying infrastructure and comparing hypervisors, we'll look at two well-cited definitions of cloud computing that may help us unify our understanding of the model.

I use the word "model" intentionally there because it's important to note that cloud computing is not a "thing" or a "product." It's a way of doing business. It's an operations model that is changing the fundamental economics of writing and deploying software applications. It's not about a strict definition of some underlying service provider architecture or whether multi-tenancy is at the data center edge, the server or the core. It's about enabling new technology to be tested, to fail or succeed in blazingly short calendar time, and to support super-fast growth and scale with little planning. Let's try to keep that in mind as we look at how NIST and Gartner define cloud computing.

The National Institute of Standards and Technology (NIST) is a government organization that develops standards, guidelines and minimum requirements as needed by industry or government programs. Given the confusion in the marketplace, there's a huge "need" for a simple, consistent definition of cloud computing, so NIST had a pretty high-profile topic on its hands. Their resulting Cloud Computing Definition describes five essential characteristics of cloud computing, three service models, and four deployment models. Let's table the service models and deployment models for now and look at the five essential characteristics of cloud computing. I'll summarize them here; follow the link if you want more context or detail on these points:

  • On-Demand Self Service: A user can automatically provision compute without human interaction.
  • Broad Network Access: Capabilities are available over the network.
  • Resource Pooling: Computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned.
  • Rapid Elasticity: Capabilities can be elastically provisioned and released.
  • Measured Service: Resource usage can be monitored, controlled and reported.

The characteristics NIST uses to define cloud computing are pretty straightforward, but they are still a little ambiguous: How quickly does an environment have to be provisioned for it to be considered "on-demand"? If "broad network access" could just mean "connected to the Internet," why include that as a characteristic? When it comes to "measured service," how granular does the resource monitoring and control need to be for something to be considered "cloud computing"? A year? A minute? These characteristics cast a broad net, and we can build on that foundation as we set out to create a more focused definition.

For our next stop, let's look at Gartner's view: "A style of computing in which scalable and elastic IT-enabled capabilities are delivered as a service using Internet infrastructure." From a philosophical perspective, I love their use of "style" when talking about cloud computing. Little differentiates the underlying IT capabilities of cloud computing from other types of computing, so when looking at cloud computing, we really just see a variation on how those capabilities are being leveraged. It's important to note that Gartner's definition includes "elastic" alongside "scalable" ... Cloud computing gets the most press for being able to scale remarkably, but the flip-side of that expansion is that it also needs to contract on-demand.

All of this describes a way of deploying compute power that is completely different from how we've done it in the decades that we've been writing software. It used to take months to get funding and order the hardware to deploy an application. That's a lot of time and risk that startups and enterprises alike can now erase from their business plans.

How do we wrap all of those characteristics up into a unified definition of cloud computing? The way I look at it, cloud computing is an operations model that yields seemingly unlimited compute power when you need it. It enables (scalable and elastic) capacity as you need it, and that capacity's pricing is based on consumption. That doesn't mean a provider should charge by the compute cycle, generator fan RPM or some other arcane measurement of usage ... It means that a customer should understand the resources that are being invoiced, and he/she should have the power to change those resources as needed. A cloud computing environment has to have self-service provisioning that doesn't require manual intervention from the provider, and I'd even push that requirement a little further: A cloud computing environment should have API accessibility so a customer doesn't even have to manually intervene in the provisioning process (the customer's app could use automated logic and API calls to scale infrastructure up or down based on resource usage).
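To make that last point concrete, here's a rough sketch of the kind of logic an application could run on its own, with no human in the loop. The get_cluster_cpu_utilization, order_node and cancel_node helpers are hypothetical stand-ins for whatever monitoring and provisioning API calls a provider actually exposes, and the thresholds are arbitrary.

# A rough, hypothetical sketch of API-driven elasticity: the application
# watches its own load and orders or cancels capacity accordingly.
# get_cluster_cpu_utilization(), order_node() and cancel_node() are
# stand-ins for a provider's real monitoring and provisioning API calls.
import time

SCALE_UP_THRESHOLD = 0.80    # add capacity above 80% average CPU
SCALE_DOWN_THRESHOLD = 0.20  # release capacity below 20% average CPU
MIN_NODES = 2                # never shrink below two nodes

def autoscale_loop(get_cluster_cpu_utilization, order_node, cancel_node, node_ids):
    while True:
        utilization = get_cluster_cpu_utilization(node_ids)
        if utilization > SCALE_UP_THRESHOLD:
            # Self-service provisioning: no ticket, no phone call.
            node_ids.append(order_node())
        elif utilization < SCALE_DOWN_THRESHOLD and len(node_ids) > MIN_NODES:
            # Elasticity cuts both ways: stop paying for idle capacity.
            cancel_node(node_ids.pop())
        time.sleep(300)  # re-evaluate every five minutes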

I had the opportunity to speak at Cloud Connect Chicago, and I shared SoftLayer's approach to cloud computing and how it has evolved into a few distinct products that speak directly to our customers' needs:

The session was about 45 minutes, so the video above has been slimmed down a bit for easier consumption. If you're interested in seeing the full session and getting into a little more detail, we've uploaded an un-cut version here.

-Duke

June 20, 2012

How Do You Build a Private Cloud?

If you read Nathan's "A Cloud to Call Your Own" blog and want to learn a little more about private clouds in general or SoftLayer Private Clouds specifically, this post is for you. We're going to take a little time to dive deeper into the technology behind SoftLayer Private Clouds, and in the process, I'll talk a little about why particular platforms/hardware/configurations were chosen.

The Platform: Citrix CloudPlatform

There are several cloud infrastructure frameworks to choose from these days. We have surveyed a number of them and actively work with several. We are active participants in the OpenStack community, and we have working implementations of vSphere, Nimbula, Eucalyptus and other stacks in our data centers. So why CloudPlatform by Citrix?

First off, it's one of the most mature of these options. It's been around for several years and now has the substantial backing of Citrix. That backing includes investment, support organizations and the multitude of other products managed by Citrix, and we have some forward-looking ideas about how to leverage products like CloudBridge and NetScaler with Private Clouds. Second, CloudPlatform operates in accordance with how we believe a private cloud should work: It's simple, it doesn't have a huge management infrastructure, and we can charge for it by the CPU per month, just like all of our other products. Finally, CloudPlatform has made good inroads with enterprise customers. We love the idea that an enterprise ops team could leverage CloudPlatform as the management platform for both their on-premises and their off-premises private clouds.

So, we selected CloudPlatform for a multitude of reasons; not just one.

Another huge key was our ability to integrate CloudPlatform into the SoftLayer portals/mobile apps/API. Because many SoftLayer customers manage their environments exclusively through the SoftLayer API, we knew that a seamless integration there was an absolute necessity. With the help of the SoftLayer dev team and the CloudStack folks, we've been able to automate private clouds the same way we did for public cloud instances and dedicated servers.
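For a feel of what that programmatic control looks like, here's a sketch of building a signed request against a CloudPlatform (CloudStack-style) API endpoint. The endpoint URL, keys and command are placeholders, and the commands available to you depend on your CloudPlatform version, so consult the API documentation before relying on any of this.

# A sketch of signing a CloudPlatform (CloudStack-style) API request.
# The endpoint, keys and command below are placeholders.
import base64
import hashlib
import hmac
import urllib.parse
import urllib.request

ENDPOINT = 'https://cloud.example.com/client/api'   # hypothetical management server
API_KEY = 'your-api-key'
SECRET_KEY = 'your-secret-key'

def signed_url(command, **params):
    params.update({'command': command, 'apikey': API_KEY, 'response': 'json'})
    # The signature is an HMAC-SHA1 over the sorted, lower-cased query string.
    query = '&'.join(
        '{0}={1}'.format(key, urllib.parse.quote(str(value), safe=''))
        for key, value in sorted(params.items())
    )
    digest = hmac.new(SECRET_KEY.encode(), query.lower().encode(), hashlib.sha1).digest()
    signature = urllib.parse.quote(base64.b64encode(digest).decode(), safe='')
    return '{0}?{1}&signature={2}'.format(ENDPOINT, query, signature)

# Example: ask the management server which zones it knows about.
print(urllib.request.urlopen(signed_url('listZones')).read())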

The Hardware

When it came to choosing what hardware the private clouds would use, the decision was pretty simple. Given our need for automation, SoftLayer Private Clouds would need to be indistinguishable from a standard dedicated server or CloudLayer environment. We use the latest and greatest server hardware available on the market, and every month, you can see thousands of new Supermicro boxes being delivered to our data centers around the world. Because we have a reliable, powerful and consistent hardware foundation on which to build the private clouds product, integrating the system is even easier.

When it comes to the specs of the hardware provided for a private cloud environment, we provide as much transparency and flexibility as we can for a customer to build exactly what he or she needs. Let's look into what that means...

The Hardware Configurations

A CloudPlatform environment can be broken down into these components:

  • A single management server (that can manage multiple zones across layer 2 networks)
  • One or more zones
  • One or more clusters in a zone
  • One or more hosts in a cluster
  • Storage shared by a cluster (which can be a single server)

A simple diagram of a two-zone private cloud might look like this:

SoftLayer Private Clouds

We've set a standard "management server" configuration that we know will be able to accommodate all of your needs when it comes to running CloudPlatform, and how you build and configure the rest of your private cloud infrastructure is up to you. Whether you want a simple dual-proc, quad-core Nehalem box with a lot of local disk space for a dev cloud or an environment made up of quad-proc, 10-core Westmeres with SSDs, you have the freedom to choose exactly what you want.

Oh, and everything can be online in two to four hours, and it's offered on a month-to-month contract.

The Network Configuration

When it comes to where the hardware is provisioned, you have the ability to deploy zones in multiple geographies and manage them all through a single CloudPlatform management node. Given the way the SoftLayer three-tier network is built, the management node and host nodes do not even need to be accessible by our public network. You can choose to make accessible only the IPs used by the VMs you create. If your initial private cloud infrastructure is in Dallas and you want a node online in Singapore, you can just click a few buttons, and the new node will be provisioned and configured securely by CloudPlatform in a couple of hours.

Imagine how long it would have taken you to build this kind of infrastructure in the past:

SoftLayer Private Clouds

It doesn't take days or weeks now. It takes hours.

As you can see, when we approached the challenge of bringing private clouds to the SoftLayer platform, we had to innovate. In Texas, that would be roughly translated as "Go big or go home." Given the response we've seen from customers and partners since the announcement of SoftLayer Private Clouds, we know the industry has taken notice.

Will all of our customers need their own private cloud infrastructure? Probably not. But will the customers who've been looking for this kind of functionality be ecstatic with the CloudPlatform environment on SoftLayer's network? Absolutely.

-Duke

May 30, 2012

What Does Automation Look Like?

Innovation. Automation. Innovation. Automation. Innovation. Automation. That's been our heartbeat since SoftLayer was born on May 5, 2005. The "Innovation" piece is usually the most visible component of that heartbeat while "Automation" usually hangs out behind the scenes (enabling the "Innovation"). When we launch a new product line like Object Storage, add new functionality to the SoftLayer API, announce a partnership with a service provider like RightScale, or simply receive and rack the latest and greatest server hardware from our vendors, our automated platform allows us to do it quickly and seamlessly. Because our platform is built to do exactly what it's supposed to without any manual intervention, it's easily overlooked.

But what if we wanted to show what automation actually looks like?

It seems like a silly question to ask. Because our automated platform is powered by software built by the SoftLayer development team, there's no easy way to show what that automation looks like ... At least not directly. While the bits and bytes aren't easily visible, the operational results of automation are exceptionally photogenic. Let's take a look at a few examples of what automation enables to get an indirect view of what it actually looks like.

Example: A New Server Order

A customer orders a dedicated server. That customer wants a specific hardware configuration with a specific suite of software in a specific data center, and it needs to be delivered within four hours. What does that usually look like from an operations perspective?

SoftLayer Server Rack

If you want to watch those blinking lights for two or three hours, you'll have effectively watched a new server get provisioned at SoftLayer. When an order comes in, the automated provisioning system will find a server matching the order's hardware requirements in the requested data center facility, and the software will be installed before the server is handed over to the customer.

Example: Server Reboot or Operating System Reload

A customer needs to reboot a server or install a new operating system. Whether that means a soft reboot, a hard reboot with a full power cycle or a clean operating system install, the scene in the data center will look eerily familiar:

SoftLayer Server Rack

Gone are the days of server build technicians wheeling a terminal over to every server that needs work done. From thousands of miles away, a customer can remotely "unplug" his or her server via the rack's power strip, initiate a soft reboot or reinstall an operating system. But what if they want even more accessibility?

Example: What's on the Screen?

When remotely rebooting or power cycling a server isn't enough, a customer might want someone in the data center to wheel over to their server in the rack to look at any of the messages that can only be read with a monitor attached. This would generally happen behind the server, but for the sake of this example, we'll just watch the data center technician pass in front of the servers to get to the back:

SoftLayer Server Rack

Yeah, you probably could have seen that one coming.

Because KVM over IP is included on every server, physical carts carrying "keyboard, video and mouse" are few and far between. By automating customers' access to their server and providing as much virtual access as we possibly can, we're able to "get out of the way" of our technical users and only step in to help when that help is needed.
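To make "automating customers' access" a little more concrete, here's a hedged sketch of a customer soft-rebooting a server through SoftLayer's Python API bindings. The credentials and hardware ID are placeholders, and method names should be double-checked against the current API reference.

# A hedged sketch of remotely rebooting a dedicated server through the API.
# Credentials and the hardware ID are placeholders; verify method names
# against the current SoftLayer API reference.
import SoftLayer

client = SoftLayer.create_client_from_env(username='apiuser', api_key='0123abcd')

server_id = 12345  # hypothetical hardware ID from your account

# Issue a soft reboot -- no crash cart, no monitor, no walk to the rack.
client['Hardware_Server'].rebootSoft(id=server_id)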

I could go on and on with examples of cloud computing upgrades and downgrades, provisioning a firewall or adding a load balancer, but I'll practice a little restraint. If you want the full effect, you can scroll up and watch the blinking lights a little while longer.

Automation looks like what you don't see. No humanoid robots or needlessly complex machines (that I know of) ... Just a data center humming along with some beautiful flashing server lights.

-Duke

P.S. If you want to be able to remotely bask in the glow of some blinking server lights, bookmark the larger-sized SoftLayer Rack animated gif ... You could even title the bookmark, "Check on the Servers."

May 9, 2011

Will Write Poetry for Servers

Two weeks ago, I inadvertently opened the floodgates to a wave of creativity from the SoftLayer Development/Technology organization. Lance came by my office and dropped off a server he was given, and while I would have taken it home, souped it up and done something cool with it in previous years (or decades) of my life, I find myself in more of a "just buy an iMac" camp now.

Rather than endanger the safety of our employees by sending out a "First one to grab the server from my office gets to keep it" email, I sent out more of a challenge: "Write a haiku or limerick stating why you want the server. If I get more than one submission, I’ll pick the best poem. Oh ... And no Nantucket limericks."

I expected one or two entries to come in, but to my surprise, I was greeted with dozens almost immediately:

Windows NT crashed.
I am the Blue Screen of Death.
No one hears your screams.

There was a young man with a lance
Who had three kids to finance
Yes they look and they see
Asking for a PC
But their dad said no not a chance

Linux or Windows
Not up to how the wind blows
The penguin's a go

When you’re whipping your verse into shape
And are caught in a verse-challenged scrape,
The delete key is handy.
Assisted by brandy,
And last, but not least, try escape.

Given the overwhelming initial response, we sweetened the deal a little by adding a second server to the mix (from George). When it came time to judge and announce the winners, I had to do so with my own poem ... which killed me because I hadn't written a poem in years.

My inbox laden
Server Poets bring me pride
Rewards were doubled

There once was a SLayer named Bradley
Whose poem was flattering badly
He said 3BFL
We said ‘Oh, What the Hell’
And gave him a server quite gladly

Among numerous entries we found
That nerdy rhymes and rhymers abound
And so many came forth
Our hand was quite forced
So to the contest more servers were bound

Thus also a Slayer named Hemsell
Was chosen to leave with a morsel
Wash the zeros away
Rip and store CDs today
Make this sad server sing loud and fell

With generous swagger Karidis did add
A prize sure to make the cable co mad
For Scott Thompson’s poem
Was moving and solemn
An Apple TV should not make him sad

And finally the team of Hannon and Chong
Grammar and spelling and format all wrong
But their desire so true
And coding poetry new
Request will be supported so strong

Translation:

Server Winner: Bradley Johnson

One, two, three bar life
Free drink, free shirt, free server
Movie files need home

Server Winner: David Hemsell

CDs sit offline
Once proud server is no more
Fill barren zeros

Apple TV Winner: Scott Thompson

Your free server will
fail to bring much joy to me
I use Macintosh

Additional Computer-Related Award of Some Kind: Chong and Harold

import com.softlayer.server;
public class freeAssetReserver{
   int count = 0;
   String you = “hero”;
   function void vmBoxOursObserver();
}

Congratulations to Bradley and David for winning the servers and to Scott Thompson for walking away with the unadvertised Apple TV! When we were going through the submissions, we couldn't help but reward the submission from Chong and Harold - A coding limerick!

We'll post more of the submissions in the comments on this post, so be sure to scroll down and add your own!

-Duke

February 9, 2011

3 Bars | 3 Questions: Hybrid Hosting

In a new SoftLayer series, Duke Skarda sits in the hot seat to answer some hot topic hosting questions from Kevin Hazard.

This session's topic: Hybrid Computing.

Who would you like to see in the hot seat, and what hot topics do you want to hear about?

-Duke

January 20, 2011

Blurring the Line Between Dedicated and Cloud Service

What does "the cloud" mean to you right now? Does it mean "the Internet?" Is it how you think of outsourced IT? Does the nephologist in you immediately think of the large cumulonimubus creeping up the sky from the South? We read about how businesses are adopting cloud-this and cloud-that, but under many definitions we have been using cloud servers for years.

A couple years ago, Kevin wrote a post that gave a little context to the "cloud" terminology confusion:

The Internet is everywhere and the Internet is nowhere.

The fact that we can't point to anything tangible to define the Internet forces us to conceptualize an image that helps us understand how this paradox is possible. A lot of information is sitting around on servers somewhere out there, and when we connect to it, we have access to it all. Cloud, web, dump truck, tubes ... It doesn't matter what we call it because we're not defining the mechanics, we're defining the concepts.

For years, hosting companies have offered compute resources over the Internet for a monthly fee, but as new technologies emerge, it seems we have painted ourselves into a corner with our terminology. For the sake of this discussion, we'll differentiate dedicated servers as single-tenant hardware-dependent servers and cloud servers as multi-tenant hardware-independent servers.

Dedicated servers have some advantages that cloud servers typically haven't had in the past. If you wanted full OS support and control, predictable CPU and disk performance, big Internet pipes, multiple storage options and more powerful networking support, you were in the market for a dedicated server. If your priorities were hourly rates, instant turn-up, image-based provisioning and control via API, cloud servers were probably at the top of your shopping list.

Some competitive advantages of one over the other are fading: SoftLayer has a bare metal product that supports hourly rates for dedicated resources, and we can reliably turn up dedicated servers in under two hours. If you select a ready-made box, you might have it up and running in under 30 minutes. Our development team has also built a great API that allows unparalleled control over our dedicated servers.

On the flip-side, our cloud servers are supported just like our dedicated servers: You get the same great network, the ability to connect with other cloud and dedicated instances via private network, and predictable CPU usage with virtual machines pinned to a specific number of CPU cores.

Soon enough, deltas between dedicated performance and cloud functionality will be virtually eliminated and we'll all be able to adopt a unified understanding of what this "cloud" thing is, but until then, we'll do our best to express the competitive advantages of each platform so you can incorporate the right solutions for your needs into your infrastructure.

Engage ...

-Duke
