Posts Tagged 'Virtual'

October 24, 2013

Why Hybrid? Why Now?

As off-premise cloud computing adoption continues to grow in a non-linear fashion, a growing number of businesses running in-house IT environments are debating whether they should get on board as well. If you've been part of any of those conversations, you've tried to balance the hype with the most significant questions for your business: "How do we know if our company is ready to try cloud resources? And if we're ready, how do we actually get started?"

Your company is cloud-ready as soon as you understand and accept the ramifications of remote resources and scaling in the cloud model, and it doesn't have to be an "all-in" decision. If you need certain pieces of your infrastructure to reside in-house, you can start evaluating the cloud with workloads that don't have to be hosted internally. The traditional IT term for this approach is "hybrid," but that term might cause confusion these days.

In the simplest sense, a hybrid model is one in which a workload is handled by two or more heterogeneous elements. In the traditional IT sense, those heterogeneous elements are two distinct operating environments (on-prem and off-prem). In SoftLayer's world, a hybrid environment leverages a different pair of heterogeneous elements: bare metal servers and virtual server instances, both delivered in the cloud.

Figure 1: Traditional Hybrid - On-Premise to Cloud (Through VPN, SSL or Open Communications)

Figure 2: SoftLayer's Hybrid - Dedicated + Virtual

Because SoftLayer's "hybrid" and traditional IT's "hybrid" are so different, it's easy to understand the confusion in the marketplace: If a hybrid environment is generally understood to involve the connection of on-premise infrastructure to cloud resources, SoftLayer's definition seems contrarian. Actually, the two uses of the term are a lot more similar than you might expect. In a traditional hosting environment, most businesses think in terms of bare metal (dedicated) servers, and when those businesses move "to the cloud," they're generally thinking in terms of virtualized server instances. So SoftLayer's definition of a hybrid environment is very consistent with the market definition ... It's just all hosted off-premise.

The ability to have dedicated resources intermixed with virtual resources means that workloads from on-premise hypervisors that require native or near-native performance can be moved immediately. And because those workloads don't have to be powered by in-house servers, a company's IT infrastructure moves from a CapEx model to an OpEx model. In the past, adopting infrastructure as a service (IaaS) involved shoehorning workloads into whichever virtual resource most closely matched an existing environment, but those days are gone. Now, on-premise resources can be replicated (and upgraded) on demand in a single off-premise environment, leveraging a mix of virtual and dedicated resources.

SoftLayer's environment simplifies the process for businesses looking to move IT infrastructure off-premise. Those businesses can start by leveraging virtual server instances in a cloud environment while maintaining the in-house resources for certain workloads, and when those in-house resources reach the end of their usable life (or need an upgrade), the businesses can shift those workloads onto bare metal servers in the same cloud environment as their virtual server instances.

The real-world applications are pretty obvious: Your company is considering moving part of a workload to the cloud in order to handle peak season loads at the end of the year. You've contemplated transitioning parts of your environment to the cloud, but you've convinced yourself that shared resource pools are too inefficient and full of noisy neighbor problems, so you'd never be able to move your core infrastructure to the same environment. Furthering the dilemma, you still need to capitalize on the assets you already own that are still of use to the company.

You finally have the flexibility to slowly transition your environment to a scalable, flexible cloud environment without sacrificing performance. While the initial setup phases for a hybrid environment may seem arduous, Rome wasn't built in a day, so you shouldn't feel pressure to rush the construction of your IT environment. Here are a few key points to consider when adopting a hybrid model that will make life easier:

  • Keep it simple. Don't overcomplicate your environment. Keep networks, topologies and methodologies simple, and they'll be much more manageable and scalable.
  • Keep it secure. Simple, robust security principles will reduce your deployment timeframe and reduce attack points.
  • Keep it sane. Hybrid mixes the best of both worlds, so choose the best assets to move over. "Best" does not necessarily mean the "easiest" or "cheapest" workload, but it doesn't exclude those workloads either.

With this in mind, you're ready to take on a hybrid approach for your infrastructure. There's no certification for when your company finally becomes a "cloud company." The moment you start leveraging off-premise resources, you've got a hybrid environment, and you can adjust your mix of on-premise, off-premise, virtual and bare metal resources as your business needs change and evolve.

-Jeff Klink

Jeff Klink is a senior technical staff member (STSM) with IBM Canada.

July 16, 2013

Riak Performance Analysis: Bare Metal v. Virtual

In December, I posted a MongoDB performance analysis that showed the quantitative benefits of using bare metal servers for MongoDB workloads. It should come as no surprise that in the wake of SoftLayer's Riak launch, we've got some similar data to share about running Riak on bare metal.

To run this test, we started by creating five-node clusters with Riak 1.3.1 on SoftLayer bare metal servers and on a popular competitor's public cloud instances. For the SoftLayer environment, we created these clusters using the Riak Solution Designer, so the nodes were all provisioned, configured and clustered for us automatically when we ordered them. For the public cloud virtual instance Riak cluster, each node was provisioned individually using a Riak image template and manually configured into a cluster after all of the nodes had come online. To optimize for Riak performance, I made a few tweaks at the OS level of our servers (running CentOS 64-bit):

noatime
nodiratime
barrier=0
data=writeback
ulimit -n 65536

The common noatime and nodiratime mount options eliminate unnecessary writes during reads, which helps both performance and disk wear. The barrier=0 and data=writeback settings are a little less common and may not be what you'd normally set. Although those settings present a very slight risk of data loss on a disk failure, remember that the Riak solution is deployed in five-node rings with data redundantly available across multiple nodes in the ring. With that in mind, and with each node also deployed on a RAID10 storage array, the minor risk of losing data when a single disk in the solution fails has no impact on the overall data set (there are plenty of redundant copies of that data available). Given the minor risk involved, the performance increases from those two settings justify their use.
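For illustration, here's roughly where those settings would live on a CentOS system. This is just a sketch: the device name, mount point and filesystem below are assumptions, not the exact layout used in this test, and the riak account in limits.conf is likewise an assumed user name.

# /etc/fstab — example entry for the Riak data volume
# (device, mount point and ext4 are assumptions; barrier=0 and data=writeback are ext3/ext4 options)
/dev/md0  /var/lib/riak  ext4  noatime,nodiratime,barrier=0,data=writeback  0 0

# /etc/security/limits.conf — persistent equivalent of "ulimit -n 65536" for the (assumed) riak user
riak  soft  nofile  65536
riak  hard  nofile  65536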

With all of the nodes tweaked and configured into clusters, we set up Basho's test harness — Basho Bench — to remotely simulate load on the deployments. Basho Bench lets you define a configurable test plan for a Riak cluster by specifying the number of concurrent workers and the driver they use to generate load. It comes packaged as an Erlang application with an example config file that you can alter to set the concurrency, data set size, and duration of your tests. The results can be viewed as CSV data, and an optional graphics package lets you generate the graphs I'm posting in this blog. A simplified graphic of our test environment would look like this:

Riak Test Environment

The following Basho Bench config is what we used for our testing:

{mode, max}.                              % push as many operations as the cluster will take
{duration, 120}.                          % run for 120 minutes
{concurrent, 8}.                          % 8 concurrent worker processes
{driver, basho_bench_driver_riakc_pb}.    % Riak protocol buffers client driver
{key_generator,{int_to_bin,{uniform_int,1000000}}}.    % keys drawn uniformly from 0 to 1,000,000
{value_generator,{exponential_bin,4098,50000}}.        % exponentially distributed value sizes (4,098-byte floor)
{riakc_pb_ips, [{10,60,68,9},{10,40,117,89},{10,80,64,4},{10,80,64,8},{10,60,68,7}]}.    % the five cluster nodes
{riakc_pb_replies, 2}.                    % require two replicas to respond to each request
{operations, [{get, 10},{put, 1}]}.       % 10 gets for every 1 put

To spell it out a little more simply:

Tests Performed

Data Set: 400GB
10:1 Query-to-Update Operations
8 Concurrent Client Connections
Test Duration: 2 Hours
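
If you'd like to recreate a run like this, the workflow is pretty straightforward. The commands below are a rough sketch: the config file name is a placeholder, and the final step assumes you have R installed for Basho Bench's optional graphics package.

# Fetch and build Basho Bench (requires Erlang)
git clone https://github.com/basho/basho_bench.git
cd basho_bench && make

# Run the test plan shown above, saved here under a placeholder name
./basho_bench riak_cluster.config

# Generate summary graphs from the CSV results in tests/current (requires R)
make results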

You may notice that in the test cases that use SoftLayer "Medium" servers, the virtual provider nodes are running 26 virtual compute units against our dual-proc hex-core servers (12 cores total). In testing with Riak, memory is more important to the operations than CPU resources, so we provisioned the virtual instances to align with the 36GB of memory in each of the "Medium" SoftLayer servers. In the public cloud environment, that amount of RAM was only available in packages with higher CPU counts, so while the CPU counts differ, the RAM amounts are as close to even as we could make them.

One final "housekeeping" note before we dive into the results: The graphs below are pulled directly from the optional graphics package that displays Basho Bench results. You'll notice that the scale on the left-hand side of the graphs differs dramatically between the two environments, so a cursory look at the results might not tell the whole story. Click any of the graphs below for a larger version. At the end of each test case, we'll share a few observations about the operations per second and latency results from each test. When we talk about latency in the "key observations" sections, we'll talk about the 99th percentile line — 99% of the results had latency below this line. More simply, you could say, "This is the highest latency we saw on this platform in this test." The primary reason we're focusing on this line is that it's much easier to read on the graphs than the mean/median lines in the bottom graphs.
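(As an aside, you don't have to eyeball the graphs for that number. Basho Bench also writes per-operation latency CSVs — e.g., tests/current/get_latencies.csv — and, assuming the file layout of the version we used, with the 99th percentile in the eighth field and values in microseconds, a one-liner like the one below pulls out the worst 99th-percentile window of a run.)

# Highest 99th-percentile get latency in the run, converted from microseconds to milliseconds
# (field position and units are assumptions about the CSV layout)
awk -F, 'NR > 1 && $8 > max { max = $8 } END { printf "%.1f ms\n", max / 1000 }' tests/current/get_latencies.csv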

Riak Test 1: "Small" Bare Metal 5-Node Cluster vs Virtual 5-Node Cluster

Servers

SoftLayer Small Riak Server Node
  • Single 4-core Intel 1270 CPU
  • 64-bit CentOS
  • 8GB RAM
  • 4 x 500GB SATAII – RAID10
  • 1Gb Bonded Network

Virtual Provider Node
  • 4 Virtual Compute Units
  • 64-bit CentOS
  • 7.5GB RAM
  • 4 x 500GB Network Storage – RAID10
  • 1Gb Network

Results

Riak Performance Analysis: throughput and latency graphs

Key Observations

The SoftLayer environment showed much more consistency in operations per second, with an average throughput of around 450 Op/sec. The virtual environment's throughput varied significantly, from about 50 operations per second to more than 600, with the trend line fluctuating between roughly 220 Op/sec and 350 Op/sec.

Comparing the latency of get and put requests, the 99th percentile of results in the SoftLayer environment stayed around 50ms for gets and under 200ms for puts, while the same metric for the virtual environment hovered around 800ms for gets and 4000ms for puts. The scale of the graphs is drastically different, so if you aren't looking closely, you might not see how significantly the performance varies between the two.

Riak Test 2: "Medium" Bare Metal 5-Node Cluster vs Virtual 5-Node Cluster

Servers

SoftLayer Medium Riak Server Node
  • Dual 6-core Intel 5670 CPUs
  • 64-bit CentOS
  • 36GB RAM
  • 4 x 300GB 15K SAS – RAID10
  • 1Gb Network – Bonded

Virtual Provider Node
  • 26 Virtual Compute Units
  • 64-bit CentOS
  • 30GB RAM
  • 4 x 300GB Network Storage
  • 1Gb Network

Results

Riak Performance Analysis: throughput and latency graphs

Key Observations

Similar to the results of Test 1, the throughput numbers from the bare metal environment are more consistent (and are consistently higher) than the throughput results from the virtual instance environment. The SoftLayer environment performed between 1500 and 1750 operations per second on average while the virtual provider environment averaged around 1200 operations per second throughout the test.

The latency of get and put requests in Test 2 also paints a similar picture to Test 1. The 99th percentile of results in the SoftLayer environment stayed below 50ms for gets and under 400ms for puts, while the same metric for the virtual environment averaged about 250ms for gets and over 1000ms for puts. Latency in a big data application can be a killer, so the results from the virtual provider might be setting off alarm bells in your head.

Riak Test 3: "Medium" Bare Metal 5-Node Cluster vs Virtual 5-Node Cluster

Servers

SoftLayer Medium Riak Server Node
  • Dual 6-core Intel 5670 CPUs
  • 64-bit CentOS
  • 36GB RAM
  • 4 x 128GB SSD – RAID10
  • 1Gb Network – Bonded

Virtual Provider Node
  • 26 Virtual Compute Units
  • 64-bit CentOS
  • 30GB RAM
  • 4 x 300GB Network Storage
  • 1Gb Network

Results

Riak Performance Analysis: throughput and latency graphs

Key Observations

In Test 3, we're using the same specs for our virtual provider nodes, so the results for the virtual node environment are the same in Test 3 as they are in Test 2. In this test, the SoftLayer environment substitutes SSDs for the 15K SAS drives used in Test 2, and the throughput numbers show the impact of that improved I/O. The average throughput of the bare metal environment with SSDs is between 1750 and 2000 operations per second. Those numbers are slightly higher than the SoftLayer results in Test 2, further distancing the bare metal results from the virtual provider results.

The latency of gets for the SoftLayer environment is very difficult to see in this graph because the latency was so low throughout the test. The 99th percentile of puts in the SoftLayer environment settled between 500ms and 625ms, which was a little higher than the bare metal results from Test 2 but still well below the latency from the virtual environment.

Summary

The results show that — similar to the majority of data-centric applications we have tested — Riak delivers more consistent, higher-performing, lower-latency results when deployed on bare metal instead of on a cluster of public cloud instances. The stark differences in the consistency of the results and in latency are noteworthy for developers looking to host their big data applications. We compared the 99th percentile of latency, but the mean/median results are worth checking out as well. Look at the mean and median results from the SoftLayer SSD node environment: For gets, the mean latency was 2.5ms and the median was somewhere around 1ms. For puts, the mean was between 7.5ms and 11ms and the median was around 5ms. Those kinds of results are almost unbelievable (and that's why I've shared everything involved in this test, so you can try it yourself and see that there's no funny business going on).

It's commonly understood that local, single-tenant resources like bare metal will always perform better than network storage resources, but putting some concrete numbers on paper shows just how big the difference in performance can be. Virtualizing on multi-tenant solutions with network-attached storage often introduces latency issues, and performance will vary significantly depending on host load. These results may seem obvious, but sometimes the promise of quick and easy deployments on public cloud environments can lure even the sanest and most rational developer. Some applications are well suited for public cloud, but big data isn't one of them: when you have data-centric apps that require extreme I/O traffic to your storage medium, nothing beats local, high-performance resources.

-Harold

February 3, 2012

Server Hardware "Show and Tell" at Cloud Expo Europe

Bringing server hardware to a "Cloud Expo" is like bringing a knife to a gun fight. Why would anyone care about hardware? Isn't "the cloud" a magical land where servers and data centers cease to exist and all that matters is that your hardware-abstracted hypervisor can scale elastically on demand?

You might be surprised how many attendees at Cloud Expo Europe expressed that sentiment in one way or another when SoftLayer showed up in London with the infamous Server Challenge last week. Based on many of the conversations I had with attendees, some of the most basic distinctions and characteristics of physical and virtual environments are widely misunderstood. Luckily, we had a nice little server rack to use as a visual while talking about how SoftLayer fits in (and stands out) when it comes to "the cloud."

When we didn't have a line of participants waiting to try their hand at our in-booth competition, we were able to use it to "show and tell" what a cloud hardware architecture might look like and what distinguishes SoftLayer from some of the other infrastructure providers in the industry. We were able to show our network-within-a-network topology, explain the pod concept of our data centers and how it streamlines our operations, and talk about our system automation and how it speeds up the provisioning of both physical and virtual environments. Long-term memory is aided by the use of multiple senses, so when each attendee can see and touch what they're hearing about in our booth, they have a much better chance of remembering the conversation amid the dozens (if not hundreds) they have before and after they talk to us.

And by the time we finish using the Server Challenge as a visual, the attendee is usually ready to compete. As you probably noticed if you caught the Cloud Expo Europe album at Facebook.com/SoftLayer, the competition was pretty intense. In fact, the winning time of 1:08.16 was set just about twenty minutes before the conference ended ... In the short video below, Phil presents the winner of the Cloud Expo Europe Server Challenge with his iPad 2 and asks for some insight about how he was able to pull off the victory:

Because this was the international debut of the Server Challenge, we were a bit nervous that the competition wouldn't have as much appeal as we've seen in the past, but given the response we received from attendees, it's pretty safe to say this won't be the last time you see the Server Challenge abroad.

To all of the participants who competed last week, thanks for stopping by our booth, and we hope you're enjoying your "torch" (if you beat the 2:00.00 flashlight-winning time)!

-@khazard

August 3, 2011

CyberlinkASP: Tech Partner Spotlight

This is a guest blog from Chris Lantrip, CEO of CyberlinkASP, an application service provider focused on hosting, upgrading and managing the industry's best software.

The DesktopLayer from CyberlinkASP

Hosted virtual desktops – SoftLayer style.

In early 2006, we were introduced to SoftLayer. In 2007, they brought us StorageLayer, and in 2009, CloudLayer. Each of those solutions met a different kind of need in the Application Service Provider (ASP) world, and by integrating those platforms into our offering, DesktopLayer was born: The on-demand anytime, anywhere virtual desktop hosted on SoftLayer and powered by CyberlinkASP.

CyberlinkASP was originally established to instantly web-enable software applications that were not online in the past. Starting off as a Citrix integration firm in the early days, we were approached by multiple independent software vendors asking us to host, manage and deliver their applications from a centralized database platform to their users across multiple geographic locations. With the robust capabilities of Citrix, we were able to revolutionize application delivery and management for several ISVs.

Over time, more ISVs started showing up at our doorstep, and application delivery became a bigger and bigger piece of our business. Our ability to provision users on a specific platform in minutes, delete them in minutes, perform updates, and maintain hundreds of customers and thousands of users all at one time from a centralized platform was very attractive.

Our users began asking us, "Is it possible to put our payroll app on this platform too?" "What about Exchange and Office?" They loved the convenience of not managing the DBs for individual applications, and they obviously wanted more. Instead of providing one-off solutions for individual applications, we built the DesktopLayer, a hosted environment for virtual desktops.

We deliver a seamless and integrated user experience utilizing SoftLayer, Citrix XenApp and XenDesktop. When our users log in they see the same screen, the same applications and the same performance they received on their local machine. The Citrix experience takes over the entire desktop, and the look and feel is indistinguishable. It's exactly what they are accustomed to.

Our services always include the Microsoft suite (Exchange, Office, SharePoint) and are available on any device, from your PC to your Mac to your iPad. To meet the needs of our customers, we also integrate all third-party apps and non-Microsoft software into the virtual desktop – if our customers are using Peachtree or QuickBooks for accounting and Kronos for HR, those applications are seamlessly published to the users who should access them and unavailable to those who should not.

We hang our hat on our unique ability to tie all of a company's applications into one centralized user experience and support it. Our Dallas-based call center is staffed with a team of knowledgeable engineers who are always ready to help troubleshoot and can add/delete and customize new users in minutes. We take care of everything ... When someone needs help setting up a printer or they bought a new scanner, they call our helpdesk and we take it from there. Users can call us directly for support and leave the in-house IT team to focus on other areas, not desktop management.

With the revolution of cloud computing, many enterprises are trending toward the eradication of physical infrastructure in their IT environments. Every day, we see more and more demand from IT managers who want us to assume the day-to-day management of their end user's entire desktop, and over the past few years, the application stack that we deliver to each of our end users has grown significantly.

As Citrix would say, "the virtual desktop revolution is here." The days of having to literally touch hundreds of devices at users' workstations are over. Servers in the back closet are gone. End users have become much more unique and mobile ... They want the same access, performance and capabilities regardless of geography. That's what we provide. DesktopLayer, with instant computing resources available from SoftLayer, is the future.

I remember someone telling me in 2006 that it was time for the data center to "grow up". It has. We now have hundreds of SMB clients and thousands of virtual desktops in the field today, and we love having a chance to share a little about how we see the IT landscape evolving. Thanks to our friends at SoftLayer, we get to tell that story and boast a little about what we're up to!

- Chris M. Lantrip, Chief Executive, CyberlinkASP

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace.
These Partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.

June 8, 2007

Your Datacenter is Obsolete

By 2010, the datacenter as we know it today will be dead. Datacenters of the future will be ultra-high-density, geographically dispersed IT utility centers. Datacenters will be focused on maximizing all facets of the IT environment, including floor space, HVAC, power, server form factor, security, storage, networking, bandwidth, personnel and preventive maintenance. Physically, I envision relatively small, 5,000-square-foot, lights-out bunkers installed across the globe, utilizing commodity infrastructure on owned or leased footprints and housing servers at a density of 10 per square foot.

The datacenters will be designed, built, and fully functional on day one -- including the installation of all IT equipment. There will be no movement of physical components, as everything will be managed virtually through a series of networks and management tools -- a datacenter grid, if you will. These datacenters will only require personnel for failure replacement or maintenance. Workloads on failed hardware nodes would automatically route to other nodes in the same datacenter, and the failure of an entire datacenter would result in a re-route of data to other facilities. A series of failsafe datacenters, with all data, will sit on the edge near the end user for maximum performance and efficiency. Companies would select geographical regions for their installations of IT services.

The datacenter of the future is indifferent to the technology of the day. Dedicated hosting, virtualization, grid computing or the next emerging technology all work in the datacenter of the future because they will be designed as an IT utility. It's time for the datacenter to grow up.

-@lavosby
