Posts Tagged 'Hypervisor'

November 2, 2015

The multitenant problem solver is here: VMware NSX 6 on SoftLayer

We’re very excited to tell you about what’s coming down the pike here at SoftLayer: VMware NSX 6! This is something that I’ve personally been anticipating for a while now, because it solves so many of the issues we run into on a multitenant platform. Here’s a diagram to explain exactly how it works:

As you can see, it uses the SoftLayer network as the underlay network and fabric, with NSX as the overlay network, to create a software-defined network (SDN).

What is it?
VMware NSX is a virtual networking and security software product built from VMware's vCloud Networking and Security (vCNS) and Nicira's Network Virtualization Platform (NVP). NSX software-defined networking is part of VMware's software-defined data center concept, which offers cloud computing on VMware virtualization technologies. VMware's stated goal with NSX is to provision virtual networking environments without command line interfaces or other direct administrator intervention.

Network virtualization abstracts network operations from the underlying hardware onto a distributed virtualization layer, much like server virtualization does for processing power and operating systems. VMware vCNS (formerly called vShield) virtualizes L4–L7 of the network; Nicira's NVP virtualizes the network fabric, L2 and L3. VMware says that NSX will expose logical firewalls, switches, routers, ports, and other networking elements to allow virtual networking among vendor-agnostic hypervisors, cloud management systems, and associated network hardware. It will also support external networking and security ecosystem services.

How does it work?
NSX network virtualization is an architecture that enables the full potential of a software-defined data center (SDDC), making it possible to create and run entire networks in parallel on top of existing network hardware. This results in faster deployment of workloads and greater agility in creating dynamic data centers.

This means you can create a flexible pool of network capacity that can be allocated, utilized, and repurposed on demand. You can decouple the network from the underlying hardware and apply virtualization principles to network infrastructure. You’re able to deploy networks in software that are fully isolated from each other, as well as from other changes in the data center. NSX reproduces the entire networking environment in software, including L2, L3, and L4–L7 network services within each virtual network. NSX offers a distributed logical architecture for L2–L7 services, provisioning them programmatically when virtual machines are deployed and moving them with the virtual machines. With NSX, the physical network you already have is all you need for a next-generation data center.
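
To make "deploying networks in software" a little more concrete, here is a minimal sketch of what programmatic provisioning against the NSX Manager REST API can look like, using Python and the requests library. The manager address, credentials, and transport zone ID are placeholders, and the endpoint follows the NSX-v 6.x convention, so treat this as an illustration rather than a recipe.

```python
# Illustrative sketch: create an NSX logical switch (an overlay segment) via
# the NSX Manager REST API. The manager address, credentials, and transport
# zone (scope) ID are placeholders; the endpoint shown follows the NSX-v 6.x
# convention and may differ in other NSX versions.
import requests

NSX_MANAGER = "https://nsx-manager.example.com"  # placeholder
SCOPE_ID = "vdnscope-1"                          # placeholder transport zone ID

payload = """
<virtualWireCreateSpec>
    <name>tenant-a-web-tier</name>
    <tenantId>tenant-a</tenantId>
    <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
</virtualWireCreateSpec>
"""

resp = requests.post(
    f"{NSX_MANAGER}/api/2.0/vdn/scopes/{SCOPE_ID}/virtualwires",
    data=payload,
    headers={"Content-Type": "application/xml"},
    auth=("admin", "changeme"),  # placeholder credentials
    verify=False,                # lab-only; validate certificates in production
)
resp.raise_for_status()
print("Created logical switch:", resp.text)  # response contains the new virtualwire ID
```

Each tenant network created this way is isolated at the overlay layer, which is exactly the multitenancy property described above.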

What are some major features?
NSX brings an SDDC approach to network security. Its network virtualization capabilities enable the three key functions of micro-segmentation: isolation (no communication across unrelated networks), segmentation (controlled communication within a network), and security with advanced services (tight integration with leading third-party security solutions).

The key benefits of micro-segmentation include:

  1. Network security inside the data center: Fine-grained policies enable firewall controls and advanced security down to the level of the virtual NIC.
  2. Automated security for speed and agility in the data center: Security policies are automatically applied when a virtual machine spins up, moved when a virtual machine is migrated, and removed when a virtual machine is deprovisioned—eliminating the problem of stale firewall rules.
  3. Integration with the industry’s leading security products: NSX provides a platform for technology partners to bring their solutions to the SDDC. With NSX security tags, these solutions can adapt to constantly changing conditions in the data center for enhanced security.
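
To illustrate what those benefits boil down to, here is a small, purely illustrative sketch of a micro-segmentation policy expressed as data with a default-deny fallback. The group names and ports are invented for this example; in NSX they would correspond to security groups and distributed firewall rules enforced at each virtual NIC.

```python
# Illustrative only: a micro-segmentation policy expressed as data. The group
# names and ports are invented; in NSX they would map to security groups and
# distributed firewall rules applied at each virtual NIC.
RULES = [
    {"src": "web-tier", "dst": "app-tier", "port": 8443, "action": "allow"},
    {"src": "app-tier", "dst": "db-tier",  "port": 3306, "action": "allow"},
    # Anything not explicitly allowed falls through to the default below.
]

def evaluate(src_group: str, dst_group: str, port: int) -> str:
    """Return the action the policy takes for a flow between two groups."""
    for rule in RULES:
        if (rule["src"], rule["dst"], rule["port"]) == (src_group, dst_group, port):
            return rule["action"]
    return "drop"  # default deny: this is the "isolation" property

print(evaluate("web-tier", "app-tier", 8443))  # allow -> controlled communication (segmentation)
print(evaluate("web-tier", "db-tier", 3306))   # drop  -> unrelated traffic is blocked (isolation)
```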

As you can see, there are lots of great features and benefits for our customers.

You can find more resources about NSX on SoftLayer here. Make sure to keep your eyes peeled for more NSX news!


October 20, 2015

What’s in a hypervisor? More than you think

Virtualization has always been a key enabler of cloud computing services. From the get-go, SoftLayer has offered a variety of options, including Citrix XenServer, Microsoft Hyper-V, and Parallels Cloud Server, just to name a few. It’s all about enabling choice.

But what about VMware—the company that practically pioneered virtualization, making it commonplace?

Well, we have some news to share. SoftLayer has always supported VMware ESX and ESXi—your basic, run-of-the-mill hypervisor—but now we’re enabling enterprise customers to run VMware vSphere on our bare metal servers.

This collaboration is significant for SoftLayer and IBM because it gives our customers tremendous flexibility and transparency when moving workloads into the public cloud. Enterprises already familiar with VMware can easily extend their existing on-premises VMware infrastructure into the IBM Cloud with simplified, monthly pricing. This makes transitioning into a hybrid model easier because it results in greater workload mobility and application continuity.

But the real magic happens when you couple our bare metal performance with VMware vSphere. Users can complete live workload migrations between data centers across continents, and they can move and deploy enterprise applications and disaster recovery solutions across our global network of cloud data centers with just a few clicks of a mouse. Take a look at this demo and judge for yourself.
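
For a sense of what scripted workload mobility can look like, here is a minimal sketch using the open-source pyVmomi SDK for vSphere; it is not SoftLayer's tooling. The vCenter address, credentials, and object names are placeholders, and a real cross-data-center migration also involves network and storage mappings that this sketch glosses over.

```python
# Minimal sketch: start a live relocation of a running VM to another host
# using pyVmomi. All host names, credentials, and object names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Walk the vCenter inventory and return the first object with this name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

ctx = ssl._create_unverified_context()  # lab-only; validate certificates in production
si = SmartConnect(host="vcenter.example.com",          # placeholder vCenter
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    vm = find_by_name(content, vim.VirtualMachine, "app-server-01")        # placeholder VM
    target = find_by_name(content, vim.HostSystem, "esxi02.example.com")   # placeholder host

    spec = vim.vm.RelocateSpec()
    spec.host = target                   # move the running VM to the target host
    task = vm.RelocateVM_Task(spec)      # live migration (vMotion) task
    print("Migration task started:", task.info.key)
finally:
    Disconnect(si)
```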

What’s in a hypervisor? For some, it’s an on-ramp to the cloud and a way to make hybrid computing a reality. When you pair the flexibility of VMware with our bare metal servers, users get a combination that’s hard to beat.

We’re innovating to help companies make the transition to hybrid cloud, one hypervisor at a time. For more details, visit

-Jack Beech, VP of Business Development

July 29, 2013

A Brief History of Cloud Computing

Believe it or not, "cloud computing" concepts date back to the 1950s when large-scale mainframes were made available to schools and corporations. The mainframe's colossal hardware infrastructure was installed in what could literally be called a "server room" (since the room would generally only be able to hold a single mainframe), and multiple users were able to access the mainframe via "dumb terminals" – stations whose sole function was to facilitate access to the mainframes. Due to the cost of buying and maintaining mainframes, an organization wouldn't be able to afford a mainframe for each user, so it became common practice to allow multiple users to share access to the same data storage layer and CPU power from any station. By enabling shared mainframe access, an organization would get a better return on its investment in this sophisticated piece of technology.

Mainframe Computer

A couple decades later in the 1970s, IBM released an operating system called VM that allowed admins on their System/370 mainframe systems to have multiple virtual systems, or "Virtual Machines" (VMs) on a single physical node. The VM operating system took the 1950s application of shared access of a mainframe to the next level by allowing multiple distinct compute environments to live in the same physical environment. Most of the basic functions of any virtualization software that you see nowadays can be traced back to this early VM OS: Every VM could run custom operating systems or guest operating systems that had their "own" memory, CPU, and hard drives along with CD-ROMs, keyboards and networking, despite the fact that all of those resources would be shared. "Virtualization" became a technology driver, and it became a huge catalyst for some of the biggest evolutions in communications and computing.

Mainframe Computer

In the 1990s, telecommunications companies that had historically only offered single dedicated point-to-point data connections started offering virtualized private network connections with the same service quality as their dedicated services at a reduced cost. Rather than building out physical infrastructure to allow for more users to have their own connections, telcos were able to provide users with shared access to the same physical infrastructure. This change allowed the telcos to shift traffic as necessary to achieve better network balance and more control over bandwidth usage. Meanwhile, virtualization for PC-based systems started in earnest, and as the Internet became more accessible, the next logical step was to take virtualization online.

If you were in the market to buy servers ten or twenty years ago, you know that the costs of physical hardware, while not at the same level as the mainframes of the 1950s, were pretty outrageous. As more and more people expressed demand to get online, the costs had to come out of the stratosphere, and one of the ways that was made possible was by ... you guessed it ... virtualization. Servers were virtualized into shared hosting environments, Virtual Private Servers, and Virtual Dedicated Servers using the same types of functionality provided by the VM OS in the 1970s. As an example of what that looked like in practice, let's say your company required 13 physical systems to run your sites and applications. With virtualization, you could take those 13 distinct systems and split them up between two physical nodes. Obviously, this kind of environment saves on infrastructure costs and minimizes the amount of actual hardware you would need to meet your company's needs.


As the costs of server hardware slowly came down, more users were able to purchase their own dedicated servers, and they started running into a different kind of problem: One server isn't enough to provide the resources I need. The market shifted from a belief that "these servers are expensive, let's split them up" to "these servers are cheap, let's figure out how to combine them." Because of that shift, the most basic understanding of "cloud computing" was born online. By installing and configuring a piece of software called a hypervisor across multiple physical nodes, a system would present all of the environment's resources as though those resources were in a single physical node. To help visualize that environment, technologists used terms like "utility computing" and "cloud computing" since the sum of the parts seemed to become a nebulous blob of computing resources that you could then segment out as needed (like telcos did in the 90s). In these cloud computing environments, it became easy to add resources to the "cloud": Just add another server to the rack and configure it to become part of the bigger system.


As technologies and hypervisors got better at reliably sharing and delivering resources, many enterprising companies decided to start carving up the bigger environment to bring the cloud's benefits to users who don't happen to have an abundance of physical servers available to create their own cloud computing infrastructure. Those users could order "cloud computing instances" (also known as "cloud servers") by selecting the resources they need from the larger pool of available cloud resources, and because the servers are already online, the process of "powering up" a new instance or server is almost instantaneous. Because little overhead is involved for the owner of the cloud computing environment when a new instance is ordered or cancelled (since it's all handled by the cloud's software), management of the environment is much easier. Most companies today operate with this idea of "the cloud" as the current definition, but SoftLayer isn't "most companies."

SoftLayer took the idea of a cloud computing environment and pulled it back one more step: Instead of installing software on a cluster of machines to allow for users to grab pieces, we built a platform that could automate all of the manual aspects of bringing a server online without a hypervisor on the server. We call this platform "IMS." What hypervisors and virtualization do for a group of servers, IMS does for an entire data center. As a result, you can order a bare metal server with all of the resources you need and without any unnecessary software installed, and that server will be delivered to you in a matter of hours. Without a hypervisor layer between your operating system and the bare metal hardware, your servers perform better. Because we automate almost everything in our data centers, you're able to spin up load balancers and firewalls and storage devices on demand and turn them off when you're done with them. Other providers have cloud-enabled servers. We have cloud-enabled data centers.
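
To give a taste of that automation, here is a small illustrative sketch using the open-source SoftLayer Python client. It reads credentials from the environment and simply lists bare metal servers and data centers; actually placing an order depends on account-specific packages and pricing, so that step is left out.

```python
# Illustrative sketch using the open-source SoftLayer Python client
# (pip install softlayer). Credentials are read from the environment
# (SL_USERNAME / SL_API_KEY); nothing here orders or changes anything.
import SoftLayer

client = SoftLayer.create_client_from_env()   # uses SL_USERNAME / SL_API_KEY
hardware = SoftLayer.HardwareManager(client)

# Every bare metal server on the account, provisioned by the same
# automation described above.
for server in hardware.list_hardware():
    print(server.get("hostname"), server.get("primaryIpAddress"))

# The same low-level API is reachable directly as well, for example:
datacenters = client.call("Location_Datacenter", "getDatacenters")
print([dc["name"] for dc in datacenters])
```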

SoftLayer Pod

IBM and SoftLayer are leading the drive toward wider adoption of innovative cloud services, and we have ambitious goals for the future. If you think we've come a long way from the mainframes of the 1950s, you ain't seen nothin' yet.


February 3, 2012

Server Hardware "Show and Tell" at Cloud Expo Europe

Bringing server hardware to a "Cloud Expo" is like bringing a knife to a gun fight. Why would anyone care about hardware? Isn't "the cloud" a magical land where servers and data centers cease to exist and all that matters is that your hardware-abstracted hypervisor can scale elastically on demand?

You might be surprised how many attendees at Cloud Expo Europe expressed that sentiment in one way or another when SoftLayer showed up in London with the infamous Server Challenge last week. Based on many of the conversations I had with attendees, some of the most basic distinctions and characteristics of physical and virtual environments are widely misunderstood. Luckily, we had a nice little server rack to use as a visual while talking about how SoftLayer fits in (and stands out) when it comes to "the cloud."

When we didn't have a line of participants waiting to try their hand at our in-booth competition, we were able to use the server rack to "show and tell" what a cloud hardware architecture might look like and what distinguishes SoftLayer from some of the other infrastructure providers in the industry. We showed our network-within-a-network topology, explained the pod concept of our data centers and how it streamlines our operations, and talked about our system automation and how it speeds up the provisioning of both physical and virtual environments. Long-term memory is aided by the use of multiple senses, so when attendees can see and touch what they're hearing about in our booth, they have a much better chance of remembering the conversation amid the dozens (if not hundreds) they have before and after they talk to us.

And by the time we finish using the Server Challenge as a visual, the attendee is usually ready to compete. As you probably noticed if you caught the Cloud Expo Europe album, the competition was pretty intense. In fact, the winning time of 1:08.16 was set just about twenty minutes before the conference ended ... In the short video below, Phil presents the winner of the Cloud Expo Europe Server Challenge with his iPad 2 and asks for some insight about how he was able to pull off the victory:

Since this was the international debut of the Server Challenge, we were a bit nervous that the competition wouldn't have as much appeal as we've seen in the past, but given the response we received from attendees, it's pretty safe to say this won't be the last time you'll see the Server Challenge abroad.

To all of the participants who competed last week, thanks for stopping by our booth, and we hope you're enjoying your "torch" (if you beat the 2:00.00 flashlight-winning time)!

