Posts Tagged 'Virtualization'

November 2, 2015

The multitenant problem solver is here: VMware NSX 6 on SoftLayer

We’re very excited to tell you about what’s coming down the pike here at SoftLayer: VMware NSX 6! This is something I’ve personally been anticipating for a while now, because it solves so many of the issues we confront on a multitenant platform. Here’s a diagram to explain exactly how it works:

As you can see, it uses the SoftLayer network as the underlay network and fabric, with NSX as the overlay network, to create a software-defined network (SDN).

What is it?
VMware NSX is a virtual networking and security software product built from VMware's vCloud Networking and Security (vCNS) and Nicira's Network Virtualization Platform (NVP). NSX software-defined networking is part of VMware's software-defined data center concept, which offers cloud computing on VMware virtualization technologies. VMware's stated goal with NSX is to provision virtual networking environments without command line interfaces or other direct administrator intervention. Network virtualization abstracts network operations from the underlying hardware onto a distributed virtualization layer, much like server virtualization does for processing power and operating systems. VMware vCNS (formerly called vShield) virtualizes L4–L7 of the network. Nicira's NVP virtualizes the network fabric, L2 and L3. VMware says that NSX will expose logical firewalls, switches, routers, ports, and other networking elements to allow virtual networking among vendor-agnostic hypervisors, cloud management systems, and associated network hardware. It will also support external networking and security ecosystem services.

How does it work?
NSX network virtualization is an architecture that enables the full potential of a software-defined data center (SDDC), making it possible to create and run entire networks in parallel on top of existing network hardware. This results in faster deployment of workloads and greater agility in creating dynamic data centers.

This means you can create a flexible pool of network capacity that can be allocated, utilized, and repurposed on demand. You can decouple the network from underlying hardware and apply virtualization principles to network infrastructure. You’re able to deploy networks in software that are fully isolated from each other, as well as from other changes in the data center. NSX reproduces the entire networking environment in software, including L2, L3 and L4–L7 network services within each virtual network. NSX offers a distributed logical architecture for L2–L7 services, provisioning them programmatically when virtual machines are deployed and moving them with the virtual machines. With NSX, you already have the physical network resources you need for a next-generation data center.

What are some major features?
NSX brings an SDDC approach to network security. Its network virtualization capabilities enable the three key functions of micro-segmentation: isolation (no communication across unrelated networks), segmentation (controlled communication within a network), and security with advanced services (tight integration with leading third-party security solutions).

The key benefits of micro-segmentation include:

  1. Network security inside the data center: Fine-grained policies enable firewall controls and advanced security down to the level of the virtual NIC.
  2. Automated security for speed and agility in the data center: Security policies are automatically applied when a virtual machine spins up, moved when a virtual machine is migrated, and removed when a virtual machine is deprovisioned—eliminating the problem of stale firewall rules.
  3. Integration with the industry’s leading security products: NSX provides a platform for technology partners to bring their solutions to the SDDC. With NSX security tags, these solutions can adapt to constantly changing conditions in the data center for enhanced security.
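
To make micro-segmentation more concrete, here is a minimal, purely illustrative Python sketch (not the NSX API) of the idea behind the first two benefits: firewall rules are defined against security tags rather than IP addresses, attach at the virtual NIC when a VM spins up, and disappear when the VM is deprovisioned, so no stale rules are left behind.

```python
# Illustrative only -- this models the *idea* of micro-segmentation,
# not VMware NSX itself or its API.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Rule:
    source: str        # security tag, e.g. "web-tier", or "any"
    destination: str   # security tag being protected, e.g. "db-tier"
    port: int
    action: str        # "allow" or "deny"

@dataclass
class VirtualNic:
    vm_name: str
    security_tags: List[str]

class MicroSegmentationPolicy:
    """Rules are keyed to security tags, so they follow the VM lifecycle."""

    def __init__(self) -> None:
        self.rules_by_tag: Dict[str, List[Rule]] = {}
        self.attached: Dict[str, List[Rule]] = {}

    def define(self, tag: str, rules: List[Rule]) -> None:
        self.rules_by_tag[tag] = rules

    def vm_provisioned(self, nic: VirtualNic) -> None:
        # Policy is enforced at the virtual NIC, not at a perimeter device.
        self.attached[nic.vm_name] = [
            rule
            for tag in nic.security_tags
            for rule in self.rules_by_tag.get(tag, [])
        ]

    def vm_deprovisioned(self, vm_name: str) -> None:
        # Removing the VM removes its rules -- no stale firewall entries.
        self.attached.pop(vm_name, None)

policy = MicroSegmentationPolicy()
policy.define("web-tier", [
    Rule("any", "web-tier", 443, "allow"),       # isolation: only HTTPS in
    Rule("web-tier", "db-tier", 3306, "allow"),  # segmentation: web may reach db
])
policy.vm_provisioned(VirtualNic("web01", ["web-tier"]))
policy.vm_deprovisioned("web01")   # rules leave with the VM
```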

As you can see, there are lots of great features and benefits for our customers.

You can find more great resources about NSX on SoftLayer here. Make sure to keep your eyes peeled for more great NSX news!


October 20, 2015

What’s in a hypervisor? More than you think

Virtualization has always been a key enabler of cloud computing services. From the get-go, SoftLayer has offered a variety of options, including Citrix XenServer, Microsoft Hyper-V, and Parallels Cloud Server, just to name a few. It’s all about enabling choice.

But what about VMware—the company that practically pioneered virtualization, making it commonplace?

Well, we have some news to share. SoftLayer has always supported VMware ESX and ESXi—your basic, run-of-the-mill hypervisors—but now we’re enabling enterprise customers to run VMware vSphere on our bare metal servers.

This collaboration is significant for SoftLayer and IBM because it gives our customers tremendous flexibility and transparency when moving workloads into the public cloud. Enterprises already familiar with VMware can easily extend their existing on-premises VMware infrastructure into the IBM Cloud with simplified, monthly pricing. This makes transitioning into a hybrid model easier because it results in greater workload mobility and application continuity.

But the real magic happens when you couple our bare metal performance with VMware vSphere. Users can complete live workload migrations between data centers across continents. Users can easily move and implement enterprise applications and disaster recovery solutions across our global network of cloud data centers—with just a few clicks of a mouse. Take a look at this demo and judge for yourself.

What’s in a hypervisor? For some, it’s an on-ramp to the cloud and a way to make hybrid computing a reality. When you pair the flexibility of VMware with our bare metal servers, users get a combination that’s hard to beat.

We’re innovating to help companies make the transition to hybrid cloud, one hypervisor at a time. For more details, visit

-Jack Beech, VP of Business Development

September 2, 2015

Backup and Restore in a Cloud and DevOps World

Virtualization has brought many improvements to the compute infrastructure, including snapshots and live migration [1]. When an infrastructure moves to the cloud, these options often become a client’s primary backup strategy. While snapshots and live migration can be part of a successful strategy, backing up in the cloud may require additional tools.

First, a basic question: Why do we take backups? They’re taken to recover from

  • The loss of an entire machine
  • Partially corrupted files
  • A complete data loss (either through hardware or human error)

While losing an entire machine is frightening, corrupted files or data loss are the more common reasons for data backups.

Snapshots are useful when the snapshot and restore occur in close proximity to each other, e.g., when you’re migrating middleware or an operating system and want to fall back quickly if something goes wrong. If you need to restore after extensive changes (hardware or data), a snapshot isn’t an adequate resource. The restore may require restoring to a new machine, selecting files to be restored, and moving data back to the original machine.

So if a snapshot isn’t the silver bullet for backing up in the cloud, what are the effective backup alternatives? The solution needs to handle a full system loss, partial data loss, or corruption, and ideally work for both virtualized and non-virtualized environments.

What to back up

There are three types of files that you’ll want to consider when backing up an active machine’s disks:

  • Binary files: Changed by operating system and middleware updates; can be easily stored and recovered.
  • Configuration files: Define how the binary files are connected and configured, and what data is accessible to them.
  • Data files: Generated by users and unrecoverable if not backed up. Data files are the most precious part of the disk content and losing them may result in a financial impact on the client’s business.

Keep in mind when determining your backup strategy that each file type has a different change rate—data files change faster than configuration files, which are more fluid than binary files. So, what are your options for backing up and restoring each type of file?

Binary files
In the case of a system failure, DevOps advocates (see Phoenix Servers from Martin Fowler) propose getting a new machine, which all cloud providers can automatically provision, including middleware. Automated provisioning processes are available for both bare metal and virtual machines.

Note that most open source products only require an Internet connection and a single command for installation, while commercial products can be provisioned through automation.

Configuration files
Cloud-centric operations have a distinct advantage over traditional operations when it comes to backing up configuration files. With traditional operations, each element is configured manually, which has several drawbacks such as being time-consuming and error-prone. Cloud-centric operations, or DevOps, treat each configuration as code, which allows an environment to be built from a source configuration via automated tools and procedures. Tools such as Chef, Puppet, Ansible, and SaltStack show their power with central configuration repositories that are used to drive the composition of an environment. A central repository works well with another component of automated provisioning—changing the IP address and hostname.
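
As a simplified illustration of configuration as code, the sketch below renders a machine-specific configuration file from a template that would live in a central repository, using facts (hostname and IP address) that only become known once the cloud allocates the machine. The file path and template contents are placeholders; in practice a tool such as Chef, Puppet, Ansible, or SaltStack does this work.

```python
# A minimal "configuration as code" sketch; paths and template are placeholders.
import socket
from pathlib import Path
from string import Template

# Facts that only exist after the cloud has allocated the machine.
facts = {
    "hostname": socket.gethostname(),
    "ip_address": socket.gethostbyname(socket.gethostname()),
}

# The source configuration -- in practice, pulled from version control.
template = Template(
    "ServerName ${hostname}\n"
    "Listen ${ip_address}:80\n"
)

# Render the machine-specific configuration and write it out.
rendered = template.substitute(facts)
Path("/tmp/httpd.conf.example").write_text(rendered)
print(rendered)
```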

You have limited control of how the cloud will allocate resources, so you need an automated method to collect the information and apply it to all the machines being provisioned.

In a cloud context, it’s suboptimal to manage machines individually; instead, the machines have to be seen as part of a cluster of servers, managed via automation. Cluster automation is one of the core tenets of solutions like CoreOS’ Fleet and Apache Mesos. Resources are allocated and managed as a single entity via API, configuration repositories, and automation.

You can attain automation in small steps. Start by choosing an automation tool and begin converting your existing environment one file at a time. Soon, your entire configuration will be centrally available, and recovering a machine or deploying a full environment will be possible with a single automated process.

In addition to being able to quickly provision new machines with your binary and configuration files, you are also able to create parallel environments, such as disaster recovery, test and development, and quality assurance. Using the same provisioning process for all of your environments assures consistent environments and early detection of potential production problems. Packages, binaries, and configuration files can be treated as data and stored in something similar to object stores, which are available in some form with all cloud solutions.

Data files
The final files to be backed up and restored are the data files. These are the most important part of a backup and the hardest to replace. Part of the challenge is the volume of data as well as access to it. Data files are relatively easy to back up, the exception being files in transition, e.g., files being uploaded. Data file backups can be done with several tools, from synchronization tools to full file backup solutions. Another option is an object store, which is the natural repository for relatively static files and allows for a pay-as-you-go model.
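
As a simple illustration, the sketch below performs an incremental file backup: it copies only new or changed files to a backup target, using checksums to detect changes. The directory paths are placeholders, and a production setup would more likely push to an object store or a dedicated backup service.

```python
# A minimal incremental backup sketch; source and target paths are placeholders.
import hashlib
import shutil
from pathlib import Path

def checksum(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def backup(source_dir: str, target_dir: str) -> None:
    """Copy only files that are new or whose contents have changed."""
    source, target = Path(source_dir), Path(target_dir)
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        dst_file = target / src_file.relative_to(source)
        if dst_file.exists() and checksum(src_file) == checksum(dst_file):
            continue  # unchanged -- skip
        dst_file.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src_file, dst_file)

backup("/var/www/uploads", "/backup/uploads")
```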

Database content is a bit harder to back up. Even with instant snapshots on storage, backing up databases can be challenging. A snapshot at the storage level is an option, but it doesn’t allow for a partial database restore. A snapshot can also capture in-flight transactions that cause issues during a restore, which is why most database systems provide a mechanism for online backups. Those online backups should be used in combination with tools for file backups.
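
For example, a consistent online backup of a MySQL or MariaDB database can be taken with mysqldump while the database keeps serving traffic. The sketch below assumes the mysqldump client is installed; the database name and output path are placeholders.

```python
# Online database backup sketch; database name and paths are placeholders.
import subprocess
from datetime import datetime

dump_file = f"/backup/db/appdb-{datetime.now():%Y%m%d-%H%M%S}.sql"

# --single-transaction takes a consistent snapshot of InnoDB tables without
# locking them, which is what makes this an online backup.
with open(dump_file, "w") as out:
    subprocess.run(
        ["mysqldump", "--single-transaction", "--routines", "appdb"],
        stdout=out,
        check=True,
    )
```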

Something to remember about databases: many solutions keep accumulating data long after that data is no longer used. An active database holds both current and historical data. Keeping both allows for analytics on the same database, but it also increases the size of the database and makes database-related operations harder. It may make sense to archive older data in other databases or flat files, which keeps the database volumes manageable.
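
The sketch below illustrates that archiving step using SQLite from the Python standard library: historical rows are copied into an archive table (assumed to already exist) and then removed from the active table, inside a single transaction. The database path, table, and column names are illustrative.

```python
# Archiving sketch; the database file and table schemas are assumed to exist.
import sqlite3

conn = sqlite3.connect("/data/app.db")
with conn:  # wraps both statements in one transaction
    conn.execute(
        "INSERT INTO orders_archive "
        "SELECT * FROM orders WHERE created_at < date('now', '-2 years')"
    )
    conn.execute(
        "DELETE FROM orders WHERE created_at < date('now', '-2 years')"
    )
conn.close()
```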


To recap, because cloud provides rapid deployment of your operating system and convenient places to store data (such as object stores), it’s easy to factor cloud into your backup and recovery strategy. Take a containerization-style approach and split the content of your machines into binaries, configuration, and data. Focus on automating the deployment of binaries and configuration; this makes it easier to deliver any environment, including quality assurance, test, and disaster recovery. Finally, use traditional backup tools for backing up data files. Together, these practices make it possible to rapidly and repeatedly recover complete environments while controlling the amount of backed-up data that has to be managed.


[1] Snapshots are not available on bare metal servers that have no virtualization capability.

July 29, 2013

A Brief History of Cloud Computing

Believe it or not, "cloud computing" concepts date back to the 1950s when large-scale mainframes were made available to schools and corporations. The mainframe's colossal hardware infrastructure was installed in what could literally be called a "server room" (since the room would generally only be able to hold a single mainframe), and multiple users were able to access the mainframe via "dumb terminals" – stations whose sole function was to facilitate access to the mainframes. Due to the cost of buying and maintaining mainframes, an organization wouldn't be able to afford a mainframe for each user, so it became practice to allow multiple users to share access to the same data storage layer and CPU power from any station. By enabling shared mainframe access, an organization would get a better return on its investment in this sophisticated piece of technology.

Mainframe Computer

A couple decades later in the 1970s, IBM released an operating system called VM that allowed admins on their System/370 mainframe systems to have multiple virtual systems, or "Virtual Machines" (VMs) on a single physical node. The VM operating system took the 1950s application of shared access of a mainframe to the next level by allowing multiple distinct compute environments to live in the same physical environment. Most of the basic functions of any virtualization software that you see nowadays can be traced back to this early VM OS: Every VM could run custom operating systems or guest operating systems that had their "own" memory, CPU, and hard drives along with CD-ROMs, keyboards and networking, despite the fact that all of those resources would be shared. "Virtualization" became a technology driver, and it became a huge catalyst for some of the biggest evolutions in communications and computing.

Mainframe Computer

In the 1990s, telecommunications companies that had historically only offered single dedicated point-to-point data connections started offering virtualized private network connections with the same service quality as their dedicated services at a reduced cost. Rather than building out physical infrastructure to allow for more users to have their own connections, telco companies were able to provide users with shared access to the same physical infrastructure. This change allowed the telcos to shift traffic as necessary to allow for better network balance and more control over bandwidth usage. Meanwhile, virtualization for PC-based systems started in earnest, and as the Internet became more accessible, the next logical step was to take virtualization online.

If you were in the market to buy servers ten or twenty years ago, you know that the costs of physical hardware, while not at the same level as the mainframes of the 1950s, were pretty outrageous. As more and more people expressed demand to get online, the costs had to come out of the stratosphere, and one of the ways that was made possible was by ... you guessed it ... virtualization. Servers were virtualized into shared hosting environments, Virtual Private Servers, and Virtual Dedicated Servers using the same types of functionality provided by the VM OS in the 1970s. As an example of what that looked like in practice, let's say your company required 13 physical systems to run your sites and applications. With virtualization, you can take those 13 distinct systems and split them up between two physical nodes. Obviously, this kind of environment saves on infrastructure costs and minimizes the amount of actual hardware you would need to meet your company's needs.


As the costs of server hardware slowly came down, more users were able to purchase their own dedicated servers, and they started running into a different kind of problem: One server isn't enough to provide the resources I need. The market shifted from a belief that "these servers are expensive, let's split them up" to "these servers are cheap, let's figure out how to combine them." Because of that shift, the most basic understanding of "cloud computing" was born online. By installing and configuring a piece of software called a hypervisor across multiple physical nodes, a system would present all of the environment's resources as though those resources were in a single physical node. To help visualize that environment, technologists used terms like "utility computing" and "cloud computing" since the sum of the parts seemed to become a nebulous blob of computing resources that you could then segment out as needed (like telcos did in the 90s). In these cloud computing environments, it became easy to add resources to the "cloud": Just add another server to the rack and configure it to become part of the bigger system.


As technologies and hypervisors got better at reliably sharing and delivering resources, many enterprising companies decided to start carving up the bigger environment to bring the cloud's benefits to users who don't happen to have an abundance of physical servers with which to create their own cloud computing infrastructure. Those users could order "cloud computing instances" (also known as "cloud servers") by ordering the resources they need from the larger pool of available cloud resources, and because the servers are already online, the process of "powering up" a new instance or server is almost instantaneous. Because little overhead is involved for the owner of the cloud computing environment when a new instance is ordered or cancelled (since it's all handled by the cloud's software), management of the environment is much easier. Most companies today operate with this idea of "the cloud" as the current definition, but SoftLayer isn't "most companies."

SoftLayer took the idea of a cloud computing environment and pulled it back one more step: Instead of installing software on a cluster of machines to allow for users to grab pieces, we built a platform that could automate all of the manual aspects of bringing a server online without a hypervisor on the server. We call this platform "IMS." What hypervisors and virtualization do for a group of servers, IMS does for an entire data center. As a result, you can order a bare metal server with all of the resources you need and without any unnecessary software installed, and that server will be delivered to you in a matter of hours. Without a hypervisor layer between your operating system and the bare metal hardware, your servers perform better. Because we automate almost everything in our data centers, you're able to spin up load balancers and firewalls and storage devices on demand and turn them off when you're done with them. Other providers have cloud-enabled servers. We have cloud-enabled data centers.

SoftLayer Pod

IBM and SoftLayer are leading the drive toward wider adoption of innovative cloud services, and we have ambitious goals for the future. If you think we've come a long way from the mainframes of the 1950s, you ain't seen nothin' yet.


August 31, 2011

Verecloud: Tech Partner Spotlight

This is a guest blog from Verecloud, a technology partner that makes it easier for small- and medium-sized businesses to shop for, select, purchase, manage and monitor the performance of their cloud services and related spending.

Cloudwrangler from Verecloud

Ubiquitous Internet access and technological advances in virtualization and IT management have caused an explosion in the availability and adoption of cloud services. Just a few years ago, it would take hours – if not days – to activate a new cloud service for a customer. SoftLayer can now perform this feat with servers in minutes, and other providers of email, CRM and accounting solutions have equally fast turn-up times.

The cloud gives small- and medium-sized businesses (SMBs) access to enterprise-grade technology so that they can compete more effectively with little, if any, capital investment, which makes SMBs prime consumers of cloud services. By moving to cloud services, their businesses gain flexibility and affordable scalability to throttle their infrastructure and services up and down as their business grows, changes, moves locations or becomes more mobile.

Even with all of those benefits, adding a little cloud here and a little cloud there ends up making it difficult for these SMBs to manage all of the disparate services. Who is paying for what? Are they accounted for in expense reports? How can you allocate the costs to your sales, marketing, operations or support departments? Is IT aware of all of the cloud services? What happens if someone leaves the company and you need to deactivate their access and reassign all of their data to other employees?

Verecloud's answer to all of these questions is the Cloudwrangler app store for small businesses. Simply put, it is a single source for SMBs to discover, buy, use and manage their cloud services. This platform makes finance happy since they can properly track and manage costs. IT is happy because they are aware of all the services being used in the company and can manage them from a single control panel. HR is happy because they can monitor and regulate employee access when necessary. Everyone is happy.

Verecloud is proud to feature SoftLayer as a key partner and supplier in the Cloudwrangler marketplace (which also happens to be powered by SoftLayer's CloudLayer Computing). In addition to the infrastructure piece, we offer business class email, backup and recovery, and collaboration capabilities that can be incorporated quickly, seamlessly and affordably into any business:

Cloudwrangler Services

We're staying busy building out more features and functionality to the Cloudwrangler marketplace, and we're excited about the partnerships we'll make as we keep the community growing. If you're interested in learning more about Cloudwrangler, visit at today.

-Russel Wurth, Verecloud

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace.
These Partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.

August 3, 2011

CyberlinkASP: Tech Partner Spotlight

This is a guest blog from Chris Lantrip, CEO of CyberlinkASP, an application service provider focused on hosting, upgrading and managing the industry's best software.

The DesktopLayer from CyberlinkASP

Hosted virtual desktops – SoftLayer style.

In early 2006, we were introduced to SoftLayer. In 2007, they brought us StorageLayer, and in 2009, CloudLayer. Each of those solutions met a different kind of need in the Application Service Provider (ASP) world, and by integrating those platforms into our offering, DesktopLayer was born: The on-demand anytime, anywhere virtual desktop hosted on SoftLayer and powered by CyberlinkASP.

CyberlinkASP was originally established to instantly web-enable software applications that were not online in the past. Starting off as a Citrix integration firm in the early days, we were approached by multiple independent software vendors asking us to host, manage and deliver their applications from a centralized database platform to their users across multiple geographic locations. With the robust capabilities of Citrix, we were able to revolutionize application delivery and management for several ISVs.

Over time, more ISVs started showing up at our doorstep, and application delivery was becoming a bigger and bigger piece of our business. Our ability to provision users on a specific platform in minutes, delete them in minutes, perform updates, and maintain hundreds of customers and thousands of users all at one time from a centralized platform was very attractive.

Our users began asking us, "Is it possible to put our payroll app on this platform too?" "What about Exchange and Office?" They loved the convenience of not managing the DBs for individual applications, and they obviously wanted more. Instead of providing one-off solutions for individual applications, we built the DesktopLayer, a hosted environment for virtual desktops.

We deliver a seamless and integrated user experience utilizing SoftLayer, Citrix XenApp and XenDesktop. When our users log in they see the same screen, the same applications and the same performance they received on their local machine. The Citrix experience takes over the entire desktop, and the look and feel is indistinguishable. It's exactly what they are accustomed to.

Our services always include the Microsoft suite (Exchange, Office, SharePoint) and are available on any device, from your PC to your Mac to your iPad. To meet the needs of our customers, we also integrate all third-party apps and non-Microsoft software into the virtual desktop – if our customers are using Peachtree or QuickBooks for accounting and Kronos for HR, they are all seamlessly published to the users who access them, and unavailable to those who do not.

We hang our hat on our unique ability to tie all of a company's applications into one centralized user experience and support it. Our Dallas-based call center is staffed with a team of knowledgeable engineers who are always ready to help troubleshoot and can add/delete and customize new users in minutes. We take care of everything ... When someone needs help setting up a printer or they bought a new scanner, they call our helpdesk and we take it from there. Users can call us directly for support and leave the in-house IT team to focus on other areas, not desktop management.

With the revolution of cloud computing, many enterprises are trending toward the eradication of physical infrastructure in their IT environments. Every day, we see more and more demand from IT managers who want us to assume the day-to-day management of their end user's entire desktop, and over the past few years, the application stack that we deliver to each of our end users has grown significantly.

As Citrix would say, "the virtual desktop revolution is here." The days of having to literally touch hundreds of devices at users' workstations are over. Servers in the back closet are gone. End users have become much more unique and mobile ... They want the same access, performance and capabilities regardless of geography. That's what we provide. DesktopLayer, with instant computing resources available from SoftLayer, is the future.

I remember someone telling me in 2006 that it was time for the data center to "grow up". It has. We now have hundreds of SMB clients and thousands of virtual desktops in the field today, and we love having a chance to share a little about how we see the IT landscape evolving. Thanks to our friends at SoftLayer, we get to tell that story and boast a little about what we're up to!

- Chris M. Lantrip, Chief Executive, CyberlinkASP

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace.
These Partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.

April 26, 2011

Hybrid Hosting - What Does it Really Mean?

In our first 3 Bars - 3 Questions video interview, SoftLayer CTO Duke Skarda talked about Hybrid Hosting with Kevin, and last week, I tackled the topic in a session at the Texas Technology Summit in Houston. If you have a few minutes and want to learn a little more about SoftLayer's take on hybrid computing and hybrid hosting, you can pull up a virtual chair and see my presentation here:

Even though hybrid hosting is relatively young, it has a great deal of potential. Unlike some of the hyped technologies and developments we hear about all the time, hybrid hosting isn't going to replace everything that came before it ... On the contrary, hybrid hosting encompasses everything that came before it, allowing for flexibility and functionality that you can't find in any of the individual component technologies.

We weren't able to record all of the questions and answers at the end of the session, but one of the most surprising themes I noticed was a misunderstanding of what "Cloud Infrastructure" meant. Those questions reminded me of a fantastic BrightTALK Cloud Infrastructure Online Summit that featured several interesting and informative session about how cloud computing is changing the way businesses are thinking about deploying and managing their IT infrastructure. I know it seems like we're preaching to the choir by posting this on the SoftLayer Blog, but take a look at the BrightTALK Summit's webcast topics to see if any would be helpful to you as you talk about this mysterious "cloud" thing.


February 25, 2011

Hosting is Dead. Long Live Hosting!

If you weren't able to join us in Orlando for Parallels Summit 2011, you missed out on a great conference. More than 1,500 peers, partners and industry influencers shared a wealth of knowledge, some great recommendations and a drink or two. SoftLayer got to share in some of the spotlight when we announced Parallels Automation on-Demand, and I was honored to speak in one of the keynote sessions on Wednesday.

Rather than bore you with bullet points about what I shared, I thought it might be easier to bring you into the room so you can hear the whole session yourself:

Thanks for the opportunity, Parallels! We're already looking forward to next year.


May 4, 2009

Paradigm Shift

From the beginning of my coming of age in the IT industry, it’s been one thing – Windows. As a system administrator in a highly mobile Windows environment, you learn a thing or two to make things tick, and to make them keep ticking. I had become quite proficient with the Active Directory environment and was able to keep a domain going. While Windows is a useful enterprise-grade server solution, it’s certainly not the only solution. Unfortunately, when I made my departure from that particular environment, I hadn’t had much exposure to the plethora of options available to an administrator.

Then along comes SoftLayer, opening my eyes to an array of new (well, at least to me) operating systems. I had begun my ‘new’ IT life with exposure to the latest and greatest, including Windows, virtualization software such as Xen and Virtuozzo, and great open source operating systems such as CentOS and FreeBSD. With the new exposure to all these high-speed technologies, I felt that maybe it was time to let the de facto home operating system take a break and kick the tires on a new installation.

I can say that while switching to open source was a bit nerve-racking, it ended up being quick and painless, and I’m not looking back. I’ve lost a few hours of sleep here and there trying to dive in and learn a thing or two about the new operating system, as well as making some tweaks to get it just how I like it. The process was certainly a learning experience, and I’ve become much more familiar with an operating system that, at first, can seem rather intimidating. I went through a few different distributions until I settled on one that’s perfect for what I do (like reading the InnerLayer and finishing the multitude of college papers).

The only problem with always reloading a PC is that you have to sit there and watch it. It doesn’t hurt to have a TV and an MP3 player around while you configure everything and get the reload going, but you still have to be there to make sure everything goes as planned. Imagine this: you click a button and check back in a few. Sound familiar? Yep, it would have been nice to have an automated reload system much like we have here at SoftLayer. Not to mention, if something goes awry, there’s the assurance that someone will be there to investigate and correct the issue. That way, I can open a cold one and watch the game, or attend to other matters more important than telling my computer my time zone.

February 19, 2009

Virtualized Datacenters

It shouldn’t be any surprise to people who know SoftLayer that we follow the "Virtual Datacenter" discussions quite closely. In fact, it is awesome to see people discussing what sounds a lot like what SoftLayer already is.

The concept of Virtual Datacenter is that you have all the power of a datacenter at your command without having to worry about the details of actually running a datacenter. Chad Sakac from EMC wrote an excellent post in his personal blog about the transformation to a Virtual Datacenter.

One of the points Chad makes is the abstraction of the physical infrastructure. Quoting Chad:

"Every Layer of the physical infrastructure (CPU, Memory, Network, Storage) need to be transparent. Transparency means 'invisible'. This implies a lot, and implies that the glue in the middle, like a general purpose OS, needs to provide the "API models" for those hardware elements to be transparent. "

I latched on to this point because that is what we have been building at SoftLayer for the last few years. We realize that the abstraction of the physical infrastructure not only means that end-users don’t need to know how to manage the physical infrastructure, but that the abstraction can make more efficient use of resources (= money!).

Let’s talk about the advantages of virtualized infrastructure. Without virtualization, provisioning a web-facing server on the network would involve obtaining rack space, a server, licensing and loading an OS, finding a switch port, physically connecting a cable or three, setting up the switch port (I hope you know IOS), getting IP Addresses (hopefully you don’t have to go get more from ARIN), and adding a firewall and/or load balancer (more procurement, cabling, and configuration). Adding storage could be just as complex – also involving procurement, racking, cabling, and configuration. This doesn’t sound very efficient. In fact, it sounds a lot like creating a “circular device that is capable of rotating on its axis, facilitating movement or transportation whilst supporting a load”. It's been done before and I'll bet it’s been done better by people other than you.

Using virtualized infrastructure, you should be able to perform the same task with a few clicks of a mouse or a few API calls and have the functionality you need set up in minutes instead of days, weeks, or months. There’s no worrying about procurement, physical constraints, or learning the specifics of network and storage devices from different vendors. All you have to focus on is running your particular application. You shouldn’t have to worry about configuring servers, networking, and storage any more than you should have to worry about chillers, HVAC, generators, and UPS batteries.
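
As a latter-day illustration of that "few API calls" idea, here is a sketch using the SoftLayer Python client (a library that arrived well after this post was written). The credentials, data center, and sizing values are placeholders; the point is that provisioning becomes a single call instead of a procurement project.

```python
# Sketch only: provisioning a virtual server through the SoftLayer Python
# client (https://github.com/softlayer/softlayer-python). Credentials and
# ordering options below are placeholders.
import SoftLayer

client = SoftLayer.create_client_from_env(username="apiuser", api_key="apikey")
vs_manager = SoftLayer.VSManager(client)

# One API call replaces rack space, cabling, switch ports, and IP paperwork.
instance = vs_manager.create_instance(
    hostname="web01",
    domain="example.com",
    cpus=2,
    memory=4096,          # in MB
    datacenter="dal05",
    os_code="UBUNTU_LATEST",
    hourly=True,
)
print("Provisioned virtual server id:", instance["id"])
```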

