Posts Tagged 'Deployment'

January 31, 2014

Simplified OpenStack Deployment on SoftLayer

"What is SoftLayer doing with OpenStack?" I can't even begin to count the number of times I've been asked that question over the last few years. In response, I'll usually explain how we've built our object storage platform on top of OpenStack Swift, or I'll give a few examples of how our customers have used SoftLayer infrastructure to build and scale their own OpenStack environments. Our virtual and bare metal cloud servers provide a powerful and flexible foundation for any OpenStack deployment, and our unique three-tiered network integrates perfectly with OpenStack's Compute and Network node architecture, so it's high time we make it easier to build an OpenStack environment on SoftLayer infrastructure.

To streamline and simplify OpenStack deployment for the open source community, we've published Opscode Chef recipes for both OpenStack Grizzly and OpenStack Havana on GitHub: SoftLayer Chef-Openstack. With Chef and SoftLayer, your own OpenStack cloud is a cookbook away. These recipes were designed with growth and scalability in mind. Let's take a deeper look at what exactly that means.

OpenStack has adopted a three-node design in which a controller node, a compute node, and a network node make up the architecture:

OpenStack Architecture on SoftLayer

Looking more closely at any one node reveals the services it provides. Scaling the infrastructure beyond a few dozen nodes under this model can create bottlenecks in services such as the block storage service (OpenStack Cinder) and the image service (OpenStack Glance), since both are traditionally located on the controller node. Infrastructure requirements also change from service to service. For example, OpenStack Neutron, the networking service, needs very little disk I/O, while the Cinder storage service might rely heavily on a node's hard disks. Our cookbook allows you to choose how and where to deploy each service, and it even lets you break apart the MySQL backend to further improve platform performance.

Quick Start: Local Demo Environment

To make it easy to get started, we've created a rapid prototype and sandbox script for use with Vagrant and Virtual Box. With Vagrant, you can easily spin up a demo environment of Chef Server and OpenStack in about 15 minutes on a moderately powerful laptop or desktop. Check it out here. This demo environment is an all-in-one installation of our Chef OpenStack deployment. It also installs a basic Chef server as a sandbox to help you see how the SoftLayer recipes are deployed.

Creating a Custom OpenStack Deployment

The three-node OpenStack model does well at small scale and meets the needs of many consumers; however, control and customizability are the tenets of the SoftLayer OpenStack Chef cookbook's design. In our model, you have full control over the configuration and location of eleven different components in your deployed environment:

Our Chef recipes will take care of populating the configuration files with the necessary information so you won't have to. When deploying, you merely add the role for the matching service to a hardware or virtual server node, and Chef will deploy the service to it with all of the configuration done automatically, including adding multiple Neutron, Nova, and Cinder nodes. This approach allows you to tailor each service to the hardware it will be deployed on. For example, you might put your Neutron hardware node on a server with 10-gigabit network interfaces and configure your Cinder hardware node with RAID 1+0 15K SAS drives.

OpenStack is a fast-growing project for the implementation of IaaS in public and private clouds, but its deployment and configuration can be overwhelming. We created this cookbook to make the process of deploying a full OpenStack environment on SoftLayer quick and straightforward. With the simple configuration of eleven Chef roles, your OpenStack cloud can be deployed onto as few as one node and scaled up to as many as hundreds (or thousands) of nodes.

To follow this project, visit SoftLayer on GitHub. Check out some of our other projects on GitHub, and let us know if you need any help or want to contribute.

-@marcalanjones

December 20, 2012

MongoDB Performance Analysis: Bare Metal v. Virtual

Developers can be cynical. When "the next great thing in technology" is announced, I usually wait to see how it performs before I get too excited about it ... Show me how that "next great thing" compares apples-to-apples with the competition, and you'll get my attention. With the launch of MongoDB at SoftLayer, I'd guess a lot of developers outside of SoftLayer and 10gen have the same "wait and see" attitude about the new platform, so I put our new MongoDB engineered servers to the test.

When I shared MongoDB architectural best practices, I referenced a few of the significant optimizations our team worked with 10gen to incorporate into our engineered servers (cheat sheet). To illustrate the impact of these changes in MongoDB performance, we ran 10gen's recommended benchmarking harness (freely available for download and testing of your own environment) on our three tiers of engineered servers alongside equivalent shared virtual environments commonly deployed by the MongoDB community. We've made a pretty big deal about the performance impact of running MongoDB on optimized bare metal infrastructure, so it's time to put our money where our mouth is.

The Testing Environment

For each of the available SoftLayer MongoDB engineered servers, data sets of 512KB documents were preloaded onto single MongoDB instances. The data sets were sized relative to available memory so that we could test working sets both larger (2X) and smaller than available memory. Each test also altered the data set frequently enough during the run to prevent all of the data from being cached in memory.
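To make that preload step concrete, here's a minimal sketch in Python with pymongo. It is not the tooling we actually used; the host, database, and collection names, the 8GB RAM figure, and the document count are all illustrative assumptions.

    import os
    from pymongo import MongoClient

    # Minimal sketch: preload ~512KB documents until the data set reaches
    # 2X available RAM (hypothetical host/database names; the actual tests
    # used 10gen's tooling rather than this script).
    coll = MongoClient("mongodb://mongo-test-host:27017").benchdb.documents

    RAM_GB = 8                               # assumption: SM server with 8GB RAM
    TARGET_BYTES = 2 * RAM_GB * 1024**3      # data set 2X larger than memory
    DOC_BYTES = 512 * 1024                   # 512KB documents

    payload = os.urandom(DOC_BYTES // 2).hex()  # hex encoding doubles size to ~512KB

    for i in range(TARGET_BYTES // DOC_BYTES):
        coll.insert_one({"_id": i, "payload": payload})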

Once the data sets were created, JMeter server instances with 4 cores and 16GB of RAM were used to drive 'benchrun' from the 10gen benchmarking harness. This diagram illustrates how we set up the testing environment (click for a better look):

MongoDB Performance Analysis Setup

These JMeter servers act as the clients generating traffic against the MongoDB instances. Each client generated random query and update requests at a ratio of six queries per update (the update requests ensured that the data set could never be fully cached in memory, which would have eliminated reads from disk). These tests were designed to create an extreme load on the servers from an exponentially increasing number of clients until the system resources became saturated, and we recorded the resulting performance of the MongoDB application.
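As a rough illustration of that client behavior, here's a hedged Python sketch of the 6:1 query-to-update pattern using pymongo. The real load was driven by 10gen's JavaScript-based 'benchrun' from JMeter; the names and counts here are assumptions.

    import random
    from pymongo import MongoClient

    # Sketch of one client's 6:1 query-to-update loop (hypothetical names;
    # the real tests drove 10gen's 'benchrun' harness from JMeter).
    coll = MongoClient("mongodb://mongo-test-host:27017").benchdb.documents

    DOC_COUNT = 16000        # assumption: number of preloaded documents
    OPS = 10000              # operations issued by this client

    for i in range(OPS):
        doc_id = random.randint(0, DOC_COUNT - 1)
        if i % 7 == 6:
            # every seventh operation is an update (6:1 ratio), which keeps
            # the data set changing so it can't be fully cached in memory
            coll.update_one({"_id": doc_id}, {"$set": {"touched": i}})
        else:
            # random read; misses against a larger-than-RAM data set
            # force reads from disk
            coll.find_one({"_id": doc_id})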

At the Medium (MD) and Large (LG) engineered server tiers, benchmarks were run separately for servers using 15K SAS hard drive data mounts and servers using SSD data mounts. If you missed the post comparing the IOPS statistics between different engineered server hard drive configurations, be sure to check it out. For a better view of the results in a given graph, click the image included in the results below to see a larger version.

Test Case 1: Small MongoDB Engineered Servers vs Shared Virtual Instance

Servers

Small (SM) MongoDB Engineered Server
Single 4-core Intel 1270 CPU
64-bit CentOS
8GB RAM
2 x 500GB SATAII - RAID1
1Gb Network
Virtual Provider Instance
4 Virtual Compute Units
64-bit CentOS
7.5GB RAM
2 x 500GB Network Storage - RAID1
1Gb Network

Tests Performed

Small Data Set (8GB of 0.5MB documents)
200 iterations of 6:1 query-to-update operations
Concurrent client connections exponentially increased from 1 to 32
Test duration spanned 48 hours
[Graphs: Average and Peak Read Operations per Second by Concurrent Client; Average and Peak Write Operations per Second by Concurrent Client]

Test Case 2: Medium MongoDB Engineered Servers vs Shared Virtual Instance

Servers (15K SAS Data Mount Comparison)

Medium (MD) MongoDB Engineered Server
Dual 6-core Intel 5670 CPUs
64-bit CentOS
36GB RAM
2 x 64GB SSD - RAID1 (Journal Mount)
4 x 300GB 15K SAS - RAID10 (Data Mount)
1Gb Network - Bonded
Virtual Provider Instance
26 Virtual Compute Units
64-bit CentOS
30GB RAM
2 x 64GB Network Storage - RAID1 (Journal Mount)
4 x 300GB Network Storage - RAID10 (Data Mount)
1Gb Network

Tests Performed

Small Data Set (32GB of 0.5MB documents)
200 iterations of 6:1 query-to-update operations
Concurrent client connections exponentially increased from 1 to 128
Test duration spanned 48 hours
[Graphs: Average and Peak Read Operations per Second by Concurrent Client; Average and Peak Write Operations per Second by Concurrent Client]

Servers (SSD Data Mount Comparison)

Medium (MD) MongoDB Engineered Server
Dual 6-core Intel 5670 CPUs
64-bit CentOS
36GB RAM
2 x 64GB SSD - RAID1 (Journal Mount)
4 x 400GB SSD - RAID10 (Data Mount)
1Gb Network - Bonded
Virtual Provider Instance
26 Virtual Compute Units
64-bit CentOS
30GB RAM
2 x 64GB Network Storage - RAID1 (Journal Mount)
4 x 300GB Network Storage - RAID10 (Data Mount)
1Gb Network

Tests Performed

Small Data Set (32GB of 0.5MB documents)
200 iterations of 6:1 query-to-update operations
Concurrent client connections exponentially increased from 1 to 128
Test duration spanned 48 hours
[Graphs: Average and Peak Read Operations per Second by Concurrent Client; Average and Peak Write Operations per Second by Concurrent Client]

Test Case 3: Large MongoDB Engineered Servers vs Shared Virtual Instance

Servers (15K SAS Data Mount Comparison)

Large (LG) MongoDB Engineered Server
Dual 8-core Intel E5-2620 CPUs
64-bit CentOS
128GB RAM
2 x 64GB SSD - RAID1 (Journal Mount)
6 x 600GB 15K SAS - RAID10 (Data Mount)
1Gb Network - Bonded
Virtual Provider Instance
26 Virtual Compute Units
64-bit CentOS
64GB RAM (Maximum available on this provider)
2 x 64GB Network Storage - RAID1 (Journal Mount)
6 x 600GB Network Storage - RAID10 (Data Mount)
1Gb Network

Tests Performed

Small Data Set (64GB of 0.5MB documents)
200 iterations of 6:1 query-to-update operations
Concurrent client connections exponentially increased from 1 to 128
Test duration spanned 48 hours
[Graphs: Average and Peak Read Operations per Second by Concurrent Client; Average and Peak Write Operations per Second by Concurrent Client]

Servers (SSD Data Mount Comparison)

Large (LG) MongoDB Engineered Server
Dual 8-core Intel E5-2620 CPUs
64-bit CentOS
128GB RAM
2 x 64GB SSD - RAID1 (Journal Mount)
6 x 400GB SSD - RAID10 (Data Mount)
1Gb Network - Bonded
Virtual Provider Instance
26 Virtual Compute Units
64-bit CentOS
64GB RAM (Maximum available on this provider)
2 x 64GB Network Storage - RAID1 (Journal Mount)
6 x 600GB Network Storage - RAID10 (Data Mount)
1Gb Network

Tests Performed

Small Data Set (64GB of 0.5MB documents)
200 iterations of 6:1 query-to-update operations
Concurrent client connections exponentially increased from 1 to 128
Test duration spanned 48 hours
[Graphs: Average and Peak Read Operations per Second by Concurrent Client; Average and Peak Write Operations per Second by Concurrent Client]

Impressions from Performance Testing

The results speak for themselves. Running a MongoDB big data solution on a shared virtual environment has significant drawbacks compared to running MongoDB on a single-tenant bare metal offering. Disk I/O is by far the most limiting resource for MongoDB, and relying on shared network-attached storage (with much lower disk I/O) makes this limitation very apparent. Beyond the average and peak statistics above, performance varied much more significantly in the virtual instance environment, so it's not as consistent and predictable as bare metal.

Highlights:

  • When a working data set is smaller than available memory, query performance increases.
  • The number of clients performing queries has an impact on query performance because more data is being actively cached at a rapid rate.
  • The addition of a separate Journal Mount volume significantly improves performance. Because the Small (SM) engineered server does not include a secondary mount for Journals, whenever MongoDB began to journal, the disk I/O associated with journaling was disruptive to the query and update operations performed on the Data Mount.
  • The best deployments in terms of operations per second, stability and control were the configurations with a RAID10 SSD Data Mount and a RAID1 SSD Journal Mount. These configurations are available in both our Medium and Large offerings, and I'd highly recommend them.

-Harold

December 6, 2012

MongoDB: Architectural Best Practices

With the launch of our MongoDB solutions, developers can provision powerful, optimized, horizontally scaling NoSQL database clusters in real-time on bare metal infrastructure in SoftLayer data centers around the world. We worked tirelessly with our friends at 10gen — the creators of MongoDB — to build and tweak hardware and software configurations that enable peak MongoDB performance, and the resulting platform is pretty amazing. As Duke mentioned in his blog post, those efforts followed 10gen's MongoDB best practices, but what he didn't mention was that we also created some architectural best practices of our own for MongoDB deployments on our platform.

The MongoDB engineered servers that you order from SoftLayer already implement several of the recommendations you'll see below, and I'll note which have been incorporated as we go through them. Given the scope of the topic, it's probably easiest to break this guide into a few sections to make it a little more digestible. Let's take a look at the architectural best practices of running MongoDB through the phases of the roll-out process: selecting a deployment strategy to prepare for your MongoDB installation, the installation itself, and the operational considerations of running it in production.

Deployment Strategy

When planning your MongoDB deployment, you should follow Sun Tzu's (modified) advice: "If you know the [friend] and know yourself, you need not fear the result of a hundred battles." "Friend" is substituted for "enemy" in this advice because the other party is MongoDB. If you aren't familiar with MongoDB, the top of your to-do list should be to read MongoDB's official documentation. That information will give you the background you'll need as you build and use your database. When you feel comfortable with what MongoDB is all about, it's time to "know yourself."

Your most important consideration will be the current and anticipated sizes of your data set. Understanding the volume of data you'll need to accommodate will be the primary driver for your choice of individual physical nodes as well as your sharding plans. Once you've established an expected size of your data set, you need to consider the importance of your data and how tolerant you are of the possibility of lost or lagging data (especially in replicated scenarios). With this information in hand, you can plan and start testing your deployment strategy.
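As a quick illustration of that kind of planning, here's a back-of-the-envelope sizing sketch in Python. The 30% RAM headroom factor and the example numbers are assumptions, not measurements, and should be adjusted for your own workload.

    import math

    # Back-of-the-envelope sketch: how many shards keep the working set in
    # RAM? The headroom reserved for indexes, connections, and the OS is an
    # assumption.
    def min_shards(working_set_gb, node_ram_gb, headroom=0.7):
        usable_ram_gb = node_ram_gb * headroom
        return max(1, math.ceil(working_set_gb / usable_ram_gb))

    # e.g. a 64GB working set on nodes with 36GB of RAM suggests 3 shards
    print(min_shards(64, 36))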

It sounds a little strange to hear that you should test a deployment strategy, but when it comes to big data, you want to make sure your databases start with a strong foundation. You should perform load testing scenarios on a potential deployment strategy to confirm that a given architecture will meet your needs, and there are a few specific areas that you should consider:

Memory Sizing
MongoDB (like many data-oriented applications) works best when the data set can reside in memory. Nothing performs better than a MongoDB instance that does not require disk I/O. Whenever possible, select a platform that has more available RAM than your working data set size. If your data set exceeds the available RAM for a single node, then consider using sharding to increase the amount of available RAM in a cluster to accommodate the larger data set. This will maximize the overall performance of your deployment. If you notice page faults when you put your database under production load, they may indicate that you are exceeding the available RAM in your deployment.
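One hedged way to watch for that symptom is to sample MongoDB's serverStatus counters under load; a minimal Python sketch follows. The host name is a placeholder, and the extra_info.page_faults counter is reported on Linux builds.

    import time
    from pymongo import MongoClient

    # Sample the page-fault counter twice under load; growth suggests the
    # working set no longer fits in RAM (placeholder host name).
    client = MongoClient("mongodb://mongo-host:27017")

    def page_faults():
        status = client.admin.command("serverStatus")
        return status.get("extra_info", {}).get("page_faults", 0)

    before = page_faults()
    time.sleep(60)                       # sample while under realistic load
    delta = page_faults() - before
    print("page faults in the last minute:", delta)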

Disk Type
If speed is not your primary concern, or if you have a data set that is far larger than any available in-memory strategy can support, selecting the proper disk type for your deployment is important. IOPS will be key in selecting your disk type; the higher the IOPS, the better the performance of MongoDB. Local disks should be used whenever possible (network storage can cause high latency and poor performance for your deployment). It's also advised that you use RAID 10 when creating disk arrays.

To give you an idea of what kind of IOPS to expect from a given type of drive, these are the approximate ranges of IOPS per drive in SoftLayer MongoDB engineered servers:

SATA II – 100-200 IOPS
15K SAS – 300-400 IOPS
SSD – 7,000-8,000 IOPS (read), 19,000-20,000 IOPS (write)

CPU
Clock speed and the number of available processor cores become considerations if you anticipate using MapReduce. It has also been noted that when running a MongoDB instance with the majority of the data in memory, clock speed can have a major impact on overall performance. If you are planning to use MapReduce, or you're able to operate with the majority of your data in memory, consider a deployment strategy that includes a CPU with a high clock/bus speed to maximize your operations per second.

Replication
Replication provides high availability of your data if a node fails in your cluster. It should be standard to replicate with at least three nodes in any MongoDB deployment. The most common three-node replication configuration is a 2x1 deployment — two nodes in a primary data center with a backup node in a secondary data center:

MongoDB Replication

Sharding
If you anticipate a large, active data set, you should deploy a sharded MongoDB deployment. Sharding allows you to partition a single data set across multiple nodes. You can allow MongoDB to automatically distribute the data across nodes in the cluster or you may elect to define a shard key and create range-based sharding for that key.

Sharding may also help write performance, so you can elect to shard even if your data set is small but requires a high volume of updates or inserts. It's important to note that when you deploy a sharded set, MongoDB requires three (and only three) config server instances, which are specialized Mongo runtimes that track the current shard configuration. Losing one of these nodes will cause the cluster to go into a read-only mode (for the configuration only) and will require that all nodes be brought back online before any configuration changes can be made.
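For reference, here's a minimal sketch of enabling sharding with a range-based shard key, issued through a mongos router with pymongo; the host, database, collection, and key names are hypothetical.

    from pymongo import MongoClient

    # Minimal sharding setup via admin commands against a mongos router
    # (hypothetical host, database, collection, and shard key).
    mongos = MongoClient("mongodb://mongos-host:27017")

    # allow the database to be sharded, then partition the collection
    # with a range-based shard key
    mongos.admin.command("enableSharding", "appdb")
    mongos.admin.command("shardCollection", "appdb.events", key={"user_id": 1})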

Write Safety Mode
There are several write safety modes that govern how MongoDB will handle the persistence of the data to disk. It is important to consider which mode best fits your needs for both data integrity and performance. The following write safety modes are available:

None – This mode provides a deferred, non-blocking writing strategy. It allows for high performance, but there is a small chance that data can be lost if a node fails. There is also the possibility that data written to one node in a cluster will not be immediately available on all nodes in that cluster for read consistency. The 'None' strategy also provides no protection in the case of network failures. That lack of protection makes this mode highly unreliable; it should only be used when performance is a priority and data integrity is not a concern.

Normal – This is the default for MongoDB if you do not select any other mode. It provides a deferred, non-blocking writing strategy. This allows for high performance, but there is a small chance that data can be lost if a node fails. There is also the possibility that data written to one node in a cluster will not be immediately available on all nodes in that cluster for read consistency.

Safe – This mode will block until MongoDB has acknowledged that it has received the write request but will not block until the write is actually performed. This provides a better level of data integrity and will ensure that read consistency is achieved within a cluster.

Journal Safe – Journals provide a recovery option for MongoDB. Using this mode will ensure that the data has been acknowledged and a Journal update has been performed before returning.

Fsync - This mode provides the highest level of data integrity and blocks until a physical write of the data has occurred. This comes with a degradation in performance and should be used only if data integrity is the primary concern for your application.
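To make the trade-offs concrete, here's a rough mapping of these modes onto write concern options in a current Python driver (a sketch with a placeholder host and illustrative collection names; drivers of this era spelled the options differently, e.g. safe=True).

    from pymongo import MongoClient
    from pymongo.write_concern import WriteConcern

    # Rough mapping of the modes above onto pymongo write concerns
    # (placeholder host; collection names are illustrative).
    db = MongoClient("mongodb://mongo-host:27017").appdb

    fire_and_forget = db.get_collection(                 # 'None'/'Normal'
        "events", write_concern=WriteConcern(w=0))
    acknowledged = db.get_collection(                    # 'Safe'
        "events", write_concern=WriteConcern(w=1))
    journaled = db.get_collection(                       # 'Journal Safe'
        "events", write_concern=WriteConcern(w=1, j=True))
    flushed = db.get_collection(                         # 'Fsync'
        "events", write_concern=WriteConcern(w=1, fsync=True))

    acknowledged.insert_one({"user_id": 42, "action": "login"})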

Testing the Deployment
Once you've determined your deployment strategy, test it with a data set similar to your production data. 10gen has several tools to help you load test your deployment, and the console has a tool named 'benchrun' which can execute operations from within a JavaScript test harness. These tools will return operation information as well as latency numbers for each of those operations. If you require more detailed information about the MongoDB instance, consider using the mongostat command or MongoDB Monitoring Service (MMS) to monitor your deployment during the testing.

Installation

When performing the installation of MongoDB, a few considerations can help create both a stable and performance-oriented solution. 10gen recommends the use of CentOS (64-bit) as the base operating system if at all possible. If you try installing MongoDB on a 32-bit operating system, you might run into file size limits that cause issues, and if you feel the urge to install it on Windows, you'll see performance issues if virtual memory begins to be utilized by the OS to make up for a lack of RAM in your deployment. As a result, 32-bit operating systems and Windows operating systems should be avoided on MongoDB servers. SoftLayer provisions CentOS 6.X 64-bit operating systems by default on all of our MongoDB engineered server deployments.

When you've got CentOS 64-bit installed, you should also make the following changes to maximize your performance (all of which are included by default on all SoftLayer engineered servers):

Set SSD Read Ahead Defaults to 16 Blocks - SSD drives have excellent seek times, allowing the Read Ahead to be shrunk to 16 blocks. Spinning disks might require slight buffering, so those have been set to 32 blocks.

noatime - Adding the noatime option eliminates the need for the system to make writes to the file system for files which are simply being read — or in other words: Faster file access and less disk wear.

Turn NUMA Off in BIOS - Linux, NUMA and MongoDB tend not to work well together. If you are running MongoDB on NUMA hardware, we recommend turning it off (running with an interleave memory policy). If you don't, problems will manifest in strange ways, like massive slowdowns for periods of time or high system CPU time.

Set ulimit - We have set the ulimit to 64000 for open files and 32000 for user processes to prevent failures due to a loss of available file handles or user processes.

Use ext4 - We have selected ext4 over ext3. We found ext3 to be very slow in allocating files (or removing them). Additionally, access within large files is poor with ext3.
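If you'd like to spot-check a few of these settings on a running system, here's a small Python sketch; the block device and mount point are placeholders for your own Data Mount.

    import resource
    import subprocess

    # Spot-check a few of the tunables above (placeholder device/mount).
    # Read Ahead, reported in 512-byte sectors:
    ra = subprocess.check_output(["blockdev", "--getra", "/dev/sda"])
    print("read ahead:", ra.decode().strip())

    # noatime on the data mount:
    with open("/proc/mounts") as mounts:
        for line in mounts:
            if " /var/lib/mongo " in line:
                print("noatime set:", "noatime" in line)

    # open-file ulimit (recommendation above: 64000):
    soft, _hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print("open file limit:", soft)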

One last tip on installation: Keep the Journal and Data volumes on distinct physical volumes. If the Journal and Data directories reside on a single physical volume, flushes to the Journal will interrupt data access and produce spikes of high latency within your MongoDB deployment.

Operations

Once a MongoDB deployment has been promoted to production, there are a few recommendations for monitoring and optimizing performance. You should always have the MMS agent running on all MongoDB instances to help monitor the health and performance of your deployment. Additionally, this tool is very useful if you have a 10gen MongoDB Cloud Subscription because it provides useful debugging data for the 10gen team during support interactions. In addition to MMS, you can use the mongostat command (mentioned in the deployment section) to see runtime information about the performance of a MongoDB node. If either of these tools flags performance issues, sharding or indexing are first-line options to resolve them:

Indexes - Indexes should be created for a MongoDB deployment if monitoring tools indicate that field-based queries are performing poorly. Always use indexes when you are querying data based on distinct fields to help boost performance.
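For example, here's a minimal pymongo sketch (with hypothetical host, database, collection, and field names) of indexing a field that monitoring flagged:

    from pymongo import MongoClient, ASCENDING

    # Index a field used by a poorly performing field-based query
    # (hypothetical names throughout).
    coll = MongoClient("mongodb://mongo-host:27017").appdb.events

    coll.create_index([("user_id", ASCENDING)])

    # lookups on the indexed field avoid a full collection scan
    doc = coll.find_one({"user_id": 42})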

Sharding - Sharding can be leveraged when the overall performance of a node is suffering because of a large operating data set. Be sure to shard before you get into the red; the system only splits chunks for sharding on insert or update, so if you wait too long to shard, you may see uneven distribution for a period of time (or forever, depending on your data set and shard key strategy).

I know it seems like we've covered a lot over the course of this blog post, but this list of best practices is far from exhaustive. If you want to learn more, the MongoDB forums are a great resource to connect with the rest of the MongoDB community and learn from their experiences, and the documentation on MongoDB's site is another phenomenal resource. The best people to talk to when it comes to questions about MongoDB are the folks at 10gen, so I also highly recommend taking advantage of MongoDB Cloud Subscriptions to get their direct support for your one-off questions and issues.

-Harold

October 8, 2012

Don't Let Your Success Bring You Down

Last week, I got an email from a huge technology conference about their new website, exciting new speaker lineup and the availability of early-bird tickets. I clicked on a link from that email and found that their fancy new website was down. After giving up on getting my early-bird discount, I surfed over to Facebook, and I noticed a post from one of my favorite blogs, Dutch Cowboys, about another company's interesting new product release. I clicked the link to check out the product, and THAT site was down, too. It's painfully common for some of the world's most popular sites and applications to buckle under the strain of their own success ... Just think back to when Diablo III was launched: Demand crushed their servers on release day, and the gamers who waited patiently to get online with their copy turned to the world of social media to express their visceral anger about not being able to play the game.

The question everyone asks is why this kind of thing still happens. To a certain extent, the reality is that most entrepreneurs don't know what they don't know. I spoke with a woman who was going to be featured on BBC's Dragons' Den, and she said that the traffic from the show's viewers crippled most (if not all) of the businesses presented on the program. She needed to safeguard her site from that happening, and she didn't know how.

Fortunately, it's pretty easy to keep sites and applications online with on-demand infrastructure and auto-scaling tools. Unfortunately, most business owners don't know how easy it is, so they don't take advantage of the resources available to them. Preparing a website, game or application for its own success doesn't have to be expensive or time consuming. With pay-for-what-you-use pricing and "off the shelf" cloud management solutions, traffic-caused outages do NOT have to happen.

First impressions are extremely valuable, and if I wasn't really interested in that conference or the new product Dutch Cowboys blogged about, I'd probably never go back to those sites. Most Internet visitors would not. I cringe to think about the potential customers lost.

Businesses spend a lot of time and energy on user experience and design, but they don't think to devote the same level of energy to their infrastructure. In the '90s, sites crashing or slowing was somewhat acceptable since the interwebs were exploding beyond the available infrastructure's capabilities. Now, there's no excuse.

If you're launching a new site, product or application, how do you get started?

The first thing you need to do is understand what resources you need and where the potential bottlenecks are when hundreds, thousands or even millions of people want to use what you're launching. You don't need to invest in infrastructure to accommodate all of that traffic, but you need to know how you can add that infrastructure when you need it.

One of the easiest ways to prepare for your own success without getting bogged down by the bits and bytes is to take advantage of resources from some of our technology partners (and friends). If you have a PHP, Ruby on Rails or Node.js application, Engine Yard will help you deploy and manage a specialized hosting environment. When you need a little more flexibility, RightScale's cloud management product lets you easily manage your environment in "a single integrated solution for extreme efficiency, speed and control." If your biggest concern is your database's performance and scalability, Cloudant has an excellent cloud database management service.

Invest a little time in getting ready for your success, and you won't need to play catch-up when that success comes to you. Given how easy it is to prepare and protect your hosting environment these days, outages should go the way of the 8-track player.

-@jpwisler

June 11, 2012

"World IPv6 Launch Day" and What it Means for You

June 6, 2012, marked a milestone in the further advancement of the Internet: World IPv6 Launch Day. It was by no means an Earth-shattering event or a "flag day" where everyone switched over to IPv6 completely ... What actually happened was that content providers enabled AAAA DNS records for their websites and other applications, and ISPs committed to providing IPv6 connectivity to at least 1% of their customers by this date.

What's all the fuss about the IPv6 transition? The simplest way to explain the situation is that the current Internet can stay working as it does, using IPv4 addresses, forever ... if we're okay with it not growing any more: no more homes and businesses getting on the Internet, no new phones or tablets being produced, and no new websites or applications being created. SoftLayer wouldn't be able to keep selling new servers, either. Losing that kind of organic growth would be terrible, so an alternative had to be created to break free from the limitations of IPv4.

IPv4 to IPv6

The long-term goal is to migrate the entire Internet to the IPv6 standard in order to eliminate the stifling effect of impending and inevitable IP address shortages. It is estimated that there are roughly 2.5 billion connections to the Internet today, so to say the transition has a lot of moving parts would be an understatement. That complexity doesn't lessen the urgency of the change, though ... In the very near future, end-users and servers will no longer be able to get IPv4 connections to the Internet and will only connect via IPv6.

The primary transition plan is to "dual-stack" all current devices by adding IPv6 support to everything that currently has an IPv4 address. By adding native IPv6 functionality to devices using IPv4, all of that connectivity will be able to speak via IPv6 without transitional technologies like NAT (Network Address Translation). This work will take several years, and time is not a luxury we have with the dwindling IPv4 pool.

Like George mentioned in a previous post, I see World IPv6 Launch Day as a call to action for a "game changer." The IPv6 transition has gotten a ton of visibility from some of the most recognizable names on the Internet, but the importance and urgency of the transition can't be overstated.

So, what does that mean for you?

To a certain extent, that depends on what your involvement is on the Internet. Here are a few steps everyone can take:

  • Learn all you can about IPv6 to prepare for the work ahead. A few good books about IPv6 have been published, and resources like ARIN's IPv6 Information Wiki are perfect places to get more information.
  • If you own servers or network equipment, check them for IPv6 functionality. Upgrade or replace any software or devices to ensure that you can deliver native IPv6 connectivity end-to-end without any adverse impact to IPv6 users. If any piece of gear isn't IPv6-capable, IPv6 traffic won't be able to pass through your network.
  • If you are a content provider, make your content available via IPv6. This starts with requesting IPv6 service from your ISP. At SoftLayer, that's done via a zero-cost sales request to add IPv6 addresses to your VLANs. You should target 100% coverage for your services or applications — providing the same content via IPv6 as you do via IPv4. Take an inventory of all your DNS records, and after you've tested extensively, publish AAAA records for all hostnames to start attracting IPv6 traffic. (A quick way to verify AAAA resolution and IPv6 reachability is sketched after this list.)
  • If you are receiving Internet connectivity to your home or business desktops, demand IPv6 services from your upstream ISP. Also be sure to check your access routers, switches and desktops to ensure they are running the most recent code with stable IPv6 support.
  • If you are running equipment such as firewalls, load balancers, IDS, etc., contact your vendors to learn about their IPv6 support and how to properly configure those devices. You want to make sure you aren't limiting performance or exposing any vulnerabilities.
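Here's the sketch referenced above: a small Python check (with a placeholder hostname) that looks up AAAA records and attempts an end-to-end IPv6 connection.

    import socket

    # Check IPv6 readiness for a site (placeholder hostname): resolve
    # AAAA records, then attempt a TCP connection over IPv6 to port 80.
    # getaddrinfo raises socket.gaierror if no AAAA record exists.
    host = "www.example.com"

    aaaa = socket.getaddrinfo(host, 80, socket.AF_INET6, socket.SOCK_STREAM)
    print("AAAA addresses:", [info[4][0] for info in aaaa])

    # confirm the content is actually reachable end-to-end over IPv6
    with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as sock:
        sock.settimeout(5)
        sock.connect(aaaa[0][4])
        print("IPv6 connection established")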

Starting now, there are no more excuses. It's time to get IPv6 up and running if you want to play a part in tomorrow's Internet.

-Dani

January 19, 2012

IPv6 Milestone: "World IPv6 Launch Day"

On Tuesday, the Internet Society announced "World IPv6 Launch Day", a huge step in the transition from IPv4 to IPv6. Scheduled for June 6, 2012, this "launch day" comes almost one year after the similarly noteworthy World IPv6 Day, during which many prominent Internet businesses enabled IPv6 AAAA record resolution for their primary websites for a 24-hour period.

With IPv6 Day serving as a "test run," we confirmed a lot of what we know about IPv6 compatibility and interoperability with deployed systems throughout the Internet, and we even learned about a few areas that needed a little additional attention. Access troubles for end-users were measured in fractions of a percent, and while some sites left IPv6 running, many of them ended up disabling the AAAA IPv6 records at the end of the event, resuming their legacy IPv4-only configurations.

We're past the "testing" phase now. Many of the IPv6-related issues observed in desktop operating systems (think: your PCs, phones, and tablets) and consumer network equipment (think: your home router) have been resolved. In response – and in an effort to kick IPv6 deployment in the butt – the same businesses which ran the 24-hour field test last year have committed to turning on IPv6 for their content and keeping it on as of 6/6/2012.

But that's not all, folks!

In the past, IPv6 availability would have simply impacted customers connecting to the Internet from a few universities, international providers and smaller technology-forward ISPs. What's great about this event is that a significant number of major broadband ISPs (think: your home and business Internet connection) have committed to enabling IPv6 for their subscribers. June 6, 2012, marks a day when at least 1% of the participating ISPs' downstream customers will be receiving IPv6 addresses.

While 1% may not seem all that impressive at first, in order to survive the change, these ISPs must slowly roll out IPv6 availability to ensure that they can handle the potential volume of resulting customer support issues. There will be new training and technical challenges that I suspect all of these ISPs will face, and this type of approach is a good way to ensure success. Again, we must appreciate that the ISPs are turning it on for good now.

What does this mean for SoftLayer customers? Well, the good news is that our network is already IPv6-enabled ... In fact, it has been so for a few years now. Those of you who have taken advantage of running a dual-stack of IPv4 and IPv6 addresses may have noticed surprisingly low IPv6 traffic volume. When 6/6/2012 comes around, you should see that volume rise (and continue to rise consistently from there). For those of you without IPv6 addresses, now's the time to get started and get your feet wet. You need to be prepared for the day when new "eyeballs" are coming online with IPv6-only addresses. If you don't know where to start, go back through this article and click on a few of the hyperlinks, and if you want more information, ARIN has a great informational IPv6 wiki that has been enjoying community input for a couple of years now.

The long-term benefit of this June 6th milestone is that, with some of the "big guys" playing in this space, the visibility of IPv6 should improve. This will help motivate the "little guys" who otherwise couldn't get motivated – or more often couldn't justify the budgetary requirements – to start implementing IPv6 throughout their organizations. The Internet is growing rapidly, and as our collective attentions are focused on how current legislation (SOPA/PIPA) could impede that growth, we should be intentional about fortifying the Internet's underlying architecture.

-Dani

July 22, 2011

Don't Let IPv4 Exhaustion Sneak Up on You

A few months ago, IANA exhausted its unallocated IPv4 address pool when it gave the last /8s to the regional registries around the world. That news got a fair amount of buzz. Last month, some of the biggest sites in the world participated in World IPv6 Day to a little fanfare as well. Following those larger flows of attention have been the inevitable ebbs as people go back to "business as usual." As long as ARIN has space available (currently 4.93 /8s in aggregate), no one is losing sleep, but that number continues decreasing, and the forced transition to IPv6 creeps closer and closer.

On July 14, I was honored to speak at IPv6 2011: The Time is Now! about how technology is speeding up IPv4 exhaustion and what the transition to IPv6 will mean for content providers. Since the session afforded me a great opportunity to share a high-level overview of how I see the IPv4-to-IPv6 transition (along with how SoftLayer has prepared), it might be interesting to the folks out there in the blogosphere:

As time goes by, these kinds of discussions are going to get less theoretical and more practical. The problem with IPv4 is that the entire world is about to run out of free space. The answer IPv6 provides is an allocation pool that is not in danger of exhaustion. The transition from IPv4 to IPv6 isn't as much "glamorous" as it is "necessary," and while the squeeze on IPv4 space may not affect you immediately, you need to be prepared for the inevitability that it will.

-@wcharnock

January 25, 2008

Virtualized Virtualization

For the past several months, we have been struggling with how to implement virtualization in a hosting environment: Xen, VMware, Virtuozzo, Parallels, and Virtual Iron, just to name a few. As many of you know, the software world courts the enterprise, and the hosting world is left to shove the square peg into a round hole. Once again, these software packages have been designed for one company with many servers versus one company with many clients, each with many servers.

The most shocking reality about virtualization is the lack of scalability. Now, before you call the quack shack to have my head examined – hear me out. All (and I mean all) of the virtualization products on the market scale extremely well up to a couple hundred physical servers (let's call it 200). These technologies were designed to be used in companies that have relatively small subsets of physical servers (yes ... I think 200 is small) managed through a centralized console. The idea is that those 200 servers should be utilized more efficiently, thereby creating 400 to 2,000 virtual machines. This model works great in companies that only have the need for one or two mass "virtual deployments."

Now, fast-forward to SoftLayer, where we have already virtualized every aspect of the datacenter and we manage over 12,000 servers. Let's run through the high points of virtualization: Rapid deployment – we've got that. Asset tracking – yep, been there, done that. Network management – baked and done. Add services on the fly – is there any other way? Complete control – piece of cake. Eliminate inefficiencies – have you seen our offerings? In essence, SoftLayer has abstracted the physical layer from the datacenter and left our customers with a completely virtualized datacenter environment. So the question remains: how do we virtualize the virtualized?

-@lavosby

January 11, 2008

I Need a Whataburger!!

Somebody...Anybody...I need a Whataburger!!

If you haven't been to a Whataburger, I'm sorry. It's an amazing fast food chain that not only sells the freshest made-to-order burgers, but is also open 24 hours a day, and its breakfast is second to none (Chris Menard has a clinical addiction to their taquitos). The problem with this is that they only exist in the South. I'm in the North. In Seattle, Washington, to be precise—accompanied by our go-live team to manage our newest datacenter and make sure the launch goes smoothly.

On the bright side (no pun intended, it hasn't stopped raining since we landed), it has. We have assembled an amazing team, the datacenter is absolutely spectacular, and the locals have been very friendly. Efficiencies we have built into our normal daily operations over the last two years have basically allowed us to "drag and drop" our datacenters as needed, where they are needed, without having to reinvent the wheel every time we launch. Since the deployment is simple, we can focus on service upgrades—like the latest 40-Gigabit rack-level connections—while we roll out a new facility. Connectivity you could use ... say ... to look for a Whataburger near you: http://www.whataburger.com/one_near_you.php (I look every day). We've already flown through our first historic Seattle Truck Day, and had a second one to boot. We're provisioning droves of machines for new and current customers who are taking advantage of our network architecture, tools, and StorageLayer to create their own custom solutions. In a nutshell, we have brought a new DC online and maintained the ability to provide our customers with the same cutting-edge hardware and innovative utilities that they have come to expect in Dallas.

On the darker side, with everything going so well, there's a lot of time to sit and think about a tasty Whataburger. With jalapenos. And bacon. Ugh.

-Joshua
