Posts Tagged 'POD'

October 8, 2014

An Insider’s Look at Our Data Centers

I've been with SoftLayer for over four years now. It's been a journey that has taken me around the world, from Dallas to Singapore to Washington, D.C., and back again. Along the way, I've met amazingly brilliant people who have helped me sharpen the tools in my 'data center toolbox' and, in turn, enhance the customer experience in a complex compute environment.

I like to think of our data centers as masterpieces of elegant design. We currently have 14 of these works of art, with many more on the way. Here’s an insider’s look at the design:

Keeping It Cool
Our pod layouts use a raised-floor system. The air conditioning units push chilled air up through the floor at the front of the servers in the 'cold rows,' and that air passes through the servers and exhausts into the 'warm rows.' The warm rows have ceiling vents to rapidly clear the warm air from the backs of the servers.

Jackets are recommended for this arctic environment.

Pumping up the POWER
Nothing is as important to us as keeping the lights on. Every data center has a three-tiered approach to keeping your servers and services powered. The first tier is street power. Each rack has two power strips to distribute the load and offer true redundancy for redundant servers and switches, with the remote ability to power down an individual port on either power strip.

The second tier is the battery backup for each pod, which provides seamless failover the moment street power is lost.

That leads to the third tier: generators. We have generators in place to sustain power until street power returns. Check out the 2-megawatt diesel generator installation at the DAL05 data center here.
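To make the order of operations concrete, here's a minimal sketch in Python (illustrative only, nothing actually running in our facilities) of the three tiers treated as a simple failover chain:

```python
# Minimal sketch: the three power tiers above as an ordered failover chain.

POWER_TIERS = ["street", "battery", "generator"]  # preferred order

def active_tier(available):
    """Return the highest-priority power source that is currently available."""
    for tier in POWER_TIERS:
        if tier in available:
            return tier
    raise RuntimeError("no power source available")

# Street power drops: the batteries carry the load until the generators take over.
print(active_tier({"battery", "generator"}))  # -> battery
```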

The Ultimate Social Network
Neither power nor cooling matters if you can't connect to your server, which is where our proprietary network topology comes into play. Each bare metal server and each virtual server resides in a rack that connects to three switches. Each of those switches connects to an aggregate switch for a row, and the aggregate switch connects to a router.

The first switch, our private backend network, allows for SSL and VPN connectivity to manage your server. It also gives you server-to-server communication without incurring bandwidth overages.

The second switch, our public network, provides public Internet access to your device, which is perfect for shopping, gaming, coding, or whatever you want to use it for. With 20TB of bandwidth coming standard on this network, the possibilities are endless.

The third and final switch, management, connects you to the Intelligent Platform Management Interface (IPMI), which provides tools such as KVM, hardware monitoring, and even virtual CDs to install an image of your choosing. The cables from the switches to your devices are color-coded, labeled port-number-to-rack-unit, and masterfully arranged to maximize identification and airflow.
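If it helps to picture the wiring, here's a rough, illustrative Python sketch of the per-rack connectivity described above; all device names are hypothetical:

```python
# Rough sketch (not SoftLayer's provisioning code): each server uplinks to
# three switches (private, public, management), and each switch feeds an
# aggregate switch for the row, which in turn connects to the pod router.

from dataclasses import dataclass, field

@dataclass
class Switch:
    name: str
    role: str      # "private", "public", or "management"
    uplink: str    # aggregate switch for the row

@dataclass
class Server:
    hostname: str
    switches: list = field(default_factory=list)

AGGREGATE = "agg-row-01"   # hypothetical aggregate switch (uplinks to the pod router)

server = Server(
    hostname="example-server",
    switches=[
        Switch("sw-priv-01", "private", AGGREGATE),    # SSL/VPN management, server-to-server traffic
        Switch("sw-pub-01", "public", AGGREGATE),      # public Internet access
        Switch("sw-mgmt-01", "management", AGGREGATE), # IPMI: KVM, monitoring, virtual media
    ],
)

assert len(server.switches) == 3   # every device gets all three connections
```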

A Soft Place for Hardware
The heart and soul of our business is the computing hardware. We use enterprise-grade hardware from the ground up, from our smallest virtual servers (1 core, 1GB RAM, 25GB HDD) to some of our largest bare metal servers (quad 10-core, 512GB RAM, multiple 4TB HDDs). With excellent hardware comes excellent options: unless you already have the top of the line, there is almost always a path to improvement, whether that's additional drives, RAM, or even processors.

I hope you enjoyed the view from the inside. If you want to see the data centers up close and personal, I'm sorry to say they're closed to the public, but you can take a virtual tour of some of our data centers via YouTube: AMS01 and DAL05.

-Joshua Fox

February 8, 2013

Data Center Power-Up: Installing a 2-Megawatt Generator

When I was a kid, my living room often served as a "job site" where I managed a fleet of construction vehicles. Scaled-down versions of cranes, dump trucks, bulldozers and tractor-trailers littered the floor, and I oversaw the construction (and subsequent destruction) of some pretty monumental projects. Fast-forward a few years (or decades), and not much has changed except that the "heavy machinery" has gotten a lot heavier, and I'm a lot less inclined to "destruct." As SoftLayer's vice president of facilities, part of my job is to coordinate the early logistics of our data center expansions, and as it turns out, that responsibility often involves overseeing some of the big rigs that my parents tripped over in my youth.

The video below documents the installation of a new Cummins two-megawatt diesel generator for a pod in our DAL05 data center. You see the crane prepare for the work by installing counterbalance weights, and work starts with the team placing a utility transformer on its pad outside our generator yard. A truck pulls up with the generator base in tow, and you watch the base get positioned and lowered into place. The base looks so large because it also serves as the generator's 4,000-gallon "belly" fuel tank. After the base is installed, the generator is trucked in, and it is delicately picked up, moved, lined up and lowered onto its base. The last step you see is the generator housing being installed over the generator to protect it from the elements. At this point, the actual "installation" is far from over (we still need to hook everything up and test it), but those steps don't involve the nostalgia-inducing heavy machinery you probably came to this post to see:

When we talk about the "megawatt" capacity of a generator, we're talking about the amount of power available when the generator is operating at full capacity. One megawatt is one million watts, so a two-megawatt generator could power 20,000 100-watt light bulbs at the same time. This power can be sustained for as long as the generator has fuel, and we have service level agreements to keep us at the front of the line to get more fuel when we need it. Here are a few other interesting use cases that could be powered by a two-megawatt generator (with a quick arithmetic sketch after the list):

  • 1,000 Average Homes During Mild Weather
  • 400 Homes During Extreme Weather
  • 20 Fast Food Restaurants
  • 3 Large Retail Stores
  • 2.5 Grocery Stores
  • A SoftLayer Data Center Pod Full of Servers (Most Important Example!)
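For the curious, here's the quick arithmetic behind those figures (a tiny Python sketch, nothing more):

```python
# The arithmetic behind the figures above.
generator_watts = 2 * 1_000_000   # two megawatts

print(generator_watts // 100)     # 20,000 hundred-watt light bulbs
print(generator_watts // 1_000)   # ~2,000 watts per home across 1,000 homes (mild weather)
print(generator_watts // 400)     # ~5,000 watts per home across 400 homes (extreme weather)
```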

Every SoftLayer facility has an n+1 power architecture. If we need three generators to provide power for three data center pods in one location, we'll install four. This additional capacity allows us to balance the load on generators when they're in use, and we can take individual generators offline for maintenance without jeopardizing our ability to support the power load for all of the facility's data center pods.
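In other words (a minimal, illustrative sketch of the n+1 rule, not production code):

```python
# n+1: install one more generator than the facility's pods strictly require.

def generators_to_install(pods, generators_per_pod=1):
    return pods * generators_per_pod + 1

print(generators_to_install(3))  # 3 pods -> 4 generators, as in the example above
```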

Those of you who fondly remember Tonka trucks and CAT crane toys are the true target audience for this post, but even if you weren't big into construction toys when you were growing up, you'll probably still appreciate the work we put into safeguarding our facilities from a power perspective. You don't often see the "outside the data center" work that goes into putting a new SoftLayer data center pod online, so I thought I'd give you a glimpse. Are there any other topics from an operations or facilities perspective that you'd like to see?

-Robert

July 27, 2012

SoftLayer 'Cribs' ≡ DAL05 Data Center Tour

The highlight of any customer visit to a SoftLayer office is always the data center tour. The infrastructure in our data centers is the hardware platform on which many of our customers build and run their entire businesses, so it's not surprising that they'd want a first-hand look at what's happening inside the DC. Without exception, visitors are impressed when they walk out of a SoftLayer data center pod ... even if they've been in dozens of similar facilities in the past.

What about the customers who aren't able to visit us, though? We can post pictures, share stats, describe our architecture and show you diagrams of our facilities, but those mediums can't replace the experience of an actual data center tour. In the interest of bridging the "data center tour" gap for customers who might not be able to visit SoftLayer in person (or who want to show off their infrastructure), we decided to record a video data center tour.

If you've seen "professional" video data center tours in the past, you're probably positioning a pillow on top of your keyboard right now to protect your face if you fall asleep from boredom when you hear another baritone narrator voiceover and see CAD mock-ups of another "enterprise class" facility. Don't worry ... That's not how we roll:

Josh Daley — whose role as site manager of DAL05 made him the ideal tour guide — did a fantastic job, and I'm looking forward to feedback from our customers about whether this data center tour style is helpful and/or entertaining.

If you want to see more videos like this one, "Like" it, leave comments with ideas and questions, and share it wherever you share things (Facebook, Twitter, your refrigerator, etc.).

-@khazard

February 10, 2012

Amsterdam Data Center (AMS01): Does it Measure Up?

SoftLayer data centers are designed in a "pod" concept: Every facility in every location is laid out similarly, and you'll find the same server and network hardware connected to the same network architecture. The idea behind this design is that it makes it easier for us to build out new locations quickly, we can have identical operational processes and procedures in each facility, and customers can expect the exact same hosting experience regardless of data center location. When you've got several data centers in one state, that uniformity is easy to execute. When you open facilities on opposite sides of the country, it seems a little more difficult. Open a facility in another country (and introduce the challenge of getting all of that uniformity across an ocean), and you're looking at a pretty daunting task.

Last month, I hopped on a plane from Houston to London to attend Cloud Expo Europe. Because I was more or less "in the neighborhood" of our newest data center in Amsterdam, I was able to take a short flight to The Netherlands to do some investigatory journalism ... err ... "to visit the AMS01 team."

Is AMS01 worthy of the SoftLayer name? ... How does it differ from our US facilities? ... Why is everything written in Dutch at the Amsterdam airport?

The answers to my hard-hitting questions were pretty clear: SoftLayer's Amsterdam facility is absolutely deserving of the SoftLayer name ... The only noticeable differences between AMS01 and DAL05 are the cities they're located in ... Everything's written in Dutch because the airport happens to be in The Netherlands, and people speak Dutch in The Netherlands (that last question didn't get incorporated into the video, but I thought you might be curious).

Nearly every aspect of the data center mirrors what you see in WDC, SEA, HOU, SJC and DAL. The only differences I really noticed were what the PDUs looked like, what kind of power adapter was used on the crash carts, and what language was used on the AMS facility's floor map. One of the most interesting observations: All of the servers and power strips on the racks used US power plugs ... This characteristic was particularly impressive to me because every gadget I brought with me seemed to need its own power converter to recharge.

When you see us talking about the facilities being "the same," that's not a loosely used general term ... We could pull a server from its rack in DAL05, buckle it into an airplane seat for a 10-hour flight, bring it to AMS01 (via any of the unique modes of Amsterdam transportation you saw at the beginning of the video), and slide it into a rack in Amsterdam where we could simply plug it in. It'd be back online and accessible over the public and private networks as though nothing changed ... Though with Flex Images making it so easy to replicate cloud and dedicated instances in any facility, you'll just have to take our word for it when it comes to the whole "send a server over to another data center on a plane" thing.

While I was visiting AMS01, Jonathan Wisler took a few minutes out of his day to give a full tour of the data center's server room, and we've got video and pictures to share with more shots of our beautiful servers in their European home. If there's anything in particular you want to see from AMS01, let us know, and we'll do our best to share it!

-@khazard

P.S. Shout out to the SLayers in the Amsterdam office who offered their linguistic expertise to add a little flair to the start of the video ... From the four employees who happened to be in the office when I was asking for help, we had six fluent-language contributions: English, Italian, French, Dutch, Polish and German!

**UPDATE** After posting this video, I learned that the "US" server power plugs I referred to are actually a worldwide computer standard: IEC C13 (female) and C14 (male) connectors.

October 27, 2011

SoftLayer Features and Benefits - Data Centers

When we last talked, I broke down the differences between features and benefits. To recap: a feature is something prominent about a person, place or thing, while a benefit is a feature that is useful to you. In that blog, I discussed our customer portal and the automation within, so with this next installment, let's move into my favorite place: the data center ... Our pride and joy!

If you have not had a chance to visit a SoftLayer data center, you're missing out. The number one response I get when I begin a tour through any of our facilities is, "I have been through several data centers before, and they're pretty boring," or my favorite, "We don't have to go in, they all look the same." Then they get a glimpse at the SoftLayer facility through the window in our lobby:

Data Center Window

What makes a SoftLayer DC so different and unique?

We deploy data centers in a pod concept. A pod, or server room, is designed to be an identical installation of balanced power, cooling, and redundant best-in-class equipment in under 10,000 square feet. It will support just about 5,000 dedicated servers, and each pod is built to the same specifications as every other pod. We use the same hardware vendor for servers, the majority of our internal network is powered by Cisco gear, and our edge equipment is now powered by Juniper. Even the paint on the walls matches up from pod to pod, city to city and now country to country. That's standardization!

That all sounds great, but what does that mean for you? How do all these things benefit you as the end user?

First of all, setting standards improves our efficiency in support and operations. We can pluck any of our technicians in DAL05 and drop him into SJC01, and he'll feel right at home despite the outside world looking a bit different. No facility quirks, no learning curve. In fact, the Go Live Crews in Singapore and Amsterdam are all experienced SoftLayer technicians from our US facilities, so they help us make sure all of the details are exactly alike.

Beyond the support aspect, having data centers in multiple cities around the world is a benefit within itself: You have the option to host your solution as close or as far away from you as you wish. Taking that a step further, disaster recovery becomes much easier with our unique network-within-a-network topology.

The third biggest benefit customers get from SoftLayer's data centers is the quality of the server chassis. Because we standardize our SuperMicro chassis in every facility, we're able to troubleshoot and resolve issues faster when a customer contacts us. Let's say the mainboard is having a problem, and your Linux server is in kernel panic. Instead of taking time to try and fix the part, I can hot-swap all the drives into an identical chassis and use the portal to automatically move all of your IP addresses and network configurations to a new location in the DC. The server boots right up and is back in service with minimal downtime.

Try to do that with "similar" hardware (not "identical"), and see where that gets you.

The last obvious customer benefit we'll talk about here is the data center's internal network performance. Powered by Cisco internal switches and Juniper routers on the edge, we can provide unmatched bandwidth capacity to our data centers as well as low-latency links between servers. In one rack on the data center floor, you can see 80Gbps of bandwidth. Our automated, high-speed network allows us to provision a server anywhere in a pod and an additional server anywhere else in the same pod, and they will perform as if they are sitting right next to each other. That means you don't need to reserve space in the same rack for a server that you think you'll need in the future, so when your business grows, your infrastructure can grow seamlessly with you.
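To put 80Gbps in perspective, here's a quick back-of-the-envelope sketch in Python; the dataset size is hypothetical and protocol overhead is ignored:

```python
# Back-of-the-envelope only: what 80Gbps of rack bandwidth means in practice.

rack_bandwidth_gbps = 80
dataset_gigabytes = 500                        # hypothetical dataset

dataset_gigabits = dataset_gigabytes * 8
seconds_at_line_rate = dataset_gigabits / rack_bandwidth_gbps
print(f"{seconds_at_line_rate:.0f} seconds")   # 50 seconds at line rate
```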

In the last installment of this little "SoftLayer Features and Benefits" series, we'll talk about the global network and learn why no one in the industry can match it.

-Harold

February 24, 2011

A Crash Course in CRAC Units - Data Center Cooling

In the past few weeks, we've fielded a few questions from our Twitter followers about temperatures in our data center and how CRAC units work. John mentioned in the "Building a Data Center" series that his next post would be about keeping the data center cool, so I'll try not to steal too much thunder from him by posting a basic CRAC unit explanation to answer those questions.

To record this video, we made the long walk (~2 minutes) downstairs to Pod 1 of SoftLayer's DAL05 facility to give you a first-hand look at the star of the show: the DC Computer Room Air Conditioning Unit. Because this was recorded on a "Truck Day" at SoftLayer, the pod was bustling with activity, so we found a "quiet" open area in a section of the pod that will soon be filled with new servers to record the video.

Due to the ambient noise in the data center, my explanation had to be "yelled," so please forgive the volume.

What else do you want to see/learn about in SoftLayer's data centers?

-@khazard
