Posts Tagged 'Cooling'

October 8, 2014

An Insider’s Look at Our Data Centers

I’ve been with SoftLayer for over four years now. It’s been a journey that has taken me around the world—from Dallas to Singapore to Washington, D.C., and back again. Along the way, I’ve met amazingly brilliant people who have helped me sharpen the tools in my ‘data center toolbox,’ allowing me to enhance the customer experience by assisting customers in a complex compute environment.

I like to think of our data centers as masterpieces of elegant design. We currently have 14 of these works of art, with many more on the way. Here’s an insider’s look at the design:

Keeping It Cool
Our POD layouts use a raised-floor system. The air conditioning units push chilled air up through the floor into the ‘cold rows’ at the front of the servers; that air is drawn through the servers and exhausted into the ‘warm rows’ behind them. The warm rows have ceiling vents to rapidly clear the warm air from the backs of the servers.
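If you like to see that in code, here's a tiny Python sanity check for the two kinds of rows. The temperature limits and the readings are made-up, illustrative numbers, not our actual setpoints:

# Hypothetical hot/cold-row check; the thresholds are illustrative only.
COLD_ROW_MAX_C = 24.0   # assumed upper bound for chilled supply air
WARM_ROW_MAX_C = 40.0   # assumed upper bound for exhaust air

def check_row_temps(readings):
    """readings: list of (row, temp_c) tuples, e.g. ("cold", 21.5)."""
    alerts = []
    for row, temp_c in readings:
        limit = COLD_ROW_MAX_C if row == "cold" else WARM_ROW_MAX_C
        if temp_c > limit:
            alerts.append(f"{row} row at {temp_c:.1f} C exceeds {limit:.1f} C")
    return alerts

print(check_row_temps([("cold", 21.0), ("warm", 43.2)]))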

Jackets are recommended for this arctic environment.

Pumping up the POWER
Nothing is as important to us as keeping the lights on. Every data center has a three-tiered approach to keeping your servers and services running. The first tier is street power. Each rack has two power strips to distribute the load and offer true redundancy for redundant servers and switches, with the remote ability to power down an individual port on either strip.
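Here's a rough Python sketch of what that redundancy means in practice: either power strip must be able to carry the entire rack on its own if the other feed drops. The strip capacity and per-device draws below are made-up example figures:

# Illustrative A/B feed check; capacity and draws are example figures only.
STRIP_CAPACITY_W = 5000   # assumed capacity of a single power strip

def rack_is_redundant(device_draws_w, strip_capacity_w=STRIP_CAPACITY_W):
    """device_draws_w: per-device power draws in watts."""
    total_w = sum(device_draws_w)
    # If one feed is lost, the surviving strip must carry the full rack load.
    return total_w <= strip_capacity_w

print(rack_is_redundant([350, 350, 400, 500, 450]))   # True: 2,050 W fits on one strip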

The second tier is our battery backup for each POD, which provides seamless failover the moment street power is lost.

This leads to the third tier in our model: generators. We have generators in place to sustain power until street power is restored. Check out the 2-megawatt diesel generator installation at the DAL05 data center here.

The Ultimate Social Network
Neither power nor cooling matters if you can’t connect to your server, which is where our proprietary network topology comes into play. Each bare metal server and each virtual server resides in a rack that connects to three switches. Each of those switches connects to an aggregate switch for its row, and the aggregate switch connects to a router.
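To make that chain concrete, here's a toy Python model of the wiring just described. Every device name in it is a placeholder, not an actual SoftLayer switch or router:

# Toy model of the rack wiring: one server, three networks, each with its own
# top-of-rack switch, row aggregate switch, and router. Names are placeholders.
rack_topology = {
    "server-rack42-u07": {
        "private":    {"tor": "sw-priv-rack42", "aggregate": "agg-priv-row4", "router": "rtr-priv-01"},
        "public":     {"tor": "sw-pub-rack42",  "aggregate": "agg-pub-row4",  "router": "rtr-pub-01"},
        "management": {"tor": "sw-mgmt-rack42", "aggregate": "agg-mgmt-row4", "router": "rtr-mgmt-01"},
    }
}

def path_to_router(server, network):
    """Return the hop-by-hop path from a server to the router for one network."""
    hop = rack_topology[server][network]
    return [server, hop["tor"], hop["aggregate"], hop["router"]]

print(path_to_router("server-rack42-u07", "private"))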

The first switch, our private backend network, allows for SSL and VPN connectivity to manage your server. It also gives you server-to-server communication without incurring bandwidth overages.

The second switch, our public network, provides public Internet access to your device, which is perfect for shopping, gaming, coding, or whatever you want to use it for. With 20TB of bandwidth coming standard for this network, the possibilities are endless.

The third and final switch, management, connects you to the Intelligent Platform Management Interface (IPMI), which provides tools such as KVM, hardware monitoring, and even virtual CDs to install an image of your choosing! The cables from the switches to your devices are color-coded, labeled port-number-to-rack-unit, and masterfully arranged to maximize identification and airflow.
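For the curious, here's a minimal sketch of what talking to that IPMI interface can look like, driving the standard ipmitool utility from Python. The BMC address and credentials are placeholders you'd replace with your own:

# Minimal IPMI sketch using the ipmitool CLI; the address and credentials
# below are placeholders, not real management-network values.
import subprocess

BMC = ["ipmitool", "-I", "lanplus",
       "-H", "10.0.0.42",                # placeholder BMC address
       "-U", "admin", "-P", "changeme"]  # placeholder credentials

def ipmi(*args):
    """Run one ipmitool command against the placeholder BMC and return its output."""
    return subprocess.run(BMC + list(args), capture_output=True, text=True).stdout

print(ipmi("chassis", "power", "status"))   # e.g. "Chassis Power is on"
print(ipmi("sdr", "type", "temperature"))   # hardware temperature sensors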

A Soft Place for Hardware
The heart and soul of our business is the computing hardware. We use enterprise-grade hardware from the ground up, ranging from our smallest offering (a 1 core, 1GB RAM, 25GB HDD virtual server) to one of our largest (a quad 10-core, 512GB RAM, multi-4TB HDD bare metal server). With excellent hardware comes excellent options. There is almost always a path to improvement: unless you already have the top of the line, you can always add more, whether it's additional drives, RAM, or even processors.

I hope you enjoyed the view from the inside. If you want to see the data centers up close and personal, I’m sorry to say they’re closed to the public. But you can take a virtual tour of some of our data centers via YouTube: AMS01 and DAL05.

-Joshua Fox

July 27, 2012

SoftLayer 'Cribs' ≡ DAL05 Data Center Tour

The highlight of any customer visit to a SoftLayer office is always the data center tour. The infrastructure in our data centers is the hardware platform on which many of our customers build and run their entire businesses, so it's not surprising that they'd want a first-hand look at what's happening inside the DC. Without exception, visitors to a SoftLayer data center pod walk out impressed ... even if they've been in dozens of similar facilities in the past.

What about the customers who aren't able to visit us, though? We can post pictures, share stats, describe our architecture and show you diagrams of our facilities, but those mediums can't replace the experience of an actual data center tour. In the interest of bridging the "data center tour" gap for customers who might not be able to visit SoftLayer in person (or who want to show off their infrastructure), we decided to record a video data center tour.

If you've seen "professional" video data center tours in the past, you're probably positioning a pillow on top of your keyboard right now to protect your face if you fall asleep from boredom when you hear another baritone narrator voiceover and see CAD mock-ups of another "enterprise class" facility. Don't worry ... That's not how we roll:

Josh Daley — whose role as site manager of DAL05 made him the ideal tour guide — did a fantastic job, and I'm looking forward to feedback from our customers about whether this data center tour style is helpful and/or entertaining.

If you want to see more videos like this one, "Like" it, leave comments with ideas and questions, and share it wherever you share things (Facebook, Twitter, your refrigerator, etc.).

-@khazard

February 10, 2012

Amsterdam Data Center (AMS01): Does it Measure Up?

SoftLayer data centers are designed in a "pod" concept: Every facility in every location is laid out similarly, and you'll find the same server and network hardware connected in the same way. The idea behind this design is that it makes it easier for us to build out new locations quickly, lets us run identical operational processes and procedures in each facility, and gives customers the exact same hosting experience regardless of data center location. When you've got several data centers in one state, that uniformity is easy to execute. When you open facilities on opposite sides of the country, it seems a little more difficult. Open a facility in another country (and introduce the challenge of getting all of that uniformity across an ocean), and you're looking at a pretty daunting task.

Last month, I hopped on a plane from Houston to London to attend Cloud Expo Europe. Because I was more or less "in the neighborhood" of our newest data center in Amsterdam, I was able to take a short flight to The Netherlands to do some investigatory journalism ... err ... "to visit the AMS01 team."

Is AMS01 worthy of the SoftLayer name? ... How does it differ from our US facilities? ... Why is everything written in Dutch at the Amsterdam airport?

The answers to my hard-hitting questions were pretty clear: SoftLayer's Amsterdam facility is absolutely deserving of the SoftLayer name ... The only noticeable differences between AMS01 and DAL05 are the cities they're located in ... Everything's written in Dutch because the airport happens to be in The Netherlands, and people speak Dutch in The Netherlands (that last question didn't get incorporated into the video, but I thought you might be curious).

Nearly every aspect of the data center mirrors what you see in WDC, SEA, HOU, SJC and DAL. The only differences I really noticed were what the PDUs looked like, what kind of power adapter was used on the crash carts, and what language was used on the AMS facility's floor map. One of the most interesting observations: All of the servers and power strips on the racks used US power plugs ... This characteristic was particularly impressive to me because every gadget I brought with me seemed to need its own power converter to recharge.

When you see us talking about the facilities being "the same," that's not a loosely used general term ... We could pull a server from its rack in DAL05, buckle it into an airplane seat for a 10-hour flight, bring it to AMS01 (via any of the unique modes of Amsterdam transportation you saw at the beginning of the video), and slide it into a rack in Amsterdam where we could simply plug it in. It'd be back online and accessible over the public and private networks as though nothing changed ... Though with Flex Images making it so easy to replicate cloud and dedicated instances in any facility, you'll just have to take our word for it when it comes to the whole "send a server over to another data center on a plane" thing.

While I was visiting AMS01, Jonathan Wisler took a few minutes out of his day to give a full tour of the data center's server room, and we've got video and pictures to share with more shots of our beautiful servers in their European home. If there's anything in particular you want to see from AMS01, let us know, and we'll do our best to share it!

-@khazard

P.S. Shout out to the SLayers in the Amsterdam office who offered their linguistic expertise to add a little flair to the start of the video ... From the four employees who happened to be in the office when I was asking for help, we had six fluent-language contributions: English, Italian, French, Dutch, Polish and German!

**UPDATE** After posting this video, I learned that the "US" server power plugs I referred to are actually a worldwide computer standard: IEC C13 (the female connector on the cord) and C14 (the male inlet on the equipment).

March 9, 2011

Building a Data Center | Part 2: The Absence of Heat

As you walk down the cold aisle in a data center, you might be curious about how all that cold air gets there. Like the electrical system, data center cooling travels a path through the data center that relies on many integrated systems working together to achieve the desired result.

To start, I should give a crash course in Heating, Ventilating, and Air Conditioning (HVAC). The most important thing to understand in HVAC theory is that cold is the absence of heat. When you say you're cooling a space, you're not adding cold air, rather you are removing heat. Heat is removed in a cycle called the refrigerant cycle. The refrigerant cycle is present in all air conditioning systems and is made up of four main components:

  • Refrigerant: Refrigerants are engineered chemicals developed to have very specific boiling and condensation temperatures. They come in many different flavors with cryptic names like 410a, R22, and water.
  • Compressor: Compresses the refrigerant vapor coming from the evaporator, turning it from a warm, low-pressure gas into a hot, high-pressure gas. This compression drives the movement of heat and refrigerant through the system.
  • Evaporator: Evaporators are heat exchangers (devices built for efficient heat transfer from one medium to another), so an evaporator passes heat from the air to the refrigerant.
  • Condenser: Condensers are also heat exchangers. The condenser releases the heat carried by the refrigerant outside the space being cooled.

This is a very simplified explanation of the refrigerant cycle components, and I only mention these four components or steps because they are common to all HVAC systems regardless of size or type, and you can apply them to any data center cooling system (or even a residential system).
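To put rough numbers on that cycle: whatever heat the evaporator absorbs from the room, plus the work the compressor adds, has to be rejected by the condenser. Here's a back-of-the-envelope Python sketch of that energy balance; the coefficient of performance is an assumed, illustrative value:

# Back-of-the-envelope energy balance for the cycle above. The coefficient of
# performance (heat moved per unit of compressor work) is an assumed value.
def condenser_heat_kw(evaporator_heat_kw, cop=4.0):
    compressor_work_kw = evaporator_heat_kw / cop
    return evaporator_heat_kw + compressor_work_kw

# A room absorbing 500 kW of heat needs roughly this much rejected outside:
print(f"{condenser_heat_kw(500):.0f} kW")   # ~625 kW with the assumed COP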

Cooling System

Which came first - the chicken or the egg? Like that old riddle, figuring out a starting point for our cooling cycle isn't easy, so let's start at the source of heat. Your server uses electrical energy to process the information in the CPU, turn the spindles of hard drives, and light up the pretty little LEDs on the chassis. All that conversion of electrical energy to useful work creates heat as a side effect. Remember that we have to remove heat to cool something, so a server's cooling system acts like a heat exchanger, extracting heat from its components and passing that heat to cooler air entering the front of the server. That heat is rejected from the back of the servers into the hot aisle.
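As a quick worked example of that heat removal, the sensible-heat relation (heat = air density x airflow x specific heat x temperature rise) tells you how much air has to move through a server to carry its load away. The air properties and the temperature rise in this Python sketch are typical assumed values, not measurements from our pods:

# Rough airflow sizing from the sensible-heat relation Q = rho * V * cp * dT.
RHO_AIR = 1.2     # kg/m^3, approximate density of air
CP_AIR = 1005.0   # J/(kg*K), approximate specific heat of air

def airflow_m3_per_s(heat_w, delta_t_c):
    """Volumetric airflow needed to absorb heat_w watts with a delta_t_c rise."""
    return heat_w / (RHO_AIR * CP_AIR * delta_t_c)

# A 1 kW server with an assumed 10 C front-to-back temperature rise:
flow = airflow_m3_per_s(1000, 10)
print(f"{flow:.3f} m^3/s (about {flow * 2118.88:.0f} CFM)")   # ~0.083 m^3/s, ~176 CFM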

When the heat is exhausted into the hot aisle, it is pulled to the evaporator, which goes by different names depending on how it performs its function: CRACs (Computer Room Air Conditioners) and AHUs (Air Handling Units) are two of the common terms we use. Regardless of what they are called, they perform the same function: removing heat from the hot-aisle air (called return air) and supplying cooled air to the cold aisle. This completes the first step in the cycle.

Now that the heat has been removed from the server room and passed to the refrigerant, it must go somewhere, and that is where the compressor comes in. The warm refrigerant vapor is compressed into a hot, high-pressure gas and pushed to the condenser, where its heat is rejected to the outside air. This allows the refrigerant to condense back into a liquid and return to the evaporator to start the process all over again. And again, this part of the cycle is accomplished in different ways depending on the type of equipment installed.

If a CRAC unit is installed, the evaporator and compressor are on the computer room floor, a remote condenser is placed outside, and fans extract the heat from the refrigerant there. Where AHUs are used, only the evaporator is typically on the raised floor; a remote compressor and condenser chill water that is pumped to the AHUs on the raised floor. These chilled-water systems actually have two separate cooling loops, isolating the inside and outside portions of the refrigerant cycle. They are used in larger, denser data centers because they allow for more efficient control of temperatures.
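The same arithmetic works for the chilled-water loop feeding those AHUs: the water flow you need scales with the heat load and the temperature rise across the coils. Here's a quick Python sketch with assumed, typical values:

# Rough chilled-water flow sizing: Q = m_dot * cp_water * dT, with assumed values.
CP_WATER = 4186.0   # J/(kg*K), approximate specific heat of water

def chilled_water_flow_kg_per_s(heat_w, delta_t_c=6.0):
    """Mass flow of water needed to absorb heat_w watts with a delta_t_c rise."""
    return heat_w / (CP_WATER * delta_t_c)

# A pod rejecting 500 kW through its AHU coils with an assumed 6 C rise:
print(f"{chilled_water_flow_kg_per_s(500_000):.1f} kg/s of water")   # ~19.9 kg/s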

Like I said, this is a simplified explanation of data center cooling, but it lays the groundwork for a more in-depth look at your specific systems.

-John

February 24, 2011

A Crash Course in CRAC Units - Data Center Cooling

In the past few weeks, we've fielded a few questions from our Twitter followers about temperatures in our data center and how CRAC units work. John mentioned in the "Building a Data Center" series that his next post would be about keeping the data center cool, so I'll try not to steal too much thunder from him by posting a basic CRAC unit explanation to answer those questions.

To record this video, we made the long walk (~2 minutes) downstairs to Pod 1 of SoftLayer's DAL05 facility to give you a first-hand look at the star of the show: the DC Computer Room Air Conditioning Unit. Because this was recorded on a "Truck Day" at SoftLayer, the pod was bustling with activity, so we found a "quiet" open area in a section of the pod that will soon be filled with new servers to record the video.

Due to the ambient noise in the data center, my explanation had to be "yelled," so please forgive the volume.

What else do you want to see/learn about in SoftLayer's data centers?

-@khazard

July 22, 2008

Always Awake, Cool and Dry

As I turn onto the main road after leaving my Kumdo dojang (Korean fencing school), I glance in the rearview mirror, down the street in the direction of SoftLayer's new East Coast datacenter. The strangely cool, red light from the setting sun fills the mirror and signals the end of this long, hot day. My mind briefly escapes the fading heat by recalling the cool, temperature- and humidity-regulated environs within the datacenter.

Ever wonder how to keep thousands of servers cool? In a word: CRAC - Computer Room Air Conditioning. These giants sit throughout the datacenter pumping cool air up through ventilated floors. The cool air blows up in front of the server racks, gets sucked in through the front of the servers, over the drives, past the CPU heat sinks and RAM, then out the back of the server. The warm air exits, rises, and returns to the CRACs where the humidity and temperature are adjusted, and the cycle continues. Just like you learned in science class.

So it must be a serene, sterile environment, like those IBM commercials? That would be nice, but the reality is: computers need fans. One or two fans wouldn't bother anyone when they kick in on your gaming PC, but multiply 4 or 5 fans (do you like RAID arrays? You get extra fans!) by a thousand or more and the decibels add up. Solid-state drives, when they become available, might help with the noise (and with power consumption), but most of the noise comes from the server fans. Liquid cooling works, but I think most people would prefer not to have fluid of any sort circulating over their motherboard. Zane (resident Linux guru) extols the benefits of passive cooling. Whatever cooling solutions arise in the future, you can be sure SoftLayer will be leading in technology implementation.

My attention returns to the road ahead and the pale blue of the evening sky. I hope to get a few hours of shut-eye before returning for my shift. Because SoftLayer doesn't sleep. Always awake, cool and dry.

-Philip
