Building a Data Center | Part 2: The Absence of Heat

Posted by John Martin in Business, Infrastructure, SoftLayer
As you walk down the cold aisle in a data center, you might be curious about how all that cold air gets there. Like the electrical system, data center cooling travels a path through the data center that relies on many integrated systems working together to achieve the desired result.
To start, I should give a crash course in Heating, Ventilating, and Air Conditioning (HVAC). The most important thing to understand in HVAC theory is that cold is the absence of heat. When you say you’re cooling a space, you’re not adding cold air; rather, you’re removing heat. Heat is removed in a cycle called the refrigerant cycle. The refrigerant cycle is present in all air conditioning systems and is made up of four main components:
- Refrigerant: Refrigerants are engineered chemicals developed to have very specific boiling and condensation temperatures. They come in many different flavors with cryptic names like R-410A, R-22, and even plain water.
- Compressor: Compresses refrigerant vapor, turning warm, low-pressure gas into hot, high-pressure gas. This compression drives the movement of heat and refrigerant through the system.
- Evaporator: Evaporators are heat exchangers (devices built for efficient heat transfer from one medium to another), so an evaporator passes heat from the air to the refrigerant.
- Condenser: Condensers are also heat exchangers. The condenser releases the heat trapped in the refrigerant outside the space being cooled.
This is a very simplified explanation of the refrigerant cycle components, and I only mention these four components or steps because they are common to all HVAC systems regardless of size or type, and you can apply them to any data center cooling system (or even a residential system).
Which came first – the chicken or the egg? As with that old riddle, picking a starting point for our cooling cycle isn’t easy, so let’s start at the source of heat. Your server uses electrical energy to process information in the CPU, turn the spindles of hard drives, and light up the pretty little LEDs on the chassis. All that conversion of electrical energy into useful work creates heat as a side effect. Remember that we have to remove heat to cool something, so a server’s cooling system acts like a heat exchanger, extracting heat from its components and passing that heat to cooler air entering the front of the server. That heat is rejected from the back of the server into the hot aisle.
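Since essentially all of a server’s electrical draw ends up as heat, you can estimate a cooling load straight from power consumption. A quick sketch using the standard conversion factor (1 watt ≈ 3.412 BTU/hr – a rule of thumb, not a figure from this article; the 5 kW rack is a made-up example):

```python
BTU_PER_WATT_HR = 3.412  # standard conversion: 1 W of electrical load ~ 3.412 BTU/hr of heat

def heat_load_btu_hr(power_watts: float) -> float:
    """Heat a cooling system must remove for a given electrical draw.
    Assumes essentially all electrical energy ends up as heat."""
    return power_watts * BTU_PER_WATT_HR

# A hypothetical 5 kW rack:
print(heat_load_btu_hr(5000))  # 17060.0 BTU/hr
```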
When the heat is exhausted into the hot aisle, it is pulled into the evaporator, which goes by different names depending on how it performs its function. CRACs – Computer Room Air Conditioners – and AHUs – Air Handling Units – are two of the common terms we use. Regardless of what they are called, they perform the same function: removing heat from the hot-aisle air (called return air) and supplying cooled air to the cold aisle. This completes the first step in the cycle.
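How much air a CRAC or AHU has to move depends on the heat load and the temperature difference between the return (hot aisle) and supply (cold aisle) air. A common sensible-heat approximation for air at sea level – again, a rule of thumb I’m supplying, not from the article – is Q = 1.08 × CFM × ΔT(°F):

```python
def required_airflow_cfm(heat_btu_hr: float, delta_t_f: float) -> float:
    """Airflow (cubic feet per minute) needed to carry a sensible heat
    load across a return/supply temperature difference, using the
    sea-level approximation Q = 1.08 * CFM * dT."""
    return heat_btu_hr / (1.08 * delta_t_f)

# Hypothetical: a 17,060 BTU/hr rack with a 20 F hot/cold aisle spread:
print(round(required_airflow_cfm(17060, 20)))  # 790 CFM
```

Notice the tradeoff the formula exposes: a wider hot/cold aisle temperature spread lets the same fans carry more heat.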
Now that the heat has been removed from the server room and passed to the refrigerant, it must go somewhere, and that is where the compressor comes in. Warm refrigerant vapor is compressed into a hot, high-pressure gas, and that compression forces it to travel to the condenser, where the heat is rejected to the outside air. This allows the cooled refrigerant to condense and return to the evaporator to start the process all over again. And again, this part of the cycle is accomplished in different ways depending on the type of equipment installed.
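One consequence worth noting: the condenser must reject more heat than the evaporator absorbed, because the compressor’s own electrical work also ends up in the refrigerant as heat. A hedged sketch (the 1 kW ≈ 3,412 BTU/hr conversion is standard; the example numbers are invented):

```python
def condenser_load_btu_hr(evaporator_load_btu_hr: float, compressor_kw: float) -> float:
    """Heat the condenser must dump outside: everything the evaporator
    pulled from the room, plus the compressor's work, which also ends
    up in the refrigerant as heat (1 kW ~ 3412 BTU/hr)."""
    return evaporator_load_btu_hr + compressor_kw * 3412

# Hypothetical: 17,060 BTU/hr of room load moved by a 1.5 kW compressor:
print(condenser_load_btu_hr(17060, 1.5))  # 22178.0 BTU/hr
```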
If a CRAC unit is installed, the evaporator and compressor are on the computer room floor, a remote condenser is placed outside, and fans extract the heat from the refrigerant. Where AHUs are used, only the evaporator will typically be on the raised floor; these systems use a remote compressor and condenser to send chilled water to the AHUs on the raised floor. These chilled-water systems actually use two separate loops, isolating the inside and outside portions of the refrigerant cycle. They are used in larger, denser data centers because they allow more efficient control of temperatures in the data center.
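Sizing the chilled-water side works much like sizing airflow, just with water as the heat-carrying medium. A common approximation for water at typical conditions – my rule of thumb, not the article’s – is Q = 500 × GPM × ΔT(°F):

```python
def chilled_water_gpm(heat_btu_hr: float, delta_t_f: float) -> float:
    """Water flow (gallons per minute) needed to move a heat load across
    a supply/return temperature difference, using the approximation
    Q = 500 * GPM * dT for water at typical conditions."""
    return heat_btu_hr / (500.0 * delta_t_f)

# Hypothetical: a 341,200 BTU/hr (roughly 100 kW) room, 12 F water delta-T:
print(round(chilled_water_gpm(341200, 12), 1))  # 56.9 GPM
```

Compare the two constants (500 vs. 1.08): water carries far more heat per unit of flow than air, which is one reason chilled-water systems scale better in dense data centers.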
As I said, this is a simplified explanation of data center cooling, but it lays the groundwork for a more in-depth look at your specific systems.