Author Archive: John Martin

March 9, 2011

Building a Data Center | Part 2: The Absence of Heat

As you walk down the cold aisle in a data center, you might be curious about how all that cold air gets there. Like the electrical system, cooling follows a path through the data center, relying on many integrated systems working together to achieve the desired result.

To start, I should give a crash course in Heating, Ventilating, and Air Conditioning (HVAC). The most important thing to understand in HVAC theory is that cold is the absence of heat. When you say you're cooling a space, you're not adding cold; rather, you are removing heat. That removal happens in a loop called the refrigerant cycle, which is present in all air conditioning systems and is made up of four main components:

  • Refrigerant: Refrigerants are engineered chemicals developed to have very specific boiling and condensation temperatures. They come in many different flavors with cryptic names like R-410A and R-22; even plain water is used in some systems.
  • Compressor: Compresses refrigerant vapor, turning it from a warm, low-pressure gas into a hot, high-pressure gas. This compression drives the movement of heat and refrigerant through the system.
  • Evaporator: Evaporators are heat exchangers (devices built for efficient heat transfer from one medium to another); the evaporator absorbs heat from the air into the refrigerant, boiling it from a liquid into a gas.
  • Condenser: Condensers are also heat exchangers. The condenser releases the heat trapped in the refrigerant outside the space being cooled, condensing the gas back into a liquid.

This is a very simplified explanation of the refrigerant cycle components, and I only mention these four components or steps because they are common to all HVAC systems regardless of size or type, and you can apply them to any data center cooling system (or even a residential system).
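
To make the loop concrete, here's a small Python sketch, purely illustrative and mine rather than anything from a real controller, that walks the refrigerant through the hardware above:

    # Purely illustrative walk through the refrigerant cycle described above.
    # State labels are qualitative; a real system tracks pressures and temperatures.
    CYCLE = [
        # (component, state entering, state leaving, effect on heat)
        ("evaporator", "cool liquid", "warm low-pressure gas",
         "absorbs heat from the room air"),
        ("compressor", "warm low-pressure gas", "hot high-pressure gas",
         "raises pressure, pushing heat along"),
        ("condenser", "hot high-pressure gas", "cool liquid",
         "rejects heat to the outside air"),
    ]

    def run_cycle(passes: int = 2) -> None:
        for n in range(1, passes + 1):
            print(f"--- pass {n} ---")
            for component, entering, leaving, effect in CYCLE:
                print(f"{component:>10}: {entering} -> {leaving} ({effect})")

    run_cycle()

Each pass of the loop ends where it began, which is the point: heat keeps moving from inside to outside as long as the refrigerant keeps circulating.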

Cooling System

Which came first, the chicken or the egg? Like that old riddle, picking a starting point for our cooling cycle isn't easy, so let's start at the source of the heat. Your server uses electrical energy to process information in the CPU, turn the spindles of hard drives, and light up the pretty little LEDs on the chassis. All that conversion of electrical energy into useful work creates heat as a side effect. Remember that we have to remove heat to cool something, so a server's cooling system acts like a heat exchanger, extracting heat from its components and passing that heat to the cooler air entering the front of the server. That heat is rejected from the back of the server into the hot aisle.
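
To put rough numbers on that heat (my own back-of-the-envelope figures, not from any specific facility): essentially every watt a server draws ends up as heat, one watt is about 3.412 BTU/hr, and one "ton" of cooling is defined as 12,000 BTU/hr:

    # Rough heat-load math: nearly every watt of IT power becomes heat.
    # The conversions are standard; the 5 kW rack is just an example figure.
    WATTS_TO_BTU_HR = 3.412   # 1 watt ~= 3.412 BTU/hr
    BTU_HR_PER_TON = 12_000   # 1 ton of cooling = 12,000 BTU/hr

    def rack_heat_load(watts: float) -> tuple[float, float]:
        """Return (BTU/hr, tons of cooling) for a given electrical load."""
        btu_hr = watts * WATTS_TO_BTU_HR
        return btu_hr, btu_hr / BTU_HR_PER_TON

    btu, tons = rack_heat_load(5_000)  # hypothetical 5 kW rack
    print(f"5 kW rack rejects ~{btu:,.0f} BTU/hr (~{tons:.1f} tons of cooling)")

So a single well-loaded rack can demand more than a ton of cooling all by itself.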

When the heat is exhausted into the hot aisle, it is pulled into the evaporator, which we call different things depending on how it performs its function. CRACs (Computer Room Air Conditioners) and AHUs (Air Handling Units) are two of the common terms we use. Regardless of what they are called, they perform the same function: removing heat from the hot aisle (the return air) and supplying cooled air to the cold aisle. This completes the first step in the cycle.
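
For sizing purposes, a common rule of thumb for the sensible heat an air handler removes at sea level is Q (BTU/hr) ≈ 1.08 × airflow (CFM) × ΔT (°F), where ΔT is the return-air temperature minus the supply-air temperature. A quick sketch with made-up example values:

    # Rule-of-thumb sensible heat equation for air at sea level:
    #   Q (BTU/hr) ~= 1.08 * airflow (CFM) * delta-T (deg F)
    # The 1.08 factor folds in standard air density and specific heat.
    def sensible_heat_btu_hr(cfm: float, return_f: float, supply_f: float) -> float:
        return 1.08 * cfm * (return_f - supply_f)

    # Example values only: a 12,000 CFM unit, 95 F return air, 68 F supply air.
    q = sensible_heat_btu_hr(12_000, 95, 68)
    print(f"Heat removed: ~{q:,.0f} BTU/hr (~{q / 12_000:.0f} tons)")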

Now that the heat has been removed from the server room and passed to the refrigerant, it must go somewhere, and that is where the compressor comes in. Warm refrigerant gas leaving the evaporator is compressed into a hot, high-pressure gas, and that pressure pushes it to the condenser, where the heat is rejected into the outside air. Shedding that heat allows the refrigerant to condense back into a liquid and return to the evaporator to start the process all over again. And again, this part of the cycle is accomplished in different ways depending on the type of equipment installed.

If a CRAC unit is installed, the evaporator and compressor are on the computer room floor, a remote condenser is placed outside, and fans extract the heat from the refrigerant. Where AHUs are used, typically only the evaporator is on the raised floor; these systems use a remote compressor and condenser (a chiller) to send chilled water to the AHUs. Chilled water systems actually have two separate heat-transfer loops, isolating the inside and outside portions of the refrigerant cycle. They are used in larger, denser data centers because they allow for more efficient control of temperatures.

Like I said, this is a simplified explanation of data center cooling, but it lays the groundwork for a more in-depth look at your specific systems.

-John

February 21, 2011

Building a Data Center | Part 1: Follow the Flow

The electrical distribution system in a data center is an important concept that many IT professionals overlook. Understanding the basics of your electrical distribution system can save downtime and aid in troubleshooting power problems in your cabinets. It's easy to understand if you follow the flow.

As with many introductory lessons in electricity, I will use the analogy of a flowing river to help describe the flow of electricity in a data center. The river is akin to the wires, the water pressure is the voltage, and the rate at which the water flows is the current, also known as amps. So, when looking at an electrical system, think about a flowing river and the paths it must take to get from its source to the ocean.

External Power Sources
The preferred source of electrical power is delivered to a data center by the local utility company. Once that utility power enters the building, its first stop is usually the ATS, or Automatic Transfer Switch. This electro-mechanical device is fed power from two or more sources: a primary and an emergency. While the primary source is available, the ATS sits happily and passes power to a series of distribution breakers, often called "switch gear." These large breakers are designed to carry hundreds or thousands of amps and pass that power to your uninterruptible power supply (UPS) units and other facility infrastructure: lighting, HVAC, fire/life safety systems, etc.

If the primary source becomes unavailable, the ATS triggers the emergency source. In our data center example, that means our on-site generators start up. It typically takes between 9 and 12 seconds for the generators to come up to speed and allow full power generation. Once the ATS sees that the generators have started and are ready to supply power, it will switch the load from the primary source to the emergency source. This is called an open transition because the load is removed from the primary source before it is connected to the emergency source.
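
The key detail is that an open transition is break-before-make: the load is briefly connected to nothing while the switch moves. Here is a simplified sketch of the sequencing; the timings and names are hypothetical, not from any real ATS controller:

    import time

    # Simplified, illustrative open-transition (break-before-make) sequence.
    # Timings and names are hypothetical, not from a real ATS controller.
    GEN_START_SECONDS = 12  # generators typically need ~9-12 s to reach speed

    def open_transition_to_generator() -> None:
        print("Primary source lost: starting generators...")
        time.sleep(GEN_START_SECONDS)       # UPS batteries carry the IT load here
        print("Generators at speed and voltage.")
        print("Opening primary breaker...")   # break...
        print("Closing emergency breaker.")   # ...then make
        print("Load now on generator power.")

    open_transition_to_generator()

During those dozen seconds, nothing upstream is feeding the IT load, which is exactly the gap the UPS batteries in the next section exist to cover.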

UPS Units
Once the power leaves the ATS and switch gear, it no longer matters whether you are connected to the primary or the emergency source. The next step in the power flow is the UPS. Like a dam, the UPS system takes an untamed river and transforms it into something safe and usable: an uninterruptible source of power for your server cabinet.

This is achieved by a bank of batteries sized to support the IT load. The batteries are connected in-line with the supply and load, so while the ATS senses a utility outage and starts the emergency generators, the IT load is still supplied power. A typical UPS battery system is designed to support the IT load for a maximum of 10 minutes.
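
The back-of-the-envelope runtime math looks like this (the battery and load figures are made up; real sizing must also derate for inverter efficiency, battery age, and discharge rate):

    # Crude UPS ride-through estimate. The figures below are examples only;
    # real designs derate for inverter losses, battery aging, and discharge rate.
    def runtime_minutes(battery_kwh: float, it_load_kw: float,
                        efficiency: float = 0.92) -> float:
        return (battery_kwh * efficiency / it_load_kw) * 60

    # Hypothetical: an 80 kWh battery string carrying a 400 kW IT load.
    print(f"~{runtime_minutes(80, 400):.1f} minutes of ride-through")  # ~11 min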

Another benefit of the UPS system is its ability to clean the incoming utility power. Utility voltage normally varies wildly depending on what other loads the service is supplying. These voltage fluctuations are detrimental to the power supplies in servers and can shorten their life spans or, worse, destroy them. This is why most home computers have a surge suppressor to prevent power spikes from damaging equipment. UPS units clean electrical power by converting utility power from AC to DC and back to AC again:

(Diagram: UPS double conversion)
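
Conceptually, that AC-to-DC-to-AC round trip (known as double conversion) looks like this toy numerical sketch, which is not a real power-electronics model:

    import numpy as np

    # Toy illustration of double conversion: noisy utility AC is rectified to
    # a DC bus, then the inverter re-synthesizes a clean sine at nominal voltage.
    t = np.linspace(0, 0.1, 6_000)              # 100 ms of a 60 Hz waveform
    peak = 120 * np.sqrt(2)                     # peak volts for 120 VAC RMS
    noise = 1 + 0.08 * np.random.randn(t.size)  # sags, swells, and spikes
    noisy_ac = peak * np.sin(2 * np.pi * 60 * t) * noise

    dc_bus = np.mean(np.abs(noisy_ac)) * np.pi / 2  # rectify + filter to DC
    clean_ac = dc_bus * np.sin(2 * np.pi * 60 * t)  # inverter output: clean sine
    print(f"DC bus ~{dc_bus:.0f} V; output is a clean 60 Hz sine wave")

The noise never makes it past the DC bus, which is why downstream servers see a steady waveform no matter what the utility is doing.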

Power Distribution Units
After protecting and cleaning the power, the UPS sends it on to a group of power distribution units (PDUs). At this point, the voltage will normally be 480vac, which is too high for most IT equipment. The PDU, or a separate transformer, has to convert the 480 volts to a more usable voltage like 120vac or 208vac. Once the voltage is converted, the power is distributed to electrical outlets via common electrical breakers.
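
Those voltages are all related by the three-phase factor of √3: in a wye system, the line-to-neutral voltage is the line-to-line voltage divided by √3, which is why 480 V service has 277 V to neutral and a 208 V PDU panel delivers the familiar 120 V. A quick check:

    import math

    # Line-to-neutral = line-to-line / sqrt(3) in a three-phase wye system.
    for line_to_line in (480, 208):
        line_to_neutral = line_to_line / math.sqrt(3)
        print(f"{line_to_line} V line-to-line -> "
              f"{line_to_neutral:.0f} V line-to-neutral")
    # 480 V -> 277 V, 208 V -> 120 V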

PDU technology, like all data center equipment, has advanced from simple breaker panels to complex devices capable of measuring IT loads, balancing loads, monitoring alarms and faults, and even switching between two power sources instantly during an outage.

Power Strip
The final piece of equipment in the data center electrical system before your server is a power strip. Power strips are often mistakenly referred to as PDUs. The power strip is mounted in a cabinet and contains multiple electrical outlets, not electrical breakers. You plug the server power cord into the power strip, not the PDU. And from here, the flow of electricity finally reaches the sea of servers.

Here's a basic diagram of a data center electrical distribution system:

Simplified Data Center Power Architecture

Our data centers are complex, and the entire building infrastructure is critical to their continuous operation. The electrical distribution system is at the heart of any critical facility, and it's vital that everyone working in and around critical sites knows at least these basics.

In Part 2 of our "Building a Data Center" series, we'll cover how we keep the facility cool.

-John
