Water-Cooled Data Center Packs More Power Per Rack

By Frank Blanchard and Ken Michaels, Staff Writers

The network racks in the foreground house all of the Local Area Network (LAN) and Wide Area Network (WAN) hardware. The fiber cables from the 17 ATRF LAN closets, the storage systems, and the servers are fed through the overhead cable trays into the network racks and connected. The racks in the background contain 2 petabytes of tier-two and tier-three disk storage.

Behind each tall, black computer rack in the data center at the Advanced Technology Research Facility (ATRF) is something both strangely familiar and oddly out of place: It looks like a radiator.

The back door of each cabinet is gridded with the coils of the Liebert cooling system, which circulates chilled water to remove heat generated by the high-speed, high-capacity, fault-tolerant equipment.

This passive cooling system eliminates the need for the usual raised floor found in many computing centers, lowers energy costs, and allows roughly a 30 percent increase in power per rack: the cold-air system in Building 430 on the Fort Detrick campus can handle 12 to 15 kilowatts per rack, while the water-cooled Liebert system at the ATRF supports 20 kilowatts per rack.
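
For readers who want to check the arithmetic, the short Python calculation below (purely illustrative) compares the per-rack figures quoted above; against the 15-kilowatt end of the Building 430 range, 20 kilowatts is roughly a one-third increase.

    # Back-of-the-envelope check of the per-rack power figures cited above.
    building_430_kw = (12, 15)   # air-cooled racks in Building 430: 12-15 kW each
    atrf_kw = 20                 # water-cooled Liebert racks at the ATRF: 20 kW each

    for baseline in building_430_kw:
        increase = (atrf_kw - baseline) / baseline * 100
        print(f"{baseline} kW -> {atrf_kw} kW is a {increase:.0f}% increase")
    # Prints: 12 kW -> 20 kW is a 67% increase
    #         15 kW -> 20 kW is a 33% increase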

This cooling system is one of the first of its kind to be installed in the mid-Atlantic region. The water source is a 400-ton chiller and pumps located on the roof of the building above the auditorium. Water and electrical conduit enter the data center through the ceiling and run to each individual computer rack.

“Super” Center

The ATRF data center has one-and-a-half times the computing power and twice the data storage of the data center in Building 430.

The data center’s first priority is to provide networking, storage, and computational support for all of the laboratories and staff located at the ATRF. The resources will be shared among the various groups and, if warranted, made available to other groups and potentially to new partners who wish to collaborate with ATRF researchers.

The computational services include virtualization that will offer dynamic resources on an as-needed basis, along with a batch facility for job-based processing. These jobs have historically been for sequence analysis, computational chemistry, and molecular modeling. The storage facilities will provide multi-tiered resources to accommodate both higher performance and increased capacity, with the ability to scale out as the research dictates.
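
The article does not describe the batch facility's scheduler or tooling, so the Python sketch below is a hypothetical illustration of what "job-based processing" means: analysis jobs are queued and run one after another. The job names and input files are invented for the example.

    from queue import Queue

    # Hypothetical illustration of job-based batch processing; the actual
    # ATRF batch facility and its scheduler are not specified in the article.
    def run_job(job):
        print(f"Running {job['name']} on input {job['input']}")

    batch_queue = Queue()
    batch_queue.put({"name": "sequence-analysis", "input": "sample_reads.fasta"})
    batch_queue.put({"name": "molecular-modeling", "input": "protein_model.pdb"})

    while not batch_queue.empty():
        run_job(batch_queue.get())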

Continuous Operations Ensured

Adjacent to the data center is the motor control room, which houses the Uninterruptible Power System. The system’s size and redundancy allow it to support the 800-kilowatt Phase I facility. The motor control room, in turn, is supported by two Generac generator sets for continuous data center operations in the event of a loss of commercial power. The administration and laboratory wings have separate backup systems for all major cooling and power systems, so that if one cooling or power module fails, there is enough reserve capacity to still meet full demand.
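
The reserve-capacity idea can be stated simply: after losing any single cooling or power module, the remaining modules must still cover the full load. The Python sketch below illustrates that check; the 800-kilowatt Phase I load comes from the article, while the module count and sizes are hypothetical.

    # Illustrative N+1 reserve-capacity check. The 800 kW Phase I figure is from
    # the article; the three 500 kW modules are a hypothetical example.
    full_demand_kw = 800
    module_capacity_kw = [500, 500, 500]

    def survives_single_failure(modules, demand):
        # Remaining capacity must cover demand after removing any one module.
        return all(sum(modules) - m >= demand for m in modules)

    print(survives_single_failure(module_capacity_kw, full_demand_kw))  # True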

A high level of fault tolerance in the network, storage, and servers is incorporated into the design of the data center. All components have dual paths for power, network, and storage area network connectivity.
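
As a rough illustration of dual-path connectivity, the Python sketch below models each service with two independent paths, so losing one path still leaves the other available. The path labels and the failure scenario are hypothetical.

    # Hypothetical dual-path illustration: each service (power, network, SAN)
    # has an A path and a B path; a single path failure leaves service intact.
    paths = {"power": ["A", "B"], "network": ["A", "B"], "san": ["A", "B"]}
    failed = {"power-A"}  # suppose the A-side power feed is lost

    def available(service):
        return [p for p in paths[service] if f"{service}-{p}" not in failed]

    for service in paths:
        print(service, "still available via", available(service))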

All of the equipment in the data center and the motor control room has monitoring capability that was not available in the past. The Building Automation System can monitor equipment status and generate alarms, a capability it has only recently been able to provide for the operations in Building 430.

The Unified Communications system will provide all phone, conference-call, and Webex services, taking advantage of the completely digital TCP/IP-based network.

The ATRF could house as many as 20,000 cores and 20 petabytes of data (roughly the amount of data processed by Google in one day*). A core is the current term for what was once called a central processing unit, or CPU, of which each computer once had just one. The initial installation involved 684 cores.
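
To put those numbers in perspective, the short, purely illustrative calculation below converts 20 petabytes into bytes (using decimal prefixes) and shows how small the initial 684-core installation is relative to the potential 20,000-core build-out.

    # Illustrative arithmetic using the figures quoted above.
    petabyte = 10**15                       # bytes, decimal (SI) prefix
    capacity_pb = 20
    initial_cores, potential_cores = 684, 20_000

    print(f"{capacity_pb} PB = {capacity_pb * petabyte:,} bytes")
    print(f"Initial cores: {initial_cores / potential_cores:.1%} of the potential build-out")
    # Prints: 20 PB = 20,000,000,000,000,000 bytes
    #         Initial cores: 3.4% of the potential build-out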

The ATRF and Building 430 data centers are connected using Dense Wavelength Division Multiplexing (DWDM), which can provide as much as 32 times the normal capacity of a fiber optic line.
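
DWDM reaches that "32 times" figure by carrying many independent wavelengths over a single fiber pair. The article does not give a per-channel data rate, so the illustrative calculation below assumes a hypothetical 10 gigabits per second per wavelength.

    # Illustrative DWDM capacity arithmetic. The 32x multiplication factor is
    # from the article; the 10 Gb/s per-channel rate is a hypothetical assumption.
    channels = 32
    per_channel_gbps = 10
    print(f"Aggregate capacity: {channels * per_channel_gbps} Gb/s over one fiber pair")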

Double Current Capacity Possible

If future funding becomes available, the number of generators, uninterruptible power supplies, and power distribution units will be doubled. Provision for this expansion has already been made in the form of pre-poured generator slabs, extra conduit, a support rack for an additional 400-ton chiller, additional floor space, and specialized air handlers.

*http://mozy.com/blog/misc/how-much-is-a-petabyte/.