Liquid cooling system in a data centre. Image: Rittal

By Herb Villa, Snr. Applications Engineer, Rittal North America LLC                   

Downtime is the ultimate enemy for data centres. One frightening weapon used by this enemy is heat. As the demands in data centres grow, the instances of heat taking down a smooth-running facility only continue to increase. In other words, the heat is on.

The battle continues as various IT components consume more power, with CPU thermal design power (TDP, the maximum heat a chip is designed to dissipate) expected to push 400 watts soon. As memory demands grow (terabytes per server) and modern IT workloads skyrocket, cooling server hardware with air alone is becoming impossible. Thus – liquid cooling.

Liquid cooling is nothing new. Many IT professionals in traditional data centres have implemented it in some form for at least a decade. So, why is it becoming such a favourable option now? And – at the risk of opening an even greater debate – just what exactly are we talking about when we use the very broad term “Liquid Cooling”?

The conversation that continues below uses the more commonly (and recently) accepted definition of liquid cooling – heat removal, using a liquid medium, at a chip, chassis or component level. This updated definition does not include row/room-based solutions that use air/water or air/refrigerant heat exchange. Even though they use a ‘liquid’, they are separate systems, and separate conversations.

The “new normal” changed the conversation about liquid cooling

Being 2+ years into a global pandemic has created the “new normal.” As in any industry, IT professionals have pivoted, learned and adapted to different business trends.

Trends, including within the data centre space, force change in one way or another. What is clear at this point? Traditional methods (removing heat by mixing cold air with hot air to reach an appropriate temperature) are becoming extinct.

Real-world experience has made the problems clear: air-based cooling infrastructure suffers from rising energy costs, high maintenance costs, space constraints, and hardware that is vulnerable to pollutants. Most important of all, air-based cooling fundamentally cannot keep pace with high-density data centre demands, for three specific reasons:

  1. Component Power — As mentioned, CPU TDP has risen significantly and continues to climb (400 watts may be commonplace by 2025), memory is exploding (servers with a terabyte of memory!), and graphics processing units (300-400 watts) along with AI workloads all generate substantial heat, pushing server utilization and the associated thermal loads to levels never seen to date
  2. Edge Computing — Deploying compute and network infrastructure as close as possible to where data is generated is the advantage of the Edge. Whether in a single standalone enclosure or a local Spine data centre cluster, data-heavy applications connected to high-performance machines demand continuous analytics and require significant processing power and cooling
  3. Regulatory Change and Sustainability — Possible future regulations involving power usage effectiveness (data centres not exceeding certain limits) continue the push toward “net zero,” with some sights set for just two decades away. This moves the needle closer toward liquid cooling
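
To see why these component trends overwhelm air cooling, it helps to run the numbers. The sketch below uses the wattages cited above; the server and rack configuration (2 CPUs, 4 GPUs, 20 servers per rack, plus a fixed per-server overhead) is a hypothetical assumption for illustration only.

```python
# Back-of-the-envelope rack heat load using the component figures cited
# above. The server and rack configuration here is hypothetical, chosen
# only to illustrate the arithmetic.
CPU_TDP_W = 400   # per-CPU thermal design power cited in the article
GPU_TDP_W = 350   # midpoint of the 300-400 watt GPU range

def server_heat_w(cpus=2, gpus=4, overhead_w=500):
    """Heat per server: CPUs + GPUs + memory/drives/fans overhead."""
    return cpus * CPU_TDP_W + gpus * GPU_TDP_W + overhead_w

def rack_heat_kw(servers=20):
    """Total heat a fully populated rack must shed, in kilowatts."""
    return servers * server_heat_w() / 1000

print(rack_heat_kw())  # 54.0 kW -- far beyond what air alone can remove
```

Even with generous assumptions, a dense rack lands in the tens of kilowatts, which is where air-based approaches run out of headroom and liquid cooling becomes necessary.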

Barriers to wider data centre liquid cooling deployment

With all the benefits being so obvious, why isn’t liquid cooling everywhere by now? Well, there are disadvantages to any system, and liquid cooling is no different.

Adapting to a new system is a big one. Many enterprise and commercial users operate data centres built in the past 10-15 years, and they cannot simply drop current cooling methods and instantly modify them for liquid cooling systems. From a sustainability, performance (CPUs, GPUs, etc.) and total cost of ownership outlook, liquid cooling infrastructure is clearly the wise choice. It is simply difficult to scale out this “new” technology.

Fear and unfamiliarity together form another barrier. Will the fear of a leak bringing down an entire rack (or several racks) scare off otherwise highly experienced IT professionals? Are plumbers now going to play a significant role in data centre design and planning? Do the non-water liquids in some liquid cooling systems evaporate rather than damage components?

Questions and concerns abound, making adaptation over time the key. Changing to liquid cooling throughout an entire data centre will likely take time, possibly years. Existing data centres must retrofit to liquid cooling while new facilities can immediately specify liquid cooling solutions for high demand data centres.

Take a look at this resource for advice specifically for colocation centres.

Liquid cooling options

Speaking of retrofitting to liquid cooling, it is possible to use a hybrid system in which fans moving air assist the liquid cooling, which does the majority of the work. Not only does the air path serve as a backup in case of failure, it can also provide peace of mind as data centre facility managers begin to adopt liquid cooling technology.

The term “liquid cooling” covers multiple methods of heat removal and climate control. In each, either chilled water or refrigerant is the primary heat removal medium at the component level. So, what are the options for full liquid cooling systems?

Immersion cooling uses vats of dielectric fluid, in either single-phase or two-phase immersion, to cool equipment. The advantage is that the system is built from the ground up as part of the data centre’s design. There is no adapting to a legacy system, so there is added flexibility and improved power usage effectiveness (PUE).
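
PUE is simply total facility power divided by the power delivered to IT equipment, so reducing cooling energy lowers it directly. A minimal sketch of that ratio follows; the split between cooling and other overhead is an illustrative assumption, not a figure from this article.

```python
def pue(it_kw, cooling_kw, other_overhead_kw):
    """Power Usage Effectiveness: total facility power / IT power.

    A perfect facility (no overhead) would score 1.0; lower is better.
    """
    return (it_kw + cooling_kw + other_overhead_kw) / it_kw

# Illustrative numbers only: same IT load, far less energy moving air
air_cooled = pue(1000, cooling_kw=400, other_overhead_kw=100)   # 1.5
immersion  = pue(1000, cooling_kw=80,  other_overhead_kw=100)   # 1.18
```

The comparison shows the mechanism, not measured results: whatever the absolute numbers, shrinking the cooling term is what moves PUE toward 1.0.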

Direct to chip cooling (also called cold plate-based cooling) is a more involved system — moving liquids to and from the cold plate — yet it can be retrofitted into existing data centre infrastructure.

The closed loop cooling version of direct to chip is completely built into the rack and sealed. No foundational infrastructure is changed, so the footprint remains the same, and any possible leaks are self-contained in that rack.

In an open loop cooling system, a cooling distribution unit (CDU) pumps cold liquid into the server and hot liquid out, recirculating the liquid through the loop.

Reasons to use liquid cooling vary. The results don’t

The reasons a data centre chooses liquid cooling vary depending on the user’s goals.

  • Is liquid cooling going to reduce our energy bills?
  • Is liquid cooling going to make us more sustainable?
  • Does liquid cooling look good on an ESG report?

Whatever the ultimate reasons for using liquid cooling, the results are impressive. A reduction in power use of up to 40 per cent even takes into account that liquid cooling requires its own motors, pumps, electronics and so on. That is a significant move toward meeting sustainability targets and handling today’s high-density demands.
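
Taking the article’s “up to 40 per cent” figure at face value, the annual impact is easy to estimate. The baseline load below is a hypothetical example, not a Rittal measurement.

```python
def annual_savings_kwh(baseline_kw, reduction=0.40, hours_per_year=8760):
    """Energy saved per year if total power drops by the cited 40%."""
    return baseline_kw * reduction * hours_per_year

# A hypothetical 500 kW facility running year-round:
print(annual_savings_kwh(500))  # 1752000.0 kWh saved per year
```

At any realistic electricity tariff, savings on that scale dominate the added cost of the pumps and CDUs the liquid loop introduces.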

Each situation presents unique cooling challenges. Take a look at this resource and discover more insights & keys to tackling the complexity of IT equipment cooling at the Edge.

This sponsored editorial is brought to you by Rittal. For further information, please visit

©2024 Utility Magazine. All rights reserved

