Keeping things “cool enough” is a major challenge for many designs. Whether it’s a component, board, or overall system (or some combination), the issues related to heat and keeping the temperature below a critical threshold are often key drivers of the overall design. Depending on the application and market, the allowable ranges are generally designated as commercial (0°C to 85°C); industrial (−40°C to 100°C); automotive (−40°C to 125°C); and military (−55°C to 125°C). While operation at the low end can be an issue (think of automotive or mil/aero), for most engineers it is the high end that is of concern.
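As a quick illustration of those grade ranges, here is a minimal sketch (the function and dictionary names are my own, not an industry standard) that checks which grades fully cover a design’s required operating range:

```python
# Standard temperature grades from the article, as (min, max) in degrees C.
GRADES = {
    "commercial": (0, 85),
    "industrial": (-40, 100),
    "automotive": (-40, 125),
    "military": (-55, 125),
}

def grades_covering(t_min, t_max):
    """Return the grades whose range fully covers [t_min, t_max] in deg C."""
    return [g for g, (lo, hi) in GRADES.items() if lo <= t_min and t_max <= hi]

# A design that must survive -40 C to +90 C rules out commercial grade:
print(grades_covering(-40, 90))  # -> ['industrial', 'automotive', 'military']
```

The check is deliberately strict: a grade qualifies only if the entire required range fits inside it, which is why commercial parts drop out as soon as the low end goes below 0°C.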
So, what to do? The usual “keep it cool” tools are a combination of conduction, convection, and radiation, with “natural” air convection often preferred due to low cost and reliability (no moving parts) and forced, fan-driven air as the second choice. Even so, use of airflow-based convection as the primary heat carrier is often insufficient due to thermal or user issues.
But convection is not limited to use of air, whether natural or forced. Liquid cooling can move large amounts of heat, and efficiently transfer it to that magical place known as “away” where heat is no longer an issue. The problem is that the mere mention of liquid cooling can make engineers cringe, and with good reason. For most designers, the reaction to the thought of liquid cooling is quick and firm: “no way…too many reasons to not consider it.” Frankly, that’s sensible thinking in most cases, as liquid cooling mandates very different types of mechanical design, assembly, test, and maintenance processes, for obvious reasons.
Liquid cooling can be limited to a single component, as gamers do with their overclocked PC processors, Figure 1.
Liquid cooling is often used in overclocked PC-based gaming systems, either with improvised arrangements or commercially available systems such as this one. (Image source: EK Fluid Gaming)
It can also be routed to multiple ICs on a board, Figure 2.
Liquid cooling is also used across the many hot ICs in a high-performance PC board, although the required “plumbing” can become complicated and impacts mechanical design as well as assembly and test. (Image source: Lenovo)
Nor does it have to go directly to the PC board itself: some approaches primarily use conduction to pull the heat from the PC board out to the card-cage rails, and then use liquid cooling for the rails and cage assembly, Figure 3.
Some card-cage systems, such as this PowerBlock from Mercury Systems, use conduction to bring the card heat to the card edges and the card cage and then use liquid cooling for the cage only, while the PC cards themselves have no plumbing. (Image source: I4U LLC)
Despite the difficult reputation of liquid cooling, a good engineer will consider the options, and also reconsider them. A general guide to liquid cooling says that the threshold between forced air and liquid approaches seems to be around 300 to 400 watts per board in a multiboard chassis. Technology doesn’t stand still, and some implementations that were once impractical can take on a new life as components and techniques advance and improve, as pointed out in the article “Thermal management for high-performance embedded computing” in a recent issue of Military & Aerospace Electronics.
While the article admits that “liquid cooling has historically been one of the most-expensive and least-reliable electronics cooling techniques available,” it noted that “the added expense can be worth it.” It also made the case that the conventional wisdom about liquid cooling may no longer be correct, as “the previous problems of leaking from quick disconnects and other sources have been greatly reduced.”
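That 300 to 400 watts-per-board rule of thumb lends itself to a simple first-pass check. The sketch below is only an illustration of the guideline as stated above; the function name and the three-way decision labels are my own:

```python
# First-pass cooling triage based on the ~300-400 W/board rule of thumb.
# The thresholds are the article's guideline, not hard engineering limits.
def cooling_suggestion(chassis_watts, board_count,
                       air_limit_w=300, liquid_threshold_w=400):
    per_board = chassis_watts / board_count
    if per_board < air_limit_w:
        return "forced air"
    if per_board <= liquid_threshold_w:
        return "borderline: evaluate both"
    return "consider liquid cooling"

# Example: a 6-board chassis dissipating 2.7 kW works out to 450 W/board.
print(cooling_suggestion(2700, 6))  # -> consider liquid cooling
```

In practice the decision also hinges on altitude, ambient temperature, acoustic limits, and serviceability, so a result of “borderline” really does mean running the full thermal analysis both ways.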
So, do we go with liquid cooling? On one hand, the heat transfer it offers is impressive. On that proverbial other hand, the realities of the design-in issues it brings, along with the consequences of a leak, can’t be ignored. It’s definitely not for most mass-market consumer electronics (although some high-volume consumer products, such as internal combustion engines, do use it, and that has worked out pretty well).
Or, you could take liquid cooling to the extreme, and let your data center/server farm be packed into a self-contained “pod” that rests in the ocean. Microsoft is exploring that approach, as discussed in two IEEE Spectrum articles, “Want an Energy-Efficient Data Center?: Build It Underwater” and “20,000 Leagues Under the Cloud” as well as at the Microsoft “Project Natick” site, Figure 4. (Of course, that does give new meaning to the phrase “trying to boil the ocean,” meaning a project’s goals are way too ambitious.)
For the ultimate in liquid cooling, use the ocean, as Microsoft is exploring with their Project Natick, which puts a complete, remotely managed data center in a submersible watertight pod about the size of a truck trailer. (Image source: Microsoft)
Despite its acknowledged difficulties and issues, have you ever used or considered using liquid cooling for your projects? How did that work out?