Our digital lifestyles are driving relentlessly increasing demand for electricity. US data centers could consume around 140 billion kWh by 2020, with small to mid-size corporate data centers identified as far less energy efficient than hyper-scale cloud-computing infrastructure.
One could not accuse the data-center industry, or the high-tech sector in general, of ignoring the issue: data centers feel the effects of their huge energy consumption with each utility bill. Improving energy efficiency to reduce environmental impact is a central goal of the Open Compute Project, which is seeking to change the way IT-infrastructure hardware is designed to minimize burdens on owners, operators, and the planet.
There are many ways energy consumption can be reduced in an electronic system, including redesigning circuits and components to do the same work with less energy, and reducing energy "losses" usually dissipated as heat. Reducing losses has a double benefit, not only saving the expense of wasted energy but also relieving the need to actively cool the circuitry. However the cooling is done – by fans or water pumps, for example – extra engineering, extra equipment, and, of course, extra energy to run the cooling systems are all needed.
On the circuit boards of today’s data-center servers, as well as other power-hungry systems such as industrial power supplies, digital power is taking over from conventional “analog” power supplies, helping to minimize the energy dissipated in converting distribution voltages down to the low DC voltages needed by loads such as processors, memory, FPGAs, and network-interface components. Today’s boards typically have multiple power rails, at voltages ranging from below 1V for the most advanced nanometer-scale ICs, to 3.3V, 5V, or 12V for other components and I/Os.
One of the most difficult aspects of designing a conventional switched-mode converter is determining the values of the feedback-loop components needed to optimize the power supply’s response and stability across a range of operating conditions. Because those component values are fixed, designers are forced toward a “best-fit” solution: the design can only operate at its best efficiency within a narrow operating range, and efficiency at other load conditions is compromised. Digital techniques now allow power supplies to change their parameters, optimizing performance and efficiency at multiple points in their operating range from no load through low, mid, and high load.
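The parameter-switching idea can be sketched as simple gain scheduling: the digital controller picks its compensation coefficients from a table indexed by load level, something a fixed analog compensation network cannot do. The break-points and coefficient values below are invented for illustration and are not taken from any real converter.

```python
# Illustrative gain-scheduling sketch for a digitally controlled
# converter. Each table row gives hypothetical loop coefficients
# tuned for a band of the load range.

# (load_fraction_upper_bound, proportional_gain, integral_gain)
COMPENSATION_TABLE = [
    (0.1, 0.8, 120.0),   # light load: slower loop, lower losses
    (0.5, 1.5, 300.0),   # mid load: balanced response
    (1.0, 2.4, 500.0),   # heavy load: fastest transient response
]

def select_compensation(load_fraction):
    """Return (kp, ki) for the present load, given as 0.0-1.0."""
    for upper, kp, ki in COMPENSATION_TABLE:
        if load_fraction <= upper:
            return kp, ki
    return COMPENSATION_TABLE[-1][1:]  # clamp above full load
```

An analog design is stuck with one row of this table for all conditions; a digital controller can re-select a row every few control cycles as the load moves.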
Digital power is becoming more widely understood and implemented to enhance system performance and cost of ownership, as well as to increase efficiency. Adaptive Voltage Scaling (AVS) is an advanced efficiency-boosting technique that leverages the flexibility of digital power. Low-power computing systems and SoCs featuring advanced processors like Intel Skylake or some ARM®-based cores use AVS power management to ensure the core is always supplied with just enough voltage to handle its workload at the lowest clock speed the task allows. The VR13 voltage-regulator specification for Skylake processors requires the regulator to be able to change its nominal output voltage from 1.2V to 0.9V and back “on the fly” – something achieved far more easily with digital techniques than with a conventional analog converter.
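From the host’s point of view, an on-the-fly voltage change is just a new setpoint written over the digital interface. The PMBus specification defines the VOUT_COMMAND command (code 0x21) and its LINEAR16 data format for exactly this; a minimal sketch follows, assuming the common exponent of -12 (a real device must be queried via VOUT_MODE for its actual exponent), with the light-load policy itself invented for illustration.

```python
# Sketch of host-side Adaptive Voltage Scaling: encode a target core
# voltage into PMBus LINEAR16 for the VOUT_COMMAND setpoint.
# VOUT_COMMAND (0x21) and LINEAR16 come from the PMBus spec; the
# exponent of -12 is a common but device-dependent assumption.

VOUT_COMMAND = 0x21  # PMBus command code for target output voltage

def encode_linear16(volts, exponent=-12):
    """Encode a voltage as a LINEAR16 mantissa for the given exponent."""
    mantissa = round(volts / (2 ** exponent))
    if not 0 <= mantissa <= 0xFFFF:
        raise ValueError("voltage out of range for this exponent")
    return mantissa

def avs_setpoint(load_is_light):
    """Hypothetical VR13-style policy: drop to 0.9 V at light load."""
    return encode_linear16(0.9 if load_is_light else 1.2)
```

In a real system the returned mantissa would be written to the regulator with a PMBus word-write of VOUT_COMMAND, and the regulator slews its output to the new voltage without interrupting the load.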
Digital power supplies can also be programmed to respond intelligently to exceptions, such as over-current conditions. Whereas a conventional power supply is usually designed to shut down immediately if a condition such as an over-current is detected, digital control enables short-term or non-serious exceptions to be handled with minimal disruption, without compromising protection against potentially damaging short-circuit currents.
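A programmable fault response of this kind can be expressed as a small decision policy: brief or mild over-currents trigger a limited number of retry (“hiccup”) cycles instead of an immediate latch-off, while a hard short still shuts the rail down at once. The thresholds and retry count below are illustrative values, not taken from any real product.

```python
# Sketch of a programmable over-current policy for a digital power
# supply. All numeric limits here are invented for illustration.

HARD_FAULT_AMPS = 30.0   # treat anything above this as a short circuit
WARN_LIMIT_AMPS = 20.0   # soft over-current threshold
MAX_RETRIES = 3          # hiccup attempts before giving up

def overcurrent_action(current_amps, retries_so_far):
    """Return 'run', 'retry', or 'shutdown' for a measured current."""
    if current_amps >= HARD_FAULT_AMPS:
        return "shutdown"      # potentially damaging: latch off now
    if current_amps >= WARN_LIMIT_AMPS:
        if retries_so_far < MAX_RETRIES:
            return "retry"     # brief overload: hiccup and re-check
        return "shutdown"      # persistent overload: give up
    return "run"
```

A conventional analog supply effectively hard-codes only the first branch; the digital version lets the designer choose where the lines between "run", "retry", and "shutdown" fall.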
In addition, the advent of digital power conversion allows data to be collected from individual power supplies and converters so that system performance and trends can be analyzed. The collected data can cover not only efficiency but also system temperatures, transient responses, voltage and current ripple, and component-parameter shifts – information that can help predict failures, schedule maintenance to minimize equipment down-time, and increase equipment reliability and business performance.
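As a sketch of how such trend analysis might work, consider periodic temperature readings from one converter (e.g. via the PMBus READ_TEMPERATURE_1 command): a simple least-squares slope over the samples flags a unit whose temperature is drifting upward faster than some threshold, hinting at fan degradation or component aging. The drift threshold here is an invented example value.

```python
# Sketch of telemetry trending for predictive maintenance: fit a
# straight line to periodic temperature samples and flag converters
# whose temperature climbs too fast. Threshold is illustrative.

def drift_per_sample(samples):
    """Least-squares slope of the samples versus sample index."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def needs_maintenance(temps_c, max_drift_c=0.5):
    """Flag a converter whose temperature rises faster than max_drift_c
    degrees per sampling interval."""
    return drift_per_sample(temps_c) > max_drift_c
```

The same slope test could be applied to efficiency, ripple amplitude, or any other parameter the converter reports over its digital interface.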
This emerging generation of digital power converters includes CUI’s Novum Advanced Power family, which comprises fully regulated intermediate bus DC/DC converters and point-of-load converters that feature advanced digital control. To facilitate designing with these converters, CUI has published a GUI, called Novum ACE, which simplifies the configuration of individual parameters as well as features such as power-up/down sequencing, current sharing, and warning limits by connecting to the converter’s PMBus™ (Power-Management Bus) digital interface.
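Whether set through a GUI like Novum ACE or directly by host software, such settings ultimately map onto standard PMBus commands. As a hedged illustration, the sketch below builds a configuration using two real PMBus command codes, TON_DELAY (0x60, power-up sequencing delay) and IOUT_OC_WARN_LIMIT (0x4A, over-current warning threshold), encoded in the PMBus LINEAR11 format; the specific delay and current values are invented, and real devices may restrict the usable exponents.

```python
# Sketch of host-side PMBus configuration: encode settings in the
# LINEAR11 format (5-bit signed exponent, 11-bit signed mantissa)
# and collect them against their PMBus command codes. The numeric
# values chosen are illustrative only.

TON_DELAY = 0x60           # PMBus: delay before output turn-on (ms)
IOUT_OC_WARN_LIMIT = 0x4A  # PMBus: over-current warning level (A)

def encode_linear11(value, exponent=0):
    """Pack a value into a 16-bit LINEAR11 word for the given exponent."""
    mantissa = round(value / (2 ** exponent))
    if not -1024 <= mantissa <= 1023:
        raise ValueError("value out of range for this exponent")
    return ((exponent & 0x1F) << 11) | (mantissa & 0x7FF)

def build_rail_config(ton_delay_ms, oc_warn_amps):
    """Map PMBus command codes to encoded data words for one rail."""
    return {
        TON_DELAY: encode_linear11(ton_delay_ms),
        IOUT_OC_WARN_LIMIT: encode_linear11(oc_warn_amps),
    }
```

Each entry of the returned dictionary would be written to the converter as a PMBus word-write, which is essentially what a configuration GUI does behind the scenes.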
A software GUI helps simplify configuration of digital power converters. [Picture source: CUI, Inc]
As engineers come to understand and take advantage of digital power, to increase the energy efficiency and performance of computing systems, the technology continues to evolve and advance. Power designers are ready to unleash software-defined power architectures, which can respond to events intelligently and autonomously in real time. This is the next step in the digital solution to the energy challenges imposed by our digital lifestyles.