

Power Integrity for IoT Components

The internet of things (IoT), a fast-developing application space, is forecast to grow to 50 billion devices by 2020. Figure 1 illustrates a smart-home segment of this market: devices communicate with each other and, through a smart hub, with the internet. These devices need complex mixed-signal chips for sensing, processing, communications, and even energy harvesting. Since most IoT devices are not tethered, such chips must meet ultra-low-power [1] and low-cost constraints.

A prevailing assumption in the industry, and in EDA in particular, is that IoT chips pose less of a challenge. The technologies and IP being integrated are well known, and the chips are not built in leading-edge fabrication processes. Tools that met the needs of prior processes should therefore suffice; it is just a matter of designing to the specific application.

Nothing could be further from the truth when one considers power integrity (PI)! One of the first dynamic voltage drop (DVD) challenges I encountered was in an embedded heart-rhythm monitoring chip. A combination of its fabrication process and operational constraints led to the difficulty: small form factor, low cost, and ultra-low power combined to impact the chip's DVD and reliable operation.

As described below, many challenges arise in meeting the constraints of IoT chip design. But there are solutions: front-end PI analysis and optimization being the most promising.

Figure 1: A connected network of devices, smart hub, and the internet (Image: ComLSI)

Power and Power Management

Power is a critical aspect of an IoT chip. The devices must operate for days to years on battery power or on energy harvested from their environment. Such chips must therefore consume the lowest possible power for the application, and must do so despite a wide variance in deployment conditions [1]. These constraints, together with the form factor of the available energy storage, limit IoT chips to between nW and mW of power consumption (Figure 2).

Figure 2: Power limitations (nW to mW) of IoT devices based on energy storage [1]
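To make the storage constraint concrete, here is a back-of-the-envelope sketch, in Python, of the average power budget implied by a target lifetime. The usable-energy figures for the two storage examples are assumptions for illustration, not data from [1].

```python
# Average power budget implied by a storage device and a target lifetime.
# The usable-energy figures below are illustrative assumptions, not vendor data.

SECONDS_PER_YEAR = 365 * 24 * 3600

storage_mj = {                        # assumed usable energy, in millijoules
    "thin-film battery":  1_000,      # ~1 J
    "coin cell":          2_400_000,  # ~225 mAh at 3 V, roughly 2.4 kJ
}

for name, energy_mj in storage_mj.items():
    for years in (1, 5):
        # average power that would drain the storage over 'years' of operation
        avg_power_uw = (energy_mj * 1e-3) / (years * SECONDS_PER_YEAR) * 1e6
        print(f"{name:18s} {years} yr -> {avg_power_uw:10.3f} uW average")
```

Under these assumptions, even a coin cell supports less than a hundred microwatts of average power over a year, and a thin-film battery only tens of nanowatts, which is why the nW-to-mW range above is a hard envelope.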

This power limitation requires the use of every circuit and system technique that cuts power. Circuits operate at the lowest possible voltages to conserve energy [2]; when required, they operate at higher voltages in burst mode, a functionality enabled by small, efficient DC-DC converters. Many circuits operate in low-swing mode. Power gating manages the supply to most IP blocks that do not need to be always on, and form-factor limitations compel the use of small linear regulators for such power management.
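The pay-off of low-voltage operation with occasional higher-voltage bursts can be sketched from the familiar dynamic-power relation P ≈ C·Vdd²·f. The switched capacitance, voltages, frequencies, and burst duty cycle below are illustrative assumptions, not figures for any particular chip.

```python
# Dynamic power scales roughly as C * Vdd^2 * f; the values here are
# illustrative assumptions, not measurements of any particular chip.

def dynamic_power_uw(c_switched_pf, vdd_v, f_mhz):
    """Switched-capacitance dynamic power, returned in microwatts."""
    return c_switched_pf * 1e-12 * vdd_v**2 * f_mhz * 1e6 * 1e6

C_PF = 20.0                                             # assumed switched capacitance per cycle

p_low   = dynamic_power_uw(C_PF, vdd_v=0.6, f_mhz=4)    # low-voltage, low-speed mode
p_burst = dynamic_power_uw(C_PF, vdd_v=1.1, f_mhz=48)   # burst mode: higher V and f

duty = 0.02                                             # assumed fraction of time in burst mode
p_avg = duty * p_burst + (1 - duty) * p_low
print(f"low-V mode: {p_low:.0f} uW, burst: {p_burst:.0f} uW, "
      f"duty-cycled average: {p_avg:.0f} uW")
```

Spending only a small fraction of the time in burst mode keeps the average close to the low-voltage figure, which is exactly what the DC-DC converters and power gates are there to enable.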

Figure 3: Block diagram of a TI SimpleLink CC26XX IoT processor [3]

These circuit and system techniques result in fragmented and weak power supplies within IoT chips. They also reduce available noise margins for circuits within. Low-cost fabrication processes limit available metal layers for low-impedance power grids. Limited on-chip and package decoupling capacitors further degrade IoT chip power grid robustness.

A voltage regulator isolates a chip IP block from power grid noise, and it often performs the dual function of power gating that block. But it also generates a significant inrush current load on the chip power grid when the IP block wakes. Moreover, an isolated block does not support the chip power grid with its intrinsic decoupling [2]. Thus IP block isolation, while beneficial for power reduction, can cause significant local PI degradation.
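A first-order sketch of that inrush effect: when a gated block wakes, its discharged local capacitance must be recharged through the power switches, and that charge is drawn from the chip grid. The capacitance, ramp time, and grid resistance below are assumptions for illustration.

```python
# First-order estimate of inrush current when a power-gated block wakes up.
# All values are illustrative assumptions.

c_block_f  = 2e-9      # assumed local capacitance of the gated block (2 nF)
vdd_v      = 1.0       # supply voltage
t_ramp_s   = 100e-9    # assumed ramp-up time enforced by the power switches (100 ns)
r_grid_ohm = 0.5       # assumed effective resistance of the global grid path

i_inrush_a = c_block_f * vdd_v / t_ramp_s   # average charging current, I = C * dV/dt
droop_v    = i_inrush_a * r_grid_ohm        # resistive droop seen on the grid

print(f"inrush ~ {i_inrush_a * 1e3:.0f} mA, grid droop ~ {droop_v * 1e3:.0f} mV")
```

Even these modest assumed values yield roughly 20 mA of inrush, several times the 2.9 mA at-speed current of the processor in [3], which is why wake-up ramp control and sequencing matter.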

Architectural Techniques for Improved Power Integrity

Clock gating and dynamic voltage and frequency scaling (DVFS) are well-recognized techniques for low-power design. IoT components employ these techniques to maximum advantage: the processor depicted in Figure 3 consumes just 2.9 mA running at its top speed of 48 MHz and just 0.55 mA when idle [3]. Such low chip current consumption permits higher power grid impedance and relaxes design constraints.
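Using the active and idle currents quoted in [3], a quick sketch shows how strongly the average current depends on how aggressively the chip is duty-cycled; the duty cycles themselves are assumed workloads, not data from [3].

```python
# Average current for a duty-cycled IoT processor, using the active and idle
# currents quoted in [3]; the duty cycles are assumed workloads.

i_active_ma = 2.9     # running at 48 MHz [3]
i_idle_ma   = 0.55    # idle [3]

for duty in (0.01, 0.05, 0.20):   # assumed fraction of time spent active
    i_avg_ma = duty * i_active_ma + (1 - duty) * i_idle_ma
    print(f"{duty:4.0%} active -> {i_avg_ma:.2f} mA average")
```

At a few percent activity the average sits barely above the idle floor, so further gains must come from trimming the idle and sleep currents themselves.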

Enabling fine-grain power optimization across the two voltage domains and numerous power domains of this IoT processor is an advanced on-chip communication system: a network-on-chip (NOC). A flexible NOC permits reliable, packet-based communication between many distinct clock and power domains. It also permits low-power optimization of this essential communication fabric of a complex chip, which is traditionally served by interconnect-hungry bus architectures. A NOC port that is not sending or receiving data in a particular clock cycle need not be clocked in that cycle, permitting fine-grain power management. And the use of fewer wires is an added advantage of a NOC that relaxes constraints on power grid metal usage.
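As a toy illustration of that per-cycle gating, suppose each NOC port's clock power is proportional to its activity factor once gated. The port names, per-port clock power, and activity factors below are hypothetical.

```python
# Toy model of per-port NOC clock gating: a gated port burns clock power only
# in cycles where it moves data. Port names, per-port clock power, and
# activity factors are hypothetical.

p_clk_per_port_uw = 40.0    # assumed ungated clock power per port (uW)

activity = {"cpu": 0.30, "radio": 0.05, "sensor_if": 0.02, "aes": 0.01}

ungated = p_clk_per_port_uw * len(activity)
gated   = sum(p_clk_per_port_uw * a for a in activity.values())
print(f"always clocked: {ungated:.0f} uW, clock gated: {gated:.1f} uW "
      f"({100 * (1 - gated / ungated):.0f}% clock-power saving)")
```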

Nevertheless, the use of advanced power management techniques is resource-intensive and leads to complex on-chip current demand patterns that burden IP block and chip power grids.

Power Delivery Optimization

The low-power, low-cost, small-form-factor constraints of IoT chips compel intelligent design solutions. With the voltage in each power domain now a design parameter, voltage noise, the variation of that parameter, is correspondingly a design specification, not a verification value! Early, front-end analysis of power integrity, including all contributing aspects, is a must [2].

Consider the IoT processor described in [3]. With an at-speed chip current of 2.9 mA, IR drop may well not be the principal concern. With numerous power domains, each with its voltage controlled by a regulator, the power delivery interconnect extends from a global supply grid down to active silicon and back up to local power grids. As these domains turn on and off, significant transient currents lead to corresponding voltage droops and overshoots, impacting both the power domains and the global grid. Such rapid supply current draw leads to L·di/dt issues that depend on local interconnect and distributed decoupling (and are not addressed by the package-inductance inclusion touted by industry today). An analysis environment that includes such local effects is therefore essential to effective power integrity design and optimization.
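A quick L·di/dt sketch shows why the local loop, and not just the package, matters. The current step, ramp time, and inductances below are assumptions for illustration.

```python
# L*di/dt droop from a fast current step when a power domain turns on, split
# between package and local interconnect. All values are illustrative assumptions.

di_a      = 0.020     # assumed current step when a domain turns on (20 mA)
dt_s      = 1e-9      # assumed ramp time of the step (1 ns)

l_pkg_h   = 1.0e-9    # assumed effective package inductance (1 nH)
l_local_h = 0.3e-9    # assumed local grid / power-switch loop inductance (0.3 nH)

di_dt = di_a / dt_s   # A/s
droop_pkg_mv   = l_pkg_h * di_dt * 1e3
droop_local_mv = l_local_h * di_dt * 1e3

print(f"di/dt = {di_dt:.1e} A/s: package droop ~ {droop_pkg_mv:.0f} mV, "
      f"local droop ~ {droop_local_mv:.0f} mV")
```

Under these assumptions the local loop contributes a droop of the same order as the package, and unlike the package it can be shaped by grid design and decoupling placement.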

Low-power design is strongly dependent upon the operating voltage range of each power domain within a chip. Power reduction is tied to voltage reduction, and how low one can go is tied to the accuracy of noise analysis on the power grids [2] and to the noise margins of the integrated circuits.
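A short sketch of how analysis accuracy converts into power: the supply must cover the circuit's minimum functional voltage, the worst-case noise, and the uncertainty of the noise analysis itself, while dynamic power scales roughly with Vdd². The voltages and margins below are assumed.

```python
# Dynamic power scales roughly with Vdd^2, and Vdd must cover the circuit's
# minimum voltage plus worst-case noise plus analysis uncertainty.
# All margin values are illustrative assumptions.

v_min = 0.70      # assumed minimum functional voltage of the logic (V)
noise = 0.050     # assumed worst-case supply noise from analysis (V)

vdd_coarse   = v_min + noise + 0.050   # +/-50 mV analysis uncertainty
vdd_accurate = v_min + noise + 0.010   # +/-10 mV analysis uncertainty

saving = 1 - (vdd_accurate / vdd_coarse) ** 2
print(f"Vdd {vdd_coarse:.2f} V -> {vdd_accurate:.2f} V: "
      f"~{saving:.0%} dynamic-power saving")
```

Tightening the analysis uncertainty from 50 mV to 10 mV in this example buys roughly a 10% dynamic-power reduction for the same circuits.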

Low cost, similarly, is tied to optimal use of on-chip resources: metal for power and routing, which determines the fabrication process layers, and decoupling, which determines silicon area. An early, front-end power integrity analysis and optimization environment, as opposed to back-end IR-drop verification, facilitates chip resource optimization [4]. Moreover, such design, optimization, and floorplanning greatly diminish the chip physical design iterations necessitated by verification findings at the back end, or by routing congestion due to increased power metal and reduced routing channels.
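As one example of the kind of front-end estimate such an environment supports, on-chip decoupling can be sized from the charge drawn during a transient and the allowed droop, and that capacitance translates directly into silicon area. The current step, response time, droop budget, and decap density below are assumptions for illustration.

```python
# Front-end decap sizing from charge balance: local decoupling must supply the
# charge drawn before the package/regulator responds, within the allowed droop.
# All values are illustrative assumptions.

i_step_a = 0.020     # assumed local transient current step (20 mA)
t_resp_s = 5e-9      # assumed time before the upstream supply responds (5 ns)
droop_v  = 0.050     # assumed allowed local droop (50 mV)

c_decap_f = i_step_a * t_resp_s / droop_v    # C = Q / dV = I * t / dV

decap_density_f_per_um2 = 10e-15             # assumed MOS decap density (10 fF/um^2)
area_mm2 = (c_decap_f / decap_density_f_per_um2) * 1e-6

print(f"required decap ~ {c_decap_f * 1e9:.1f} nF -> ~{area_mm2:.2f} mm^2 of decap area")
```

Budgeting this area per power domain at the floorplanning stage is far cheaper than discovering a droop violation during back-end verification.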

For IoT chips, such front-end PI methodology advancement is the necessary next step!

References

[1] D. Blaauw et al., “IoT Design Space Challenges: Circuits and Systems,” in Symp. VLSI Circuits Dig. Tech. Papers, Honolulu, HI, USA, 2014, pp. 1–2.

[2] R. Nair and D. Bennett, “Power Integrity and Energy Aware Floorplanning,” EE Times, Jan. 2008.

[3] L. Gwennap, “Low-Power Design Using NOC Technology,” May 2015.

[4] R. Nair and D. Bennett, Power Integrity Analysis and Management for Integrated Circuits. Prentice Hall (Pearson Education), 2010.
