
A close look at makers’ sensor processing power paradox

Adding a sensor to a project used to be a big deal in the analog era. Special wiring. Tiny signals needing amplification. Calibration by tweaking potentiometers. Strip-chart recorders with funky thermal paper. Noisy readings only eyeballs could smooth out. Ah, the memories.

In these digital-first days, makers can add a smart sensor to a project with far less mess. Wiring up power, ground, and I2C or SPI pins gets readings flowing in a couple of minutes. Wireless sensors may take a bit longer but are still quick to set up. However, the ease of adding a smart sensor can be deceptive. Behind the scenes, three verses of the sensor processing power paradox await in the digital domain.

  1. Sleeping as long as possible is an art

Manufacturers jump through hoops with the ultra-low-power microcontrollers found inside smart sensors. A glance at the datasheet for one of these microcontrollers reveals several operating modes. There is “normal” or full-on mode, which may consume a fair amount of power. Then there are various levels of power-saving modes, down to deep sleep, which keeps just enough logic alive to wake the part back up in response to an external request.

Want to save power? Operate a smart sensor at a low sample rate and a low duty cycle. The sensor stays mostly asleep, consuming only tiny amounts of power. On cue, the sensor wakes up, takes its reading, sends it somewhere for processing, and goes back to sleep.

[Figure: A sensor in sleep mode waking up in response to an event.]
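To make that concrete, here is a minimal sketch of the wake/read/send/sleep cycle in MicroPython on an ESP32-class board. The sensor read and the reporting call are hypothetical placeholders, not any particular part’s API:

```python
# Wake, read, report, then deep-sleep again. On an ESP32-class board running
# MicroPython, waking from deep sleep reboots the chip and reruns this script,
# so everything here executes once per sample.
import machine

SAMPLE_PERIOD_MS = 60_000  # one reading per minute

def read_sensor():
    # Hypothetical placeholder for a real I2C/SPI sensor read
    return 0

def send_reading(value):
    # Hypothetical placeholder for a radio burst, MQTT publish, etc.
    print(value)

send_reading(read_sensor())
machine.deepsleep(SAMPLE_PERIOD_MS)  # power down until the RTC timer expires
```

At one reading per minute, the part spends almost all of its life in deep sleep.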

When sample rates and duty cycles go up, keeping the sensor awake longer, the power savings start disappearing. Duty cycle also drives the choice between battery power and USB or AC adapter power: batteries are a viable option only when a device spends most of its time asleep.
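A back-of-the-envelope calculation shows how strongly duty cycle dominates the power budget. The currents and capacity below are hypothetical placeholders; substitute real datasheet numbers:

```python
# Battery life estimate for a duty-cycled sensor. All figures are
# hypothetical; self-discharge and regulator losses are ignored.
active_ma = 5.0       # current while awake: read + transmit (mA)
sleep_ua = 10.0       # deep-sleep current (uA)
awake_s = 0.05        # time awake per reading (s)
period_s = 60.0       # one reading per minute
battery_mah = 1000.0  # nominal battery capacity (mAh)

duty = awake_s / period_s
avg_ma = duty * active_ma + (1 - duty) * (sleep_ua / 1000.0)
print(f"average draw: {avg_ma:.4f} mA")
print(f"estimated life: {battery_mah / avg_ma / 24:.0f} days")
```

With these numbers the sensor idles along at roughly 14 µA on average, good for years on a modest battery. Raise the sample rate to once per second and the same battery lasts months instead.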

  2. Minimizing what’s “always-on” helps

Sometimes, the external request for a smart sensor to wake up is simple. A hardware pin changes state, or an I2C polling command shows up. It doesn’t take much power to sleep while waiting for these events.
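As a sketch, here is what arming a pin-change wake-up can look like in MicroPython on an ESP32. The GPIO number is an assumption, and on real hardware the pin must be an RTC-capable one:

```python
# Sleep indefinitely until an external pin goes high, then handle the event.
# Assumes a MicroPython ESP32 build with the sensor's interrupt line wired
# to an RTC-capable GPIO (27 here is just an example).
import esp32
import machine

wake_pin = machine.Pin(27, machine.Pin.IN, machine.Pin.PULL_DOWN)
esp32.wake_on_ext0(pin=wake_pin, level=esp32.WAKEUP_ANY_HIGH)

if machine.reset_cause() == machine.DEEPSLEEP_RESET:
    # The pin fired: service the sensor here, then go back to sleep.
    pass

machine.deepsleep()  # no timeout: only the pin can wake the part
```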

In the 1980s, an “always-on” home automation device was all the rage. It was The Clapper, pretty much a relay plugged into an AC outlet. A sharp clap created enough of a sound spike to tell it to turn on or off, along with the lamp or other thing plugged into it. Simple, not super reliable, but the novelty and a catchy jingle sold a few million units.

Now, wake-up requests in always-on devices are voice commands. Capturing voice digitally takes a MEMS microphone, an A/D converter, a microcontroller with memory, and a pattern-recognition algorithm. More varied commands mean more processing and more power consumed; more on that shortly. That’s a lot of stuff to wake up and get rolling only to figure out whether the device should respond at all.

I’m sure many readers saw the recent post about Aspinity, titled, “Does analog have a place in the machine learning world?” Aspinity’s technology uses a simplified machine-learning structure to spot an analog event, like a voice command or other sound. If the event matches, the analog core forwards the data and a wake-up request to the A/D converter and microcontroller for digital processing. The longer that digital chain sleeps, the better.

  3. Send data only when absolutely needed

The Internet of Things (IoT) ushered in a naïve architecture: streaming data from sensors into the cloud for processing. Big data is a handy concept when one doesn’t know exactly what to look for. But it brings a lot of moving parts: bigger processors, more memory and storage, wireless and wired networking, and so on. A smart sensor by itself is low power. A large group of smart sensors plus the network plus the cloud computing adds up to enormous power.

Let’s simplify this into a more likely maker scenario. Say there is a streaming sensor and a known data model. The stream needs monitoring, looking for something that’s a bit out of the ordinary. No human wants to sit and watch data that doesn’t change for hours on end, with only a small chance something odd pops up. Yet we ask computers to do it all the time, shuttling data with no significance around networks.

It depends on the algorithm and data model, but sometimes the right solution is not sending data. A doctor does not need to see a stream of perfect EKG data, just anomalies. A barbeque wizard does not care if the smoker is at temperature, just when it fluctuates.

If data can be pre-processed locally, compared against the known model, and shipped out only when more detailed analysis would help, the power savings can be big. A smaller processor can look for boundary conditions. No need to store normal data. No waking up the wireless network. The cloud gets involved only if asked.
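Here is a minimal sketch of that boundary-condition filter, using the smoker example. The temperature band and the read/alert calls are hypothetical placeholders:

```python
# Edge filter: sample locally, transmit only out-of-band readings.
import time

LOW_F, HIGH_F = 215.0, 235.0  # acceptable smoker temperature band

def read_temperature():
    return 225.0  # hypothetical placeholder for a real sensor read

def send_alert(temp_f):
    print(f"anomaly: {temp_f:.1f} F")  # placeholder for a radio/cloud call

while True:
    temp = read_temperature()
    if not LOW_F <= temp <= HIGH_F:
        send_alert(temp)  # the network only wakes for anomalies
    time.sleep(30)        # in-band readings are simply dropped
```

Normal data never leaves the device; the cloud only hears about the fluctuations that matter.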

Call to action: Separate stream from sample

The sensor processing power paradox points to an opportunity for makers creating systems. The benefits of moving data from analog to digital are tremendous, but they come at a price. The amount of energy it takes to get one sample off a smart sensor is set by its design. The amount of energy it takes to process a stream of data from one or more smart sensors depends on how much of the digital chain is brought to bear, and for how long.

Looking at how a data stream actually contributes to decisions helps makers choose the right processing for the job.

After spending a decade in missile guidance systems at General Dynamics, Don Dingee became an evangelist for VMEbus and single-board computer technology at Motorola. He writes about sensors, ADCs/DACs, and signal processing for Planet Analog.
