When GIGO meets analog

(An edited version of this column appeared in EETimes, February 19, 2007.)

Not too long ago, the acronym GIGO (garbage in, garbage out) was fairly commonplace. It neatly summarized the fact that no matter how powerful your computer, and no matter how good your algorithms, the output would only be as good as the input data. If your data analysis calculated the current through the sensing resistor in your MP3 player, based on your test readings, to be 47.1235 amps, it was pretty clear you had two problems: excess and meaningless precision, and a badly misplaced decimal point.

The GIGO concept seemed to fade somewhat a few years ago. After all, processors were getting so powerful, and data-collection and analysis software so much better at bounds checking, that many problems were trapped and flagged early in the analysis cycle.
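A minimal sketch of the kind of bounds check such software applies, using the MP3-player example above (the limit values and function names here are hypothetical, chosen only for illustration):

```python
# Hypothetical plausibility limits for a battery-powered player's
# current-sense channel: tens of milliamps, never tens of amps.
SENSE_CURRENT_LIMITS_A = (0.0, 0.5)

def reading_is_plausible(value_a, limits=SENSE_CURRENT_LIMITS_A):
    """Trap readings outside the physically plausible range
    instead of letting garbage propagate into the analysis."""
    lo, hi = limits
    return lo <= value_a <= hi

print(reading_is_plausible(0.047))    # a sane ~47 mA reading -> True
print(reading_is_plausible(47.1235))  # the misplaced decimal is caught -> False
```

The check does not fix the bad reading, of course; it only flags it early, which is exactly the kind of safety net that made GIGO feel less urgent.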

The trend of using faster processors and advanced software to overcome inherent system defects also spread to basic analog-channel design. Even if the basic analog component (voltage reference, amplifier, A/D converter) was not perfect, no matter. Just take that pretty good but not great reference, and calibrate it at the factory across various temperature and supply-rail values, then store correction values in memory. The broad availability of flash memory made this approach all too easy. The theory was that you could use a lower-cost, moderate-performance analog part and the virtually free memory, for a net saving.
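As a sketch, the stored-correction scheme amounts to a lookup table written at factory test and interpolated at run time. The breakpoints and offsets below are invented for illustration, as is the table layout:

```python
# Hypothetical factory calibration table for a voltage reference:
# (temperature in degC, measured offset in mV) pairs stored in flash.
CAL_TABLE = [(-40, -1.8), (0, -0.4), (25, 0.0), (85, 1.2)]

def correction_mv(temp_c, table=CAL_TABLE):
    """Linearly interpolate the stored offset at the current temperature."""
    if temp_c <= table[0][0]:
        return table[0][1]          # clamp below the coldest breakpoint
    if temp_c >= table[-1][0]:
        return table[-1][1]         # clamp above the hottest breakpoint
    for (t0, c0), (t1, c1) in zip(table, table[1:]):
        if t0 <= temp_c <= t1:
            return c0 + (c1 - c0) * (temp_c - t0) / (t1 - t0)

def corrected_reading_mv(raw_mv, temp_c):
    """Apply the stored correction to a raw measurement."""
    return raw_mv - correction_mv(temp_c)
```

Note that the table is measured on one specific part, which is precisely what marries that part's calibration data to the product for life.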

In the past few years, though, I have seen a trend away from this digital-correction approach for several reasons, despite its apparent attractiveness. First, it takes production time and specialized test equipment. Second, it marries the calibration of a unique analog component to the product, until “death do you part”. Unfortunately, such death happens, requiring a factory- or depot-level recalibration.

But the other reasons to avoid this approach are both tangible and philosophical. First, analog vendors have pushed the price and performance factors even harder, and come out with significantly better components. In the past year, I have seen an uptick in the number of low-power, low-cost, higher-performance op amps, instrumentation amps, voltage references, and converters from most of the top analog vendors.

At the same time, designers realized that the digitally calibrated approach does not make sense for mainstream products. It may be the right thing to do for a calibration-lab or instrumentation-grade system, for example, to suppress any 2nd- and 3rd-order errors due to drift, aging, and other ills. But it is a production-time and compute-intensive solution that assumes you can clearly anticipate all the dimensions of the future error sources (temperature, humidity, supplies, signal noise, vibration and stress, as well as many other factors), and calibrate against them in advance. Worse, it often doesn't help when the problem is internal component noise or external system noise.

Instead, engineers now see that it's generally wiser to choose better components so that system accuracy and overall performance are assured by design, analysis, and guaranteed specifications, and not try to bypass reality by applying more external correction factors. GIGO is still a very valid summary of the real-world signal-processing chain.
