
Analog Angle Blog

Functional Integration Changes Your Calibration Strategy, Part 2

In part 1 (Functional Integration Changes Your Calibration Strategy, Part 1), we looked at how engineers analyze a circuit to deal with the fact that not all components have typical values. Resistor and capacitor values will be scattered across their respective tolerance ranges — perhaps in a Gaussian distribution, perhaps (depending on the manufacturers' selection process) not. Other analog components (e.g., op-amps) will have published minimum and maximum values for their various parameters.

Now we'll look at someone who knew the tricks to squeeze maximum performance out of the parts he had available. We'll also consider integrating the functionality needed onto an IC in a way that minimizes the need for the user to do any calibration.

To me, the epitome of such elegant design was the 1976 EDN article by the late, much-missed Jim Williams, This 30-ppm Scale Proves Analog Designs Aren’t Dead Yet. The scale not only had to be accurate with high resolution; it also had to need no further calibration once in use. To achieve this, Jim looked at every conceivable error source (noise, drift, common-mode errors, and component aging, among others) and figured out how to minimize each one. His design embodied for me what Samuel Florman called The Existential Pleasures of Engineering in his book of the same title.

Of course, times change and so do strategies for calibration. Designers now lean towards a processor-based approach, using available multichip and integrated single-chip solutions with inexpensive and versatile microcontrollers.

This brings new choices: You can calibrate the sensor and/or channel at the factory at a single operating point, at multiple operating points, or at whatever granularity makes sense, and then store the compensating factors in memory. These factors can then be pulled into the signal-analysis algorithms as needed.
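To make that concrete, here is a minimal C sketch of the gain/offset version of such a scheme. The structure layout, the nonvolatile-memory address, and the two-point correction itself are all hypothetical stand-ins for whatever a real design and its MCU's flash or EEPROM API would dictate:

#include <stdint.h>

typedef struct {
    float gain;    /* slope correction from factory calibration */
    float offset;  /* zero-point correction, in ADC counts      */
} cal_factors_t;

/* Assume the factory programmed this structure into nonvolatile memory
 * at a known address; the address and layout here are hypothetical. */
#define CAL_NVM_ADDR ((const cal_factors_t *)0x0800F800u)

/* Apply the stored gain/offset correction to a raw ADC sample
 * before it enters the signal-analysis algorithms. */
static float apply_calibration(uint16_t raw_adc)
{
    const cal_factors_t *cal = CAL_NVM_ADDR;
    return cal->gain * ((float)raw_adc - cal->offset);
}

A multipoint calibration would simply store a table of such gain/offset pairs, one per operating point, and interpolate between them at run time.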

Or, you can go beyond factory-based calibration and have the unit calibrate itself on start-up in the field. For example, the $20 digital tire-pressure gauge I use runs a three-second self-calibration and test cycle every time you turn it on: it measures the ambient temperature and then factors that into the reading from the pressure sensor itself, which has a moderate temperature coefficient.
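A start-up self-calibration along those lines might look something like this sketch, where the sensor drivers and the temperature-coefficient value are assumed placeholders, not anything taken from an actual product:

#include <stdint.h>

extern float read_temperature_c(void);   /* hypothetical temp-sensor driver  */
extern float read_pressure_raw(void);    /* hypothetical pressure ADC driver */

#define TEMPCO_PER_C   (-0.0012f)  /* assumed fractional error per deg C      */
#define T_REFERENCE_C  (25.0f)     /* temperature at which sensor is nominal  */

static float t_ambient_c;

/* Run once at power-on, during the self-test/self-calibration cycle. */
void self_calibrate(void)
{
    t_ambient_c = read_temperature_c();
}

/* Correct each raw reading for first-order temperature drift. */
float read_pressure_compensated(void)
{
    float raw = read_pressure_raw();
    return raw / (1.0f + TEMPCO_PER_C * (t_ambient_c - T_REFERENCE_C));
}

The point is that one cheap temperature reading at power-on lets software remove a drift term that would otherwise demand a more expensive, lower-tempco sensor.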

No doubt about it, integration takes much of the calibration burden away from the analog circuitry and puts it into software. Yet, despite the virtues and flexibility of processor-based calibration, vendors of analog components are still releasing devices with higher accuracy, lower drift, and lower offset.

Why? Because sometimes the cost-effective solution is to spend a little more for a better part. Then you know that the signal channel will be well-behaved, and you reduce the processing load on the microcontroller. There's another factor, too: some sources of error and inaccuracy cannot be calibrated out in software, or doing so is very difficult. Many types of noise, such as common-mode noise, are best dealt with at the analog component itself. Otherwise, they not only contribute to error directly but can also affect other parts of the circuit, adding second-order errors that are hard to trap and remove.

A good circuit engineer knows when to strive for system perfection via better components, circuit design, and layout versus the mostly digital approach, or what combination of the two to use… and it's a really good engineer who admits it.

What's your experience with analog signal-chain calibration and trim: mostly circuitry-centric, mostly processor with integrated functions, or a mix of both? Were you able to convince others on the project about your preferred approach?


1 comment on “Functional Integration Changes Your Calibration Strategy, Part 2”

  1. eafpres
    October 29, 2013

    Hi Bill, I was wondering how much of the tedious work you describe has been moved into software like AWR's Microwave Office. Packages like these can run simulations, including Monte Carlo, on lots of parts. Doing actual Monte Carlo is better than RSS if you have an easy way to do it; otherwise it takes too much time to set it all up, and RSS gives you at least some boundary values.
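As a rough illustration of the RSS-versus-Monte-Carlo comparison in the comment above, here is a small, self-contained C sketch for a plain resistor divider. The component values, the 1% uniform tolerance, and the trial count are arbitrary, and it is no substitute for a real tolerance-analysis package:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define R1_NOM 10000.0
#define R2_NOM 10000.0
#define TOL    0.01       /* 1% parts, assumed uniform within tolerance */
#define TRIALS 100000

/* Random value uniformly distributed within +/-TOL of nominal. */
static double uniform_tol(double nominal)
{
    double u = (double)rand() / RAND_MAX;   /* 0..1 */
    return nominal * (1.0 + TOL * (2.0 * u - 1.0));
}

int main(void)
{
    double nom = R2_NOM / (R1_NOM + R2_NOM);
    double sum = 0.0, sumsq = 0.0;

    for (int i = 0; i < TRIALS; i++) {
        double r1 = uniform_tol(R1_NOM);
        double r2 = uniform_tol(R2_NOM);
        double ratio = r2 / (r1 + r2);
        sum   += ratio;
        sumsq += ratio * ratio;
    }
    double mean  = sum / TRIALS;
    double sigma = sqrt(sumsq / TRIALS - mean * mean);

    /* First-order RSS estimate of the fractional spread, treating each
     * resistor's full tolerance as its deviation (a boundary figure):
     * d(ratio)/ratio = (R1/(R1+R2)) * sqrt(tol1^2 + tol2^2)            */
    double rss = (R1_NOM / (R1_NOM + R2_NOM)) * sqrt(2.0) * TOL;

    printf("nominal ratio %.5f  MC fractional sigma %.5f  RSS bound %.5f\n",
           nom, sigma / mean, rss);
    return 0;
}

Run with 1% parts, the Monte Carlo sigma comes out well inside the RSS figure, which is exactly the "boundary value" behavior the comment describes.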
