Analog Angle Blog

Do Analog ICs Still Need Perfection?

As most analog circuit engineers know, designing a circuit for very good performance at a nominal or fixed operating point is only part of the challenge. The larger challenge is making sure that the circuit maintains that precision over time and temperature, across the variability of its transducers, and despite supply-rail and current fluctuations.

For this reason, every year or so I re-read the outstanding EDN article by the late, much-missed Jim Williams, “This 30-ppm scale proves that analog designs aren't dead yet” (see it here).

To me, his processor-free design represents the ultimate in engineering understanding and circuit elegance, as he identified and worked with (or around) every error and drift source. For example, read how he describes testing for the zero-drift current point of a Zener diode he is using as a voltage reference. This is a great example of what Samuel Florman refers to as the “Existential Pleasures of Engineering.”

But times change, and so do technologies. When I spoke with Jim shortly before his untimely passing, he said that much of the striving for analog perfection he used to do was no longer as necessary. With the availability of embedded processors running compensation and calibration routines, and integrated flash memory to store calibration factors, he could now calibrate a circuit digitally. He could do this for general system errors such as drift, or even do additional compensation for individual production unit variations.
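To make that concrete, here is a minimal sketch in C of what such a digital calibration routine might look like. This is not Jim's method, and the names and structure are hypothetical; it simply derives gain and offset corrections from readings taken at two known reference inputs (assuming the channel is linear between them), which could then be stored in flash and applied to every subsequent reading:

    #include <stdint.h>

    /* Calibration factors as they might be stored in flash.
       The layout and names here are illustrative only. */
    typedef struct {
        float gain;    /* corrects slope error */
        float offset;  /* corrects zero error  */
    } cal_factors_t;

    /* Derive gain and offset from measurements taken at two known
       reference inputs, assuming the channel is linear between them. */
    cal_factors_t cal_two_point(float raw_lo, float ref_lo,
                                float raw_hi, float ref_hi)
    {
        cal_factors_t c;
        c.gain   = (ref_hi - ref_lo) / (raw_hi - raw_lo);
        c.offset = ref_lo - c.gain * raw_lo;
        return c;
    }

    /* Apply the stored factors to a raw reading. */
    float cal_apply(const cal_factors_t *c, float raw)
    {
        return c->gain * raw + c->offset;
    }

The key assumption is linearity between the two reference points; higher-order errors call for more calibration points or a different correction model.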

Errors in transducers could also be calibrated out at the factory, as long as you could link the specific transducer to those calibration factors stored in the instrument, via an electronic tag.

These steps would take care of many errors, but not all. Some error sources, such as various types of noise, would be much harder to calibrate out — but perhaps more could be accomplished with additional effort and algorithm development. (See: Can Integration Help With Our Noise Gamble?)
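One common algorithmic approach here (my illustration, not something the article prescribes) is plain oversampling and averaging: for uncorrelated noise, averaging N samples cuts the RMS noise by roughly the square root of N, though it does nothing for drift or for correlated, 1/f-dominated noise. A minimal C sketch, where adc_read() is a hypothetical placeholder for a single conversion:

    #include <stdint.h>

    extern uint16_t adc_read(void);  /* hypothetical single-conversion read */

    /* Average 2^shift readings. For uncorrelated noise the RMS noise
       drops by roughly sqrt(2^shift); drift and correlated noise are
       unaffected. */
    uint16_t adc_read_averaged(unsigned shift)
    {
        uint32_t acc = 0;
        uint32_t n = 1UL << shift;

        for (uint32_t i = 0; i < n; i++)
            acc += adc_read();

        return (uint16_t)(acc >> shift);
    }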

By integrating non-analog functions such as processor, code memory, and data memory onto a single IC, along with the analog signal channel(s), designers have a new way of dealing with long-standing error situations — and overcoming them. To what extent they can do this depends on the specific errors and imperfections: are they foreseeable, understandable, and consistent?

Perhaps the historical striving for inherent analog-circuit perfection is no longer as large an imperative as it was when Jim developed the 30-ppm scale and many “near-perfect” circuits.

Which error types do you think can be calibrated out with digital techniques, and which ones can’t? Have you ever assumed you could minimize an error with one approach, but eventually found that you had to do it the other way?


7 comments on “Do Analog ICs Still Need Perfection?”

  1. Scott Elder
    June 20, 2013

    I've been studying this problem for a while now.  Think about all of the power and money that is used in an analog design just because things don't perfectly match.

    All of these calibration circuits have to fix the mismatch in a simple differential pair at the front of an amplifier, like auto-zeroing and chopping. Yes, chopping gets rid of 1/f noise, but then you get lots of chopping noise in return, so we go and add a few more power-hungry filter circuits to solve that.

    Same for resistor mismatch around an instrumentation amplifier to reject the common mode noise.  More noise, more power, more money.

    So you're right that you don't need perfect anymore.  But if you don't have it, be prepared to pay for it.

    Last time I checked, an 18-bit SAR was over $25 in moderate volume.  And that's for a part made from 30 cents worth of materials.  Sure is a lot of money for having to fix something that's not perfect.


  2. Netcrawl
    June 20, 2013

    @Bill great article. The fierce competition in the industry demands that a company develop complex products in the most cost-effective manner and in the fastest way. And speaking of cost-effectiveness, how do you get a cost-effective design?

  3. D Feucht
    June 22, 2013

    @Bill – The fraction of purely analog designs like those Williams liked to do is diminishing. Thirty years ago, I was using a 6502 uP to do two-point calibration of a thermal energy meter DAS which used an LM331 VFC for A/D conversion. Two-point cal is common and needs only to assume that the process is linear to compensate both offset and gain (slope) errors.

    @Scott – With a cheap uC and an external op-amp integrator, it is not hard to achieve 14 or more bits of conversion with a sigma-delta algorithm. This is often adequate resolution and linearity is not much of a problem with inherently monotonic sigma-delta conversion. The biggest benefit of the uC is that it allows abandonment of all those awful trim pots! Some variable capacitors can also be eliminated with dynamic compensation.

  4. Scott Elder
    June 22, 2013

    @D Feucht

    I hear you, but if you need the result in 1 µs from a sleep state so that the average power is a few microwatts, you'll need something a lot faster and more expensive than a first-order SD ADC.

    I think there will be a resurgence in SAR converters once this IoT thing gets into full gear. SAR converters are about as low power as you can get, since they can wake up, make a 16-bit measurement, and shut back down to zero power in one microsecond. And most IoT devices will probably be mobile, battery-powered devices.

    But who knows… for sure.


  5. amrutah
    June 23, 2013

    I agree with Scott here.

    Though the processes and technologies are changing (shrinking), the perfection in the analog parts is achieved by growing device sizes (i.e., reducing mismatch) and by burning more current to increase accuracy and reduce noise, and each of these directly translates to an increase in cost.

    We still need good and perfect analog, with the strides we are making into space and inside human bodies. But we are losing the people who know it.

  6. Brad Albing
    June 23, 2013

    Scott – very good point on the very fast cycle time as a method of saving power. If you really can power up the device, take a measurement, process/store the data, and shut back down, that's the way to go in ultra-low-power apps, rather than just trying to make the world's lowest-power SAR.

  7. jkvasan
    June 27, 2013

    @Brad,

    Hit-and-run would be a good idea, I agree, but if there are intermediate stages like the sample-and-hold, then that timing needs to be added to the process. As you said, instead of wasting energy on making an impossibly low-power device, one could do well with a rapid-cycling device with so-called 'imperfections'.
