Analog Angle Blog

What Analog’s ‘Imperfections’ Taught Me

In the analog world of transducer input/output, you're in pretty good shape if you can consistently achieve performance — a term I'll use here as a combination of accuracy, resolution, and repeatability — of 0.1 percent.

Wait a moment: that's only 1 part in 1000, or 10 bits, which seems very coarse alongside 16- to 24-bit converters and processors crunching 32 bits and more. I can understand the processors needing more bits, to minimize the various cumulative errors that build up during repeated calculation steps. But what about the analog side?

Reality is that when you do a proper error budget all the way from transducer to A/D converter, taking into account the various analog signal-chain components, noise, tempco and time drifts, bias currents, and other factors, 0.1 percent is about what you'll get. Of course, you can do better if you trim the signal channel, and calibrate it; in fact, if you calibrate it under various conditions and operating points (voltage, temperature), you may get down to 0.05 percent (11 bits) or even 0.025 percent (12 bits). It's really hard to do better than that, and rarely needed.
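Those percentage figures map to bits through a quick logarithm; here is a rough sketch of the conversion (my own illustration, with the function name being mine):

```python
import math

# Equivalent "real" bits for a given percent accuracy:
# bits = log2(full scale / error) = log2(100 / percent_error)
def effective_bits(percent_error):
    return math.log2(100.0 / percent_error)

for pct in (0.1, 0.05, 0.025):
    print(f"{pct}% -> {effective_bits(pct):.2f} bits")
# 0.1% -> ~10 bits, 0.05% -> ~11 bits, 0.025% -> ~12 bits
```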

Seeing that gap between real-world performance and the presumed number of bits available in the system taught me to step back and think about the subsequent analysis. The most important questions I learned to ask are these: “Does this data/answer/analysis make sense? Or does it make no sense at all?” I learned to be humble and skeptical about the acquired data and analysis, as it's easy to get so wrapped up in apparent precision that you miss the bigger picture: that something isn't quite right.

We all know the old acronym GIGO, short for “garbage in, garbage out.” But there is a rarely mentioned alternate interpretation of GIGO, namely, “garbage in, gold out.” In other words, if the result says something is such-and-such, then it must be so — how can you doubt it?

Many years ago, I had an instructor whose admonition to us was simple: “Before you compute, stop and think.” He insisted we first rough out what the range of sensible answers and conclusions might be, using reasonable approximations. He wanted us to do what he referred to as “back of the envelope” calculations, and he was so insistent that I even made up a special “gag” pad entirely of envelopes, to demonstrate that what I had done was indeed just such a quick and rough calculation.

This custom-made pad is made entirely of envelopes, as a visible reminder of the importance of doing rough calculations as a preliminary “sanity check” on precision results. I use this custom-made “back of the envelope” pad to remind me that there are times when a rough calculation is what I need to do first.

Have you ever been unintentionally misled by someone else's excess precision, when they were precisely wrong rather than roughly right? Worse, have you ever fooled yourself by jumping to precise yet misleading conclusions?

30 comments on “What Analog’s ‘Imperfections’ Taught Me”

  1. jkvasan
    April 23, 2013


    I agree that, like the “One Minute Manager,” we need to become “One Minute Engineers”: think for a minute just before getting into action. It has helped me while working on a thyristor drive and while designing the gate-driver stage of a MOSFET.

    I normally use the back side of discarded print-outs. I cut them down to a smaller size and pin them to my pinup board. I use a lot of those pins.

  2. Bill_Jaffa
    April 23, 2013

    I've seen otherwise smart engineers take whatever result the software or simulation spits out as “correct by definition,” even if it says to use a 100-A component in a circuit powered by a small 9-V battery. Or megaohms where it should be milliohms, and vice versa. Sometimes it is due to incorrectly entered data, sometimes due to a poor simulation model.

    Plus, it's always good to do a back-of-the-envelope “sensitivity” check: “if I vary this factor, I'd expect to see the output change by how much, approximately?”

  3. eafpres
    April 23, 2013

    Hi Bill–here is an example not so much of over-precision but just possible nonsense.  I have been seeing articles about “new battery technology” that promises “charging your phone instantly” or “charging an EV in minutes”.

    Here is the most recent example:

    U. Illinois claims new battery will allow phone to charge in less than 1s

    I did a quick calculation:

    iPhone 5 battery 1440 mAh

    1440 mAh * 3600s/h = 5,184,000 mAs

    5,184,000 mAs * 1A/1000mA = 5,184 As

    So if we are to deliver 5,184 As in under a second, what is the peak current?

    Hmmm…about 5 kiloamps.  I suggest you plug everything into a switched circuit that is off and stand way back when you turn it on.
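    The same back-of-the-envelope check, as a few lines of Python (the 1440-mAh capacity is the figure quoted above; the rest is just unit conversion):

```python
# Charge stored in the quoted 1440-mAh battery, and the average current
# needed to deliver all of it in one second.
capacity_mah = 1440
charge_coulombs = capacity_mah / 1000 * 3600  # 1 Ah = 3600 A·s (coulombs)
amps_for_one_second = charge_coulombs / 1.0   # charge / time
print(charge_coulombs, amps_for_one_second)   # 5184.0 C, 5184.0 A -- about 5 kA
```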

  4. Bill_Jaffa
    April 23, 2013

    You've earned a “back of the envelope” award on that one, thanks.

  5. eafpres
    April 23, 2013

    @Scott–you are right, of course. I've heard it said that only universities do R&D in the US anymore. On the other hand, those bright young minds don't have the seasoning that years of learning from mistakes and from experienced peers bring. Without knowing anything about a charging circuit, it should be obvious that “instant” charging can't be true. In time they will learn…

  6. Netcrawl
    April 23, 2013

    @eafpres I agree. They need to learn, from mistakes and from experience; it could be their training ground. This is highly technical work, so it requires a very focused and patient mind. This is a good example and a starting point.

  7. Davidled
    April 24, 2013

    The analog engineering group always has a weekly meeting to come up with the best production-level ideas, such as circuit optimizations, simulation tools, and different analog components. I think this kind of process is crucial from the company's perspective. Two brains are better than one, unless you run your own consulting company.

  8. jkvasan
    April 24, 2013


    I agree that periodic review meetings and gatherings of contemporaries can be helpful. However, as this blog suggests, it always pays to do a self-review of what has been implemented in the circuit just before switching it on.

  9. Bill_Jaffa
    April 29, 2013

    I've seen similar ridiculous numbers regarding recharging all-battery EVs in just an hour or two–you'd need a 50- or 100-A line to the car charger connection. Not saying it can't be done, but that's some very special wiring. Plus, who knows if the local transformer can do that, especially if several people on the same block or in the same apartment building are charging.

    There's no getting around the basic laws of physics when it comes to energy, power, amps, watt-hours, current, and voltage.
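    The 50-to-100-A figure is easy to reproduce with a rough sketch; the pack size and line voltage below are illustrative assumptions, not numbers from the comment:

```python
# Line current needed to recharge an EV battery pack in a given time.
pack_kwh = 24.0      # assumed pack capacity (early-EV class)
line_volts = 240.0   # assumed split-phase service voltage
hours = 1.0          # target recharge time
amps = pack_kwh * 1000.0 / (line_volts * hours)
print(f"{amps:.0f} A")  # 100 A for one hour; halve it for a two-hour charge
```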

  10. Davidled
    April 29, 2013

    I remember that a DTE engineer presented a long-term solution for power distribution at a conference on hybrid/EV vehicles. I think that, fundamentally, the power-transformer infrastructure is not well established. This was a known issue for all engineers working in this field when hybrid/EV vehicles first surfaced in the automotive industry. So I believe they have cooperated with the government to retrofit or rebuild power transformers, which is not easy.

  11. Brad Albing
    April 29, 2013

    Probably better than just after you switch it on. Then the review is done for you, tho' repairs may be necessary at that point.

  12. Brad Albing
    April 29, 2013

    But experience is a great teacher.

  13. Brad Albing
    April 29, 2013

    You have to do that. Can't understand an engineer who can't/won't do that fairly simple calculation (at least in their head).

  14. Brad Albing
    April 29, 2013

    Apparently U of IL needs either some better engineers – or better technical editors to check their press releases.

  15. Brad Albing
    April 29, 2013

    Looks like there is another blog here regarding the power needed once we start using EVs a lot more. One of us will write that blog soon.

  16. Brad Albing
    April 29, 2013

    We were still doing a bit of R&D at Picker/Philips Medical before I left. Tho' I'm sure without me there, that probably stopped.

  17. sonic012
    May 1, 2013

    First off, I've had to build synthesizers that required 95 dB of spurious-free dynamic range. I've had to build receivers with a sensitivity of -138 dBm. And I couldn't just go down to National Semiconductor and buy one, because these had to be radiation-hard.

    And when you say people can't build an A/D converter with better than 12 bits of resolution… seriously, I don't know who you have been talking to. And when you say that resolution is not necessary, just look at some of the medical applications. Look at some military applications.

    You obviously don't have the experience to understand. Your whole article is just total nonsense.

    You're telling me that people make this just for fun?



    Click the 16-bit button. And it is reliable. That's how they can sell it, you see?

  18. Bill_Jaffa
    May 1, 2013

    Whoa there–I am not saying that you can't have >12-bit resolution; I have worked with and verified 24-bit converters myself.

    All I am saying is that you have to do a sanity check to see whether your overall channel has that many bits of real information to give you, and that you are not fooling yourself with the precision your converter offers. There are lots of sources of error, distortion, and noise between the source and the converter that can make those extra bits somewhat meaningless–not in all designs, of course, but in many I have seen.

  19. sonic012
    May 1, 2013


    I'm just trying to talk to you. 

    First off, those A/D converters aren't designed by digital engineers. A digital engineer wouldn't have a clue how to design those things.

    Second, sure you have to test it. But I've never bought an A/D from National that didn't work out of the box.

    Third, they aren't trimmed or tuned.

    Fourth, it's never going to be possible to process a signal as fast in digital as in analog. Never.

    I mean, please teach me. Where is the 30-GHz digital downconverter with a sensitivity of -138 dBm?


    I'm all ears.

    Regards, Alex

  20. Netcrawl
    May 1, 2013

    @Sonic012 I agree with you. @Bill, I think we need to provide some links here just to prove it: how to make it, and whether this one really works.

  21. Brad Albing
    May 2, 2013

    I agree with Bill's perspective here – he's looking at this from a system point-of-view and suggesting that we need to consider the big picture before we drop a 24-bit ADC in and expect to get 24 bits of useful, believable data.

  22. Brad Albing
    May 2, 2013

    Alex – I think Bill thought you were coming on a bit strong, so that might explain some of what transpired. And I don't think there's really a disagreement on design topologies and what each is capable of.

  23. sonic012
    May 2, 2013

    Brad, maybe you're right.

    I apologize to Bill if I came on a bit strong.

    I just know that you can reliably buy a 16 bit converter, and it always works. And I've tested for the dynamic range many times.

    Sorry if I was a bit aggressive there. I guess I'm not sure that Bill and I totally agree.

    Brad, I agree that you can't, nor should you, just drop in a 24-bit converter. I thought what Bill said was 12 bits, or roughly 72 dB of SFDR. And that's just no big deal nowadays… thanks to our analog engineers. 🙂

    And it is absolutely necessary in many applications.



  24. sonic012
    May 2, 2013

    “Of course, you can do better if you trim the signal channel, and calibrate it; in fact, if you calibrate it under various conditions and operating points (voltage, temperature), you may get down to 0.05 percent (11 bits) or even 0.025 percent (12 bits). It's really hard to do better than that, and rarely needed.”

    So I just don't agree. It is very often needed and they aren't trimmed. And I've never seen one that didn't work.

  25. Scott Elder
    May 3, 2013


    Bill was talking about sensor ADCs. Where do you think the practical limit exists, if not 12 bits? I've been looking at 16 bits for a while (SAR, not delta-sigma) and none of them have, for example, noise-free codes at 16 bits, but they of course exist–even at 18 bits, also not noise-free. Certainly one can get around the noise by averaging, but I'm just curious where you draw the line of diminishing returns. Thanks.

  26. sonic012
    May 4, 2013


    Of course they're not noise-free. I'd like to know if you've ever heard of an analog or digital circuit that was “noise free.” Quite frankly, I don't understand your point.

    Dynamic range of an ADC means that when you put two tones in, within the bandwidth (no cheating), you detect the tones at some minimum level, and when you increase the amplitude to the point where intermodulation pops up through the noise floor, that's the max. So dynamic range is the difference between the two.

    Scott, there is always noise. There is no such thing as a circuit without noise.

    Now I restate my point. You can buy one from National and it works out of the box at 12 bits. Even 16 bits. It works out of the box.



  27. Scott Elder
    May 5, 2013

    @ Alex

    Are you familiar with what “noise-free codes” means? It's different from noise. Please study up on noise-free codes, and then on ENOB and effective resolution in analog-to-digital converters.

    I'm simply asking for your opinion on what the upper limit is for practical sensor designs, since you were adamant that it wasn't 12 bits. And I was hoping you would provide the answer in terms of the number of bits.

    A 16-bit SAR does not have 16 bits of NOISE-FREE CODES. A 24-bit delta-sigma does. But then, are there sensors with 20 bits of linearity?

    So my question is simply: what do you assume as the practical upper limit in sensor signal processing, since you've told Bill it is not 12 bits? Where do you set the point of diminishing returns? Noise can be solved with averaging… but then there is linearity… and then temperature drift… I don't know the answer, which is why I was asking. I think I have a pretty good handle on noise, but not on sensor limitations like linearity, etc.
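    For readers following the thread, the standard relationship between RMS noise, effective resolution, and noise-free resolution can be sketched as below; the full-scale range and noise figure are illustrative assumptions, not from any datasheet:

```python
import math

# Effective resolution counts RMS noise; noise-free resolution counts
# peak-to-peak noise, conventionally taken as ~6.6x RMS for Gaussian noise.
def effective_resolution_bits(fsr_volts, rms_noise_volts):
    return math.log2(fsr_volts / rms_noise_volts)

def noise_free_bits(fsr_volts, rms_noise_volts):
    return math.log2(fsr_volts / (6.6 * rms_noise_volts))

fsr = 5.0    # assumed full-scale range, volts
rms = 10e-6  # assumed 10-uV RMS input-referred noise
print(f"effective:  {effective_resolution_bits(fsr, rms):.1f} bits")  # ~18.9
print(f"noise-free: {noise_free_bits(fsr, rms):.1f} bits")            # ~16.2
```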



  28. TheMeasurementBlues
    May 6, 2013

    Bill, are you kidding? I covered test and measurement for 20 years. Don't you think I've heard claims about accuracy and precision? But remember, having hung around metrology people, I learned that there's no such thing as accuracy in a measurement. It's uncertainty. And every uncertainty comes with a confidence level. That means for any measurement you can only have a limited level of confidence that the actual value is within some stated tolerance.

    There once was a company that used the slogan “Make measurements, not estimates.” Completely foolish. All measurements are estimates to some degree.

    Never trust the rightmost digit, it can always be off by ±1 because of quantization.

    Plus, for any analog measurement there is noise. Measurement companies loved to claim that their measurements had the lowest noise, and therefore the best accuracy. But as for accuracy, see above.
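    The ±1-count point can be put into numbers; a minimal sketch, with the converter's range and bit count assumed for illustration:

```python
import math

# One LSB of an N-bit converter, the +/- half-LSB reading uncertainty,
# and the equivalent RMS quantization noise, q / sqrt(12).
fsr_volts = 10.0   # assumed full-scale range
bits = 16          # assumed resolution
q = fsr_volts / 2**bits
print(f"1 LSB           = {q * 1e6:.1f} uV")                  # ~152.6 uV
print(f"uncertainty     = +/-{q / 2 * 1e6:.1f} uV")           # ~76.3 uV
print(f"RMS quant noise = {q / math.sqrt(12) * 1e6:.1f} uV")  # ~44.0 uV
```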

  29. Brad Albing
    May 7, 2013

    >>Never trust the rightmost digit, it can always be off by ±1 because of quantization.

    And never trust the Marketing guys a.k.a. the Sales guys. They just want to move product, so expect little correlation to Reality or Truth.

    >>There once was a company that used the slogan “Make measurements, not estimates.” Completely foolish. All measurements are estimates to some degree.

    Again, see above.

  30. TheMeasurementBlues
    May 7, 2013

    Brad, the company with that claim is still very much in existence.
