“Do you think you can dissect me with this blunt little tool?” – Hannibal Lecter, ‘Silence of the Lambs’
Without doubt, ENOB (Effective Number Of Bits) has become a “thing” in analog-to-digital converter discourse. But I don’t like it. And I’ll tell you why.
It perpetuates the myth that the number of bits that a converter “has” is a useful measure of performance. The more bits the better. Or, at least, that something really, really important gets improved as you ratchet the number of “bits” in your converter skywards.
ADC vendors must take some blame for this. I did a search, and the vast majority of articles that discuss ENOB are written by people with ADCs to sell – it’s a push parameter, not a pull parameter. And this piece isn’t going to change the status quo ante because, full disclosure, my company (Cypress Semiconductor) makes SoCs containing great ADCs, and of course we’re going to talk about their ENOBs because if we didn’t, the engineering community would be puzzled. But that doesn’t mean that old grumblers like me need to be happy about it.
For me, the only real merit is that “bits” are a logarithmically-scaled measure of the magnitude of something that can vary over a numeric range that’s too large for our puny human brains to get round.
We already have plenty of logarithmic scales for the relationship between the magnitudes of two signals, or between a signal and some reference level. The visual magnitude of celestial objects, the amount of light falling on the film or sensor in your camera (EV – Exposure Value), and the inferred magnitude of rock movement in the Richter scale, all permit us to comprehend parameter variations of thousands or millions to one.
And right here in the world of electronics, I’ll be surprised if any of you engineering-trained readers aren’t completely comfortable with the decibel.
And that’s one of my big beefs. Just like the best tomato ketchup is “100% tomatoes”, ENOB is “100% decibels”. You’ll often see the incantation: ENOB = (SINAD-1.76)/6.02. Now, call me old-fashioned, but this says to me that ENOB – this kind of ENOB, anyway – contains no information that is not already present in SINAD.
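If you want to see just how little is going on, here’s the entire “conversion” as a couple of lines of Python (the function name is mine, purely for illustration):

```python
import math

def enob_from_sinad(sinad_db: float) -> float:
    """The standard textbook rescaling: ENOB = (SINAD - 1.76) / 6.02."""
    return (sinad_db - 1.76) / 6.02

# The 'ideal 16-bit' SINAD of 6.02*16 + 1.76 = 98.08 dB maps straight back:
print(enob_from_sinad(98.08))  # 16.0
```

No new information goes in, and none comes out – it’s SINAD in a different wrapper.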
My second beef, not surprisingly, is that SINAD gives you nowhere near enough information to judge whether the converter is good for your application! In other words, ENOB risks being an unnecessary arithmetic transform on a number that may already be meaningless or misleading for the actual application.
What’s wrong with SINAD – Signal-to-Noise and Distortion? By the way, in the wonderful world of audio engineering, we call it THD+N (Total Harmonic Distortion + Noise). It acquires a negative sign in audio, because we don’t see it as a figure of merit (i.e. more-positive values of SINAD are better) but as a figure of demerit, which should be as close to minus infinity as is permitted by your design skills and by the color of the toilet tissue in the restrooms (which I always thought was an old engineer’s joke until changing the brand of toilet tissue really did make a difference – but that’s a story for another day…)
The main issue is that neither SINAD nor THD+N is a single number; both are functions of signal level, and also of frequency.
Most audio systems – especially digital ones – can have either terrible or terrific values of THD+N, depending on the signal level (and frequency). It’s not surprising, and it’s partly to do with those pesky “bits”. Signals with a lower amplitude just can’t excite all the possible states in a digital representation. So the inevitable quantization noise caused by the jump between one state and the next is a larger proportion of the signal at that moment. If you’re listening to a very quiet piece of classical music on a CD, you’re not listening to 16-bit audio at that moment – it might be 8-bit, or 5-bit… A ‘perfect’ ADC with 65536 possible output states can only have “16-bit” performance when fed with its maximum input signal – though it obviously has 16-bit performance (without the bunny ears) for any signal, in the sense that it does what any system limited by a 16-bit output representation does.
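You can watch this happen with an ideal quantizer model – no real converter required. Here’s my own toy sketch (assuming a mid-tread quantizer spanning ±1 full scale and coherent sampling; nothing vendor-specific), feeding a 16-bit quantizer a sine near full scale and then one 60 dB down:

```python
import numpy as np

def sine_snr_after_quantization(level: float, bits: int = 16, n: int = 1 << 16) -> float:
    """SNR (dB) of an ideal `bits`-bit mid-tread quantizer fed a sine of
    amplitude `level` (1.0 = full scale). A prime number of whole cycles
    fits exactly into the record, so no window is needed."""
    t = np.arange(n)
    x = level * np.sin(2 * np.pi * 1237 * t / n)
    lsb = 2.0 / (1 << bits)                 # full scale spans -1..+1
    q = np.round(x / lsb) * lsb             # ideal uniform quantizer
    err = q - x
    return 10 * np.log10(np.mean(x**2) / np.mean(err**2))

print(round(sine_snr_after_quantization(0.99), 1))   # near full scale: ~98 dB
print(round(sine_snr_after_quantization(0.001), 1))  # at -60 dBFS: only ~38 dB
```

Same “16-bit” converter, a 60 dB swing in measured performance – purely because the small signal can’t exercise the available states.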
So the best that we can say about SINAD – and therefore the ENOB that can be calculated from it – is that it’s a figure of merit for the converter when doing one rather specific thing. It doesn’t tell you much about how well the converter might perform in your application on a completely different signal.
Another way of putting it is that not all ENOBs are the same. Depending on the nature of the deterioration that gets “rolled up” into a fall-off of “effective” number of bits versus the actual number of states in the output word, the behavior might be very different. One converter might have significant non-monotonicity or bad DNL. Another could have a smooth, gentle (almost “analog”) nonlinearity. Look, here’s one that just has a lot of low frequency noise in its front end. And here is one whose reference is picking up some 1 kHz from the operation of the on-chip USB interface. All of these could be converters providing a calculated 13.8 ENOB on their streams of 16-bit output words. But they’ll all behave differently, whether in audio, communications or data acquisition systems.
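To make that concrete, here’s a toy model (the numbers and the 1 kHz “USB pickup” are entirely my invention) of two such converters: one adds front-end white noise, the other a 1 kHz spur, each scaled so that the measured SINAD works out to the same 13.8 “ENOB”:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 48_000, 1 << 15
t = np.arange(n) / fs
sig = np.sin(2 * np.pi * 997 * t)               # the wanted test tone
target_sinad_db = 6.02 * 13.8 + 1.76            # SINAD implied by "13.8 ENOB"
err_power = 0.5 / 10 ** (target_sinad_db / 10)  # sine power is 0.5

# Impairment A: white noise.  Impairment B: a 1 kHz spur (the imagined
# USB-interface pickup), scaled to the same total error power.
noise = rng.standard_normal(n)
noise *= np.sqrt(err_power / np.mean(noise**2))
spur = np.sqrt(2 * err_power) * np.sin(2 * np.pi * 1000 * t)

for err in (noise, spur):
    sinad_db = 10 * np.log10(np.mean(sig**2) / np.mean(err**2))
    print(round((sinad_db - 1.76) / 6.02, 2))   # both report ~13.8 "bits"
```

Identical ENOB on paper; in a real system, broadband hiss and a coherent 1 kHz tone are very different animals.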
Another ‘kind’ of ENOB I’ve seen regularly is applied to converters used in precision weighing systems. I saw one customer’s weigh scale subsystem rated as “17.5 bits ENOB rms, 16 bits ENOB peak to peak”. Now, I know what they meant by that (because they told me). They meant that the peak error in the reading is one part in 2^16, i.e. 65536, and the standard deviation of the readings is equivalent to one part in 2^17.5, i.e. about 185364. Now, to make an ‘ideal’ 16-bit ADC give an output that’s different from its ‘correct’ value by one count, you might have to put in an error signal of amplitude 1 count (i.e. 1 LSB). So what they are saying is that their “16 bits ENOB peak to peak” converter has… an ENOB of 15? And we know from the standard analysis of quantization noise that its rms value is 1 LSB divided by sqrt(12). And when you work that out, it implies that by this definition, the rms ENOB of an ideal 16-bit ADC must be 17.79 bits… I don’t know about you, but by now I’m just rolling my eyes.
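The arithmetic behind that eye-roll, in case you want to check it yourself:

```python
import math

# Ideal quantizer: error is uniform over ±0.5 LSB, so its rms is 1/sqrt(12) LSB.
rms_error_lsb = 1 / math.sqrt(12)

# Under the weigh-scale definition, "rms ENOB" is the x for which one part
# in 2^x equals that rms error on a 16-bit scale:
#   2^x = 2^16 / rms_error_lsb  ->  x = 16 + log2(sqrt(12))
rms_enob = 16 + math.log2(math.sqrt(12))
print(round(rms_enob, 2))  # 17.79
```

So by that definition a *perfect* 16-bit converter scores 17.79 “rms bits” – which tells you the definition, not the converter, is doing the heavy lifting.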
So, if ENOB isn’t the bee’s knees, then what should you be looking for in an ADC (or a DAC, for that matter)? Well, my take is that this is the wrong question. You should always ask yourself what you’re looking for in your signal. In its broadest sense, my work usually involves chasing down information about systems that is hiding in one or more signals acquired from some sensors. The format of those signals can vary widely, and they’ll frequently be changed between raw input and decision output – an “analog-to-digital” converter is one of a large class of domain converters that change the representation of a signal without, ideally, changing the information in it.
The key step is to understand which potential impairments could be the most damaging to your signal. Random noise is usually quite benign (and can in fact help to suppress a lot of the other problems). Simple low-order nonlinearities will generate harmonics that are picked up by a proper THD+N or SINAD measurement – but the main impact on your system might be the effect of the nonlinearity on the system gain. If your system performance will be compromised should the gain at 1 Vrms be slightly different than at 1 mVrms, then pay attention to harmonic distortion. I’ll write something specific on that soon – something else that came from the electricity metrology deep-dive.
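Here’s a small sketch of that gain effect, using an assumed weak cubic nonlinearity y = x − a3·x³ (the coefficient is invented for illustration). For a sine of amplitude A, a little trigonometry says the fundamental comes out scaled by (1 − 3·a3·A²/4) – so the “gain” genuinely depends on signal level:

```python
import numpy as np

def apparent_gain(amplitude: float, a3: float = 0.01) -> float:
    """Fundamental-frequency gain of y = x - a3*x^3 for a sine input,
    measured by projecting the output onto the input tone.
    Analytically this should equal 1 - 0.75 * a3 * amplitude**2."""
    n = 1 << 12
    t = np.arange(n)
    basis = np.sin(2 * np.pi * 7 * t / n)     # 7 whole cycles in the record
    x = amplitude * basis
    y = x - a3 * x**3                         # the assumed weak nonlinearity
    fund = 2 * np.dot(y, basis) / n           # amplitude of the fundamental
    return fund / amplitude

print(apparent_gain(0.001))  # ~1.0 at small signal
print(apparent_gain(1.0))    # ~0.9925 at full scale: the gain moved with level
```

A fraction of a percent of gain shift between 1 mVrms and 1 Vrms is invisible in a single-level SINAD test, but could be fatal in, say, an energy meter.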
But my advice is – be very circumspect about ENOB. It won’t usually tell you how well or badly an ADC will work, in your particular system on your particular signal. Try to find out the actual impairments that the converter suffers from. Have a go at building a model for those impairments – it’s good practice. And remember – picking the right converter may make the difference between being ennobled – and being ENOBbled! / Kendall