So, I was consulting with one of our teams that was putting together a cool data acquisition subsystem. They wanted to use (OK, I admit it, I also wanted them to use) my DIY AC line filter, as written up in “Product how-to: Decimation and AC line rejection filtering – a study” and an earlier Filter Wizard piece entitled “Now synthesize your FIR filters using high-school algebra”. “Oh, that’s great,” I said, “because you’ll get some improvement in output resolution too, thanks to the reduction in bandwidth.”
There was the conference call equivalent of a tumbleweed moment. I explained: “It’s because one of your biggest noise contributors is the ADC quantization noise, and the filter will reduce that, giving you an equivalent noise floor closer to 18 bits.”
And then it hit me. “You are taking the full 24-bit output from the hardware Digital Filter Block, aren’t you?”
“Actually, we thought 16 bits would be fine. Seeing as we are using a 16-bit ADC, we didn’t see any point in providing a wider output. I mean, how can you make a 16-bit ADC better just by filtering the output?”
So I prepared this little tutorial gag reel of FFT plots, to show them exactly why they should take more bits of output – which they did, I’m glad to say.
Squeezing a signal through the finite resolution of an N-bit fixed-point number introduces quantization noise just like any other quantization process. We inflict such noise upon ourselves every time we force a data value to have a certain ‘number of bits’.
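If you want to sanity-check that penalty without firing up a simulator, a few lines of scripting will do it. The Python below is my own quick sketch (not part of the original SPICE work); it simply rounds a nearly full-scale sine to 16 bits and compares the measured SNR with the textbook 6.02N + 1.76 dB figure:

import numpy as np

N_BITS = 16
lsb = 2.0 / 2**N_BITS                  # quantizer step for a +/-1 full-scale range
n = np.arange(16384)
x = 0.99 * np.sin(2 * np.pi * 8.154845 / 220.0 * n)   # nearly full-scale test tone

xq = np.round(x / lsb) * lsb           # force the data to 16-bit resolution
err = xq - x                           # the quantization noise we inflicted on ourselves

snr_measured = 10 * np.log10(np.mean(x ** 2) / np.mean(err ** 2))
print(f"measured SNR {snr_measured:.1f} dB   vs   6.02*N + 1.76 = {6.02 * N_BITS + 1.76:.1f} dB")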
The plots are all FFTs of 16384 time points simulated in SPICE. Each plot shows the FFT of a test sinewave before and after passage through the lowpass filter. If the frequency of the test sinewave lines up exactly with a bin frequency of the FFT, the time record has no discontinuity and it’s not necessary to window the data. However, in this condition, the quantization noise becomes correlated to the signal and so it doesn’t ‘look like’ uniformly distributed random noise any more. Using a window – here I used the Blackman-Harris (B-H) window available in LTspice – has an averaging effect across nearby bins and can reduce the variation in level between bins, revealing more structure in the noise.
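If you’d like to reproduce the windowing effect outside SPICE, here’s a rough numpy/scipy equivalent – my own reconstruction, not the LTspice setup used for the plots. It quantizes the test tone to 16 bits, then compares a bin-centred frequency with the ‘awkward’ one, with and without the Blackman-Harris window:

import numpy as np
from scipy.signal import windows

FS, NPTS, N_BITS = 220.0, 16384, 16
lsb = 2.0 / 2**N_BITS
n = np.arange(NPTS)
bh = windows.blackmanharris(NPTS)

def floor_and_spur(f_sig, use_window):
    # FFT of a 16-bit-quantized sine; returns (median noise floor, worst spur) in dB
    x = np.round(0.99 * np.sin(2 * np.pi * f_sig / FS * n) / lsb) * lsb
    w = bh if use_window else np.ones(NPTS)
    db = 20 * np.log10(np.abs(np.fft.rfft(x * w)) * 2 / np.sum(w) + 1e-300)
    k = int(round(f_sig * NPTS / FS))
    mask = np.ones(db.size, bool)
    mask[max(0, k - 8):k + 9] = False          # exclude the signal bin and its immediate skirt
    return np.median(db[mask]), np.max(db[mask])

f_bin = round(8.154845 * NPTS / FS) * FS / NPTS   # nearest bin-centred frequency
for f, name in [(f_bin, "bin-centred"), (8.154845, "awkward")]:
    for win in (False, True):
        floor, spur = floor_and_spur(f, win)
        print(f"{name:11s}  B-H window={win!s:5s}  floor {floor:7.1f} dB   worst spur {spur:7.1f} dB")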
The input signal is rounded to 16-bit resolution in the simulation, to mimic a 16-bit ADC with negligible additional analog noise, which is a good approximation to the super converter in Cypress’s PSoC 3 and PSoC 5LP device families, when running at this resolution. The output from the filter is shown in various precisions: ‘unlimited’, or rounded to 24 bits or 16 bits. Sample rate is 220 sps in all cases – my favourite sample rate for a lot of low frequency applications, as explained in the linked articles.
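The Digital Filter Block itself isn’t reproduced here, but the effect of the output word width is easy to mimic with any old lowpass. In the sketch below I’ve stood in an ordinary 20 Hz FIR from scipy (my arbitrary choice – it’s not the AC-line-rejection filter from the linked articles) and then rounded its output to the three precisions used in the plots:

import numpy as np
from scipy.signal import firwin, lfilter

FS, NPTS, F_SIG = 220.0, 16384, 8.154845
n = np.arange(NPTS)

def quantize(v, bits):
    lsb = 2.0 / 2**bits
    return np.round(v / lsb) * lsb

x16 = quantize(0.9 * np.sin(2 * np.pi * F_SIG / FS * n), 16)   # idealized 16-bit ADC samples

h = firwin(255, 20.0, fs=FS)     # illustrative ~20 Hz lowpass, NOT the DFB's line-rejection filter
y = lfilter(h, 1.0, x16)         # full-precision ('unlimited') filter output

def residual_noise_db(y_out):
    # remove the best-fit sinewave (skipping the filter's start-up transient), report what's left
    m = np.arange(len(h), NPTS)
    c = np.column_stack([np.sin(2 * np.pi * F_SIG / FS * m),
                         np.cos(2 * np.pi * F_SIG / FS * m),
                         np.ones(m.size)])
    seg = y_out[len(h):]
    resid = seg - c @ np.linalg.lstsq(c, seg, rcond=None)[0]
    return 10 * np.log10(np.mean(resid ** 2))

for label, y_out in [("unlimited", y), ("24-bit", quantize(y, 24)), ("16-bit", quantize(y, 16))]:
    print(f"{label:9s} output: residual noise {residual_noise_db(y_out):6.1f} dB re full scale")

Rounding the output back to 16 bits throws away most of the noise reduction the filter just bought you, while 24 bits is effectively as good as unlimited precision – which was the whole point of the conference call.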
There are two sets of plots. One is for a colleague’s ‘awkward frequency’ of 8.154845 Hz (don’t ask) and the other is for the nearest FFT bin frequency to this – which makes quite a difference, as we’ll see. The lighter gray trace is the output of the filter, the darker is the input signal.
All the plots have a vertical range of 0 dB to -330 dB in 30 dB steps – suggested by LTspice for the first plot and then kept for all the others, for consistency. The per-bin noise floor in each FFT is consistent with the SNR you’d expect from the quantization noise alone.
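If you want to check those bin levels against theory, ignore the window correction for a moment and just add the FFT’s processing gain to the sine-wave SNR – a short back-of-envelope calculation:

import numpy as np

N_BITS, NPTS = 16, 16384
snr = 6.02 * N_BITS + 1.76                  # full-scale sine against total quantization noise
gain = 10 * np.log10(NPTS / 2)              # that noise is spread across NPTS/2 bins
print(f"expected per-bin noise floor ~ -{snr + gain:.0f} dBc")   # roughly -137 dBc for these plots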
It’s easy to use simulation to determine what your output noise behavior is going to be. It’s also easy to see that using a filter at the output of your ADC not only enables you to get rid of those pesky AC line components, but also to get a useful reduction in the noise floor. The second fact shouldn’t be a surprise – this is exactly how delta-sigma ADCs work anyway!
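Here’s that second fact in its barest form – oversample, filter (here just an average), decimate, and keep the extra bits. This sketch shows only the plain oversampling gain; a real delta-sigma converter adds noise shaping on top of it, and the 64× ratio and dithered DC input are purely my illustrative choices:

import numpy as np

rng = np.random.default_rng(0)
N_BITS, OSR = 16, 64
lsb = 2.0 / 2**N_BITS

x_true = 0.3
x = x_true + rng.uniform(-lsb / 2, lsb / 2, 1 << 20)   # DC level plus a little dither
xq = np.round(x / lsb) * lsb                           # 16-bit conversion
y = xq.reshape(-1, OSR).mean(axis=1)                   # filter (average) and decimate by 64

rms_before = np.std(xq - x_true)
rms_after = np.std(y - x_true)
print(f"noise reduced by {20 * np.log10(rms_before / rms_after):.1f} dB "
      f"(~{0.5 * np.log2(OSR):.1f} extra bits of effective resolution)")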
So, don’t get floored by noise – get filtering! / Kendall
Kendall:
This sounds a bit (but only a vague bit) like the idea I suggested some time back – I think it was over our breakfast table, and your third cup of tea – of adding pseudo-random noise ahead of an A/D converter (the particular application was to a revised CD recording method) in place of the usual asynchronous jitter that is added to avoid strict quantization noise; but then, on playback, an identical PsR noise sequence would be subtracted. Of course, this requires the inclusion of locking data added to the recording master before the audio data begins.
I read your piece, and at first thought it was about a similar practice of pre/post PsR noise in an A/D/A channel. But as I continued reading, it became less clear. Do you see my confusion?
Barrie
I do indeed, Mr B! My aim with this little piece was only to point out that the noise reduction gains from band-limiting aren't realized if you don't let the digital representation of the signal widen out. But I see the alternative interpretation.
In fact, the very lovely H-P 89410A Vector Signal Analyzer (which we used to call 'Victor' in the old days just so we could say “What's your vector, Victor” to it) used large-scale subtractive dither on its input ADC expressly to improve the linearity and suppress all those pesky spuriae.
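Conceptually the subtractive trick is tiny – here’s a minimal numerical sketch of the scheme you describe (a shared pseudo-random sequence added before the quantizer and subtracted again afterwards; the seed stands in for your ‘locking data’, and none of this pretends to be how the 89410A or any CD mastering chain actually did it):

import numpy as np

SEED = 12345                        # stands in for the 'locking data' shared by record and playback
N_BITS = 16
lsb = 2.0 / 2**N_BITS
n = np.arange(1 << 16)
x = 0.3 * lsb * np.sin(2 * np.pi * n / 1024)           # a signal smaller than one LSB

def quantize(v):
    return np.round(v / lsb) * lsb

dither = np.random.default_rng(SEED).uniform(-lsb / 2, lsb / 2, n.size)
plain       = quantize(x)                              # no dither: every sample rounds to zero
non_sub     = quantize(x + dither)                     # dither added but left in
subtractive = quantize(x + dither) - dither            # identical PsR sequence subtracted on 'playback'

template = np.sin(2 * np.pi * n / 1024)
for name, y in [("plain", plain), ("added only", non_sub), ("subtractive", subtractive)]:
    recovered = 2 * np.mean(y * template) / lsb        # amplitude recovered by correlation, in LSBs
    print(f"{name:11s}  recovered amplitude {recovered:5.2f} LSB,  residual rms {np.std(y - x) / lsb:4.2f} LSB")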
It's lovely to see you here on Planet Analog! best / K
Hi Barrie,
Thanks so much for your informative commentary and for joining our audience on Planet Analog—it is a great honor to have you here on this site! And thanks to Kendall for bringing this educational piece to the Planet Analog audience.