Editor’s note: Our guest blogger this month is David Buchanan. He received a BSEE from the University of Virginia in 1987. Employed in marketing and applications engineering roles by Analog Devices, Adaptec, and STMicroelectronics, he has experience with a variety of high-performance analog semiconductor products. He is currently a senior applications engineer with ADI’s High Speed Converters product line in Greensboro, North Carolina.
Q: Can you explain why the minimum and maximum gain errors specified by my ADC differ so much?
A: Gain is not usually a key specification when choosing a high-speed ADC for a particular application. Noise, distortion, power dissipation, and price get a lot more attention during that phase of the design. But over the years, we’ve learned that once an ADC and all of the other devices in the signal chain are identified, some lucky engineer gets to calculate the gain variation of the composite signal chain and determine how it will affect the system. The ADC is usually not the dominant contributor to the overall variation, but some devices are worse than others.
Gain error, defined as the difference between the measured full scale and the ideal full scale, is usually expressed as a percentage of full scale. The worst gain error specification that I’ve seen is ±10% FS, which is roughly equivalent to ±1 dB. What concerns some users is the seemingly lopsided difference between the minimum and maximum gain errors specified by some ADCs, and I’m sympathetic, as some devices have minimum/maximum % FS specifications of –6/+2, –1.5/+3.5, and even –10/0 (that’s right, all parts are below the nominal!). Users aren’t typically upset by the specifications; these are analog-to-digital converters, after all, not purely analog components, so most inquiries are simply to make sure they understand the reason.
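To see why ±10% FS works out to roughly ±1 dB, it helps to run the numbers. The short sketch below (mine, not from the column) converts a gain error expressed in % FS to dB:

```python
import math

def gain_error_db(error_pct_fs):
    """Convert a gain error in % of full scale to dB.

    A gain error of e% FS means the measured full scale is
    (1 + e/100) times the ideal full scale.
    """
    return 20 * math.log10(1 + error_pct_fs / 100)

# +10% FS -> about +0.83 dB; -10% FS -> about -0.92 dB.
# So a +/-10% FS spec is roughly, though not exactly, +/-1 dB.
print(gain_error_db(10), gain_error_db(-10))
```

Note that the dB equivalent is not symmetric: +10% and –10% map to slightly different magnitudes in dB, which is why "roughly ±1 dB" is the honest way to state it.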
So, why the big difference? Several factors contribute to the gain error, including reference voltage error, reference buffer gain error, and channel-to-channel variation on multichannel ADCs, but the number one reason is that the true nominal input range does not line up with the specified nominal input range. This may sound crazy, but some good reasons exist. One reason that users would probably never think about is that the target input range is often set before the ADC is designed or characterized, perhaps because the device was intended to be functionally or pin compatible with another device. That’s what happened with the part whose minimum/maximum gain specification is –10/0% FS. It was designed to be functionally compatible with an older design that specified a 2-V p-p input range with a minimum/maximum gain range of –4.2/+4.2%.
If the ADC’s gain variation is significant within the signal chain, I recommend redefining the nominal input range to be in the center of the distribution. In the case of the –10/0% FS device, just retarget the nominal input range to be 5% low, or 1.9 V p-p; relative to that new nominal, the gain error becomes a symmetric spec of roughly ±5% FS. I hope this helps clear up some of the confusion.
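The recentering step above is simple arithmetic, and a small sketch makes it concrete (the helper function name is mine, used only for illustration):

```python
def recenter_input_range(nominal_vpp, gain_min_pct, gain_max_pct):
    """Shift the nominal input range to the center of the gain-error
    distribution and return (new nominal V p-p, symmetric error in
    % of the new full scale)."""
    center_pct = (gain_min_pct + gain_max_pct) / 2   # e.g. -5 for a -10/0 spec
    new_vpp = nominal_vpp * (1 + center_pct / 100)   # 2.0 V p-p -> 1.9 V p-p
    hi = nominal_vpp * (1 + gain_max_pct / 100)      # worst-case max full scale
    new_err_pct = 100 * (hi - new_vpp) / new_vpp     # symmetric limit about new nominal
    return new_vpp, new_err_pct

# For the -10/0% FS part with a 2-V p-p nominal range:
print(recenter_input_range(2.0, -10, 0))   # (1.9, ~5.26)
```

Strictly speaking, the recentered spec is about ±5.26% of the new 1.9-V p-p full scale rather than exactly ±5%, because the percentage is now referred to the smaller nominal range, but for signal-chain budgeting the round number is what matters.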