In some articles on driving sampling ADCs, you see the settling behaviour at the inputs being tweaked with resistors and capacitors. It’s good to see the subject raised, but the treatment always seems rather empirical to me, and doesn’t explain where all the ringing you see comes from. You all want to know, surely? So here is some simulation work I did many, many years back, which may help to explain it.
High speed ADCs sample the input voltage onto internal capacitors, so there’s a charging current. Rapidly-changing input voltages must be acquired to high accuracy during the short fraction of the sampling cycle reserved for the charge transfer. There’s usually no buffering between the input terminals and the sampling switches. So, the time-domain behaviour of the charge flow is determined by the time constants formed between the internal capacitors and the impedances in the charging current path, both external and internal to the chip.
This charging behaviour is outside the control of either the ADC designer or the guy who writes the datasheet. If external impedances affect the settling behaviour of the charging waveform (and reader, they do), they may prevent the input voltage from being acquired to sufficient accuracy in the time available. Level- and slope-dependent errors follow, appearing as gain and linearity problems even on low frequency input signals.
Such an ADC is – quelle surprise – a sampled data system and not well suited to a continuous transient analysis. However, the charging behaviour inside one sampling period is entirely predicted by the response of the equivalent input network to a voltage step input representing the clock. The effect of external components can be examined by simulating the combined external and internal network as a filter (hey, what did you expect?) in the time domain, using the clock as the input signal. This approach isn’t accurate beyond the next clock edge – but the system would be well inaccurate anyway if it hasn’t settled by then.
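That time-domain approach is easy to sketch for yourself. Here's a minimal second-order approximation of the input loop (ignoring the internal ladder detail, and lumping the 2 pF sampling capacitor into the external capacitance – values are the mid-sweep ones used in the plots below); it's a sanity-check sketch, not a substitute for simulating the full network:

```python
import numpy as np

def step_response(t, R, L, C):
    """Unit-step response of a series R-L driving a grounded C
    (second-order low-pass). Underdamped case only, because that's
    where the ringing comes from."""
    wn = 1.0 / np.sqrt(L * C)              # natural frequency, rad/s
    zeta = (R / 2.0) * np.sqrt(C / L)      # damping ratio
    assert zeta < 1.0, "underdamped case only"
    wd = wn * np.sqrt(1.0 - zeta**2)       # ringing frequency
    env = np.exp(-zeta * wn * t)           # decay envelope
    return 1.0 - env * (np.cos(wd * t)
                        + zeta / np.sqrt(1.0 - zeta**2) * np.sin(wd * t))

# Rext = 30 ohm, Lopeff = 160 nH, Cextx2 = 30 pF plus the 2 pF sampling cap
R, L, C = 30.0, 160e-9, 32e-12
t = np.linspace(0.0, 100e-9, 4001)         # one half-cycle of the 5 MHz clock
v = step_response(t, R, L, C)
print(f"overshoot = {v.max() - 1.0:.0%}")  # roughly 50% -- a healthy 'ping'
```

With these plausible values the damping ratio comes out around 0.2, so the clock edge excites a 50%-ish overshoot that must ring down before the sample is taken.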
The equivalent circuit is shown in Figure 1. It’s derived from the schematic of a (now fairly old) Burr-Brown ADC with one topological change for simulation convenience (the final branch of the ladder is rearranged so that the 2 pF sampling capacitor is grounded; also, the whole thing is a single-ended equivalent of this normally differential circuit). The internal detail only has a secondary effect, compared to the external components.
Basic circuit for analysis.
The input is a 5 MHz square wave – a slow sampling rate for this 80 MHz-rated converter, chosen to illustrate the poor settling times which can result from plausible external component values. The graphs show the settling behaviour of the voltage on the sampling capacitor. Disregard the first of the three clock cycles (see how it looks slightly different); I didn’t let the circuit get to steady state, my bad. The ±10% plot scale shows just how gross these effects can be.
I’ve shown three parameter sweeps: the capacitor Cextx2, the series resistor Rext and the effective inductance of the buffer Lopeff (the integrator-like noise gain bandwidth of the amplifier transforms its output resistance Ropol into an inductance – homework time, if you don’t understand why).
Layout inductance is included, but the inductive component of the source impedance is completely dominated by that rising closed-loop output impedance of the driving amplifier. This is the main reason why the settling performance of such systems is improved by using wide-band amplifiers – not some hand-wavy stuff about how well the amplifier’s output stage drives the “difficult” input impedance of the ADC. Now, let’s sweep. First, the capacitor Cextx2:
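A hint for the homework: above the closed-loop corner, the amplifier’s output impedance rises roughly in proportion to frequency, i.e. it looks inductive, with an effective value of Ropol/(2π·f_ngbw). A quick check that this reproduces the inductance used in these sweeps:

```python
import math

Ropol  = 100.0    # amplifier open-loop output resistance, ohms
f_ngbw = 100e6    # noise gain bandwidth, Hz
Lopeff = Ropol / (2 * math.pi * f_ngbw)   # effective output inductance
print(f"Lopeff = {Lopeff * 1e9:.0f} nH")  # prints "Lopeff = 159 nH"
```

That 159 nH is the 160 nH quoted below for a 100 ohm, 100 MHz amplifier.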
Input capacitor swept between 3 pF and 300 pF (slowest).
As the capacitor value increases, the waveform becomes better damped but takes longer to settle. Here, Rext = 30 ohm and Lopeff = 160 nH (corresponding to an amplifier with an open-loop Ropol of 100 ohm and a noise gain bandwidth of 100 MHz). Adding more capacitance lengthens the settling time, whatever tweaks are done to “nicen up” the actual waveform. For high-speed systems you should make this capacitance as small as possible. Next, the resistor Rext:
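Treating the input loop as a simple series R-L-C (ignoring the internal ladder, as an approximation) makes that trend quantitative: the damping ratio rises as the square root of C while the resonant frequency falls, so a bigger capacitor rings less but moves more slowly:

```python
import math

R, L = 30.0, 160e-9                       # Rext and Lopeff as in Figure 2
for C in (3e-12, 30e-12, 300e-12):        # the swept Cextx2 values
    zeta = (R / 2.0) * math.sqrt(C / L)              # damping ratio
    fn = 1.0 / (2 * math.pi * math.sqrt(L * C))      # resonant frequency
    print(f"C = {C*1e12:3.0f} pF: zeta = {zeta:.2f}, fn = {fn/1e6:5.1f} MHz")
```

Even at 300 pF the loop is still underdamped with 30 ohm in series – but the resonance has dropped to around 23 MHz, uncomfortably slow for an 80 MHz converter.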
External resistance swept between 3 ohm and 300 ohm (slowest).
Sweeping that resistor also changes the external RC time constant, but increasing Rext has a more dramatic effect than increasing Cextx2 because of the series voltage drop. Here Cextx2 = 30 pF and Lopeff = 160 nH. Increasing the resistance value improves the damping of the resonant circuit formed at the input, particularly for low values of input capacitance, but does slow the system down. The value of the resistor needs to rise as the source inductance rises (i.e. as the noise gain bandwidth of the op amp reduces) in order to preserve a clean acquisition waveform.
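In the same series R-L-C approximation, the resistance that just kills the ringing (damping ratio of 1) is R = 2·sqrt(L/C); checking it against the values used in this sweep:

```python
import math

L, C = 160e-9, 30e-12                  # Lopeff and Cextx2 as in Figure 3
R_crit = 2.0 * math.sqrt(L / C)        # critical damping, zeta = 1
print(f"R_crit = {R_crit:.0f} ohm")    # about 146 ohm
```

This squares with the plots: 3 ohm and 30 ohm both ring, while 300 ohm is overdamped and settles sluggishly through its RC time constant.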
Now, the inductance – effectively, we are sweeping the amplifier’s GBW. Figure 4 shows the effect of changing the noise gain bandwidth of the amplifier from 1000 MHz down to 10 MHz, with Rext = 30 ohm and Cextx2 = 30 pF. As you might expect, using a slower opamp significantly extends the settling time – and that large overshoot could cause input stage problems.
Effective source inductance swept between 16 nH and 1600 nH (slowest).
As the opamp GBW is reduced, the achievable clean acquisition time (with best Rext and Cextx2) also rises, showing that slower buffer amplifiers may be fundamentally unable to support accurate acquisition in your system at the speed you need. The waveform variations indicate that if the value of one of the three main external components is fixed (say the opamp can’t be changed, or the ADC has a large Cin) you need to optimize both the others for good results – and that this fixed choice might make it impossible to achieve the settling time you need!
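Putting the two earlier observations together shows why this is fundamental (still the series R-L-C approximation, assuming Ropol = 100 ohm and Cextx2 = 30 pF): the effective inductance scales inversely with noise gain bandwidth, so the critically-damping resistor must rise as the amplifier gets slower, dragging the RC settling time constant up with it:

```python
import math

Ropol, C = 100.0, 30e-12
for f_ngbw in (1000e6, 100e6, 10e6):        # amplifier noise gain BW, Hz
    L = Ropol / (2 * math.pi * f_ngbw)      # effective source inductance
    R_crit = 2.0 * math.sqrt(L / C)         # resistor for critical damping
    tau = R_crit * C                        # resulting RC time constant
    print(f"GBW = {f_ngbw/1e6:5.0f} MHz: L = {L*1e9:6.1f} nH, "
          f"R_crit = {R_crit:4.0f} ohm, R*C = {tau*1e9:5.2f} ns")
```

A tenfold drop in GBW costs you roughly a 3x longer clean-acquisition time constant, however cleverly you pick the resistor.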
Hope this gives you a feeling for where the ‘ping’ comes from, and how to investigate it in your systems – try it yourself! Has this one rung you when you weren’t expecting it? – Kendall.