This edition of Signal Chain Basics is brought to you by Rafael Ordonez, SAR ADC Applications Engineer, Texas Instruments
An analog-to-digital converter's (ADC) sampling rate is determined by a clock signal that controls how often a voltage snapshot is taken for each conversion. However, clock signal timing isn't perfect; it always carries some amount of jitter, which becomes part of the overall ADC sampling jitter.
In practice, ADC sampling jitter cannot be estimated accurately from first principles because the sampling clock signal is always synthesized through complex interactions of crystals, phase-locked loops (PLLs), interrupts, timers and software. For example, a sampling clock synthesized by a low-power microcontroller can have jitter values ranging from a couple of nanoseconds to microseconds, depending on peripheral interrupt handling. Likewise, when a sampling clock is synthesized by a field-programmable gate array (FPGA), jitter can vary from tens of picoseconds to nanoseconds from the PLL settings alone.
Fortunately, among the many types of jitter, the one that most commonly dominates ADC sampling jitter is period jitter, which is easily measured on an oscilloscope as the standard deviation of the clock's period. Thus, once period jitter has been measured, it is possible to accurately estimate the noise due to sampling jitter (jitter noise) and how much this noise degrades the effective number of bits (ENOB) over frequency.
Jitter noise vs sampling jitter
Sampling jitter is the timing error of the actual sampling instant relative to an ideal sampling clock. Jitter is random, so it generates random conversion errors known as jitter noise. However, jitter noise is not constant over frequency: at higher frequencies the input changes faster, so a given timing error produces a larger jitter noise (Figure 1).
Now that jitter noise (σjitter-noise) and sampling jitter (σsampling-jitter) have been defined as a conversion error and a timing error, respectively, they can be related through the root-mean-square (RMS) of the signal derivative, where the RMS of delta-code is jitter noise and the RMS of delta-time is sampling jitter:
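In symbols, one consistent way to state this relationship (a reconstruction using the definitions above, where code(t) denotes the converter output in LSBs) is:

$$\sigma_{jitter\text{-}noise} = \mathrm{RMS}\!\left(\frac{d\,code}{dt}\right)\cdot \sigma_{sampling\text{-}jitter}$$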
With a full-scale sinusoid of 2^N peak-to-peak codes, where N is the resolution of the ADC in bits, the RMS of the sinusoid's derivative is simply the RMS of the sinusoid times 2πf, where f is the input frequency:
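Written out, with a full-scale sinusoid RMS of 2^N/(2√2) LSB, this takes the form:

$$\mathrm{RMS}\!\left(\frac{d\,code}{dt}\right) = 2\pi f\cdot\frac{2^{N}}{2\sqrt{2}}\ \ \mathrm{LSB/s}$$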
With respect to a full-scale input, RMS jitter noise over frequency in least-significant bits (LSB) becomes:
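Combining the derivative RMS with the sampling-jitter relation gives a plausible form of this expression:

$$\sigma_{jitter\text{-}noise}(f) = 2\pi f\cdot\sigma_{sampling\text{-}jitter}\cdot\frac{2^{N}}{2\sqrt{2}}\ \ \mathrm{LSB\ (RMS)}$$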
Sampling jitter and ENOB
ENOB calculation compares ideal ADC noise against other non-ideal noises from signal-to-noise-ratio (SNR).
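The standard definition, which the rest of this discussion assumes, is:

$$\mathrm{ENOB} = \frac{\mathrm{SNR_{dB}} - 1.76\ \mathrm{dB}}{6.02\ \mathrm{dB/bit}}$$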
Unlike other non-ideal noise sources such as thermal noise, jitter noise increases with frequency, degrading ENOB at higher frequencies. To combine ENOB and jitter noise, SNR is defined to include both the ideal quantization noise (σideal-quantization-noise) and the jitter noise:
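With both noise terms referred to LSBs and the full-scale sinusoid RMS of 2^N/(2√2) LSB, this combined SNR can be written as:

$$\mathrm{SNR} = 20\log_{10}\!\left(\frac{2^{N}/(2\sqrt{2})}{\sqrt{\sigma_{ideal\text{-}quantization\text{-}noise}^{2}+\sigma_{jitter\text{-}noise}^{2}}}\right)$$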
ADC resolution creates ideal quantization noise as rounding errors on a sinusoid input. These errors approximate a sawtooth wave bounded by ±0.5 LSB (1 LSB peak to peak). Thus, the ideal quantization noise is defined as the RMS of a unit-peak sawtooth wave times 0.5 LSB.
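Since a unit-peak sawtooth wave has an RMS of 1/√3, this works out to the familiar value:

$$\sigma_{ideal\text{-}quantization\text{-}noise} = \frac{0.5}{\sqrt{3}} = \frac{1}{\sqrt{12}} \approx 0.289\ \mathrm{LSB}$$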
To solve ENOB with respect to sampling jitter, equations (3) to (6) are combined and simplified below:
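Carrying out that substitution (jitter noise and quantization noise into the SNR and ENOB definitions) yields a closed form consistent with the relations above:

$$\mathrm{ENOB}(f) = N - \frac{1}{2}\log_{2}\!\left(1 + \frac{3}{2}\cdot 2^{2N}\cdot\left(2\pi f\,\sigma_{sampling\text{-}jitter}\right)^{2}\right)$$

At σsampling-jitter = 0 this reduces to ENOB = N, as expected for an ideal converter.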
In summary, as shown in Figure 2, at low frequencies ideal quantization noise should dominate, maintaining the ENOB performance of the converter. However, depending on sampling jitter, ENOB can begin to degrade at frequencies below 100 kHz, a range that covers many ultrasound and audio applications. Therefore, it is important to measure period jitter and optimize it during system development to minimize ENOB degradation due to jitter. Equation (7) can also be used to quickly estimate whether jitter is low enough for a particular application, especially when the sampling clock is synthesized by a low-power microcontroller.
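The closed-form ENOB estimate described above is easy to script for quick what-if checks during system design. The sketch below is a minimal illustration of that formula; the function name and parameter names are the author's own choices, not part of any TI collateral:

```python
import math

def enob_with_jitter(n_bits, f_in_hz, jitter_rms_s):
    """Estimate ENOB of an otherwise-ideal N-bit ADC whose only noise
    sources are quantization noise and sampling-jitter noise.

    n_bits       : ADC resolution in bits
    f_in_hz      : full-scale sinusoid input frequency, Hz
    jitter_rms_s : RMS period jitter of the sampling clock, seconds
    """
    # Ratio of jitter-noise power to quantization-noise power:
    # (3/2) * 2^(2N) * (2*pi*f*sigma)^2
    jitter_term = 1.5 * (2 ** (2 * n_bits)) * (2 * math.pi * f_in_hz * jitter_rms_s) ** 2
    return n_bits - 0.5 * math.log2(1 + jitter_term)

# Example: a 16-bit ADC driven by a clock with 1 ns RMS jitter retains
# only about 7 effective bits at a 1 MHz full-scale input.
print(enob_with_jitter(16, 1e6, 1e-9))
```

A quick sanity check: with zero jitter the function returns the full resolution, and the estimate degrades monotonically as frequency or jitter grows, matching the behavior described above.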
Join us next time when we will discuss how to implement an active, DC-coupled, broadband balun with a differential amplifier.