Basics of ADCs and DACs, part 2

[Part 1 explains the basics of data sampling and shows how to use undersampling and antialiasing filters. Part 3 examines distortion and noise in practical ADCs.]

ADC and DAC Static Transfer Functions and DC Errors
The most important thing to remember about both DACs and ADCs is that either the input or output is digital, and therefore the signal is quantized. That is, an N-bit word represents one of 2^N possible states, and therefore an N-bit DAC (with a fixed reference) can have only 2^N possible analog outputs, and an N-bit ADC can have only 2^N possible digital outputs. The analog signals will generally be voltages or currents.

The resolution of data converters may be expressed in several different ways, including the weight of the least significant bit (LSB), parts per million of full scale (ppm FS), and millivolts (mV). Different devices (even from the same manufacturer) will be specified differently, so converter users must learn to translate between the different types of specifications if they are to compare devices successfully. The size of the least significant bit for various resolutions is shown in Figure 2-7.


Figure 2-7: Quantization—The Size of a Least Significant Bit (LSB).
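As a quick sanity check, the LSB weights tabulated in Figure 2-7 can be reproduced in a few lines of code. This is only an illustrative sketch; the 10 V full-scale value is an assumed example, not taken from the figure:

```python
def lsb_weight(n_bits, v_fs=10.0):
    """Weight of 1 LSB of an n-bit converter, expressed as a fraction
    of full scale, in ppm FS, and in mV (for an assumed v_fs range)."""
    fraction = 1.0 / 2 ** n_bits        # 2^N states span full scale
    return fraction, fraction * 1e6, fraction * v_fs * 1e3

# 12 bits: 1 LSB = 1/4096 of FS, about 244 ppm FS, or 2.44 mV on a 10 V range
frac, ppm, mv = lsb_weight(12)
```

Running the same function for 16 or 24 bits shows how quickly the LSB shrinks: at 16 bits, 1 LSB on a 10 V range is already down to about 153 microvolts.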

Before we can consider the various architectures used in data converters, it is necessary to consider the performance to be expected, and the specifications that are important. The following sections will consider the definition of errors and specifications used for data converters. This is important in understanding the strengths and weaknesses of different ADC/DAC architectures.

The first applications of data converters were in measurement and control where the exact timing of the conversion was usually unimportant, and the data rate was slow. In such applications, the dc specifications of converters are important, but timing and ac specifications are not. Today many, if not most, converters are used in sampling and reconstruction systems where ac specifications are critical (and dc ones may not be). These will be considered in the next part of this section.

Figure 2-8 shows the ideal transfer characteristics for a 3-bit unipolar DAC, and Figure 2-9 a 3-bit unipolar ADC. In a DAC, both the input and the output are quantized, and the graph consists of eight points. While it is reasonable to discuss the line through these points, it is very important to remember that the actual transfer characteristic is not a line, but a number of discrete points.

Figure 2-8: Transfer Function for Ideal 3-Bit DAC.

Figure 2-9: Transfer Function for Ideal 3-Bit ADC.

The input to an ADC is analog and is not quantized, but its output is quantized. The transfer characteristic therefore consists of eight horizontal steps (when considering the offset, gain, and linearity of an ADC we consider the line joining the midpoints of these steps).

In both cases, digital full scale (all 1s) corresponds to 1 LSB below the analog full scale (the reference, or some multiple thereof). This is because, as mentioned above, the digital code represents the normalized ratio of the analog signal to the reference.

The (ideal) ADC transitions take place at 1/2 LSB above zero, and thereafter every LSB, until 1-1/2 LSB below analog full scale. Since the analog input to an ADC can take any value, but the digital output is quantized, there may be a difference of up to 1/2 LSB between the actual analog input and the exact value of the digital output. This is known as the quantization error or quantization uncertainty, as shown in Figure 2-9. In ac (sampling) applications this quantization error gives rise to quantization noise, which will be discussed in the next section.
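The mid-tread characteristic of Figure 2-9 (first transition at 1/2 LSB, error bounded by ±1/2 LSB) can be modeled directly. This is an illustrative sketch of an ideal unipolar converter, not any particular device:

```python
def ideal_adc(v_in, n_bits, v_fs):
    """Ideal unipolar n-bit ADC: transitions at 1/2 LSB, 3/2 LSB, ...
    up to 1-1/2 LSB below analog full scale."""
    lsb = v_fs / 2 ** n_bits
    code = int(v_in / lsb + 0.5)               # rounding puts transitions at k + 1/2 LSB
    return max(0, min(code, 2 ** n_bits - 1))  # clamp to the 2^N output codes

def ideal_dac(code, n_bits, v_fs):
    """Ideal DAC: code k maps to k LSBs; all-1s is 1 LSB below v_fs."""
    return code * v_fs / 2 ** n_bits
```

For any input between zero and 1-1/2 LSB below full scale, the difference between the input and the reconstructed output stays within ±1/2 LSB, which is exactly the quantization uncertainty of Figure 2-9.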

There are many possible digital coding schemes for data converters: binary, offset binary, ones complement, twos complement, Gray code, BCD, and others. This section, being devoted mainly to the analog issues surrounding data converters, will use simple binary and offset binary in its examples and will not consider the merits and disadvantages of these, or any other, forms of digital code.

The examples in Figures 2-8 and 2-9 use unipolar converters, whose analog port has only a single polarity. These are the simplest type, but bipolar converters are generally more useful in real-world applications. There are two types of bipolar converter. The simpler is merely a unipolar converter with an accurate 1 MSB of negative offset (and many converters are arranged so that this offset may be switched in and out, allowing them to be used as either unipolar or bipolar converters at will). The other, known as a sign-magnitude converter, is more complex: it has N bits of magnitude information and an additional bit that corresponds to the sign of the analog signal. Sign-magnitude DACs are quite rare, and sign-magnitude ADCs are found mostly in digital voltmeters (DVMs).

The four dc errors in a data converter are offset error, gain error, and two types of linearity error. Offset and gain errors are analogous to offset and gain errors in amplifiers, as shown in Figure 2-10 for a bipolar input range. (However, offset error and zero error, which are identical in amplifiers and unipolar data converters, are not identical in bipolar converters and should be carefully distinguished.) The transfer characteristics of both DACs and ADCs may be expressed as D = K + GA, where D is the digital code, A is the analog signal, and K and G are constants. In a unipolar converter, K is zero; in an offset bipolar converter, it is –1 MSB. The offset error is the amount by which the actual value of K differs from its ideal value. The gain error is the amount by which G differs from its ideal value, and is generally expressed as the percentage difference between the two, although it may be defined as the gain error contribution (in mV or LSB) to the total error at full scale. These errors can usually be trimmed by the data converter user. Note, however, that amplifier offset is trimmed at zero input, and then the gain is trimmed near full scale. The trim algorithm for a bipolar data converter is not so straightforward.

Figure 2-10: Converter Offset and Gain Error.
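For a unipolar converter, two endpoint measurements are enough to estimate K and G in D = K + GA. A minimal sketch (the measured codes used in the example are made-up values, and real trim procedures should follow the device data sheet):

```python
def dc_errors(code_at_zero, code_at_near_fs, n_bits):
    """Offset error (in LSB) and gain error (in percent) of a unipolar
    converter, from the measured code at analog zero and at the all-1s
    point (1 LSB below analog full scale)."""
    offset_lsb = code_at_zero                 # ideal K = 0 for a unipolar converter
    g_ideal = 2 ** n_bits - 1                 # ideal code span: 0 up to all 1s
    g_meas = code_at_near_fs - code_at_zero   # measured code span
    gain_pct = 100.0 * (g_meas - g_ideal) / g_ideal
    return offset_lsb, gain_pct
```

For example, a 12-bit converter that reads code 2 at zero input and code 4093 at 1 LSB below full scale has a +2 LSB offset error and roughly –0.1% gain error.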

The integral linearity error of a converter is also analogous to the linearity error of an amplifier. It is defined as the maximum deviation of the actual transfer characteristic of the converter from a straight line, and is generally expressed as a percentage of full scale (but may be given in LSBs). There are two common ways of choosing the straight line: endpoint and best straight line (see Figure 2-11).

Figure 2-11: Method of Measuring Integral Linearity Errors (Same Converter on Both Graphs).

In the endpoint system, the deviation is measured from the straight line through the origin and the full-scale point (after gain adjustment). This is the most useful integral linearity measurement for measurement and control applications of data converters (since error budgets depend on deviation from the ideal transfer characteristic, not from some arbitrary “best fit”), and is the one normally adopted by Analog Devices, Inc.

The best straight line, however, does give a better prediction of distortion in ac applications, and also gives a lower value of “linearity error” on a data sheet. The best fit straight line is drawn through the transfer characteristic of the device using standard curve-fitting techniques, and the maximum deviation is measured from this line. In general, the integral linearity error measured in this way is only 50% of the value measured by endpoint methods. This makes the method good for producing impressive data sheets, but it is less useful for error budget analysis. For ac applications, it is even better to specify distortion than dc linearity, so it is rarely necessary to use the best straight line method to define converter linearity.
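Both integral linearity measures are easy to compute from a table of measured converter output versus code. The sketch below uses a deliberately bowed, made-up transfer characteristic to show why the best-fit number comes out smaller:

```python
import math

def inl_endpoint(xs, ys):
    """Max deviation (in the units of ys) from the line through the endpoints."""
    slope = (ys[-1] - ys[0]) / (xs[-1] - xs[0])
    return max(abs(y - (ys[0] + slope * (x - xs[0]))) for x, y in zip(xs, ys))

def inl_best_fit(xs, ys):
    """Max deviation from the least-squares best straight line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return max(abs(y - (my + slope * (x - mx))) for x, y in zip(xs, ys))

# A symmetrically bowed characteristic with 0.5 LSB of bow at midscale
codes = list(range(9))
outputs = [x + 0.5 * math.sin(math.pi * x / 8) for x in codes]
```

For this bow, the endpoint method reports 0.5 LSB while the best fit reports about 0.28 LSB, consistent with the rule of thumb that the best-fit figure is roughly half the endpoint figure.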

The other type of converter nonlinearity is differential nonlinearity (DNL). This relates to the linearity of the code transitions of the converter. In the ideal case, a change of 1 LSB in digital code corresponds to a change of exactly 1 LSB of analog signal. In a DAC, a change of 1 LSB in digital code produces exactly 1 LSB change of analog output, while in an ADC there should be exactly 1 LSB change of analog input to move from one digital transition to the next.

Where the change in analog signal corresponding to 1 LSB digital change is more or less than 1 LSB, there is said to be a DNL error. The DNL error of a converter is normally defined as the maximum value of DNL to be found at any transition.
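Given a list of measured ADC code-transition levels, the DNL of each code follows directly from the bin widths. A minimal sketch (the transition values used in the example are illustrative, not measured data):

```python
def dnl(transitions, lsb):
    """DNL in LSB for each code bin, from consecutive code-transition
    levels: (bin width / ideal width) - 1."""
    return [(hi - lo) / lsb - 1.0
            for lo, hi in zip(transitions, transitions[1:])]

# Ideal transitions fall every 1 LSB; a wide bin gives positive DNL,
# a narrow bin negative DNL.
example = dnl([0.5, 1.5, 2.75, 3.5], lsb=1.0)
```

The maximum-magnitude entry of this list is the DNL figure quoted on a data sheet; an entry of –1 LSB means the bin has zero width, i.e., a missing code.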

If the DNL of a DAC is less than –1 LSB at any transition (see Figure 2-12), the DAC is nonmonotonic; i.e., its transfer characteristic contains one or more localized maxima or minima. A DNL greater than +1 LSB does not cause nonmonotonicity, but is still undesirable. In many DAC applications (especially closed-loop systems where nonmonotonicity can change negative feedback to positive feedback), it is critically important that DACs are monotonic. DAC monotonicity is often explicitly specified on data sheets, although if the DNL is guaranteed to be less than 1 LSB (i.e., |DNL| ≤ 1 LSB) the device must be monotonic, even without an explicit guarantee.

ADCs can be nonmonotonic, but a more common result of excess DNL in ADCs is missing codes (see Figure 2-13). Missing codes (or nonmonotonicity) in an ADC are as objectionable as nonmonotonicity in a DAC. Again, they result from a DNL whose magnitude exceeds 1 LSB.

Figure 2-12: Transfer Function of Non-Ideal 3-Bit DAC.

Figure 2-13: Transfer Function of Non-Ideal 3-Bit ADC.

Defining missing codes is more difficult than defining nonmonotonicity. All ADCs suffer from some transition noise as shown in Figure 2-14 (think of it as the flicker between adjacent values of the last digit of a DVM). As resolutions become higher, the range of input over which transition noise occurs may approach, or even exceed, 1 LSB. In such a case, especially if combined with a negative DNL error, it may be that there are some (or even all) codes where transition noise is present for the whole range of inputs. There are, therefore, some codes for which there is no input that will guarantee that code as an output, although there may be a range of inputs that will sometimes produce that code.

Figure 2-14: Combined Effects of ADC Code Transition Noise and DNL.

For lower resolution ADCs, it may be reasonable to define "no missing codes" as a combination of transition noise and DNL that guarantees some level (perhaps 0.2 LSB) of noise-free code for all codes. However, this is impossible to achieve at the very high resolutions of modern sigma-delta ADCs, or even at lower resolutions in wide bandwidth sampling ADCs. In these cases, the manufacturer must define noise levels and resolution in some other way. Which method is used matters less than that the data sheet contains a clear definition of the method and of the performance to be expected.

AC Errors in Data Converters
Over the last decade, a major application of data converters has been ac sampling and reconstruction. In very simple terms, a sampled data system is one in which the instantaneous value of an ac waveform is sampled at regular intervals. The resulting digital codes may be used to store the waveform (as in CDs and DATs), or intensive computation on the samples (digital signal processing, or DSP) may be used to perform filtering, compression, and other operations. The inverse operation, reconstruction, occurs when a series of digital codes is fed to a DAC to reconstruct an ac waveform. An obvious example is a CD or DAT player, but the technique is very widely used in telecommunications, radio, synthesizers, and many other applications.

The data converters used in these applications must have good performance with ac signals, but may not require good dc specifications. The first high performance converters to be designed for such applications were often manufactured with good ac specifications but poor, or unspecified, dc performance. Today the design tradeoffs are better understood, and most converters will have good, and guaranteed, ac and dc specifications. DACs for digital audio, however, which must be extremely competitive in price, are generally sold with comparatively poor dc specifications—not because their dc performance is poor, but because it is not tested during manufacture.

While it is easier to discuss the dc parameters of both DACs and ADCs together, their ac specifications are sufficiently different to deserve separate consideration.

Distortion and Noise in an Ideal N-Bit ADC
Thus far we have looked at the implications of the sampling process without considering the effects of ADC quantization. We will now treat the ADC as an ideal sampler, but include the effects of quantization.

The only errors (dc or ac) associated with an ideal N-bit ADC are those related to the sampling and quantization processes. The maximum error an ideal ADC makes when digitizing a dc input signal is ±1/2 LSB. Any ac signal applied to an ideal N-bit ADC will produce quantization noise whose rms value (measured over the Nyquist bandwidth, dc to fs/2) is approximately equal to the weight of the least significant bit (LSB), q, divided by √12. (See Reference 2.) This assumes that the signal is at least a few LSBs in amplitude, so that the ADC output always changes state. The quantization error signal from a linear ramp input is approximated as a sawtooth waveform with a peak-to-peak amplitude equal to q, and its rms value is therefore q/√12 (see Figure 2-15).
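The q/√12 approximation is easy to check numerically: drive an ideal quantizer with a ramp and measure the rms error. A sketch (the resolution and sample count are arbitrary choices):

```python
import math

def ramp_quantization_rms(n_bits=8, n_samples=100000):
    """RMS quantization error of an ideal mid-tread n-bit quantizer
    driven by a full-scale linear ramp (full scale normalized to 1.0)."""
    q = 1.0 / 2 ** n_bits                            # LSB weight
    err2 = 0.0
    for i in range(n_samples):
        v = i / n_samples                            # ramp through full scale
        code = min(int(v / q + 0.5), 2 ** n_bits - 1)
        err2 += (code * q - v) ** 2
    return math.sqrt(err2 / n_samples)

predicted = (1.0 / 2 ** 8) / math.sqrt(12)           # q / sqrt(12)
```

For 8 bits the measured value agrees with q/√12 to within about one percent; the small excess comes from the clipped top code.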

It can be shown that the ratio of the rms value of a full-scale sine wave to the rms value of the quantization noise (expressed in dB) is:

SNR = 6.02N + 1.76 dB

where N is the number of bits in the ideal ADC. This equation is only valid if the noise is measured over the entire Nyquist bandwidth from dc to fs/2, as shown in Figure 2-16. If the signal bandwidth, BW, is less than fs/2, then the SNR within the signal bandwidth BW is increased because the amount of quantization noise within the signal bandwidth is smaller. The correct expression for this condition is given by:

SNR = 6.02N + 1.76 dB + 10 log10(fs/(2·BW))

Figure 2-15: Ideal N-bit ADC Quantization Noise.

Figure 2-16: Quantization Noise Spectrum.

The above equation reflects the condition called oversampling, where the sampling frequency is higher than twice the signal bandwidth. The correction term is often called processing gain. Notice that for a given signal bandwidth, doubling the sampling frequency increases the SNR by 3 dB.
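The SNR and processing-gain arithmetic can be wrapped in a short helper. A sketch:

```python
import math

def ideal_snr_db(n_bits, f_s=None, bw=None):
    """Theoretical SNR of an ideal n-bit ADC.  Over the full Nyquist
    bandwidth this is 6.02*N + 1.76 dB; with oversampling (BW < fs/2)
    the processing gain 10*log10(fs / (2*BW)) is added."""
    snr = 6.02 * n_bits + 1.76
    if f_s is not None and bw is not None:
        snr += 10.0 * math.log10(f_s / (2.0 * bw))   # processing gain
    return snr
```

An ideal 12-bit ADC gives 74.0 dB over dc to fs/2, and each doubling of the sampling frequency for a fixed signal bandwidth buys another 3 dB.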

Although the rms value of the noise is accurately approximated by q/√12, its frequency domain content may be highly correlated with the ac input signal. For instance, there is greater correlation for low amplitude periodic signals than for large amplitude random signals. Quite often, the assumption is made that the theoretical quantization noise appears as white noise, spread uniformly over the Nyquist bandwidth dc to fs/2. Unfortunately, this is not true. In the case of strong correlation, the quantization noise appears concentrated at the various harmonics of the input signal, just where you don't want them.

In most applications, the input to the ADC is a band of frequencies (usually summed with some noise), so the quantization noise tends to be random. In spectral analysis applications (or in performing FFTs on ADCs using spectrally pure sine waves; see Figure 2-17), however, the correlation between the quantization noise and the signal depends upon the ratio of the sampling frequency to the input signal frequency. This is demonstrated in Figure 2-18, where an ideal 12-bit ADC's output is analyzed using a 4096-point FFT. In the left-hand FFT plot, the ratio of the sampling frequency to the input frequency was chosen to be exactly 32, and the worst harmonic is about 76 dB below the fundamental. The right-hand plot shows the effect of slightly offsetting the ratio: the noise spectrum becomes relatively random, and the SFDR improves to about 92 dBc. In both cases, the rms value of all the noise components is q/√12, but in the first case the noise is concentrated at harmonics of the fundamental.
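The coherence effect of Figure 2-18 can be reproduced with a small brute-force DFT. This sketch uses a 256-point record rather than the figure's 4096-point FFT, to keep the O(M²) DFT fast. When fs/fin is exactly 32, the quantized output repeats every 32 samples, so all of the quantization noise is forced onto harmonics of the fundamental:

```python
import math, cmath

def dft_power(x):
    """Magnitude-squared DFT of a real sequence (slow O(M^2) form)."""
    m = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / m)
                    for n in range(m))) ** 2 for k in range(m // 2 + 1)]

def quantized_tone_spectrum(ratio, n_bits=12, m=256):
    """Spectrum of an ideal n-bit ADC digitizing a full-scale sine at
    fs/ratio, over an m-point record (rectangular window)."""
    q = 2.0 / 2 ** n_bits                      # LSB for a +/- full-scale range
    x = [round(math.sin(2 * math.pi * n / ratio) / q) * q for n in range(m)]
    return dft_power(x)

spec = quantized_tone_spectrum(32.0)           # coherent case: fs/fin exactly 32
```

With ratio = 32 every nonzero bin is a multiple of 256/32 = 8, i.e., a harmonic of the fundamental; rerunning with ratio = 32.1 spreads the same total noise power across the whole spectrum.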

Figure 2-17: Dynamic Performance Analysis of an Ideal N-bit ADC.


Figure 2-18: Effect of Ratio of Sampling Clock to Input.

Note that this variation in the apparent harmonic distortion of the ADC is an artifact of the sampling process and of the correlation of the quantization error with the input frequency. In a practical ADC application, the quantization error generally appears as random noise, because of the random nature of the wideband input signal and the additional fact that there is usually a small amount of system noise that acts as a dither signal, further randomizing the quantization error spectrum.

It is important to understand the above point, because single-tone sine wave FFT testing of ADCs is a universally accepted method of performance evaluation. In order to accurately measure the harmonic distortion of an ADC, steps must be taken to ensure that the test setup truly measures the ADC distortion, not the artifacts due to quantization noise correlation. This is done by properly choosing the frequency ratio and sometimes by injecting a small amount of noise (dither) with the input signal.

Now, return to Figure 2-18, and note that the average value of the FFT noise floor is approximately 100 dB below full scale, while the theoretical SNR of a 12-bit ADC is 74 dB. The FFT noise floor is not the SNR of the ADC, because the FFT acts like an analog spectrum analyzer with a bandwidth of fs/M, where M is the number of points in the FFT. The theoretical FFT noise floor is therefore 10 log10(M/2) dB below the quantization noise floor, due to the so-called processing gain of the FFT (see Figure 2-19). In the case of an ideal 12-bit ADC with an SNR of 74 dB, a 4096-point FFT would result in a processing gain of 10 log10(4096/2) = 33 dB, giving an overall FFT noise floor of 74 + 33 = 107 dBc. The FFT noise floor can be reduced even further by going to larger and larger FFTs, just as an analog spectrum analyzer's noise floor can be reduced by narrowing the bandwidth. When testing ADCs using FFTs, it is important to ensure that the FFT size is large enough that the distortion products can be distinguished from the FFT noise floor itself.

Figure 2-19: Noise Floor for an Ideal 12-Bit ADC Using 4096-Point FFT.
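The noise-floor bookkeeping of Figure 2-19 is just the ideal SNR plus the FFT processing gain. A sketch:

```python
import math

def fft_noise_floor_db(n_bits, m):
    """Average FFT noise floor, in dB below a full-scale signal, for an
    ideal n-bit ADC and an m-point FFT: each bin sees only fs/m of the
    quantization noise, giving 10*log10(m/2) of processing gain."""
    snr = 6.02 * n_bits + 1.76                 # full-Nyquist SNR
    return snr + 10.0 * math.log10(m / 2)      # plus FFT processing gain
```

For 12 bits and a 4096-point FFT this gives 74 + 33 ≈ 107 dB; quadrupling the FFT size lowers the floor by another 6 dB.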

Part 3 examines distortion and noise in practical ADCs.

Used with the permission of the publisher, Newnes/Elsevier, this five-part series of articles is based on chapter 2 of “Mixed-signal and DSP Design Techniques,” by Walt Kester.
