If you are building a single-supply, 12-bit SAR-ADC system, such as a multiplexed circuit, a handheld meter, a data logger, an automotive system, or some other monitoring system, you will probably require an analog signal-conditioning front end. These signal-conditioning circuits usually have an amplifier and/or a programmable gain amplifier (PGA) to accomplish analog signal-gain, filtering, and driving activities. A typical block diagram is shown below.

We can discuss the attributes and characteristics of various single-supply operational amplifiers (op-amps) and their related specifications, but you probably have seen this style of review before. And you certainly can count on the various manufacturers to point out the exceptional features of their products. The purpose of this article is to highlight the key amplifier characteristics as they relate to their position in these single-supply systems.

The amplifiers that fit into this class of application usually are single-supply, voltage-feedback ones with possible special functions such as programmable gain or shutdown. For our systems, we are going to look at the standalone op-amp and the PGA, excluding the amplifier(s) for the anti-aliasing filter and the driving amplifier to the ADC. The fundamental DC and noise characteristics of interest are:

- Input offset voltage/over temperature
- PGA gain range/gain error
- Bandwidth
- Output noise

These specifications enable a quick decision on the components in these circuits before breadboarding. The output code of a 12-bit ADC spans a dynamic range from 0 to 4095. We will see how the amplifier's output range characteristics affect the 12-bit ADC's dynamic range.

**Input offset voltage**

The amplifier and PGA's input offset voltage and offset voltage drift occur because of mismatches in the internal amplifier's input stage. Within an op-amp, there is a differential pair of transistors in the input structure between the inverting and non-inverting inputs. In this evaluation, we will use CMOS op-amps.

You can find these offset specifications in the product data sheets. In this evaluation, you multiply these errors by the amplifier's gain, which refers the amplifier and PGA errors to the input of the ADC.
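As a minimal sketch of this referral (all values are hypothetical, not from a specific data sheet), the snippet below multiplies an amplifier's input offset voltage by the gain and expresses the result in 12-bit LSBs:

```python
# Hypothetical example: refer an amplifier's input offset voltage to the
# ADC input and express it in 12-bit LSBs. All numbers are illustrative.
V_OS = 0.5e-3        # amplifier input offset voltage (V), from the data sheet
GAIN = 10            # amplifier/PGA gain
V_FS = 4.096         # ADC full-scale range (V)
BITS = 12

lsb = V_FS / 2**BITS                 # LSB size: 1 mV for 4.096 V / 12 bits
offset_at_adc = V_OS * GAIN          # offset as seen by the ADC (V)
offset_lsbs = offset_at_adc / lsb    # offset expressed in ADC codes

print(f"LSB = {lsb*1e3:.3f} mV, offset at ADC = {offset_lsbs:.1f} LSBs")
```

With these example numbers, a half-millivolt offset grows to five codes of error at the ADC input, which is why the gain setting matters as much as the offset itself.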

**PGA gain range/gain error/over temperature**

The amplifier's gain error results from the inaccuracy of the input and feedback resistors. Typically, accuracy of these resistors can be 1 percent or, better yet, 0.1 percent. The resistor manufacturer also specifies the resistor's drift over temperature.

The PGA gain error and over temperature errors are the result of the mismatch of the resistor values in the PGA chip. You can find the specifications for the PGA's gain error and over temperature gain error in the product data sheet.

As a first step in this evaluation, we combine the offset and gain errors, along with the ADC differential nonlinearity (DNL) and integral nonlinearity (INL). You can tabulate these errors in an Excel spreadsheet, like the one shown below alongside the circuit diagram that produced those numbers.

If absolute accuracy is not a concern, these DC errors impact only the total dynamic range of the 12-bit converter near the rails. You will see this effect below.
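As a minimal sketch of such an error budget (every entry below is a hypothetical, illustrative value), the snippet sums the DC errors in LSBs to show how many codes are lost near the rails:

```python
# Hypothetical worst-case DC error budget for a 12-bit system, tallied in
# LSBs. None of these values come from a specific data sheet.
errors_lsb = {
    "amplifier offset (referred to ADC input)": 5.0,
    "PGA gain error": 4.0,
    "ADC INL": 1.0,
    "ADC DNL": 0.5,
}
worst_case = sum(errors_lsb.values())
print(f"Worst-case DC error: {worst_case:.1f} LSBs, lost near the rails")
```

A straight sum gives the worst case; a root-sum-square combination would give a more statistically typical (smaller) figure.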

**Bandwidth**

The bandwidth ranges of the analog devices are critical. These values service the bandwidth requirements of the input signals, and they provide enough frequency range to accommodate the clocking delays in the system. The next article in this series will provide an overview of the clocking delays found in the systems under evaluation. The objective of the designs in this series is to maximize the bandwidths of five proposed systems.

**Output noise**

Amplifier noise primarily originates in the input stage transistors. You can describe the amplifier output noise using regions of frequency. However, it is more effective to describe this noise in terms of voltage (either RMS or p-p) and examine the total cumulative noise at the amplifier's output. One calculates this output noise by integrating the total noise over frequency from DC to the system's maximum bandwidth. Amplifier noise spans the entire output dynamic range, and this error cannot be calibrated out.
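The integration above can be sketched for the simple case of a flat noise density rolled off by a single pole, using the standard 1.57 brick-wall correction (the noise density, bandwidth, and gain below are hypothetical):

```python
import math

# Hypothetical: integrate a flat amplifier noise density over a single-pole
# bandwidth. The 1.57 factor converts the -3 dB corner into an equivalent
# brick-wall noise bandwidth for a first-order roll-off.
e_n = 10e-9          # broadband noise density, V/sqrt(Hz) (illustrative)
f_3db = 100e3        # closed-loop -3 dB bandwidth, Hz (illustrative)
gain = 10            # noise gain to the output

enbw = 1.57 * f_3db                      # equivalent noise bandwidth, Hz
v_rms = e_n * gain * math.sqrt(enbw)     # total RMS noise at the output
v_pp = 6.6 * v_rms                       # approx. p-p, crest factor of 6.6
print(f"Output noise ~ {v_rms*1e6:.1f} uV RMS, {v_pp*1e6:.0f} uV p-p")
```

The 6.6 crest factor is the common rule of thumb for converting Gaussian RMS noise to a peak-to-peak estimate.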

Noise is considered an AC phenomenon. However, in this series, we compare the system's noise impact to the total DC specifications in the circuit. You can see this concept in the image below, which shows the DC errors and output noise across the complete 12-bit range. This figure quantifies the ADC dynamic range.

Note that the DC errors (offset, gain error, etc.) appear near the supply voltage or ground. In an individual system, these errors may land between the supply rails. However, in our evaluation, the diagram rightfully places these errors at either of the two rails. Both the DC errors and noise errors impact the dynamic range of the entire system.

There are various attributes and characteristics of single-supply op-amps that service applications such as handheld meters, data loggers, and monitoring systems. We have covered the main attributes, which will help you make a first pass judgment on which amplifiers will suit your needs. The specifications of interest are the offset voltage, gain error, bandwidth, and noise errors. We suggest that you tabulate these specifications in an Excel spreadsheet for further reference.

Next month's article will introduce the first systems under consideration and look at the timing effects. Here are some additional sources of information on this topic:

- Join the discussion about data converters on TI's E2E Community, where engineers ask questions and help other engineers find solutions: http://www.ti.com/e2e-ca.
- For more information about data converters, visit: http://www.ti.com/dataconverters-ca.

**Related posts:**

- ADC Basics, Part 1: Does Your ADC Work in the Real World?
- ADC Basics, Part 2: SAR & Delta-Sigma ADC Signal Path
- ADC Basics, Part 3: Using Successive-Approximation Register ADC in Designs
- ADC Basics, Part 4: Using Delta-Sigma ADCs in Your Design
- ADC Basics, Part 5: Key ADC Specifications for System Analysis

Bonnie,

Excellent series of articles introducing the basics of ADC operation. I believe that, for SAR-type converters, the specifications for input resistance and input capacitance (listed on datasheets) may also be included in this table for comparison. With these numbers it would be possible to estimate the maximum allowed sensor impedance to be connected to the ADC input for a given sample time (or inversely).

I am wondering if a 12-bit SAR could be extended to 16 bits or higher to get higher resolution, as shown in the diagram above. If the outputs are a floating-point variable, more bits may be required. Second, if one's complement is added inside the chip, that would be great.

Dae J,

You are correct. I have focused this series on 12 bits, which is the most popular resolution. Throughout this series I am going to use analog gain and digital gain (aka process gain) to keep the resolution at 12 bits; however, the LSB size will be jumping around quite a bit. For instance, the LSB size of a 12-bit converter with a 4.096 V reference is 1 mV. Now put an analog gain of, say, 8 into the circuit and the system LSB size becomes 0.125 mV. Well, you can easily implement that gain of 8 on the digital side with your ADC by shifting the captured converter bits 3 steps to the right (that is, if you have a 16-bit converter). Hence you are using process gain instead of analog gain. This all works out in the end.
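The LSB arithmetic above can be sketched in a few lines (using the 4.096 V reference and gain of 8 from the example):

```python
# Sketch of the LSB arithmetic: a 12-bit converter with a 4.096 V
# reference, first with an analog gain of 8, then the LSB size of a
# 16-bit capture used for process gain instead.
V_REF = 4.096
lsb_12 = V_REF / 2**12      # 1 mV per code at 12 bits
lsb_gain8 = lsb_12 / 8      # 0.125 mV system LSB with analog gain of 8
lsb_16 = V_REF / 2**16      # a 16-bit capture already resolves 62.5 uV

print(f"12-bit LSB: {lsb_12*1e3} mV, with gain of 8: {lsb_gain8*1e3} mV, "
      f"16-bit LSB: {lsb_16*1e6} uV")
```

Note that the 16-bit LSB (62.5 µV) is finer than the 0.125 mV system LSB, which is why the extra captured bits can stand in for the analog gain.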

Dirceu,

Yes, the converter R and C values could be used to figure out your allowable sample time for the converter. However, it is important to be able to control the sample time of the converter. Many converters control the sample time internally, so your added external resistor will essentially destroy any chance of getting a good conversion.

At any rate, you will be working with an R/C, or single-pole, system on the input of your converter. This single-pole system will control the rise time of your signal as the ADC sees it. I don't know how many bits your converter has, or better yet, how many bits you are interested in, but you can simply figure out the worst-case settling time of an input signal and make that your sampling time.
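That worst-case time can be sketched as follows: a single-pole RC settles to within 1/2 LSB at N bits after about (N + 1)·ln(2) time constants (the R and C values below are hypothetical):

```python
import math

# Hypothetical single-pole input: how long must the ADC sample so the
# input settles within 1/2 LSB at N bits? An exponential reaches an
# error below 2^-(N+1) after (N + 1) * ln(2) time constants.
R = 1e3        # source plus switch resistance, ohms (illustrative)
C = 20e-12     # sample capacitor, farads (illustrative)
N = 12         # bits of interest

tau = R * C
t_settle = (N + 1) * math.log(2) * tau
print(f"tau = {tau*1e9:.1f} ns, settle to 1/2 LSB in {t_settle*1e9:.1f} ns")
```

With these example values the converter's acquisition window would need to be roughly 180 ns or longer.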

Dae J

In order to get higher resolution when you have a 12-bit ADC, oversampling and averaging could be helpful. I have implemented a 12-bit conversion with a 10-bit ADC by doing this oversampling. However, you can find claims on the web that you can go up to 14 bits from a 12-bit ADC.

I am not able to understand why you would need one's complement, but many ADCs provide 2's complement, which is very convenient. (Ref: Energy Meter chip – CS5460)

'you can simply figure out the worst-case settling time of an input signal and make that your sampling time'

Bonnie,

Very true.

Also, I did not see sample-and-hold being discussed. An internal sample-and-hold could also affect the sampling time, and if an S/H is in the signal path, it may be important to factor in its effect and then calculate the sampling time.

@Bonnie, thanks for your good-quality insight into SAR ADCs.

Some useful documents on ADC oversampling:

http://www.microchip.com/stellent/idcplg?IdcService=SS_GET_PAGE&nodeId=1824&appnote=en533730

http://www.ti.com/lit/an/slaa323/slaa323.pdf

http://www.st.com/st-web-ui/static/active/cn/resource/technical/document/application_note/CD00177113.pdf

http://www.atmel.in/Images/doc8003.pdf

http://www.actel.com/documents/Improve_ADC_WP.pdf

http://wiki.mcselec.com/Implementation_of_enhancing_ADC_resolution_by_oversampling

This might help someone like me.

Vishal, you may check the Silabs website. There is a very nice application note on oversampling.

Jayaraman,

The input signal does influence your sample time, but there is more to that than meets the eye. You also need to consider the surrounding system. For instance, do you need to build in considerations for the settling time of your analog parts in the signal path, or are there delays as you change channels with a multiplexer in your system, or again your sample-and-hold, etc.? I think that this warrants another article that is just about the timing issues in your circuit.

Jayaraman,

The oversampling of a 12-bit converter only works if you have noise in the system, or if the 12-bit converter is capable of only producing 10 or 11 bits without toggling outputs. If you get a full 12 bits, time after time, without toggling output bits, oversampling will not work.

Vishal,

Another good reference is:

http://www.edn.com/electronics-blogs/bakers-best/4322994/Sometimes-noise-can-be-good

@JAYARAMAN:

I am not able to understand why you need a one's complement… I pictured a simple application for an ADC with one's complement output. See the figure below. A single-supply op amp reduces the sensor's voltage range from [0 V – 10 V] to [2.5 V – 0 V]. With a 2 V voltage reference applied to the non-inverting input, the op amp output voltage is given by Vo = 2.5 – 0.25 * Vi. As this signal is inverted, the ADC with one's complement output restores the correct polarity. Maybe for a system not based on a microcontroller.


Dirceu,

This is an interesting analog solution to a digital problem.

Oversampling is a widely misunderstood topic. Engineers tend to confuse oversampling in a low-order delta-sigma with oversampling using a SAR.

Oversampling in a SAR is not guaranteed to produce a more accurate code (i.e., more linear). In a low-order delta-sigma it does. You can trade time for guaranteed linearity in a low-order delta-sigma, but not in a SAR.

If you get a lucky SAR part from the manufacturer (i.e., the DAC on the SAR is 16-bit linear, not 12-bit), then you can produce a more accurate SAR result by oversampling. But don't expect manufacturers to do this very often, since 16-bit linear SARs sell for 10X the price of 12-bit linear SARs. If you buy 12-bit linearity, you shouldn't plan on 16-bit linearity just because you have time to spend averaging.

Oversampling on SARs just makes the code error more stable and gives you more codes around the error. But the error is still the error. You just call it a 16 LSB error on a 16-bit code instead of a 1 LSB error on a 12-bit code.

Yes, my intention is to use a microcontroller for the one's complement, but the design below is very innovative and challenging. I guess this circuit can be modified slightly for any application with any standard op-amp chip.

@Dirceu,

Innovative application of a single-supply amplifier and a one's complement ADC. Of course, for a non-MCU system this would be very valuable.

Bonnie,

I fully agree that the involvement of analog parts in the path, multiplexer, sample/hold, etc., influences the timing aspect. Surely, a detailed article on how one can work through these issues would be welcome.

Scott,

Fascinating assertion. I know that you are bemoaning the fact that your 12-bit converter does not have 16-bit linearity in the on-chip DAC; however, consider the manufacturer that develops a family of converters. The more common family of converters would be 8-bit, 10-bit, and 12-bit converters. This manufacturer simply eliminates the LSBs at the output as the converter family goes lower in bits. The manufacturer charges the end customer the appropriate price in accordance with the other products offered in the industry. This would be for the lower-frequency SAR converter that provides an SPI serial output. Using your suggestion, you could turn your 8-bit converter into a 12-bit converter with oversampling at the 8-bit price. But where I get lost is the fact that (in my example) you still get 8 bits coming out of the converter. How do you increase these 8 bits to 12 bits?

Bonnie, I'm confused about your post. What is it that you understand I am claiming?

Bonnie,

Say I take a 12-bit reading that is alternating between two 12-bit codes (because of introduced dither or noise) and repeatedly add the readings up to get a total. And then divide that total by the number of times the 12-bit reading is added. The result of the division is a fractional number. I could turn that fractional number into a reading with more than 12 bits. See the case below with 256 readings. Do you agree?

(2^16/2^12)*(4095 x 128 + 4094 x 128)/256 = 65,512

If you agree, then my point is simply that this is a 16-bit number, but it is not necessarily 16-bit linear unless the 12-bit DAC was 16-bit linear. The process increased the number of bits in the answer, but it didn't necessarily improve the accuracy.
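The averaging arithmetic above can be checked numerically; this sketch reproduces the 256-reading case:

```python
# Reproduce the averaging arithmetic: 256 readings alternating between
# two adjacent 12-bit codes, averaged and then rescaled to a 16-bit code.
readings = [4095] * 128 + [4094] * 128
avg_12bit = sum(readings) / len(readings)    # fractional average: 4094.5
code_16bit = avg_12bit * (2**16 / 2**12)     # rescale 12-bit to 16-bit
print(code_16bit)  # prints 65512.0
```

The fractional average (4094.5) carries the extra resolution; scaling by 2^16/2^12 = 16 merely re-expresses it as a 16-bit code.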

Did I write something that you think is inaccurate?

Jayaraman,

This journey will begin with the next installment of this article series. Expect it to come out at the end of June or the first of July.

Scott,

Thanks for the formula. I can see how that works, at least for the portion of the formula inside the parentheses, which equals 4,094.5. The multiple of 2^16 does not make sense. Actually, the total formula equals 268,337,152 and not 65,512.

The title of the thread was "extending 12 bit SAR". My point was that you can extend resolution, but you can't extend accuracy with a SAR unless you get a lucky part. I think many engineers confuse one with the other.

And you're right, somehow I dropped the 4096 denominator in my equation that was meant to normalize the fractional result relative to 12 bits prior to converting into 16 bits. But I hope you get my point.

[went back and fixed the 4096].

Bonnie,

Many thanks for writing the blog on ADC basics. I read through all six parts and am looking forward to the next ones.

I would like to know:

1> Are there any architectures that use and sample current instead of voltage? Because if the voltage starts to drop, we can escalate the current and rise above the noise.

2> Does the jitter on the sampling clock affect us in any way?

3> In the Δ-Σ architecture there is oversampling and an anti-aliasing filter is needed, but in a SAR, where we are sampling the ADC at the Nyquist rate, why do we need the filter? (Or does it have additional roles, like current limiting (R) and avoiding charge injection (C)?)

Thanks.

You are very welcome. I wish I had had a similar blog when I started to work with these types of converters in my earlier years.

1. There are some ADC architectures that sample current instead of voltage. TI has a strong line that uses delta-sigma technology where the input signal is current, as you asked. These converters carry the front-end acronym DDC.

2. Jitter does not seem to have an impact until you start to sample at the higher frequencies. The sampling clock determines the samples per second of the converter.

3. The anti-aliasing filter is definitely required with the SAR converter. In fact, the order of this anti-aliasing filter is higher than with the delta-sigma converter. For instance, the delta-sigma converter only requires a 1st-order (or maybe 2nd-order) filter to eliminate high-frequency signals. The SAR converter requires a 3rd- to 7th- or 8th-order anti-aliasing filter; typically a 4th- or 5th-order filter. The purpose of the anti-aliasing filter is to remove signals that have a frequency component higher than the Nyquist frequency of the converter. If you are unable to remove these signals, the converter will reliably convert them into a lower frequency within the bandwidth of the digital conversion. The actual magnitude of these signals is reduced per the input bandwidth of the converter. The signals are brought back into the converter's bandwidth at the digital output per the formula f_ALIASED = |f_IN – N*f_S|, where N is an integer between 1 and infinity, f_IN is your input signal, and f_S is the ADC's sampling frequency. Charge injection does play a role in the accuracy of the conversion, but this is different than any aliasing error in the output digital code.
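The folding formula f_ALIASED = |f_IN – N*f_S| can be sketched numerically with hypothetical frequencies (a 100 kSPS converter and a 270 kHz out-of-band tone):

```python
# Hypothetical aliasing calculation: an out-of-band tone folds back to
# |f_in - N * f_s| for the integer N that lands it below f_s / 2.
f_s = 100e3      # ADC sampling rate, Hz (illustrative)
f_in = 270e3     # out-of-band input tone, Hz (illustrative)

n = round(f_in / f_s)                # nearest integer multiple of f_s
f_aliased = abs(f_in - n * f_s)      # folded-back (aliased) frequency
print(f"{f_in/1e3:.0f} kHz aliases to {f_aliased/1e3:.0f} kHz")
```

Here the 270 kHz tone folds to 30 kHz, well inside the 50 kHz Nyquist band, where no digital filter can distinguish it from a real in-band signal.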