Calibration, Part 1

In the comments on my series of blogs on temperature measurement on the late, lamented Microcontroller Central, there was some feeling that a blog on the calibration of analog I/O might be a good idea.

Let me start by taking you back to some high school math. We know that a straight line graph takes the form y = mx+b (Equation 1), where m is the slope of the line, and b is the intercept on the y axis. There was a spirited discussion on this site on whether this was a linear function. Without getting drawn into that discussion, I am going to refer to the straight line graph as linear for convenience, so sue me if you don't like it. The typical graph is shown below.

Graph showing a straight line relationship.

The way to derive the equation of the line is using this approach:

(x-X1)/(X2-X1) = (y-Y1)/(Y2-Y1)… (Equation 2)

When I learned that as a Cartesian technique, I just accepted it without realizing that it is nothing more than the equation derived from similar triangles. Of course, you can calculate the slope and intercept, but I have found it simpler to work directly with Equation 2, avoiding the extra math operations that would probably force the result into floating point numbers. Not every physical relationship can be characterized by a straight line, but it is a good place to start.
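
To make that concrete, here is a minimal C sketch of Equation 2 solved directly for y; the function name and the use of 32-bit intermediates are my own illustrative choices, not anything from a real project:

#include <stdint.h>

/* Given a straight line through (x1, y1) and (x2, y2), return the y
   corresponding to x, per Equation 2. Multiply before dividing so the
   integer truncation only happens once. */
static int32_t map_two_point(int32_t x, int32_t x1, int32_t y1,
                             int32_t x2, int32_t y2)
{
    return ((x - x1) * (y2 - y1)) / (x2 - x1) + y1;
}

Keeping everything in integers works as long as the intermediate product fits in 32 bits, which is easy to check for 12-bit converters.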

In acquiring or outputting an analog voltage, the analog-to-digital converter (ADC) or the digital-to-analog converter (DAC) is rarely the only component in the chain between the signal source and the digital number that the microcomputer will use. Every ADC or DAC has its own shortcomings, including offset voltages and reference tolerance, but throw in an op-amp or two and a multiplexer, and now you have a whole series of gain and offset errors that vary widely between individual units.

Unless you are designing a rather coarse system, you will want to take these variations into account, and the way to do this is by calibration. Calibration allows the system to learn the conversion response to a known stimulus. Provided the system is linear, if you set up two known input points (X1 and X2 in the graph above) and measure the associated output points (Y1 and Y2), you can then work backward from a measured y to get the corresponding input x.

A 0-5V out DAC driving an op-amp to give a 0-10V range.

Let's start simple with a DAC. Let us say that the DAC on the system is a 12-bit device with a 5V reference (so the maximum output is 5V at 0xFFF). The DAC output drives an op-amp; we want a 0-10V output. The way I do it is to set the gain on the op-amp to just over 2 to guarantee that the maximum output will always be higher than 10V for any possible variation on the DAC reference voltage and op-amp gain resistors. (It should go without saying that the op-amp must be powered from a supply of more than 10V.)

This approach reduces the dynamic range of the DAC, but it is a small price to pay to get rid of one or two trimpots. As part of my software development (design for test), I will create a method of bumping the output up or down in a calibration mode, so that the DAC setting is adjusted until the measured output (with a calibrated DVM) is 1V. (Of course, there is an associated tolerance.) This number is then learned as N1. Repeat the adjustment process at the top end looking for 10V, and the result is N2. These numbers are saved, preferably in nonvolatile storage.
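
As a rough sketch of what such a calibration mode might look like, here it is in C. The helpers dac_write(), ui_get_adjust(), and nv_store() are hypothetical placeholders for whatever your hardware and user interface provide:

#include <stdint.h>

extern void dac_write(uint16_t code);            /* write a raw 12-bit code to the DAC */
extern int ui_get_adjust(void);                  /* +1 = bump up, -1 = bump down, 0 = accept */
extern void nv_store(int index, uint16_t value); /* save a value to nonvolatile storage */

/* Let the operator trim the DAC output until the DVM shows the target
   voltage, then remember the code that produced it. */
static uint16_t calibrate_point(uint16_t start_code, int nv_index)
{
    uint16_t code = start_code;
    int adjust;

    dac_write(code);
    while ((adjust = ui_get_adjust()) != 0) {
        if (adjust > 0 && code < 0x0FFF)
            code++;                 /* bump up one LSB, clamped to 12 bits */
        else if (adjust < 0 && code > 0)
            code--;                 /* bump down one LSB, clamped at zero  */
        dac_write(code);
    }
    nv_store(nv_index, code);
    return code;
}

The routine would be called once to learn N1 at the 1V point and again to learn N2 at 10V.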

The chosen voltages don't have to be at the ends of the range, though the wider the span between them, the better. In this case, 0V may be a problem because of the offsets of the DAC and op-amp. In case you are wondering, the math obviously holds outside the chosen calibration points, though the electronics may not be able to deliver. For instance, if you calculate a DAC value of more than 4,095 or less than zero, the DAC cannot get to it.

If we use Equation 2, we can substitute:

(y-1)/(10-1) = (x-N1)/(N2-N1)… (Equation 3).

This can be reduced to:

(x-N1) = ((N2-N1) • (y-1)/9)… (Equation 4).

Extracting x, we get:

x = ((N2-N1) • (y-1)/9)+N1… (Equation 5).

(N2-N1) is a constant for a given calibration and can even be pre-calculated for execution speed. By doing the multiplication and then the division, I find I can normally get away with integer arithmetic. At the end of the calculation, you write x to the DAC with the confidence that this will give you the desired output voltage.
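
For what it's worth, here is a minimal sketch of Equation 5 in C, assuming the desired output is expressed in millivolts so everything stays in integers; the names are mine, not from any particular project:

#include <stdint.h>

static uint16_t cal_n1;   /* DAC code learned at the 1V calibration point  */
static uint16_t cal_n2;   /* DAC code learned at the 10V calibration point */

/* Return the DAC code for a requested output voltage, given in millivolts. */
static int32_t dac_code_for_mv(int32_t target_mv)
{
    /* Equation 5 with y in millivolts: the 1V-to-10V span becomes 9000mV. */
    int32_t code = (((int32_t)cal_n2 - (int32_t)cal_n1) * (target_mv - 1000)) / 9000
                   + (int32_t)cal_n1;

    /* The electronics cannot deliver a code outside the 12-bit range. */
    if (code < 0)
        code = 0;
    if (code > 0x0FFF)
        code = 0x0FFF;
    return code;
}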

Circuit to convert a 0-20mA loop to a 0-5V input ADC via an instrumentation amplifier.

Let's now change the data flow and look at an ADC, as shown above. A 0-20mA current loop is a rare variation of the 4-20mA current loop, but I want to use it to make a point. Because it is possible to connect several receivers in series in the current loop, you want the voltage drop across your receiver to be low, and you need to allow for a floating input. Otherwise, it would be simple to connect a 250Ω resistor to ground. Let's choose an input resistor of 22.1Ω, so that the maximum voltage drop is 442mV. The instrumentation amplifier has a gain of 10, so the full-scale input to the ADC is 4.42V.

I will choose to calibrate at 0mA and 20mA. The lower point is not chosen by accident; a judicious selection can simplify the calculation (my point from above), as you will see. The calibration procedure injects no current (i = 0mA), and the associated ADC reading N1 is learned. The calibrator is changed to i = 20mA, and the upper reading is taken and saved as N2. Going back to Equation 2 again:

(i-0)/(20-0) = (n-N1)/(N2-N1)… (Equation 6).

As I said, a choice of zero will simplify this to:

i/20 = (n-N1)/(N2-N1)… (Equation 7).

And finally, i = 20 • (n-N1)/(N2-N1)… (Equation 8).

When the ADC performs its conversion, the current (i) can simply be calculated. Again, doing the multiplication in the numerator before the division normally allows me to stay with integer arithmetic, especially if I scale the readings appropriately. For instance, instead of working with 20mA, I could express it as 20,000μA or some intermediate scale.
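
A minimal sketch of Equation 8 with the result scaled to microamps might look like this (again, the names and the choice of scaling are illustrative assumptions):

#include <stdint.h>

static uint16_t cal_n1;   /* ADC reading captured with 0mA injected  */
static uint16_t cal_n2;   /* ADC reading captured with 20mA injected */

/* Convert a raw ADC reading into loop current, in microamps. */
static int32_t adc_to_microamps(uint16_t reading)
{
    /* Equation 8 scaled to uA: i = 20000 * (n - N1) / (N2 - N1).
       Multiply before dividing so truncation happens only once. */
    return (20000L * ((int32_t)reading - (int32_t)cal_n1))
           / ((int32_t)cal_n2 - (int32_t)cal_n1);
}

As long as the ADC reading fits in 16 bits, the intermediate product of 20,000 × (n - N1) stays comfortably within 32-bit arithmetic.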

When calibrating, I have found it is much better to use a calibrator specially designed for the purpose. Using the sensor itself can yield unreliable results. We once sold an RTD-to-4-20mA module that was set for 0-100°C. The customer tried to calibrate it by inserting it in freezing water and then in boiling water. How do you know that the temperature is uniform throughout the liquid, or whether the RTD has stabilized, to say nothing of the fact that altitude and impurities affect both transition points? Using a calibrator allows for consistent settings in production, but it does mean that variations in the sensor's performance will play into the overall system performance.

The characteristics of electronic components change with both age and temperature, so it can be beneficial to calibrate periodically. The problem is whether the customer has suitable calibration equipment.

So far, we've covered the approach I use in the vast majority of my analog projects. It has worked well for me, but there is at least one more approach that I will discuss in part 2.

Any interim comments?
