In the comments to my series of blogs on temperature measurement on the late, lamented Microcontroller Central, there was some feeling that a blog on calibration of analog IO might be a good idea.

Let me start by taking you back to some high school math. We know that a straight-line graph takes the form y = mx + b (Equation 1), where m is the slope of the line and b is the intercept on the y-axis. There was a spirited discussion on this site on whether this is a linear function. Without getting drawn into that debate, I am going to refer to the straight-line graph as linear for convenience, so sue me if you don't like it. The typical graph is shown below.

The way to derive the equation of the line through two known points (X1, Y1) and (X2, Y2) is this:

(x-X1)/(X2-X1) = (y-Y1)/(Y2-Y1)… (Equation 2)

When I learned that as a Cartesian technique, I just accepted it without realizing that it is nothing more than the equation derived from similar triangles. Of course, you can calculate the slope and intercept, but I have found it simpler to work with Equation 2 directly, without the extra math operations that would probably yield floating-point numbers. Not every physical relationship can be characterized by a straight line, but it is a good place to start.

In acquiring or outputting an analog voltage, the analog-to-digital converter (ADC) or the digital-to-analog converter (DAC) is rarely the only component in the chain between the signal source and the digital number that the microcomputer will use. Every ADC or DAC has its own shortcomings, including offset voltages and reference tolerance, but throw in an op-amp or two and a multiplexer, and now you have a whole series of gain and offset errors that vary widely between individual units.

Unless you are designing a rather coarse system, you will want to take these variations into account, and the way to do this is by calibration. Calibration allows the system to learn the conversion response to a known stimulus. Provided the system is linear, if you set up two known input points (X1 and X2 in the graph above) and measure the associated output points (Y1 and Y2), you can then work backward from a measured y to get the corresponding input x.

Let's start simple with a DAC. Let us say that the DAC on the system is a 12-bit device with a 5V reference (so the maximum output is 5V at 0xFFF). The DAC output drives an op-amp; we want a 0-10V output. The way I do it is to set the gain on the op-amp to just over 2 to guarantee that the maximum output will always be higher than 10V for any possible variation on the DAC reference voltage and op-amp gain resistors. (It should go without saying that the op-amp must be powered from a supply of more than 10V.)

This approach reduces the dynamic range of the DAC, but it is a small price to pay to get rid of one or two trimpots. As part of my software development (design for test), I will create a method of bumping the output up or down in a calibration mode, so that the DAC setting is adjusted until the measured output (with a calibrated DVM) is 1V. (Of course, there is an associated tolerance.) This number is then learned as N1. Repeat the adjustment process at the top end looking for 10V, and the result is N2. These numbers are saved, preferably in nonvolatile storage.

The chosen voltages don't have to be at the ends of the range, though the wider the span between them, the better. In this case, 0V may be problematic because of the offsets of the DAC and op-amp. In case you are wondering, the math obviously holds outside the chosen calibration points, though the electronics may not be able to deliver. For instance, if you calculate a DAC value of more than 4,095 or less than zero, the DAC cannot reach it.

If we use Equation 2, we can substitute:

(y-1)/(10-1) = (x-N1)/(N2-N1)… (Equation 3).

This can be reduced to:

(x-N1) = ((N2-N1) • (y-1)/9)… (Equation 4).

Extracting x, we get:

x = ((N2-N1) • (y-1)/9)+N1… (Equation 5).

(N2-N1) is a constant for a given calibration and can even be pre-calculated for execution speed. By doing the multiplication and then the division, I find I can normally get away with integer arithmetic. At the end of the calculation, you write x to the DAC with the confidence that this will give you the desired output voltage.

Let's now change the data flow and look at an ADC, as shown above. A 0-20mA current loop is a rare variation of the 4-20mA current loop, but I want to use it to make a point. Because it is possible to connect several receivers in series in the current loop, you want the voltage drop across your receiver to be low, and you must allow for a floating input. Otherwise, it would be simple to connect a 250Ω resistor to ground. Let's choose an input resistor of 22.1Ω, so that the maximum voltage drop is 442mV. The instrumentation amplifier has a gain of 10, so we get to 4.42V.

I will choose to calibrate at 0mA and 20mA. The lower point is not chosen by accident; a judicious selection can simplify the calculation (my point from above), as you will see. The calibration procedure injects no current (i = 0mA), and the associated ADC reading N1 is learned. The calibrator is changed to i = 20mA, and the upper reading is taken and saved as N2. Going back to Equation 2 again:

(i-0)/(20-0) = (n-N1)/(N2-N1)… (Equation 6).

As I said, a choice of zero will simplify this to:

i/20 = (n-N1)/(N2-N1)… (Equation 7).

And finally, i = 20 • (n-N1)/(N2-N1)… (Equation 8).

When the ADC performs its conversion, the current (i) can simply be calculated. Again, multiplying all the numerators before working on the denominator normally allows me to stay with integer arithmetic, especially if I scale the readings appropriately. For instance, instead of working with 20mA, I could express it as 20,000μA or some intermediate scale.

When calibrating, I have found it is much better to use a calibrator specially designed for the purpose. Using the sensor itself can yield unreliable results. We once sold an RTD to 4-20mA module that was set for 0-100°C. The customer tried to calibrate it by inserting it in freezing water and then in boiling water. How do you decide that the temperature is uniform throughout the liquid and whether the RTD has stabilized, to say nothing of the fact that altitude and impurities affect both transition points? Using a calibrator allows for consistent settings in production, but it does mean that the variations in the sensor's performance will play into the overall system performance.

The characteristics of electronic components change with both age and temperature, so it can be beneficial to calibrate periodically. The problem is whether the customer has suitable calibration equipment.

So far, we've covered the approach I use in the vast majority of my analog projects. It has worked well for me, but there is at least one more approach that I will discuss in part 2.

Any interim comments?

<< I am going to refer to the straight line graph as linear for convenience, so sue me if you don't like it.>>

That comment alone made reading your post worth it! Thanks.

Three-dimensional map data might be useful for analyzing the relationship between the gain and the two resistors. Then the DAC behavior above could be investigated properly.

Hi Aubrey–nice post, with some interesting considerations for choosing calibration points. Having spent time in various labs, some measurement labs and some not, I always distrusted calibrators. The biggest problem I have with them is that you have to be really, really diligent to check them and re-certify them on appropriate schedules; otherwise you can introduce completely unknown (and unknowable in the future) biases into your calibration.

You are right, of course, that using ice baths and boiling water as reference points has drawbacks. One suggestion I would make is that some known point like this be included in the procedure as a check on drift. If your calibrator is going haywire, such a check before amassing a bunch of data can be a lifesaver.

eafpres

“The biggest problem I have with them is that you have to be really, really diligent to check them, and re-certify them on appropriate schedules”

The problem may in fact be worse. We have our calibrators re-calibrated every year per ISO 9000, but I have seen cases where staff misuse the calibrators, which could conceivably lead to calibration drift before the due date.

As an example, we use DVMs to measure high current on power supplies, and the same instrument to measure 4-20mA loops. If the power supply test stresses the DVM, it could lead to faulty calibration elsewhere.

An extreme case is where the tester misconnects the DVM and blows the protective fuse. Then they come to test the 4-20mA loop and decide the DUT batch is defective because there is no measurable output current. I have prepared another blog on testing your test equipment, but it may appear on my MCU Designlines blog.

” misconnected the DVM and blows the protective fuse. Then they come to test the 4-20mA loop “

I know this isn't a joke or meant to be funny, but I am laughing out loud. So now you need a work procedure update for instrument check to first verify the DVM fuse is intact, maybe measure a 9V battery or something, before doing any check measurements. Then, you will need a pre-test on the 9V to ensure it has any current left. And a pre-check on the pre-tester for the 9V for the test check before the real verification of the actual process instruments!

We humans must give ourselves credit–we can be so hard headed there just is no process that is truly human proof!

eappres

“I know this isn't a joke or meant to be funny… So now you need a work procedure update for instrument check to first verify the DVM fuse is intact, maybe measure a 9V battery or something, before doing any check measurements. Then, you will need a pre-test on the 9V to ensure it has any current left. And a pre-check on the pre-tester for the 9V for the test check before the real verification of the actual process instruments!”

No joke. We actually had an ISO non-conformance to this effect. Watch for my upcoming blog.

@antedeluvian–you are killing me!! I can't wait for that blog. Next time ask the ISO guys who certifies their certifying body. I'll bet they can't name it without looking it up.

eafpres

Martin Rowe actually did an article on the topic.

They use the King's foot. Oh! No, that's for distance. Forgot.

@antedeluvian–thanks for the link. Martin surely has been around a lot of real world environments and is a good resource. I've added the PDF to my library.

On a more serious note, I once managed development of antenna modules for things like telematics (those black modules on all the car tops now). The final test was very complex–essentially we made benchtop size anechoic (at RF frequencies) chambers into which we put the modules then exercised them using a network analyzer, measuring their behavior as an S21 measurement with the other side of the air-link being a set of patch antennas at the right frequencies. Some modules had GPS, SDARS, and Cellular so there were 3 connections routed to an RF switch that swapped inputs and outputs on the network analyzer and a computer that told the NA what to do, then collected the data.

In order to verify it, we had “gold” modules that a technician had to run at any line change, start of day, operator swap, etc. If the gold unit would not run OK, then the tech had to debug the station before proceeding. There was more than one gold unit, so if one became damaged, another could be tested and the suspect one taken to the RF lab for analysis.

Without a “gold” unit (and I recall in fact there were probably 2 or 3 levels–bronze units the line operators could use, silver units the techs used each day, and gold units only brought out if a problem occurred or to certify another silver or bronze unit), the procedure to validate the station from scratch would have taken a couple of hours every day.

But is a straight line straight on a sphere? Is it straight on a CRT? (OK, a bit of aged humor)

@Scott – always good to keep from getting Aubrey riled up (he knows where you live). So, good that you agree with his approach.

@eafpres – to just that one point you made – yes! Check your equipment and its calibration before you start gathering data/running a test. I recall overlooking that important step as a young whippersnapper engineer, and wasting my time on a test (and the time of the engineer in charge of the project).

@eafpres – you're making my head spin….

@Scott – that, and 3 barleycorn kernels; and the distance from the king's elbow to the tip of his middle finger. That should cover it.

eafpres

I promised there would be a blog on verifying that your test equipment is working. Here it is: Testing Your Test Equipment

Thanks, Aubrey. Good article, covers everything I have been through on the automotive side. One thing you did not mention is a gage R&R on your testers. Do you routinely do that on new test setups?

I also think that for complex test setups you pretty much have to have golden parts, and backups for those.

Your idea of known bad parts is interesting. I don't think we ever went that route that I can recall. Doing FMEA on the test setup is also an interesting idea.

eafpres

“One thing you did not mention is a gage R&R on your testers.”

It's a Friday afternoon (of a long weekend here in Canada) and my brain is possibly a bit slow. I don't believe I am familiar with the term “gage R&R”, and “R&R” before a long weekend takes on a different meaning. Could you elaborate, please?

Hi Aubrey–it should be gauge R&R, although it is interesting how many references use gage R&R. Anyway, it is the process of testing the repeatability & reproducibility of the measurement system. Typically involves multiple operators making the same measurement multiple times and assessing how much variation comes from the measurement system and how much comes from the operators. Those uncertainties then figure into your analysis of actual measurements in terms of confidence interval.

eafpres

“Typically involves multiple operators making the same measurement multiple times and assessing how much variation comes from the measurement system and how much comes from the operators. Those uncertainties then figure into your analysis of actual measurements in terms of confidence interval.”

Interesting. We are a small company, and we really don't have the resources to tackle this approach. First, we try to design the test so that no technical knowledge is required, although that is not always possible. Where the test is software-based, I always create a table of parameters that can be adjusted. Often the test is built around prototype runs, so we don't really have an idea of where the variations will come from, especially after we outsource production and test to the Far East.