No circuit or system is perfect, of course, so the real issue is “is it good enough for the application?” This question and dilemma, which often takes on philosophical aspects, is one that many analog designers grapple with during initial design, formal review, and validation, especially when the analog circuit involves a sensor and its signal conditioning, as it often does.
First, there’s the challenge of quantifying what “good enough” means in the specific case. Second, it is rare that a single, briefly stated objective such as “accuracy of 0.1%” is enough to characterize the full situation, as there are many kinds of accuracy and errors: worst-case and typical nonlinearity, distortion, and various artifacts can lead to some interesting and often heated discussions as to which specs are important and how they relate to the context of the situation. Plus, you nearly always have to factor in the temperature range that the electronic and mechanical elements may see, both for assessing the effects of temperature drift and for component overheating.
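One common way to turn those competing error terms into a single number is an error budget, combined either worst-case (straight sum) or statistically (root-sum-square). The sketch below illustrates the arithmetic; the individual error values are made up for illustration, not taken from any real datasheet.

```python
# Hypothetical error budget for a sensor channel, in percent of full scale.
# Worst-case assumes every error peaks simultaneously; RSS assumes the
# errors are independent and combines them statistically.

errors_pct = {
    "sensor nonlinearity": 0.05,
    "amplifier offset":    0.02,
    "reference drift":     0.03,
    "ADC quantization":    0.01,
}

worst_case = sum(errors_pct.values())
rss = sum(e ** 2 for e in errors_pct.values()) ** 0.5

print(f"worst-case error: {worst_case:.3f} % FS")  # 0.110 % FS
print(f"RSS error:        {rss:.3f} % FS")         # ~0.062 % FS
```

The gap between the two numbers (here nearly 2:1) is often exactly what those “heated discussions” are about: which combination rule is appropriate for the application.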
When the objectives are aggressive, and the accuracy and performance specs are tight, the designer must look at multiple paths to success. There are usually three ways to go, and they can be used individually or in parallel:
1) Calibrate the sensor and the channel, either one-time up front or in ongoing use. This seems sensible enough, but it is often easier to say than to do. In general, sensors are difficult to calibrate, especially once they are in the field. After all, how do you calibrate a temperature sensor? You’d have to bring a controlled heat source to the sensor, characterize the sensor and channel performance, and then work the calibration numbers into the system. And if the sensor is replaced, you may have to re-do the process.
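For a channel that is reasonably linear, “working the calibration numbers into the system” often reduces to a two-point gain/offset correction. The sketch below shows the idea; the reference temperatures and raw readings are invented for illustration, and a real sensor may need more points or a nonlinear fit.

```python
# Sketch of a one-time, two-point calibration for a temperature channel.
# Assumes the raw reading is linear in the true temperature, so two known
# reference points determine a gain and offset correction.

def two_point_cal(raw_lo, raw_hi, ref_lo, ref_hi):
    """Return (gain, offset) such that corrected = gain * raw + offset."""
    gain = (ref_hi - ref_lo) / (raw_hi - raw_lo)
    offset = ref_lo - gain * raw_lo
    return gain, offset

# Characterize against a controlled heat source at two known temperatures
# (raw ADC-count and reference values below are hypothetical):
gain, offset = two_point_cal(raw_lo=102.0, raw_hi=298.0,
                             ref_lo=0.0, ref_hi=100.0)

def corrected(raw):
    return gain * raw + offset

print(corrected(102.0))  # -> 0.0 at the low reference point
print(corrected(200.0))  # ~50.0 at the midpoint, given linearity
```

If the sensor is swapped in the field, both constants must be re-derived, which is exactly the re-calibration burden described above.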
There is a little bit of good news related to sensor and channel calibration. Some MEMS sensors, such as accelerometers, have a provision for directly stimulating the sensing element as if it were being externally affected, and that self-test capability provides some level of assurance.
2) Use better, more-accurate parts and also minimize sources of error within those parts. The need for accuracy is why there’s still a huge market for precision op amps, with ultralow bias current, offset voltage, noise, and temperature-related drift. Sometimes, a few dollars spent on a top-tier part is all it takes to close the error budget.
However, not all improvements come this easily with a modest, well-defined cost and minimal need for additional design effort. For the highest-performance oscillators, the unit may be a temperature-compensated version (called a TCXO) or even one housed within a controlled oven (OCXO). In many cases, the efforts made to eliminate inherent first-, second-, and even third-order error sources have been fairly impressive, as noted in the excellent MIT Press book “Inventing Accuracy: A Historical Sociology of Nuclear Missile Guidance” by Donald MacKenzie, Figure 1.
This book offers unique insights into the interplay between guidance and navigation technology and the surrounding geopolitical atmosphere. (Image source: MIT Press)
He explains that when the ultimate performance of classic spinning-rotor gyros was needed for missile guidance, the first step was to replace mechanical ball bearings with pressurized gas-rotor bearings, which have better performance at high speeds than even the best ball bearings. The next step was to minimize the effects of gravity on the rotor by “floating” it in a high-density silicone fluid that created neutral buoyancy. Finally, to eliminate third-order errors resulting from temperature-induced variations in the density of this fluid, the entire floated assembly was placed in a temperature-controlled enclosure. That’s some serious effort to eliminate error sources ranging from medium to almost unmeasurable.
There are times, however, when the sources of errors are hard to determine at first, and even harder to eliminate. A recent article in Microwave Journal, “Measuring Quartz Crystal Oscillator G-Sensitivity”, made this very clear as it discussed the challenge of measuring the effects on oscillator crystals of various kinds of acceleration: constant acceleration in a single direction, vibration, shock, displacement, and basic inclination and rotation, along the x-, y-, and z-axes. In some applications, just turning the unit over or sideways during operation induces tiny but unacceptable shifts in the crystal and oscillator resonance.
3) Finally, there’s the engineer’s favored way to improve accuracy, although it is often not possible: self-cancellation. Sometimes, through clever circuit topologies or component arrangements, it is possible to have sources of error track and cancel themselves out. This is one of the many reasons for the popularity of ratiometric measurements as typified by the elegantly simple Wheatstone bridge, where use of identical components in the bridge arms leads to “free” cancellation of some significant types of error, Figure 2.
The classic Wheatstone bridge configuration is still widely used due to its simplicity and versatility, despite being almost 200 years old. (Image source: Omega Engineering Co.)
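The arithmetic behind that “free” cancellation is worth a quick look. The bridge output is a fraction of the excitation voltage determined only by resistance ratios, so in a ratiometric measurement (where the ADC reference is derived from the same excitation) drift in the excitation supply drops out entirely. The component values below are illustrative, assuming one strain-gauge arm among three fixed 350-Ω arms.

```python
# Sketch of ratiometric self-cancellation in a Wheatstone bridge.
# Vout = Vexc * (R3/(R3 + R4) - R2/(R1 + R2)); when the ADC reference
# is the excitation itself, Vexc cancels and only ratios remain.

def bridge_ratio(r1, r2, r3, r4):
    """Bridge output as a fraction of the excitation voltage."""
    return r3 / (r3 + r4) - r2 / (r1 + r2)

R = 350.0      # nominal arm resistance, ohms (illustrative)
delta = 0.7    # hypothetical gauge resistance change under load, ohms

# The ratiometric reading is the same whether the excitation is a crisp
# 5 V or has drifted to 4.9 V, because Vexc never enters the calculation.
reading = bridge_ratio(R, R, R + delta, R)
print(f"{reading:.6f}")  # ~0.0005 of Vexc, independent of Vexc drift
```

The same ratio logic explains why matched temperature coefficients in the bridge arms also cancel: equal drift in all four resistors leaves the ratios, and hence the reading, unchanged.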
The self-cancellation concept is also the reason differential line drivers, transmission lines, and receivers are used rather than single-ended links. The idea is that external noise will be induced equally on both lines, so the difference will be zero (ideally) and thus self-cancel. While this does not eliminate 100% of the noise in practice, it can attenuate it by tens of dB at little incremental cost.
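A numerical sketch makes the mechanism concrete. Below, a made-up sinusoidal “noise” term is coupled identically onto both legs of a differential pair carrying +signal and −signal; the receiver’s difference operation recovers the signal while the common-mode noise cancels. Real lines have slight mismatch, which is why the cancellation is tens of dB rather than perfect.

```python
# Sketch of common-mode noise cancellation on a differential link.
# The signal and coupled noise waveforms are illustrative, not measured.
import math

signal = [math.sin(2 * math.pi * i / 64) for i in range(256)]       # wanted
noise  = [0.5 * math.sin(2 * math.pi * i / 7) for i in range(256)]  # coupled

# Driver puts +signal on one line, -signal on the other; the same noise
# is induced on both (the ideal, perfectly matched case).
line_p = [+s + n for s, n in zip(signal, noise)]
line_n = [-s + n for s, n in zip(signal, noise)]

# Receiver (difference amplifier): the common-mode noise subtracts out.
received = [(p - n) / 2 for p, n in zip(line_p, line_n)]

residual = max(abs(r - s) for r, s in zip(received, signal))
print(f"residual error after cancellation: {residual:.2e}")
```

Introducing a small gain mismatch between the two lines (say, 1%) leaves a residual at roughly 1% of the coupled noise, about 40 dB of attenuation, which matches the “tens of dB” figure above.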
What techniques and tactics have you used to improve accuracy? Have you found one that you favor consistently?