As most analog circuit engineers know, designing a circuit for very good performance at a nominal, fixed operating point is only part of the challenge. The larger challenge is ensuring that the circuit maintains precision performance over time and temperature, in the face of transducer variability as well as supply-rail and current fluctuations.
For this reason, I re-read the outstanding EDN article by the late, much-missed Jim Williams, "This 30-ppm scale proves that analog designs aren't dead yet" (see it here), every year or so.
To me, his processor-free design represents the ultimate in engineering understanding and circuit elegance, as he identified and worked with (or around) every error and drift source. For example, read how he describes testing for the zero-drift current point of a Zener diode he is using as a voltage reference. This is a great example of what Samuel Florman refers to as the "Existential Pleasures of Engineering."
But times change, and so do technologies. When I spoke with Jim shortly before his untimely passing, he said that much of the striving for analog perfection he used to do was no longer as necessary. With the availability of embedded processors running compensation and calibration routines, and integrated flash memory to store calibration factors, he could now calibrate a circuit digitally. He could do this for general system errors such as drift, or even do additional compensation for individual production unit variations.
Errors in transducers could also be calibrated out at the factory, as long as you could link the specific transducer to those calibration factors stored in the instrument, via an electronic tag.
These steps would take care of many errors, but not all. Some error sources, such as various types of noise, would be much harder to calibrate out -- but perhaps more could be accomplished with additional effort and algorithm development. (See: Can Integration Help With Our Noise Gamble?)
By integrating non-analog functions such as processor, code memory, and data memory onto a single IC, along with the analog signal channel(s), designers have a new way of dealing with long-standing error situations -- and overcoming them. To what extent they can do this depends on the specific errors and imperfections: are they foreseeable, understandable, and consistent?
Perhaps the historical striving for inherent analog-circuit perfection is no longer as large an imperative as it was when Jim developed the 30-ppm scale and many other "near-perfect" circuits.
Which error types do you think can be calibrated out with digital techniques, and which ones can't? Have you ever assumed you could minimize an error with one approach, but eventually found that you had to do it the other way?