In the ancient days of analog signal conditioning and data-acquisition channels, about 25 years ago, designers put lots of effort into perfecting the input channel. Sources of error such as drift, nonlinearity, gain accuracy, and other parameters were analyzed and then overcome by elimination or cancellation.
Designers did this by carefully studying possible architectures and meticulously selecting components, and through the use of matched parts, compensation networks, clever circuit techniques, and other methods. Ironically, in some cases the “perfect” channel then had to be deliberately made less perfect, to accommodate and compensate for the vagaries of the attached sensor, such as the thermocouple with its well-defined nonlinearity of Seebeck voltage versus temperature.
All this has completely changed. It is now easier, and makes much more sense, to accept channel imperfections (or plan for the ones you deliberately introduce) and instead calibrate and correct the channel numerically. The calibration factors are usually stored in local EEPROM, and the channel then performs to specification as the driver software applies these factors in its calculations. As an added benefit, a change of key calibration factors lets the same physical channel handle many I/O types, such as different thermocouple types.
Of course, the next logical step is to give those imperfect transducers their own calibration factors. That's the intention of IEEE 1451, which defines a Transducer Electronic Data Sheet (TEDS). Transducers with this capability are self-identifying and self-calibrating, among other capabilities.
Overall, transducer and analog-channel calibration is a good thing, with no apparent downside. But there may be one. It's not cost, since the calibration circuitry and EEPROM generally cost less than precision analog circuitry. Instead, it's designer complacency. Unless you understand exactly what is calibrated, and what is not, you can have a false sense of perfection, or at least of meeting your system specs in all relevant situations. Is the circuitry calibrated over temperature? Over changes in ac and dc supplies? Over component aging or saturation? Over nonlinear effects that may occur as voltage, current, or other internal factors change, whether consistently or not? Over variation in other factors? Can the calibration factors in memory be overwritten or corrupted?
It's not that you shouldn't use digitally based calibration factors; in fact, you'd be foolish not to. It's just that you need to be aware of what this calibration and compensation encompasses. That's responsible engineering practice.