
Can μCs Replace Analog Circuits?

The cult of DSPism believes that eventually there will be essentially no analog circuits, that all circuitry will be digital, in the form of microcontrollers (μCs), supplemented by mixed-technology ADCs and DACs. This article explores the possibilities and limitations of this viewpoint.

Before μCs became circuit components, there was a clear divide between digital and analog. Digital circuits consisted of the kinds of functions found in 7400-series TTL or 4000-series CMOS databooks: flip-flops, shift registers, decoders, and gates of all kinds. Analog databooks had quite different parts: mostly op-amps, voltage references, combinations of them such as voltage regulators, and various borderline parts such as comparators, which are analog in, digital out; the 555 timer, which has mostly analog pins but also has digital out and reset in; analog switches, which have digital in; and multipliers, which can have a digital input in some applications. On closer inspection, much of what is in the analog databooks has been partly digital all along. A comparator is a one-bit ADC, and a 555 timer is a kind of voltage-to-time converter, which is itself a kind of ADC.

So what exactly is digital and what is analog? The best way to define these words is in reference to waveforms, which are signals when they encode a message in communications systems. Waveforms are electrical functions of time: v(t) or i(t) or even p(t). The definition could be extended to any dynamic (time-dependent) physical quantity – and even to social variables such as company cash flow or the U.S. debt as functions of time. Analog waveforms are simply continuous functions of time. The definition ultimately goes back to the mathematical definitions of continuity and analysis. In math, analysis refers to continuous functions and is “analog math”. Wherever waveforms are continuous in time, we have analog electronics.

In contrast, digital is mathematically synonymous with discrete. Discrete functions have discontinuities in their numerical values and are associated not with the continuous number line of real numbers but with the integers, which leave gaping holes along the number line to be filled by the remaining rational and irrational numbers. Digital computers have waveforms that are discrete both in value and in time. Therefore, to simplify the definitions to their minimalist essence:

Digital waveforms are limited to two values, {0, 1}, which makes digital waveforms Boolean functions. This limitation can be partially overcome by grouping multiple scalar digital variables together into a vector quantity that represents a number in base 2. The earliest μCs did this with four bits (Intel’s 4004, the first μC), then quickly progressed to 8 bits, which became the dominant bit grouping for processors in the 1970s and ‘80s. Now 32-bit ARM processors are becoming commonplace.
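A minimal sketch of this bit-vector idea in Python (the helper function and sample values are illustrative, not from the article):

```python
# Grouping scalar {0, 1} variables into a vector that represents a
# base-2 number. An n-bit vector can represent 2**n distinct values.

def bits_to_int(bits):
    """Interpret a list of bits (MSB first) as an unsigned base-2 number."""
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value

nibble = [1, 0, 1, 1]              # 4 bits, the width of the Intel 4004
byte   = [1, 0, 1, 1, 0, 1, 1, 0]  # 8 bits, the 1970s/'80s workhorse width

print(bits_to_int(nibble))  # 11
print(bits_to_int(byte))    # 182
print(2 ** 32)              # distinct values in a 32-bit ARM word
```

The same grouping is what lets a byte stand for 256 levels rather than two.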

As an aside, the groupings of bits have informally become associated with names that have stuck. The most common is the byte for 8 bits, and of lesser usage, the nibble for 4 bits. I would like to propose a fuller set of neologisms, retaining the ingestion metaphor of the language that has become associated with bit groupings:

As the number of bits increases, numerical resolution increases and approaches the continuum in the limit. At the same time that functions represented in software are gaining resolution from wider μC bit groupings, μCs are also increasing in clock rate, causing their discrete-time characteristic to approach continuity. Both trends cause μC capabilities to approach those of analog functions. Thus it might seem reasonable to suppose that μCs with ADCs and DACs are all that should be necessary for the electronics of the future.
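The resolution trend can be put in numbers. This sketch, assuming an ideal converter with a 1 V full scale (the function name and span are illustrative), shows the LSB step shrinking toward the continuum as bit width grows:

```python
# The smallest representable step (LSB) of an ideal n-bit converter
# over a given full-scale span is full_scale / 2**n.

def lsb_volts(n_bits, full_scale=1.0):
    """LSB voltage step of an ideal n-bit ADC or DAC."""
    return full_scale / (2 ** n_bits)

# Doubling the bit count squares the resolution improvement:
for n in (4, 8, 16, 32):
    print(f"{n:2d} bits: LSB = {lsb_volts(n):.3e} V")
```

At 32 bits the step over a 1 V span is below a nanovolt – far finer than the noise floor of most real circuits, which is why "enough bits" arrives well before infinity.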

Zeno’s paradox – the frog jumping half the remaining distance to the line never gets there – aptly suggests that this belief is in vain: no matter how fast μCs execute code, they are still digital in nature if the clock rate is finite. And in a physical world, it is. Yet in real-world engineering, many applications do not need an infinite clock rate to adequately approach continuous-time behavior, and as clock rates increase, more electronic systems that were once the sole domain of analog circuits are being implemented with μCs. On the leading edge of this trend are parts like the Cypress PSoC 5, which has attractively performing op-amps (input offset voltage of ±0.3 mV at over 2σ, and fT = 6 MHz) cohabiting a chip with an ARM processor.

What is limiting μC-only electronics? Several factors work against an all-μC world. First, as μC speeds increase, operating voltages decrease to 3.3 V or lower. This is not an adequate voltage range for many applications, including general-purpose test and measurement (T&M) equipment. Pin currents are also limited, so any application requiring significant power cannot practically be implemented by a μC alone. Special-purpose μC-based ICs, with power DACs of various kinds, are one possibility for addressing these limitations.

Another basic limitation is speed. Analog waveform processing is still far faster than a fast μC running a software program and driving a DAC or ADC. Analog multipliers can achieve multiplication to comparable accuracy (and at greater resolution) at a far greater bandwidth than a μC.

Systems such as measurement instruments have modes – different structural configurations of circuits – that are switched. Most commonly, these are ranges for parameters. The analog switches that do the ranging can be many and varied in their requirements. Electromechanical relays are in some cases the best (and sometimes the only feasible) analog switch. The resistance values of op-amp circuits can be many and varied. Analog circuits often need capacitors, some large in value. Some have matched diodes or transistors. These can be implemented monolithically, but the generality of the circuitry is a problem. The PSoC 5 has an analog bus but not a general switch matrix that allows arbitrary component interconnection. Even if it did, the switches would detract from circuit performance in many cases. A general-purpose analog switch matrix – one that would let analog circuitry serve as a programmable I/O block for a μC – has remained an elusive goal.

High-performance analog – whether in precision or speed – is not likely to be replaced by a μC. Do not expect to see oscilloscope front-ends or waveform-generator output amplifiers become all-digital. μCs also cannot replace fast digital functions such as direct digital waveform synthesizers.

Power electronics is not likely to be replaced by a μC, except for some control functions. The power stage – consisting of power MOSFETs, gate drivers delivering pulses of 1 A or more, magnetic components, and power capacitors – belongs to a different electronics world than μCs. The same applies to motor drives and pulsed-power electronics, including laser and ultrasonic probe drivers for medical instruments, magnetizers, high-power transmission-line converters, and electric train drives, which use SCRs for switching. Exceptions to μC application abound in analog electronics.

The issue closer to home for many circuits engineers is not one of feasibility but of optimality. As the μC realm expands, it is at the edges of its applicability that hard design decisions must be made. Often, these decisions involve choices between external analog circuitry and μC code. For instance, in impedance (RLC) meter design, phase detection can be implemented with an analog translinear amplifier (purely analog), with analog switches driven by digitally phased waveforms and followed by low-pass filters (semi-analog), or by acquiring the voltage and current waveforms with an ADC and calculating the real and reactive components of Z in software (μC digital). Radio designers are going through the same assessments: whether to use analog RF circuits or to put a fast, high-resolution ADC in the front-end of the receiver – and if so, how close to the antenna? In audio, how good is digital amplification? Good enough for “golden ears”? For some, even bipolar-junction-transistor analog is not good enough.
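A sketch of the μC-digital alternative for the RLC meter, assuming ideal synchronous sampling of exactly one cycle of a single test tone (function names and the test values are illustrative, not from the article). It recovers the complex impedance Z = R + jX from sampled voltage and current by correlating each with quadrature references – a single-bin DFT acting as a software phase detector:

```python
import math

def impedance_from_samples(v, i, n_per_cycle):
    """Estimate complex impedance Z = R + jX from one full cycle of
    sampled voltage and current waveforms at the test frequency."""
    def phasor(x):
        # Correlate with cos and -sin references (single-bin DFT).
        re = sum(x[k] * math.cos(2 * math.pi * k / n_per_cycle)
                 for k in range(n_per_cycle))
        im = sum(x[k] * -math.sin(2 * math.pi * k / n_per_cycle)
                 for k in range(n_per_cycle))
        return complex(re, im) * 2 / n_per_cycle
    return phasor(v) / phasor(i)

# Synthetic check: 100 ohms resistance in series with +50 ohms reactance,
# sampled 64 times per cycle; voltage leads current by the phase of Z.
N = 64
Z_true = complex(100, 50)
phi, mag = math.atan2(Z_true.imag, Z_true.real), abs(Z_true)
i_samples = [math.cos(2 * math.pi * k / N) for k in range(N)]
v_samples = [mag * math.cos(2 * math.pi * k / N + phi) for k in range(N)]

Z = impedance_from_samples(v_samples, i_samples, N)
print(round(Z.real, 3), round(Z.imag, 3))  # 100.0 50.0
```

The analog translinear or switched-detector alternatives compute the same real and reactive products in hardware; here the multiply-and-average happens in code, which is exactly the trade the article describes.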

The clear design trend is one of replacing low-performing analog with μCs while retaining analog circuits for high-performance applications, where performance is measured in speed, precision, power, or functional specialization – or decided by less tangible aspects such as the designer’s knowledge base, analog or digital technology biases, parts count, maintainability in the field, or observability for testing. In conclusion, at the leading edge of μCs it is not always clear which design alternative is optimal, and the engineer’s ability can be strained in having to make such decisions.

8 comments on “Can μCs Replace Analog Circuits?”

  1. Esen
    April 7, 2015

    Very nice article Dennis,

    I personally think a well-designed and well-tested circuit consisting of a couple of analog ICs is more reliable than thousands of lines of code. Design choices may become much clearer if you consider the entire system in terms of “failure possibility” or “unintended behaviour”. But still, in small companies like the one I work for at the moment, that choice is straightforward – and you know what I mean.

    Bob Widlar would turn over in his grave if he saw what this μC madness has turned into.

  2. vphilavanh
    April 7, 2015

    Very nice article Dennis.  It is hard to tell what is analog and what is digital to most people these days with the industry trend of integration…unless you peel back a few layers and dive into the specs.

    Monolithic these days means the integration of multiple analog functionalities to generate digital bits out to communicate with the uC (or FPGA).  Perhaps the founders of Fairchild didn't envision that complete systems on a single die could be possible…or did they?  One that comes to mind is the agile transceiver (AD9361) from Analog Devices.

  3. eafpres
    April 11, 2015

    Hi Dennis–to me an interesting aspect of this question of digital or analog is the language used around high-speed data in data communications, such as the backplane data rate capability in a server.  It seems fairly common, even when the data are discrete, to talk about 25 GHz and 25 Gbps (gigabits per second) interchangeably.  Using GHz is natural to circuit designers concerned with losses and signal-to-noise.  Using Gbps is natural when talking about how fast the system can move data.

    These systems running at 25 GHz or faster are smack in the microwave frequency range, which for a long time was the domain of specialty EEs called RF Engineers.  RF stuff is “weird,” as a VP of a company I worked for in the past liked to say.  One of the weird aspects is that when you get into 100s of MHz and above, radiation into space as well as coupling of ambient radiation into circuits (i.e., EMI) are two important mechanisms that designers at low frequencies didn't have to worry about.

    I guess for me, the faster the speeds, the more the silos of knowledge will be impediments, and those who can master “analog and digital” or DC and RF will be more successful.

  4. D Feucht
    April 11, 2015

    Good point. At these high frequencies, digital waveforms – nominally digital because they are Boolean (two-valued) in amplitude and discrete in time – are actually continuous in time and are only interpreted as discrete using eye diagrams, etc. Their discreteness is an abstraction imposed upon the continuous waveform.

    Consequently, to engineer systems having these waveforms, one must have some  knowledge of continuous-waveform electronics, which is just another name for “analog” electronics.

  5. Victor Lorenzo
    April 12, 2015

    “(…) to talk about 25 GHz and 25 Gbps (gigabits per second) interchangeably” – it can be even more confusing, as the real data rate, in bytes per second, depends on other factors like bus architecture, framing, error checking, bit coding, and other protocol-related aspects.

  6. Victor Lorenzo
    April 12, 2015

    >> “a well-designed and well-tested circuit consisting of a couple of analog ICs is more reliable than thousands of lines of code”

    It is highly dependent on the final application and the development tools used. There are many functions which are much more complicated and far less flexible when implemented with analog components than their digital-domain counterparts.

    Testing software components is difficult in some aspects, especially for inexperienced programmers, but fortunately we have plenty of static and dynamic code analyzer tools and tool sets. It is also fairly easy to write test cases and test benches, and we have several tools that help automate this process.

    For testing digital processing methods and algorithms we also have tools like MATLAB, Scilab, Mathematica, and others. Some tools, like MATLAB itself, have modules for generating code which we can integrate into our application.

    Some other tools, like Altera's NIOS II C2H compiler, generate hardware accelerator VHDL/Verilog code from C source code, which we can integrate directly as part of the soft-core CPU's peripheral set.

    Both worlds have advantages and disadvantages, but it is up to us to make the evaluation depending on our skills, experience, and ability to learn and master new things.

  7. eafpres
    April 12, 2015

    Hi Victor–your comments on software testing caught my attention.  It is a fact of today's designs that almost everything, analog or digital, chips or finished products, involves lots of software.  Even if not embedded in the product, the design, the test, even the test equipment and the manufacturing lines run on software.

    At no time in our history has the Quality adage “you can't test in Quality” been more true.  Testing is fine, but software, like any other product or tool, must be designed for quality and reliability.  You mention Test Cases; modern software development also includes so-called Unit Tests and Integration Tests, which are part of development and exercise the code at every build.  Still not perfect, but anyone who depends only on QA to validate their software is at high risk of issues later.

  8. Victor Lorenzo
    April 13, 2015

    Hi Blane,

    Software development has evolved at an incredibly fast pace, and we can feel it especially in modern software development paradigms and methodologies. There is a fairly clear differentiation among several SW development areas: security devices (smartcards, HSMs, secure comms), automotive (ECUs, safety, etc.), desktop user applications, embedded industrial devices (controllers, servos, PLCs, etc.), mobile user applications, compilers, test tools…and so on.

    Most SW development scenarios have a comprehensive set of quality assurance standards, either formally defined or simply generally accepted.

    Static/dynamic code analyzers help detect errors according to a very extensive knowledge base and, in my opinion, are the second barrier between us (occasional or dedicated SW developers) and SW bugs. The first barrier is making use of well-established coding standards (like the JSF air vehicle C++ coding standards or MISRA).

    Unit Tests constitute the third barrier. With this methodology every functionality is isolated and tested.

    Unfortunately we are not able to use UT under all SW development scenarios, even though we can use tricks and add dedicated code to the application that we can simply disable for release builds. I wrote a note for CodeProject some time ago with one example application that implements the foundations for UT by adding JScript support (http://www.codeproject.com/Articles/644687/Adding-JavaScript-scripting-support-to-one-applica). It was at the core of the test bench I implemented during development of a smartcard emulator.
