It has been estimated that the human brain performs 3.6×10¹⁵ synaptic operations per second and, judging from blood flow and oxygen consumption, consumes 12 W. That means it manages 3×10¹⁴ operations per joule, yet it is made up of slow and noisy components.
Digital computers have come a long way: UNIVAC managed 0.015 operations per joule in 1951, and IBM's Blue Gene managed 1.684×10⁹ operations per joule in 2010. Even so, it has been argued that the human brain remains roughly five orders of magnitude more efficient, because it uses the natural physics associated with it rather than something foreign. This may also imply that the conversion into the digital domain, and the computation in that domain, is inefficient, and I think few people would disagree with that. It is generally accepted that a processor is the least efficient way to get any particular job done; flexibility is the only thing it has on its side.
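To put those numbers side by side, here is a quick back-of-the-envelope calculation in Python, using only the figures cited above; it is illustrative arithmetic, not a rigorous benchmark:

```python
# Operations per joule, from the figures cited in the text.
brain_ops_per_s = 3.6e15      # estimated synaptic operations per second
brain_watts = 12.0            # estimated from blood flow / oxygen consumption
brain_ops_per_j = brain_ops_per_s / brain_watts  # ~3e14 ops/J

univac_ops_per_j = 0.015      # UNIVAC, 1951
blue_gene_ops_per_j = 1.684e9 # IBM Blue Gene, 2010

print(f"Brain:     {brain_ops_per_j:.2e} ops/J")
print(f"Blue Gene: {blue_gene_ops_per_j:.2e} ops/J")
print(f"Brain advantage over Blue Gene: {brain_ops_per_j / blue_gene_ops_per_j:.0f}x")
# -> roughly 1.8e5, i.e., about five orders of magnitude
```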
Therefore, it would seem that analog should be capable of more computation per unit of energy, because it too can exploit the physical attributes of its environment. But are we making use of this? It would appear that progress is being made, but I wonder whether we aren't on the wrong track.
In a 2004 presentation on digitally assisted analog circuits, Bernhard Boser of the University of California, Berkeley, offered two graphs. The first showed improvements in digital and analog circuit performance over a 15-year period. It is not entirely clear to me what figure of merit this is based on, so the numbers may be questionable. However, they show a 150× difference between the rates of improvement of digital and analog.
He does note that, over that period, analog improved more than digital in terms of power consumption, although analog trailed digital in logic improvement. This again suggests that processor architectures became less efficient during this period.
In 1998, Rahul Sarpeshkar, of the department of biological computation at Bell Labs, wrote in a paper that analog computation could be far more efficient than digital computation. Adding two eight-bit numbers requires about 240 transistors in a digital CMOS process, but in an analog structure it takes a single wire and the application of Kirchhoff's current law. Similarly, an eight-bit multiplication consumes about 3,000 transistors in a digital implementation, while multiplying two currents in an analog circuit takes only four transistors. The problem here, as I see it, is that he is comparing things that require absolute precision with things that cannot provide it, even though the analog versions would probably produce just as good an answer for most purposes.
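To make that precision trade-off concrete, here is a small hypothetical model of my own construction (not from Sarpeshkar's paper): digital addition is exact, while Kirchhoff-law addition is modeled as summing two currents that each carry an assumed ~1% Gaussian error, roughly what component mismatch and noise might contribute.

```python
import random

def digital_add(a, b):
    """Exact eight-bit addition, as a digital adder computes it."""
    return (a + b) & 0x1FF  # nine-bit result, no error

def analog_add(a, b, mismatch=0.01):
    """Kirchhoff-law addition modeled as summing two currents.

    Each 'current' carries an assumed ~1% Gaussian error standing in
    for component mismatch and noise; the wire itself does the adding.
    """
    ia = a * random.gauss(1.0, mismatch)
    ib = b * random.gauss(1.0, mismatch)
    return ia + ib

random.seed(0)
errors = []
for _ in range(10_000):
    a, b = random.randrange(256), random.randrange(256)
    errors.append(abs(analog_add(a, b) - digital_add(a, b)))

print(f"mean analog error: {sum(errors) / len(errors):.2f} LSB")
```

Under these assumptions the analog "adder" is off by only a couple of LSBs on average, an error that is invisible for most signal-processing purposes, yet it needed one wire rather than 240 transistors.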
Have we set ourselves up to require such precision and repeatability that we have painted ourselves into a corner? The human brain is probably no more accurate than a typical analog circuit, and we hardly understand the noise sources to which it is subjected, yet we presume that the human mind is far more capable than any machine, even though we sometimes make mistakes. We are so much more powerful than computers that we can add redundancy into the process to catch the large errors, and most of the time we ignore the small ones. Ironically, IBM's Watson, which was designed to win at Jeopardy, made a number of errors but still managed to consistently beat its opponents. However, the power it took to do that was immense.
Are we expecting too much precision from analog? If we relaxed that, could we do a lot more for less power?