In my last blog, Operations per Joule, I talked about digital, analog, and the brain — comparing the energy efficiency of each and questioning the approach we take to solve problems.
The brain gets all of its advantage by working with physics rather than using some foreign mechanism to capture and process information. We don't yet know how it does all of this, but one aspect of the brain is very clear: it relies on large amounts of slow communications rather than concentrating communications into a few very fast channels. The systems we design are based on how fast we can make communications, attempting to optimize either throughput or latency.
We talk about signal propagation times in a computer in the nanosecond range, while for the brain they are in the millisecond range. It would appear that computers have a 10^6 speed advantage, and yet they cannot come close to the brain's total processing power.
Now, I want to tread on shaky ground for a moment, and I hope nobody takes offense. Haier and Jung are neuroscientists who have been studying the brain for a long time. In a 2005 report, they determined that men tend to have more gray matter while women have more white matter. Gray matter loosely equates to processing, while white matter equates to communications. The fact that men and women seem to have similar mental abilities would imply that the two are at least partially interchangeable, in that each provides a certain level of capability.
In the digital world, there is a very clear distinction between what is processing and what is communications. We have, in fact, built all of our programming paradigms around the fact that computation is faster than communications, so we attempt to minimize the latter as much as we can. We also separate memory from computation and cluster each of them, primarily for practical fabrication reasons, but this inadvertently puts communications in the path of computation.
So while we have managed to create communications that are much faster than what happens in the brain, we have not managed to use them to our advantage. In the analog world, we are perhaps closer to the brain model in the way that computation and communications are not really that separate. Wires connect components in a network and can even perform some aspects of the computation itself. In the previous blog I used Kirchhoff's current law to demonstrate how a wire can perform an addition.
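That wire-as-adder idea can be sketched numerically. The snippet below models a summing node held at a virtual ground (as in an op-amp summing junction); the voltage and resistor values are illustrative, not from the original blog:

```python
# Kirchhoff's current law as analog addition (illustrative values).
# Two input voltages drive currents through resistors into a node
# held at 0 V; the currents simply merge on the wire.
v1, v2 = 1.5, 2.0        # input voltages (volts)
r1, r2 = 1_000, 1_000    # input resistors (ohms)

i1 = v1 / r1             # current contributed by input 1
i2 = v2 / r2             # current contributed by input 2

# KCL: currents entering the node sum -- the wire performs the addition.
i_node = i1 + i2
print(i_node * 1_000)    # 3.5 mA, i.e. (v1 + v2) scaled by 1/R
```

With equal resistors the node current is proportional to v1 + v2; unequal resistors would give a weighted sum, which is the basis of analog multiply-accumulate arrays.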
What we don't seem to have worked out is how to deal with noise in the analog world. It is generally accepted that the brain is very noisy and uses redundancy, either within a single brain or externally (multiple people thinking about the same problem). It's as if one mind can never really be trusted, so multiple minds compensate for it. It is not clear whether the brain uses averaging or excludes outliers through some form of threshold, but one thing is clear: it does not define noise as the number of bits of resolution that can be placed into a digital flow; it is more about how useful the result is for an intended function.
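The two redundancy strategies mentioned above can be contrasted with a toy model. This is purely a sketch: it assumes a population of noisy "units" that each estimate the same quantity, with an occasional wildly wrong outlier, and compares combining them by averaging versus by a median (which effectively discards outliers):

```python
# Sketch: redundancy against noise -- averaging vs. outlier exclusion.
# Assumed noise model (not from the original blog): most units have
# small Gaussian error, ~10% are gross outliers.
import random
import statistics

random.seed(0)
TRUE_VALUE = 1.0

def noisy_unit() -> float:
    """One redundant unit's estimate of TRUE_VALUE."""
    if random.random() < 0.1:
        return TRUE_VALUE + random.uniform(-5, 5)   # gross outlier
    return TRUE_VALUE + random.gauss(0, 0.05)       # mild noise

readings = [noisy_unit() for _ in range(11)]

mean_est = statistics.mean(readings)      # averaging: outliers pull the result
median_est = statistics.median(readings)  # median: outliers are simply ignored

print(f"mean error:   {abs(mean_est - TRUE_VALUE):.3f}")
print(f"median error: {abs(median_est - TRUE_VALUE):.3f}")
```

The point is not the specific numbers but the qualitative difference: an averaging combiner degrades in proportion to outlier magnitude, while a rank-based combiner is insensitive to a minority of bad units, and we don't know which of these the brain more closely resembles.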
It is possible that the brain has a better manufacturing system than our chips and that this process yields more consistent performance in each of the cells, but there are techniques such as digitally assisted analog that can be used to compensate for some of this and that will be the subject of some future blogs. But for now I will leave you with a question: Did the introduction of digital into the processing chain make life more difficult for analog design? Would certain things be easier if they went back to being all analog?