In two previous blogs, Neuronics Creates Highly Efficient Memory, Part 1 and Neuronics Creates Highly Efficient Memory, Part 2, we discussed neural computing and how it has come into its own as a science and technology. It is largely associated with digital computing, but there are indications of some very powerful computing possibilities using analog circuit topologies. If we can show that analog has a higher information coding density than digital, then we are on the right path.
Is there a way of implementing the capabilities of Jim Albus’s CMAC (Cerebellar Model Articulation Controller) scheme with analog circuits and analog processing? One problem arises immediately in attempting to convert any digital scheme to analog: a digital state can be held indefinitely with no loss of information, while in analog circuits the energy stored in a reactance eventually decays through resistance.
A voltage across a sample-and-hold capacitor can be held for a long time by clever circuit methods, but not indefinitely or without loss of information. Is there a way of using feedback or other techniques to store an analog value in analog circuits?
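The decay in question follows the familiar RC law: a hold capacitor C with an effective parallel leakage resistance R droops as v(t) = V0·e^(−t/RC). A minimal sketch, with purely illustrative component values (not taken from the original):

```python
import math

def droop(v0, r_leak, c_hold, t):
    """Voltage remaining on a hold capacitor after t seconds of
    discharge through an effective parallel leakage resistance."""
    return v0 * math.exp(-t / (r_leak * c_hold))

# Illustrative values: 1 nF hold capacitor, 10 GOhm leakage -> tau = 10 s
v0 = 1.0
print(droop(v0, 10e9, 1e-9, 1.0))   # ~0.905 V left after 1 s
print(droop(v0, 10e9, 1e-9, 60.0))  # most of the value lost after 1 min
```

Even with a very low-leakage capacitor and a high-impedance buffer, the stored voltage is gone within seconds to minutes, which frames the problem the schemes below try to solve.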
One method extrapolates from the sample-and-hold or zero-order hold (ZOH) circuit. It recognizes first that the discharge of the hold capacitor caused by leakage current is a systematic error, ε, that can be both predicted in polarity (capacitors discharge; their voltages decrease with time) and reduced to leave a zero-mean error with some deviation. Then, by implementing a statistically significant number of these ZOHs in parallel and averaging their outputs, something like an indefinite memory might be achieved with negligible deviation. A proposal is shown below. The sampling switches open at the same instant, before any capacitor leakage has occurred, when the errors are ε1 = ε2 = 0 V.
For ε2 = ε1, Δε = 0 V and vc = vi. The systematic discharge of the capacitors is subtracted out, leaving only their difference in discharge rates, which is random for similar capacitors. Either could discharge faster than the other, and errors ε1 and ε2 have a random component, so that generally ε2 ≠ ε1 and Δε can be of either polarity with a mean of zero. Consequently, the Law of Large Numbers can be applied to stochastic circuits to maintain the stored value. However, this scheme seems rather inelegant and circuit-consuming even if it were to work. Despite statistical methods for reducing deviation, all the capacitors eventually discharge. A variation is to note the time scales involved: information need not be stored indefinitely, only for a time long relative to the more dynamic neuronic processes. Yet the human brain stores information for a lifetime.
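A quick Monte Carlo sketch of the idea (rates, counts, and mismatch figures are illustrative assumptions, not values from the original): each pair of matched capacitors shares a systematic droop rate plus an independent random mismatch; the pairwise difference Δε cancels the systematic term, and averaging N such pairs shrinks the residual random error by roughly 1/√N, as the Law of Large Numbers predicts.

```python
import random
import statistics

def stored_error(n_pairs, t, sys_rate=0.01, mismatch_sd=0.001, seed=0):
    """Residual error of the averaged differential-ZOH store after t seconds.
    Each capacitor droops at sys_rate plus an independent Gaussian mismatch;
    subtracting within each pair cancels the systematic droop, and averaging
    n_pairs pairs shrinks the remaining zero-mean term by ~1/sqrt(n_pairs)."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_pairs):
        eps1 = (sys_rate + rng.gauss(0, mismatch_sd)) * t
        eps2 = (sys_rate + rng.gauss(0, mismatch_sd)) * t
        diffs.append(eps2 - eps1)   # systematic sys_rate * t cancels exactly
    return statistics.mean(diffs)

# A single pair still carries the full random mismatch; ten thousand
# averaged pairs leave an error far smaller than the systematic droop.
print(stored_error(1, 1.0))
print(stored_error(10_000, 1.0))
```

The sketch also makes the objection in the text concrete: the error shrinks only as the square root of the circuit count, so halving the deviation costs four times the hardware.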
Dynamic RAM refreshes its capacitors so that the contents of digital memory are retained indefinitely without loss. What is different is that single bits are being refreshed.
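The distinction can be made precise: a bit can be re-quantized against a threshold before each rewrite, so refresh restores it exactly, whereas an analog value has no reference level to snap back to. A minimal sketch (function names and the 1% per-cycle droop are hypothetical):

```python
def refresh_bit(v, v_dd=1.0, v_thresh=0.5):
    """DRAM-style refresh: threshold the decayed cell voltage,
    then rewrite the full logic level. The droop is erased exactly."""
    return v_dd if v >= v_thresh else 0.0

def refresh_analog(v):
    """An analog cell has no discrete reference levels: the best a
    'refresh' can do is re-buffer the already-decayed value, droop and all."""
    return v

bit, analog = 1.0, 1.0
for _ in range(100):                 # 100 refresh cycles, 1% droop per cycle
    bit = refresh_bit(bit * 0.99)
    analog = refresh_analog(analog * 0.99)
print(bit)     # 1.0: fully restored every cycle
print(analog)  # ~0.366: the droop accumulates across cycles
```

This is why the DRAM trick does not transfer directly: refresh depends on quantization, which is exactly what a continuous analog memory gives up.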
Another of the more obvious schemes is to use gate charge storage, as in E²PROM memories. While this is a possibility, it might be infeasible because of the need for rapid and frequent updating of memory. In organic neural systems, synaptic weights (in the cerebellum, those of the Purkinje cells) are updated at a slower rate than neuronal firings.
Learning is slower than neural activation dynamics, yet the time scales are not too different. With E²PROM as an option, one can always do what Altera does for its FPLAs and back up the analog information in digital form, with DACs and distribution methods. It is a possibility, but only for long-term memory or “mind capture,” and probably not for operating memory.
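The backup idea can be sketched as follows (the resolution, reference voltage, and function names are hypothetical, chosen only for illustration): each analog weight is quantized for digital storage and later restored through a DAC, so the restored value is correct only to within one LSB. That is tolerable for long-term "mind capture" but a poor fit for a rapidly and frequently updated operating memory.

```python
def adc(v, bits=8, v_ref=1.0):
    """Quantize an analog value in [0, v_ref] to an n-bit code (ideal ADC assumed)."""
    code = round(v / v_ref * (2**bits - 1))
    return max(0, min(2**bits - 1, code))

def dac(code, bits=8, v_ref=1.0):
    """Restore an analog level from the stored code (ideal DAC assumed)."""
    return code / (2**bits - 1) * v_ref

w = 0.637                  # an analog synaptic weight, in volts
restored = dac(adc(w))
print(restored)            # within one LSB (~0.004 V at 8 bits) of w
```

Each round trip through the backup path re-introduces up to one LSB of quantization error, which is why the text reserves this scheme for long-term storage rather than the continuously updated operating memory.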
In the second part of this blog, we'll continue our look at possible methodologies for neuronic analog memory.