Large-Scale Integration: Neuronics, Part 1

In two previous blogs, Neuronics Creates Highly Efficient Memory, Part 1 and Neuronics Creates Highly Efficient Memory, Part 2, we talked about neural computing and how it has come into its own as a science and technology. It is largely associated with digital computing, but there are indications of some very powerful computing possibilities using analog circuit topology. If we can show that analog has a higher information coding density than digital, then we are on the right path.

Is there a way of implementing the capabilities of Jim Albus's CMAC (Cerebellar Model Articulation Controller) scheme with analog processing? One problem that immediately arises in attempting to convert any digital scheme to analog is that digital memory differs from analog memory: a digital state can be held indefinitely with no loss of information, while in analog circuits the energy in a storage reactance eventually decays through resistance.

A voltage across a sample-and-hold capacitor can be held for a long time by clever circuit methods, but it cannot be held indefinitely or without loss of information. Is there a way of using feedback or other techniques to store an analog value in analog circuits?
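As a rough illustration (the component values here are hypothetical, chosen only to show the scale of the problem), the droop of a hold capacitor discharging through its own leakage resistance follows a simple exponential, v(t) = V0·e^(−t/RC):

```python
import math

def held_voltage(v0, t, c, r_leak):
    """Voltage remaining on a hold capacitor after time t, assuming
    simple exponential self-discharge through a leakage resistance:
    v(t) = v0 * exp(-t / (R * C))."""
    tau = r_leak * c  # discharge time constant, in seconds
    return v0 * math.exp(-t / tau)

# Example: a 1 nF hold capacitor with 10 GOhm effective leakage
# gives tau = 10 s; after only 1 s the stored value has already
# drooped by roughly 10%.
v_after_1s = held_voltage(1.0, 1.0, c=1e-9, r_leak=1e10)
```

Even with a very high-quality capacitor and switch, the stored value decays on a time scale of seconds, which is what motivates the schemes that follow.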

One method that extrapolates from the sample-and-hold or zero-order hold (ZOH) circuit recognizes first that the discharge of the hold capacitor caused by leakage current is a systematic error, ε, that can be both predicted in polarity (capacitors discharge; their voltages decrease with time) and reduced to leave a zero-mean error with some deviation. Then, by implementing a statistically significant number of these ZOHs in parallel and averaging their outputs, something like an indefinite memory might be achieved with negligible deviation. A proposal is shown below. The sampling switches open at the same instant, when capacitor leakage has yet to occur and when errors ε1 = ε2 = 0V.

For ε2 = ε1, Δε = 0V and vc = vi. The systematic discharge of the capacitors is subtracted out, leaving only their difference in discharge rates, which is random for similar capacitors. Either could discharge faster than the other, and errors ε2 and ε1 have a random component, so that generally ε2 ≠ ε1 and Δε could be of either polarity with a mean of zero. Consequently, the Law of Large Numbers can be applied to stochastic circuits to maintain the stored value. However, this scheme seems rather inelegant and circuit-consuming even if it were to work. Despite statistical methods for reducing deviation, all the capacitors eventually discharge. A variation is to emphasize time scales: information need not be stored indefinitely, only for times long relative to the more dynamic neuronic processes. Yet the human brain stores information for a lifetime.
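The statistical argument can be sketched numerically. In this minimal model (error magnitudes are hypothetical), each differential ZOH pair cancels the common systematic discharge exactly, leaving a zero-mean random mismatch Δε = ε2 − ε1; averaging N such pairs shrinks the residual deviation roughly as 1/√N:

```python
import random
import statistics

def zoh_pair_error(systematic, sigma, rng):
    """Residual error of one differential ZOH pair. The systematic
    discharge appears in both eps1 and eps2 and cancels in the
    difference, leaving only the random capacitor mismatch."""
    eps1 = systematic + rng.gauss(0.0, sigma)
    eps2 = systematic + rng.gauss(0.0, sigma)
    return eps2 - eps1  # zero-mean random variable

def stored_value(v_in, n_cells, systematic, sigma, seed=0):
    """Recovered value from n parallel pairs: input plus the mean of
    the residual errors, whose deviation falls as ~1/sqrt(n)."""
    rng = random.Random(seed)
    errs = [zoh_pair_error(systematic, sigma, rng) for _ in range(n_cells)]
    return v_in + statistics.mean(errs)

# With many cells the recovered value hugs the input, regardless
# of how large the (cancelled) systematic discharge was.
v = stored_value(1.0, 10_000, systematic=0.05, sigma=0.01)
```

Note that the systematic term drops out exactly in the subtraction; only the mismatch term needs the Law of Large Numbers, which is why so many parallel cells are required.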

Dynamic RAM refreshes its capacitors to retain the contents of digital memory indefinitely and without loss. The difference is that single bits are being refreshed.

Another of the more obvious schemes is to use gate charge storage, as in E²PROM memories. While this is a possibility, it might be infeasible because of the need for rapid and frequent updating of memory. In organic neural systems, synaptic weights (or, in the cerebellum, Purkinje-cell weights) are updated at a lower rate than neuronal firings.

Learning is slower than neural activation dynamics, yet the time scales are not too different. With E²PROM as an option, one can always do what Altera does for its FPLAs and back up the analog information in digital form, with DACs and distribution methods to restore it. It is a possibility, but only for long-term memory or “mind capture,” and probably not for operating memory.
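A minimal sketch of such a digital backup, assuming a hypothetical 8-bit resolution and 1 V full scale: each analog weight is quantized to a code for nonvolatile storage, then restored through a DAC, with a worst-case round-trip error of half an LSB.

```python
def to_code(v, bits=8, v_ref=1.0):
    """Quantize an analog weight (0..v_ref) to an n-bit code for
    nonvolatile digital backup. Values are clamped to the code range."""
    full_scale = (1 << bits) - 1
    return max(0, min(full_scale, round(v / v_ref * full_scale)))

def from_code(code, bits=8, v_ref=1.0):
    """DAC restore: reconstruct the analog value from the stored code."""
    return code / ((1 << bits) - 1) * v_ref

# Round trip: the restored weight is within half an LSB of the original.
w = from_code(to_code(0.42))
```

The cost of this approach is the quantization step itself, which is one reason it suits archival "mind capture" better than fast operating memory.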

In the second part of this blog, we'll continue our look at possible methodologies for neuronic analog memory.

4 comments on “Large-Scale Integration: Neuronics, Part 1”

  1. Davidled
    November 8, 2013

    If the circuit learns from training, this type of circuit would be designed into the IC with multiple circuits. Bit information (0 or 1) would be learned in the digital circuit. I wonder if the charging voltage of the cap is a learned weight factor in the analog circuit.

  2. etnapowers
    November 13, 2013

    The ZOH circuit of the figure is a good solution for sampling an analog value. An important point is the management of the switches that charge the holding capacitors: an error in the timing of these switches can generate an unpredicted charge on the capacitor and cause the sample-and-hold process to fail.

  3. etnapowers
    November 13, 2013

    The reliability of the switches in the ZOH circuit of the figure is very important. Often the switches get stuck in one state, due to many factors: temperature, iterative sampling with a high number of commutations, etc. If one of the switches fails, the A/D conversion will be ineffective.

  4. SunitaT
    November 30, 2013

    Neuronics is an interdisciplinary field combining technical and medical expertise in neuroscience. Neuron ICs are application and communication processors. They are designed to include layers one through six of the OSI communications protocol, so that only the Application Layer needs to be programmed, reducing development costs for LonWorks-capable devices. Neuron chips are available with differing speeds, memory types, interfaces, and capacities.
