In the previous part of this blog (Neuronics: Long-Term Memory, Part 1), we were exploring the basis for the development of a technology that could store an analog quantity indefinitely. We looked at squaring and multitanh circuits.

A somewhat different inverse-function technique is applied to the multitanh circuit by driving the input of the multitanh function directly with the circuit input. Then as *v _{i}* increases, *v _{o}* will reach the first maximum, the peak of the first sine cycle. As *v _{i}* increases further, *v _{o}* decreases, reaches a minimum, and increases again until it reaches the second peak and descends again. A reference voltage source at the sine center voltage can activate feedback circuits at each cycle and send back positive feedback to hold the multilevel hysteresis circuit at a given cycle.

The result is not a function but something more like a hysteresis loop, except that there are more than two stable points for a given input, as shown below. Consequently, more than one bit of information can be stored by such a circuit indefinitely.

Recall from the previous part that we were looking at the generation of a repeating sine function that could be extended for multiple cycles.

We also considered a square-root circuit that could rotate the graphical representation of the function by 90°.

Continuing, the inverse function has five distinct values for a given input voltage. Which of these states the output voltage takes depends on the history of the input. The even-numbered nodes are stable because the inverse function has a negative slope at their zero-crossings and would correct any deviation from the stable value at the zero-crossing. (This is analogous to synchronous-motor stability.) Because these “nodes” are stable, output values can be retained indefinitely.
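This multistate behavior can be checked numerically. The sketch below assumes the multicycle function is an ideal sin(*v _{o}*) over two and a quarter cycles (the shape, range, and input value are illustrative, not the article's actual circuit); for an input of 0.3 it finds five candidate states and classifies them by the article's negative-slope criterion.

```python
import numpy as np

def solutions(v_i, cycles=2.25, n=200_000):
    """Find every v_o with sin(v_o) = v_i over [0, cycles * 2*pi]."""
    v_o = np.linspace(0.0, cycles * 2 * np.pi, n)
    g = np.sin(v_o) - v_i
    # a root lies wherever g changes sign between adjacent samples
    idx = np.where(np.sign(g[:-1]) != np.sign(g[1:]))[0]
    # linear interpolation refines each crossing point
    return [v_o[i] - g[i] * (v_o[i + 1] - v_o[i]) / (g[i + 1] - g[i])
            for i in idx]

roots = solutions(0.3)
print(f"{len(roots)} states for v_i = 0.3")          # 5 states
for k, r in enumerate(roots, start=1):
    slope = np.cos(r)                                # slope of the sine at the node
    kind = "stable" if slope < 0 else "unstable"     # negative-slope criterion
    print(f"node {k}: v_o = {r:.3f} ({kind})")
```

Running this reports nodes 2 and 4 as stable, matching the even-numbered-node argument above.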

From Barrie’s research on multitanh circuits in the past, it appears that the number of cycles of such a function can be extended indefinitely so that the information content of a multitanh hysteretic switch depends only on how many BJTs, *I's* , and *R's* are added to the feedback circuit. Such a scheme could employ a million BJTs, operating at low currents. The lower current limit is determined mainly by the low-level injection characteristics of the BJTs. This can be as low as a fraction of a nanoampere. MOSFETs have different characteristics, though the same kind of multivalued hysteretic circuits are also a possibility for them. Thus, ultra-large-scale integration of neuronic BJT circuits might be viable.

For 262,144 = 2^{18} BJTs, and using 6-bit switches, the number of synaptic weights that can be stored on one chip is about 2^{18 – 6} = 2^{12} = 4096. With 8 weights per neuron, this results in 512 neurons. This is enough for some significant neuronics applications. This memory scheme ostensibly achieves the indefinite retention of a state voltage value for a weighting function. It is not the only possibility for analog storage. The memory is volatile (as is organic memory to a significant extent).
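The capacity arithmetic above can be checked directly; this trivial sketch just recomputes the article's own numbers.

```python
# Capacity arithmetic from the text: 2**18 BJTs, 6-bit hysteretic switches,
# 8 synaptic weights per neuron.
bjts = 2 ** 18                       # 262,144 transistors
bits_per_switch = 6
weights = bjts >> bits_per_switch    # 2**(18 - 6) = 4096 stored weights
neurons = weights // 8               # 8 weights each -> 512 neurons
print(bjts, weights, neurons)        # 262144 4096 512
```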

Another scheme that avoids the dissipative discharge of hold capacitors stores information in nonlinear oscillators that are described in the phase-plane by (closed and stable) limit cycles. Because they are nonlinear, if a particular parameter is set right, an initial voltage value can cause frequency *bifurcation* of the limit cycle so that it is possible to select one of multiple oscillation frequencies, with one bit per stage of frequency bifurcation. The variation required to cause successive stages of frequency splitting decreases with splitting stage, where each new stage requires a parametric ratio change of 1/δ where δ is the *Feigenbaum number* : δ ≈ 4.66920.
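The bifurcation behavior invoked here can be illustrated with the textbook logistic map x → r·x·(1 − x), which exhibits the same Feigenbaum period-doubling cascade. Note this map is a stand-in for the article's nonlinear oscillator, not its actual circuit; sweeping the parameter r selects attractors of period 1, 2, 4, 8, and so on.

```python
def attractor_period(r, transient=200_000, max_period=16, tol=1e-6):
    """Iterate the logistic map past its transient, then measure the
    period of the attractor it has settled onto."""
    x = 0.5
    for _ in range(transient):
        x = r * x * (1 - x)
    x0 = x
    for p in range(1, max_period + 1):
        x = r * x * (1 - x)
        if abs(x - x0) < tol:
            return p
    return None  # chaotic, or period beyond max_period

for r in (2.5, 3.2, 3.5, 3.55):
    print(f"r = {r}: period {attractor_period(r)}")
```

Each doubling of the period corresponds to one more selectable state, i.e., one more bit, exactly the mechanism the article proposes for storage.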

Consequently, every additional bit of storage requires a range extension of log_{2} δ ≈ 2.22 bits. Thus 6 bits requires that the oscillator parameter — some analog quantity — be selectable over a range of 13.34 bits or about 4 decades of range. The desirable feature is that information is encoded as frequency, and this can be measured to great accuracy, though its conversion for application to other neural inputs of successive stages is problematic. Or is it? It seems that organic neurons operate in part on information encoded by the frequency of arriving pulses.
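The range arithmetic works out as follows (a direct check of the numbers above):

```python
import math

delta = 4.66920                        # Feigenbaum constant
bits_per_stage = math.log2(delta)      # range cost per stored bit
n_bits = 6
range_bits = n_bits * bits_per_stage   # total selectable range, in bits
decades = range_bits * math.log10(2)   # same range expressed in decades
print(round(bits_per_stage, 2))        # 2.22
print(round(range_bits, 2))            # 13.34
print(round(decades, 2))               # 4.02
```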

Two ideas have been presented for the indefinite, though volatile, storage of information in analog form. Volatility is a secondary concern because circuits can conceivably (like organic brains) be powered for the lifetime of circuit use. We have yet to make an attempt at an analog CMAC, and some ideas for it are coming in the next article of this series.

**Related posts:**

- Neuronics: Long-Term Memory, Part 1
- Large-Scale Integration: Neuronics, Part 1
- Large-Scale Integration: Neuronics, Part 2
- Neuronics Creates Highly Efficient Memory, Part 1
- Neuronics Creates Highly Efficient Memory, Part 2
- Getting From Scopes to Semiconductor Innovations
- Between Discrete & Integrated Circuits
- ASICs vs. Semi-Discrete Design

EPROM and other memory chips store information, but this method uses the weighted value to get the information. A schematic in the blog would make it more understandable. It seems like neuron chips store a kind of compressed (“zipped”) information.

Digital memory is well-established and is not the goal of the neuronics quest; analog memory is – that is, memory that can indefinitely retain a given value using circuits of continuous functions. Because this kind of memory has not been demonstrated, this two-part neuronics article on analog memory is an exploratory foray into the subject, leaving the reader to ponder how it might be implemented in circuit-diagram detail.

Another distinction to be made in any particular memory technology is volatility. Most of today's storage devices store data by means of circuits that are latched in either a high or a low state, and the latching effect holds only as long as power is maintained on those devices.

@Dennis: Are you referring to the zone of operation or the typical currents?

Has the MOSFET implementation of tanh been explored?

@Netcrawl: I totally agree with your point. Once the power is off, is there a chance for this type of implementation to keep the data?

I brought up volatility in the article, noting that the best neural circuits, organic brains (like ours), need to remain powered or they (quickly) malfunction. However, this is also true of Altera FPGAs, for instance, which must be loaded when powered on. The best situation is to leave the neuronic circuits on, especially if they are low-power.

I presented the neuronal circuit in the context of BJTs, for which a diff-pair has a (large-signal or total-variable) tanh transmittance. MOSFET diff-pairs do not have the tanh function, but all that is needed is a smooth function, one that has a continuous derivative over a bounded output range, so MOSFETs should also be usable in this case.
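The tanh transmittance mentioned here is the standard large-signal result for a BJT diff-pair: the differential output current is I_EE · tanh(v_d / 2V_T). A minimal sketch (the tail current and drive voltages are illustrative values, not from the article):

```python
import math

V_T = 0.02585        # thermal voltage kT/q at ~300 K, in volts

def diffpair_iout(v_d, i_ee=1e-6):
    """Differential output current of an ideal BJT diff-pair
    (standard large-signal model): I_EE * tanh(v_d / (2 * V_T))."""
    return i_ee * math.tanh(v_d / (2 * V_T))

# sweep a few differential input voltages; output saturates at +/- I_EE
for mv in (0, 10, 26, 60, 120):
    i = diffpair_iout(mv * 1e-3)
    print(f"v_d = {mv:4d} mV -> i_out = {i * 1e6:+.3f} uA")
```

The smoothness and bounded range of this curve, rather than its exact tanh shape, are what the multitanh construction depends on, which is why a MOSFET diff-pair's softer characteristic can substitute.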

“The best situation is to leave the neuronic circuits on, especially if they are low-power”

That's absolutely true. This is possible when the neuronic circuit can access energy stored in a way that is independent of the supply.

@Dennis, thank you for your explanation. If I'm not wrong, the architecture of the neuronal circuit that you presented might be realized with each diff-pair as a couple of switches, provided that the continuous derivative is guaranteed. This is very interesting because it allows the circuit maker to use the technology and the switch type best suited to his purpose, depending on availability.