In Part 1, we were just starting to examine a memory method proposed by Jim Albus that used analog encoding to greatly expand memory capacity.
Albus hit upon an idea not all that different from Pentti Kanerva's Sparse Distributed Memory (MIT Press, 1988). The idea is presented in various publications, from refereed journals to my best recommendation for engineering readability: Albus's own Brains, Behavior and Robotics (Byte Books, McGraw-Hill, 1981). The papers from Albus's group at NIST are also good, especially if you are interested in robotics.
Albus has multiple original and interesting ideas, which have generated a following of researchers who have implemented his Cerebellar Model Arithmetic Computer (CMAC). Control systems have been built with it (usually for robot arms, though it could be applied to audio amplifiers, too) whose learning behaviors are uncannily similar to those of humans and animals acquiring motor skills.
This learning involves improvement in accuracy with repetition (which would be expected of any learning scheme), but the CMAC also generalizes from learned behavior, so that similar tasks never performed before benefit from prior learning, and it does so more quickly than other neuronal schemes. The CMAC learns the multidimensional function -- a function of a vector -- that defines the desired behavior for a given input.
The novelty of this scheme is that it effectively (and hugely) increases memory capacity by letting every input access multiple memory locations instead of one. The contents of the multiple locations are then summed.
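To make the multiple-access read concrete, here is a minimal Python sketch. The mapping from an input state to its N addresses is only illustrative (overlapping quantizing grids folded into a small memory by hashing); Albus's actual CMAC mapping differs in detail, and the names here are invented for the example.

N = 4   # memory locations accessed per input state
M = 64  # total memory locations
weights = [0.0] * M

def addresses(s, n=N, m=M):
    # Each of n overlapping tilings quantizes the input s with a
    # different offset. Nearby inputs land in mostly the same tiles,
    # which is where the generalization described above comes from.
    return [hash((i, int(s + i / n))) % m for i in range(n)]

def h(s):
    # h(S): the sum of the contents of the N accessed locations.
    return sum(weights[a] for a in addresses(s))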
Conventional memory, as illustrated by Albus (p. 158), is as shown below. S is an input state and h(S) is the retrieved value. In Albus's scheme, S maps to a vector of addresses and h(S) is the sum of their contents; in conventional memory, S maps to a single address and h(S) is simply its content.
In Albus's scheme, memory stores values at addresses that are like neuronal synaptic weights, except that the neurons share synapses. It is shown (from p. 159 of his book) below.
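To show the learning side in the same sketch, the usual CMAC-style update (an error-correction rule in the spirit of Albus's, though not copied from his book) divides the output error equally among the N weights the state just touched:

def train(s, target, lr=0.5):
    # Split the error between the target and the current h(S) equally
    # among the N accessed weights.
    err = target - h(s)
    for a in addresses(s):
        weights[a] += lr * err / N

# One training pass at s = 2.0 also moves nearby, never-trained states:
train(2.0, 1.0)
print(h(2.0), h(2.3), h(40.0))  # 0.5, 0.375 (3 of 4 tiles shared), 0.0
                                # (barring hash collisions)

Note that the never-trained state s = 2.3 inherits most of the trained value because it shares most of its addresses with s = 2.0, while the distant state s = 40.0 is unaffected.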
If N memory locations are accessed per unique input state and the memory has M locations, then the effective number of addresses in memory space is the number of combinations of M locations taken N at a time:

C(M, N) = M! / [N! (M - N)!]
A RAM consisting of only 64 memory locations addressed four at a time can contain 635,376 values in function-space, h(S). This scheme increases effective memory capacity immensely, though at the cost of interference caused by overlapping use of a given location by multiple input states. In practice, for N = 80 and M = 2048, impressive motor control of robot-arm simulations has been achieved. Albus has also implemented it on actual robot arms with good success.
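A few lines of Python confirm the arithmetic:

import math

# Distinct N-element address subsets of an M-location memory:
print(math.comb(64, 4))     # 635376, the figure quoted above
print(math.comb(2048, 80))  # vastly larger for N = 80, M = 2048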
Can this scheme benefit from the higher per-node information density of analog circuit implementation? This quest is to be continued in future parts of this neuronics thread.