
Neuronics Creates Highly Efficient Memory, Part 2

In Part 1, we were just starting to examine a memory method proposed by Jim Albus that used analog encoding to greatly expand memory capacity.

Albus hit upon an idea not all that different from that of Pentti Kanerva in Sparse Distributed Memory (MIT Press, 1988). It is also presented in various publications, from refereed journals to my best recommendation for engineering readability: Albus's own Brains, Behavior and Robotics (Byte Books, McGraw-Hill, 1981). The papers from Albus's group at NIST are also good, especially if you are interested in robotics.

Albus has multiple original and interesting ideas, which have generated a following of researchers who have implemented his Cerebellar Model Arithmetic Computer (CMAC). Control systems have been built with it (usually for robot arms, though it could be applied to audio amplifiers, too) that have learning behaviors uncannily similar to those of humans and animals acquiring motor skills.

This learning involves improvement in accuracy with repetition (which would be expected of any learning scheme), but the CMAC also generalizes from learned behavior, so that similar tasks never performed before benefit from prior learning, and it does so more quickly than other neuronal schemes. The CMAC learns the multidimensional function (a function of a vector) that defines the desired behavior for a given input.

A novelty of this scheme is to effectively (and hugely) increase computer memory by letting every input to it access multiple memory locations instead of one. The contents of the multiple locations are then added.
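As a rough sketch of this read-and-sum idea (my own illustration, not code from Albus's book, with a one-dimensional integer input, a simple offset-tiling address map standing in for his mapping, and arbitrarily chosen sizes M and N):

```python
M = 256   # physical memory locations (weights); illustrative size only
N = 16    # locations read per input state; illustrative size only

weights = [0.0] * M

def addresses(s):
    """Map a scalar integer input state to N overlapping tiles, one per
    offset layer, so that nearby states share most of their N addresses.
    The modulo fold into M locations stands in for the hash coding used
    in Albus's implementations."""
    return [(((s + i) // N) * N + i) % M for i in range(N)]

def read(s):
    """h(S): the retrieved value is the sum of the contents of the N
    addressed locations."""
    return sum(weights[a] for a in addresses(s))

def train(s, target, gain=1.0):
    """Spread the output error equally over the N addressed locations
    (an LMS-style update); the overlap of addresses between neighboring
    states is what makes training at one state generalize to others."""
    delta = gain * (target - read(s)) / N
    for a in addresses(s):
        weights[a] += delta
```

Training once at one state, say train(100, 5.0), makes read(100) return 5.0, while read(101) returns about 4.7 because the two states share 15 of their 16 addresses; a distant state such as read(500) is unaffected.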

Conventional memory is as Albus illustrates it (p. 158): S is an input state, in general a vector of addresses, and h(S) is the retrieved value, the sum of the contents of those addresses. In conventional memory, S is a scalar (a single address) and h(S) is simply its content.

In Albus's scheme (p. 159 of his book), memory stores values that act like neuronal synaptic weights, except that the neurons share synapses.

If N memory locations are accessed per unique input state and the memory has M locations, then the effective number of addresses in memory space is M taken N at a time, the binomial coefficient:

C(M, N) = M! / (N! (M - N)!)

A RAM consisting of only 64 memory locations addressed four at a time can thus contain 635,376 values in function-space, h(S). This scheme effectively increases memory capacity immensely, though at the cost of interference caused by overlapping use of a given location by multiple input states. In practice, with N = 80 and M = 2048, impressive motor control of robot-arm simulations has been achieved. Albus has also implemented it on actual robot arms with good success.
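As a quick check of the combinatorics, Python's standard-library math.comb reproduces the 64-choose-4 figure and shows how large the M = 2048, N = 80 case becomes:

```python
from math import comb

print(comb(64, 4))     # 635376: 64 locations taken 4 at a time
print(comb(2048, 80))  # the M = 2048, N = 80 case: on the order of 10**145
```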

Can this scheme benefit from the higher per-node information density of analog circuit implementation? This quest is to be continued in future parts of this neuronics thread.

6 comments on “Neuronics Creates Highly Efficient Memory, Part 2”

  1. Davidled
    November 2, 2013

    It sounds like memory access is based on the weight of each node, similar to a neural network. A long time ago, a Motorola engineer implemented an analog circuit for a neural network. I am a little concerned about the calculation time of the summation needed to access the contents of memory by address.

  2. Netcrawl
    November 3, 2013

    @Dennis that was great! An interesting one. Communication delay (the delay between the processor and main memory) is the bottleneck for most system performance, and complex memory hierarchies are required to handle it.

    In modern CMPs, several processor cores access the same shared memory resources simultaneously, and that is where the problem starts. That is why we need something new: a well-designed memory system should be used to optimize the speed and access efficiency of shared memory resources.

    @Daej I heard about that one; it's fascinating and quite interesting. But I think they are no longer interested in that. Google already owns Motorola, so we can expect Google to change Motorola's focus.

  3. Davidled
    November 3, 2013

    I consider the memory read access time to be part of the memory cycles in the computer architecture. This method requires extra cycles to calculate the summation, and those extra cycles cause delays for other instructions. Also, memory reads would be checked to avoid read errors, which requires additional cycles to meet the address and data bus timing requirements.

  4. samicksha
    November 5, 2013

    This sounds interesting and somewhat new to me, and I am curious to learn more. Can we relate this to the amount of information that can be stored in the network and to the notion of complexity?

  5. rfindley
    November 6, 2013

    @Dennis, This is essentially holographic memory.  I use a vaguely similar method in my neural network work.  It is very effective for memory compression.

    I assume the “address decoder” is a fixed function in CMAC?  I'm curious if/how related or similar data is grouped for maximum compressibility with minimal loss of data quality.

     

  6. D Feucht
    November 7, 2013

    You pose a most pertinent question. First, I should note that Albus's scheme is not analog at all; in my reading of his work (and speaking to him), all of it has been implemented on digital computers.

    To select which addresses should be read (and added) for a given input, Albus and his group used hash coding, but I do not recall the hash algorithm being given. It is certainly a factor that needs significant consideration. As I recall, Albus's implementations tried to randomize the hashing as much as possible; that is, to be as random in the selection of addresses as possible.
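    A rough sketch of what such randomized hash coding could look like (an illustration only, not the NIST code): each conceptual (tiling, cell) index produced by the address mapping is scrambled into one of the M physical locations, so that the enormous conceptual address space folds as evenly as possible onto the small weight table.

    ```python
    import hashlib

    M = 2048  # physical weight-table size (illustrative)

    def physical_address(tiling, cell, m=M):
        """Fold a conceptual (tiling, cell) index into m physical locations
        with a randomizing hash, spreading collisions as evenly as possible
        over the table."""
        digest = hashlib.md5(f"{tiling}:{cell}".encode()).digest()
        return int.from_bytes(digest[:8], "big") % m
    ```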
