# Neuronics: Distributed-Memory Addressing, Part 1

The analog memory quest of the last article in the Neuronics series ended with the idea that multiple positive feedback loops might implement memory that could be used as the weighting coefficients of electronic neurons. These circuits are inherently nonlinear; capturing the system in discrete states requires nonlinear functions. Think of a ball rolling randomly on a planar surface. There is nothing to constrain it, and its position is indeterminate.

A function with extremes — hills and valleys — can capture the ball and hold it if it does not acquire more kinetic energy than is required to escape from a valley. These valleys can be imagined as stable regions of feedback-circuit operation, where the loop gain is insufficient to cause the circuit to leave its present state. A cyclic nonmonotonic function might provide the required mechanism for information storage.
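The captured-ball picture can be sketched numerically. Here is a toy model of my own (not from the article): iterating x ← tanh(g·x) mimics a positive feedback loop, and for loop gain g > 1 there are two stable fixed points — the "valleys" — so the cell settles into one of two states and stays there.

```python
# Positive feedback as a bistable memory cell (illustrative toy model):
# iterate x <- tanh(g * x).  For loop gain g > 1 the origin is unstable
# and there are two stable fixed points (the "valleys"); the cell
# settles into whichever one its initial state falls toward.
import math

def settle(x0, g=3.0, steps=50):
    x = x0
    for _ in range(steps):
        x = math.tanh(g * x)
    return x

assert settle(+0.1) > 0.9   # captured by the positive valley
assert settle(-0.1) < -0.9  # captured by the negative valley
```

A small perturbation decays back to the same valley; only a disturbance large enough to cross the "hill" at x = 0 flips the stored state — the loop gain near a valley is insufficient to cause escape.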

We now return to the CMAC system of Jim Albus, which is essentially a way of reducing the number of required computer memory locations far below the number of addresses presented to it. Using a given memory location for multiple input states (where the input is a vector of N input channels) vastly reduces the required memory size. The CMAC is an associative memory that inputs N analog values (as addresses) and outputs a single analog value. Mathematically, it is a function generator that inputs a vector quantity and outputs a scalar value; it is of the form f(x), where x is a vector.
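To make the associative-memory idea concrete, here is a minimal CMAC sketch in Python. All names and parameters are my own illustrative choices, not from Albus's papers: each of several overlapping tilings quantizes the input vector and hashes it to one cell of a small weight table, and the output is the sum of the selected weights. Many input states share each cell, which is how the required memory stays far smaller than the address space.

```python
# Minimal CMAC sketch (hypothetical structure, not Albus's exact scheme):
# each tiling maps the quantized input vector to one cell of a shared
# weight table; f(x) = sum of the selected weights (vector in, scalar out).

class CMAC:
    def __init__(self, n_tilings=8, table_size=512):
        self.n_tilings = n_tilings
        self.table_size = table_size
        self.weights = [0.0] * table_size

    def _cells(self, x):
        # Each tiling is offset so that neighboring inputs share most cells.
        cells = []
        for t in range(self.n_tilings):
            quantized = tuple(int(v * self.n_tilings + t) // self.n_tilings
                              for v in x)
            cells.append(hash((t,) + quantized) % self.table_size)
        return cells

    def output(self, x):
        # f(x): the scalar output is the sum of the selected weights.
        return sum(self.weights[c] for c in self._cells(x))

    def train(self, x, target, rate=0.1):
        # Spread the output error equally over the selected cells.
        err = target - self.output(x)
        for c in self._cells(x):
            self.weights[c] += rate * err / self.n_tilings
```

Because unrelated inputs can hash to the same cells, stored values interfere — the collision problem the comments below discuss.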

We will now examine how to address ordinary digital memory CMAC style. In later parts of this series, we'll adapt these concepts to an analog implementation.

The problem to be solved is how to address CMAC memory, or how to select which memory locations should be accessed for a given input state vector. In ordinary computer memory, a 16-bit address can be viewed as a 16-dimensional vector (N = 16). Each vector component is a line or channel that has a resolution of one bit (r = 1), which can address two locations. In the CMAC setup, each of the N lines corresponds to a dimension in N-dimensional space. Each line can have a resolution that is not constrained to one bit but can be r > 1 bits, giving R = 2^r distinguishable states per line. The total address then has a size of r·N bits, addressing R^N = 2^(r·N) memory locations.
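The address arithmetic above can be checked in a few lines of Python (the packing helper is a hypothetical illustration, not part of CMAC itself): concatenating N channel values of r bits each yields an r·N-bit address, and at r = 1 this reduces to an ordinary binary address.

```python
def pack_address(components, r):
    """Pack N channel values, each resolved to r bits, into one r*N-bit address."""
    addr = 0
    for c in components:
        assert 0 <= c < 2 ** r, "each component must fit in r bits"
        addr = (addr << r) | c
    return addr

# r = 1: a vector of N one-bit components is just an ordinary N-bit address.
assert pack_address([1, 0, 1, 1], r=1) == 0b1011

# r = 4: the same four lines now span R**N = (2**4)**4 = 2**16 locations.
assert (2 ** 4) ** 4 == 2 ** (4 * 4) == 65536
```

The exponential growth of R^N with N is exactly why CMAC cannot afford one physical location per address and must map many addresses onto each location.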

When a memory address is viewed as a vector of N components, each of which is a variable, the distinction between digital and analog becomes one of the resolution of the vector components. For r = 1, the components are digital in that they represent boolean or bivalent {0, 1} values. Analog and digital thus form a continuum, meeting at low-resolution analog: digital is analog at its lowest non-zero resolution.

In the next installment, we'll look at methods for reducing the address space, highlighted by the hashing function for CMAC memory.

1. Netcrawl
February 24, 2014

@Dennis Great post about addressing distributed memory. Hardware implementation of CMAC is still difficult to achieve; CMAC's conceptual memory needs a huge space to address the encoded input. Hash algorithms are applied to reduce the space to a more reasonable size, but this approach has a serious drawback: collisions.

2. D Feucht
February 24, 2014

Netcrawl,

The tradeoff you refer to between collisions and shrinking the address space through hashing will be addressed further in the next three parts of this sub-series within Neuronics.

In the end, I hope to present an analog CMAC implementation concept, though the present description is of a purely digital implementation using ordinary memory. The collisions cause us to start thinking about CMAC output in terms of signal-to-noise ratio. It is as though purely digital computing, with its accuracy down to the bits themselves, is being displaced by some other more analog concepts. And indeed, that is the case. CMAC can be envisioned as a way to make digital computers do analog computing.

3. Netcrawl
February 25, 2014

@Dennis thanks for the great info, hope to see more about CMAC and memory collision in the next part of this series.
