Neuronics: Distributed-Memory Addressing, Part 3

Albus devised a method of mapping the input vector (where each vector component is the value of one of the multiple dimensions of input space) using a coding scheme like that of Gray-code encoders or multiphase motor drives. A 2-phase encoder scheme encodes one variable, the rotary position angle, onto two digital lines or channels of one-bit resolution each. Each channel has half the angular resolution of the encoded variable, and the two are offset from each other so that their cycles are separated by 90 degrees, resulting in four position states encoded per cycle by the two channels.

Albus noted that this is how mossy fibers in the cerebellum function. He expanded the number of channels to K = 2^k. Each of the N input values is consequently encoded at higher resolution than one bit, though each channel has a lower resolution of Q = 2^q. The K encoder channels with Q states each are offset from each other and overlap like the channels of a Gray-code encoder, so that together they have a resolution of KQ = R. Within one state of Q resolution of an input, the differing values of the K channels resolve K sub-states. A change in the input value of one part in R causes a change in exactly one of the K channels. Thus, the resolution R is distributed among K channels, each having a reduced resolution of Q. The result is distributed memory: K address spaces of size Q^N.

In ordinary addressing, the address space is of size R^N = (KQ)^N. In bits (or on a log2 scale),

r = (k + q )•N

where r = log2 R, k = log2 K, and q = log2 Q. The total number of addresses in each channel's address space (its physical memory size) is Q^N, or qN bits of address. Each of the N inputs is decomposed into K values having q bits each. Then the total number of memory locations addressed is A = K•Q^N, or in bits is

a = k + qN
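The two address-size formulas can be checked numerically (a sketch; the k and q values here are chosen arbitrarily for illustration):

```python
# Quick numeric check of the two formulas above: conventional addressing
# needs r = (k + q)*N bits, while CMAC needs a = k + q*N bits.

def address_bits(k, q, N):
    r = (k + q) * N    # conventional: one address space of size (K*Q)^N
    a = k + q * N      # CMAC: K address spaces of size Q^N
    return r, a

for N in (1, 2, 3):
    r, a = address_bits(k=2, q=6, N=N)
    print(N, r, a, 2 ** (r - a))   # the saving factor 2**(r - a) grows with N
```

For N = 1 the two sizes are equal; by N = 3 the CMAC scheme already needs 16 times fewer locations at these values.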

For each input state, CMAC address decoding outputs K memory addresses, one into each of the K address spaces of size Q^N. The contents of these K addresses are summed to produce the final output value.
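The lookup-and-sum step can be sketched for a scalar input, N = 1 (hypothetical code; the table contents here are random placeholders for a trained mapping):

```python
# Hypothetical sketch of the CMAC lookup step for N = 1: the K offset
# channel indices address K separate tables, and the addressed contents
# are summed to give the output.
import random

K, Q = 4, 8
R = K * Q                        # full input resolution
random.seed(1)
# Q + 1 cells per table, since an offset channel can index one past Q - 1.
tables = [[random.uniform(-1.0, 1.0) for _ in range(Q + 1)] for _ in range(K)]

def cmac_output(x):
    """Sum of the K addressed memory cells for an integer input x in [0, R)."""
    return sum(tables[j][(x + j) // K] for j in range(K))
```

Because neighboring inputs address K - 1 of the same cells, nearby inputs produce similar outputs, which is what gives the scheme its smooth, generalizing behavior.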

For a one-dimensional (or scalar) input, N = 1, and for equal k and q values, r = a = k + q and there is no advantage to CMAC address coding. As N increases, the advantage increases in that a < r. For example, suppose a 3D printer has three analog Gray-code encoders, one for each axis. Then the input vector has N = 3 components, and with 2 tracks per encoder, each component is encoded into K = 2 channels (6 channels in all). Let each analog encoder track have a resolution of q = 9 bits. Suppose the encoder outputs are used to address a memory that outputs a function of the vector input (an associative memory) that is used to control a print gun. The number of memory locations required to store a value of the output function for each point in input space (which corresponds to the working volume of the 3D printer) is R^N = (KQ)^N, or in bits of address,

r = (k + q )•N = (1 + 9)•3 = 30

or about 1 G locations. The CMAC scheme separates the tracks to reduce the addressing to K•Q^N locations, or in bits of address,

a = k + qN = 1 + 9•3 = 28

With K = 2, the reduction is not much, only 4 times. If encoders were used with twice the number of tracks, or K = 4 (k = 2), and Q reduced accordingly to q = 8 so that the encoder resolution, k + q, remains at 10 bits overall, then r remains unchanged but

a = k + qN = 2 + 8•3 = 26

and the CMAC memory requirement is about 64 M locations, 16 times less than for conventional memory addressing.

5 comments on “Neuronics: Distributed-Memory Addressing, Part 3”

  1. Davidled
    April 6, 2014

The address bus, data bus, and control bus would be indicated in the system bus. Most memory requires Read, Write, and CS (chip enable) signals. The memory address space would be the total number of physical words and the total number of bits in each physical word.

I wonder how the decoder selects the specific address location while meeting the timing requirements of the microprocessor's read- and write-cycle timing diagrams. Generally, a 74-series decoder logic chip would be used. Then machine language would be implemented based on the system diagram of the memory design. If a state-machine diagram were given, it would be easier to apply this method to applications besides the 3D printer.

  2. D Feucht
    April 6, 2014

    The other application, which is the main one in mind for neuronics, is analog computation. The fourth part of this article will expand on this.

  3. samicksha
    April 7, 2014

In the last blog we discussed one of the major problems cited in practical use of CMAC: the memory size required, which is directly related to the number of cells used. Can we count on a hash function to reduce the memory space for us?

  4. Sachin
    April 12, 2014

Most memory requires Write, Read, and CS signals, and in any given system the control bus, data bus, and address bus must be indicated. I think that analog computation is very important, since even machine language can be viewed from an analog-computation perspective, and this must be based on a system bus diagram of the memory.

  5. SunitaT
    May 10, 2014

The conceptual building blocks of an architecture that supports event processing (that is, an event-processing system) should provide core functions such as event-processing logic and connect event producers and consumers through events. A useful model for thinking about such architectures and systems is the event-processing network (EPN) construct, a conceptual formulation that describes the structure of event-processing systems and the common features that they should all support.
