The tradeoff for less memory is that a given memory location under CMAC addressing may serve multiple input states, producing collisions at some locations. Because a larger input space has been mapped into the smaller address space, interference is inevitable. But because each input state sums the outputs of a unique combination of memory locations, interference adds only a small amount of noise to the output value when the input space is sparse, that is, when few collisions occur; the interference then remains at an acceptable level. A plot of the tradeoff is shown below.
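The collision behavior is easy to see numerically. The sketch below (not from the article; the table size, state count, and modulo hash are illustrative assumptions) maps a sparse set of input states from a large input space into a small table and counts how many land on an already-used location.

```python
import random

# Illustrative sketch: hash a large input space into a small CMAC-style
# table and count collisions for a sparse set of trained states.
# TABLE_SIZE and N_STATES are assumed values, not from the article.
TABLE_SIZE = 4096
N_STATES = 200          # sparse: far fewer trained states than table entries

random.seed(1)
states = random.sample(range(2**16), N_STATES)   # distinct input states
cells = [s % TABLE_SIZE for s in states]         # toy hash: modulo addressing

collisions = len(cells) - len(set(cells))
print(f"{collisions} of {N_STATES} states share a memory cell")
```

With the input space kept sparse relative to the table, only a handful of states collide, which is why the resulting noise on the summed output stays small.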
The optimum K might be taken as the value at which the noise contributed to the output function value falls to be numerically equal to the probability of interference between input states, or…
In practice, robot-arm researchers have obtained good results in the dynamic control of arms with K = 80 and N = 12, with RN = 4096. Then K_opt = 60, which is not far from 80. If memory contains 8-bit byte values, the total range of the output function, in bits, is k + 8. The maximum error that a shared location can contribute to the output value is thus 8 bits out of k + 8; typically it will be a small fraction of 8.
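Reading k as log2 K, the bit arithmetic checks out directly. The values below (K = 64 summed locations, 8-bit cells) are assumed for illustration:

```python
import math

# Assumed illustrative values: K summed 8-bit memory locations.
K = 64
CELL_BITS = 8

max_output = K * (2**CELL_BITS - 1)          # largest possible sum: 64 * 255
k = int(math.log2(K))                        # k = log2(K) = 6
range_bits = math.ceil(math.log2(max_output + 1))

print(range_bits)                            # 14 bits, i.e. k + 8
```

One 8-bit cell's worst-case error is thus 8 bits out of a 14-bit output range, and in practice far less than a full 8 bits.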
By choosing an overlapping address coding scheme such as Gray coding, in which only one bit of the code changes between adjacent states, states that are close in N-space to learned states will code to most of the same memory locations and put out a function value close to that of the learned state. In other words, similar states produce similar output values, even though the new states were never used in training the memory. This means that the CMAC memory can generalize from what has been learned for continuous functions. In any localized region of N-space, training one state in the region also provides some training for all the points in the region.
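The one-bit-change property is simple to demonstrate. A minimal sketch of the standard binary-reflected Gray code:

```python
def gray(n: int) -> int:
    """Binary-reflected Gray code of n; adjacent integers differ in one bit."""
    return n ^ (n >> 1)

# Neighboring input states differ in exactly one code bit, so they select
# almost the same set of CMAC memory locations.
for n in range(4):
    print(f"{n}: {gray(n):04b}")
```

Printing the first few codes gives 0000, 0001, 0011, 0010: each neighbor pair differs in a single bit, which is what lets nearby states share most of their addressed locations and hence their learned output values.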
In summary, CMAC distributed-memory address decoding, relative to ordinary linear decoding, reduces the address space by decomposing it into K address spaces with fewer address bits (q·N) per space. A 16-bit memory space (a = 16), for instance, could be decomposed by using the four least-significant bits for k. For N = 3, then…
…and the least-significant byte of the 16-bit address decomposes into 4 LSBs for k and 4 MSBs for q. The channel encoding (q) no longer has the higher resolution added by the lower 4 bits and can address only 16 bytes per input-vector component. For 3 dimensions, each address space of q resolution consists of q·N = 12 bits, and there are K = 16 of them.
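The field split above can be sketched directly. In this assumed illustration (the function names and example values are mine, not the article's), each 8-bit component address is split into its 4-bit k and q fields, and the q fields of the N = 3 components concatenate into one q·N = 12-bit address within a subspace:

```python
N = 3                  # input-vector dimensions
Q_BITS = 4             # coarse (channel) bits per component
K_BITS = 4             # fine bits per component; K = 2**K_BITS = 16 subspaces

def decompose(component: int):
    """Split one 8-bit component address into its (k, q) fields."""
    k = component & 0b1111                 # 4 LSBs
    q = (component >> K_BITS) & 0b1111     # 4 MSBs
    return k, q

def subspace_address(components):
    """Concatenate the q fields of N components into a q*N = 12-bit address."""
    addr = 0
    for c in components:
        _, q = decompose(c)
        addr = (addr << Q_BITS) | q
    return addr

x = (0xA7, 0x3C, 0xF1)        # example 8-bit component addresses (assumed)
addr = subspace_address(x)
print(f"{addr:012b}")         # 12-bit address into one of the K = 16 subspaces
```

For the example vector this prints 101000111111: the q nibbles A, 3, and F of the three components, concatenated into a single 12-bit subspace address.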
In applications of neural networks such as robot-arm control or machine vision, N > 1 and the advantages of CMAC memory increase. Historically, electronic circuits have commonly been single-variable input-output systems; most analog measurement instruments and waveform sources have N = 1 or 2 (stereo or color video encoding, for example). Distributed memory illustrates an opportunity to expand to multiple-input, multiple-output analog systems. As N becomes large, the number of circuit connections, rather than the processing, dominates the complexity.
Computing then becomes largely an addressing process instead of an ALU process and is connectionist in emphasis. IC implementation of connection-dominated circuits relies heavily on monolithic multilayer interconnect capability. Emerging 3D IC technology is bound to greatly ease the connection problem and allow conceptually interesting neuronic analog circuits to be implemented. Meanwhile, existing "2+"-D IC technology will allow lower-N circuits to be built. Still needed are analog circuits that perform CMAC coding, and that is the topic of an upcoming neuronics article.