Large-Scale Integration: Neuronics, Part 2

In the first part of this blog (Large-Scale Integration: Neuronics, Part 1), we looked at a possible methodology for neuronic analog memory. We'll continue that discussion and look at ways to mimic the biological neuron.

Neural computing has been carried out almost exclusively on digital computers as simulations. While this is illuminating, it gives few clues toward analog implementation of memory. If we look into some of the schemes that have been developed by neural-net researchers, we can begin to see the possibilities for analog implementation. The mainstream schemes for the basic computing cell, the electronic neuron, implement one to three layers (or what we would call stages) of these cells with a relatively large number of them per stage.

The typical neuron is itself a two-stage unit. The first stage is linear and scales the inputs, xi, according to the stored values, or weights, wi. The combined result, which can also be represented mathematically as the dot product of the input vector with the weight vector, is then fed to the second, nonlinear stage. This function can vary; it is often the logistic function, whose derivative is xo·(1 – xo), but is given here as the function implemented by a BJT differential pair. It has been shown that successful learning techniques can be applied if the function has a continuous derivative.
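To make the two stages concrete, here is a minimal numerical sketch (my own illustration, not a circuit model), with tanh standing in for the BJT differential-pair transfer function:

```python
# Minimal sketch of the two-stage neuron: a linear weighted-sum stage followed
# by a bounded nonlinearity. tanh stands in for the BJT differential-pair
# transfer function; the input and weight values are arbitrary examples.
import math

def neuron(x, w):
    s = sum(xi * wi for xi, wi in zip(x, w))  # stage 1: dot product of x and w
    return math.tanh(s)                       # stage 2: bounded nonlinearity

print(neuron([0.5, -1.0, 2.0], [0.3, 0.8, -0.1]))  # output stays within (-1, +1)
```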

The nonlinearity is that of a bounded function. The input has a theoretically infinite range, but the output is constrained to lie within ±1. The output vector of a stage of n neurons is therefore confined to an n-dimensional hypercube whose vertices are the points (±1, ±1, …, ±1).
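As a quick numerical check (again only a sketch, with a made-up weight matrix), a stage of n such neurons keeps every component of its output vector inside the hypercube no matter how large the inputs become:

```python
# A stage of n neurons: each output component saturates toward +/-1, so the
# output vector stays inside the n-dimensional hypercube. W is an arbitrary
# 3-neuron, 2-input weight matrix chosen for illustration.
import math

def stage(x, W):
    return [math.tanh(sum(xi * wi for xi, wi in zip(x, row))) for row in W]

W = [[10.0, -5.0], [0.1, 0.2], [-40.0, 60.0]]
print(stage([100.0, -100.0], W))  # every component remains within [-1, +1]
```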

If we consider the above analog neuron to be our basic building block, then how can we implement the memory needed for the wi? It is not unreasonable to suppose that some kind of feedback will be necessary; in analog circuits with bistable states, some form of positive feedback is always required. To write values of wi, the first neuron stage must be accessed through an input other than x so that w can be changed. Yet this does not directly address the problem of how the w values are to be sustained. A possibility to be explored is a set of neurons implementing positive-feedback loops that affect only the w values. For a nearly linear feedback system, the result is a sine-wave oscillator. For circuits with bounded outputs, such as the electronic neuron, distinct bistable or multistable vector states are possible. This idea will be explored further in the next episode before returning to the problem of analog CMAC implementation.
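As a rough illustration of how bounded outputs plus positive feedback can give distinct stable states, here is a sketch (the loop gain and iteration count are arbitrary choices of mine) of two such neurons cross-coupled in a positive-feedback loop:

```python
# Two bounded "neurons" cross-coupled with positive feedback. With loop gain
# g > 1, repeated settling drives the pair to one of two distinct stable
# states, roughly (+0.96, +0.96) or (-0.96, -0.96), depending on the initial
# condition -- one conceivable way an analog w value could be held.
import math

def settle(y1, y2, g=2.0, steps=50):
    for _ in range(steps):
        y1, y2 = math.tanh(g * y2), math.tanh(g * y1)
    return y1, y2

print(settle(0.1, 0.05))    # settles to the positive state
print(settle(-0.1, -0.05))  # settles to the negative state
```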

If you, the reader, have any ideas about how this might be implemented, please offer them for discussion.


10 comments on “Large-Scale Integration: Neuronics, Part 2”

  1. etnapowers
    November 12, 2013

    “It has been shown that successful learning techniques can be applied if the function has a continuous derivative”

    Dennis, could you provide some references?

  2. etnapowers
    November 12, 2013

    I think that the w vector could be implemented by an iterative loop that checks the error (i.e., the difference between the output and a reference value) and updates the w vector.

    A microcontroller could be suitable for this purpose.
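    A rough sketch of such an error-correcting loop (a delta-rule style update; the function name, learning rate, and values are just illustrative choices), in Python:

```python
# Iterative error-driven weight update: compare the neuron output against a
# reference, then nudge each weight in proportion to the error. A
# microcontroller could run the same loop in fixed point.
import math

def train_step(x, w, reference, rate=0.1):
    y = math.tanh(sum(xi * wi for xi, wi in zip(x, w)))   # current output
    error = reference - y                                  # deviation from reference
    dtanh = 1.0 - y * y                                    # derivative of tanh at y
    return [wi + rate * error * dtanh * xi for xi, wi in zip(x, w)]

w = [0.0, 0.0]
for _ in range(200):
    w = train_step([1.0, -0.5], w, reference=0.8)
print(w)  # weights converge so the output approaches the reference of 0.8
```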

     

  3. D Feucht
    November 12, 2013

    References:

    Introductory:
    Neural Computing: Theory and Practice, Phillip D. Wasserman, Van Nostrand Reinhold, 1989. Look at the back-propagation training algorithm in chapter 3 for a start.

    More advanced:

    Artificial Neural Systems, Patrick K. Simpson, Pergamon Press, 1990.

    Key words are: back-propagation, convergence

    These books refer to the earlier literature where mathematical proofs are given for some of the control aspects.

  4. samicksha
    November 13, 2013

    I'm not much into neuronics, but yes, it is an interesting topic, although convergence depends on a number of factors, and further on the cost function and the model. I would like to learn more about this…

  5. etnapowers
    November 13, 2013

    Thank you for your feedback, Dennis. I found this interesting link on the web; I think it gives a good explanation of the back-propagation algorithm.

  6. etnapowers
    November 13, 2013

    Yes samicksha, algorithm convergence is very important; it is an important topic in all engineering disciplines that deal with iterative algorithms to define the parameters of a system.

  7. D Feucht
    November 14, 2013

    Thanks for the link to the useful introductory tutorial on back-propagation. In neural language, a “layer” is what in circuit language we call a “stage”.

  8. etnapowers
    November 15, 2013

    @Dennis: you're welcome. I'm looking forward to reading the next episode to learn about the distinct possible bistable or multistable vector states. I think these states arise from neurons implementing positive feedback loops; if stability can be guaranteed, it should be a very interesting subject.

  9. D Feucht
    November 30, 2013

    etnapowers

    I hope I don't disappoint you with the future articles in this series, but my intent is not to retrace what has occurred in artificial neural network (ANN) development, though it is quite interesting. I hope to make a brainstorming attempt at an analog sparse-memory circuit, the analog functional equivalent of Albus's CMAC.

    The previous considerations were for the purpose of envisioning how analog long-term memory might be implemented. This is needed in the analog CMAC though it can also be used for storing neuron weights in ANN designs. So it is a generally useful “component” of neuronics.

  10. etnapowers
    December 5, 2013

    @Dennis, Albus's CMAC is another interesting example of a cerebellar model, so the analog realization of this circuit is interesting as well. I've read and commented on the next articles in this series; really nice blogs.
