
Beyond The (uncanny) Valley of the Shadows

A project log for Modelling Neuronal Spike Codes

Using principles of sigma-delta modulation techniques to carry out the mathematical operations that are associated with a neuronal topology

glgorman • 05/29/2023 at 07:55 • 0 Comments

I was thinking about writing a log entry with the title "Avoiding Gestalt Conflicts", and that one would have begun with some repeat of a previous rant about how, if "nothing is better than steak and eggs for breakfast" and "stale cereal is better than nothing", then does it not follow that stale cereal is better than steak and eggs? After all, if A is better than B, and B is better than C, then A must also be better than C. Then besides that, where are Alice, Tweedle-Dum, and Tweedle-Dee when we need them?

Well, you get the idea, or do you? If you know what Galois fields are, then, in addition to the notional concepts of "frame" and "context", you should also be able to contemplate some concept of "circular dominance". And based on what I think I remember about those little neon flasher kits that Radio Shack sold when I was a kid, I think that one of the things that "random connections of neuronal circuits" might end up doing, so to speak, is that they might somehow "adventitiously" create ad hoc ring oscillators, which in turn might give rise to spontaneous Hopfield networks.
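Just to make the ring-oscillator intuition concrete, here is a toy sketch of my own (not anything from the actual project code): an odd-length loop of inverting threshold "neurons", updated synchronously. Because the loop contains an odd number of inversions, it has no stable state, so it keeps cycling:

#include <cstdio>

int main()
{
    const int N = 5;                 // odd number of stages
    int state[N] = {1, 0, 0, 0, 0};  // arbitrary initial pattern

    for (int t = 0; t < 12; t++)
    {
        printf("t=%2d: ", t);
        for (int i = 0; i < N; i++) printf("%d", state[i]);
        printf("\n");

        int next[N];
        for (int i = 0; i < N; i++)
            next[i] = 1 - state[(i + N - 1) % N];   // each stage inverts its predecessor
        for (int i = 0; i < N; i++) state[i] = next[i];
    }
    return 0;
}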

Now, let's suppose that when such networks fire off in random patterns, so to speak, for a long enough period of time, maybe they do something akin to the accumulation of lactic acid in muscle, for example. In effect, they might experience fatigue, so that perhaps such a network will oscillate for a while and then shut down, because it becomes inhibited by another network that comes online in its place. If that circuit then fatigues, then what? Does the first circuit then come back online, now that it has had some time to get some "rest and refresh"?
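Purely as an illustrative sketch, and with made-up parameters, here are two threshold units with strong mutual inhibition and a slow "fatigue" term of my own choosing. The active unit fires until its fatigue cancels its drive; the other unit, which has been recovering in the meantime, then takes over, and so on:

#include <cstdio>

int main()
{
    int a1 = 1, a2 = 0;              // current activity (1 = firing, 0 = silent)
    double f1 = 0.0, f2 = 0.0;       // accumulated fatigue
    const double drive = 1.0, w_inh = 1.5;   // inhibition stronger than the drive
    const double rise = 0.02, decay = 0.01;  // fatigue build-up / recovery rates

    for (int t = 0; t < 300; t++)
    {
        double net1 = drive - f1 - w_inh * a2;
        double net2 = drive - f2 - w_inh * a1;
        int na1 = (net1 > 0.0) ? 1 : 0;
        int na2 = (net2 > 0.0) ? 1 : 0;

        // Fatigue builds while a unit fires and drains while it rests.
        f1 += a1 ? rise : -decay;  if (f1 < 0.0) f1 = 0.0;
        f2 += a2 ? rise : -decay;  if (f2 < 0.0) f2 = 0.0;
        a1 = na1;  a2 = na2;

        if (t % 15 == 0)
            printf("t=%3d  a1=%d a2=%d  f1=%.2f f2=%.2f\n", t, a1, a2, f1, f2);
    }
    return 0;
}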

I am imagining a classic double-scroll oscillator here, of the type that can be modeled with Chua's circuit. Yet I want to try implementing it using a neuronal topology, as stated earlier.
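For reference, the dimensionless form of Chua's equations is x' = alpha*(y - x - g(x)), y' = x - y + z, z' = -beta*y, where g(x) is the piecewise-linear diode characteristic. A quick Euler-integration sketch of my own (not project code), using the textbook double-scroll parameters, would look something like this:

#include <cstdio>
#include <cmath>

// Piecewise-linear Chua diode characteristic with the canonical slopes.
static double g(double x)
{
    const double m0 = -8.0 / 7.0, m1 = -5.0 / 7.0;
    return m1 * x + 0.5 * (m0 - m1) * (std::fabs(x + 1.0) - std::fabs(x - 1.0));
}

int main()
{
    const double alpha = 15.6, beta = 28.0, dt = 0.001;
    double x = 0.1, y = 0.0, z = 0.0;    // small kick off the origin

    for (long step = 0; step < 200000; step++)
    {
        double dx = alpha * (y - x - g(x));
        double dy = x - y + z;
        double dz = -beta * y;
        x += dt * dx;  y += dt * dy;  z += dt * dz;

        if (step % 10000 == 0)
            printf("%8.3f  %8.3f  %8.3f\n", x, y, z);   // dump (x, y, z) for plotting
    }
    return 0;
}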

Alright, I found a public-domain graphic of the chaotic region of the standard logistic map on Wikipedia, and for those familiar and unfamiliar alike, it looks like this:

[Bifurcation diagram of the logistic map, from Wikipedia]
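The map behind that diagram is just one line of arithmetic, x(n+1) = r*x(n)*(1 - x(n)), and sweeping r through the period-doubling region is essentially how the picture gets generated. A tiny sketch of my own, for anyone who wants to poke at it:

#include <cstdio>

int main()
{
    // A handful of r values spanning the march from a fixed point,
    // through period doubling, into the chaotic region.
    const double rs[] = {2.8, 3.2, 3.5, 3.7, 3.9, 4.0};

    for (int k = 0; k < 6; k++)
    {
        double r = rs[k];
        double x = 0.3;
        for (int n = 0; n < 500; n++)      // let the transient die out
            x = r * x * (1.0 - x);

        printf("r = %.1f : ", r);
        for (int n = 0; n < 6; n++)        // then show where the orbit settles (or doesn't)
        {
            x = r * x * (1.0 - x);
            printf("%.4f ", x);
        }
        printf("\n");
    }
    return 0;
}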

And in other news, one of my favorite pieces to practice on the piano is an "easy" version of the Animals' "House of the Rising Sun". So for a good time, if you have access to an electronic keyboard, try the pattern 1-2-3-5-4-3 in the right hand, at 120 BPM, in 3/4 time, in the key of A minor. Then try adding the sequence A-C-D-F-A-E-A-E in the left hand, with the A being the top note and then going down to the C, and so on, so that the left thumb is on an A and the little finger is on a C. Try it. Five minutes to learn, a lifetime to master. One note, held for the full measure, in the left hand, so that the 8-bar accompaniment should repeat, I think, 23 times. I know. I counted. And I think the "arrangement" just might work pretty well, let's say at a church social, maybe adding a tambourine shake or two wherever they might seem to belong throughout the song.

So is it possible that "adventitious" Hopfield networks that emerge spontaneously in real biological systems are the real sequence drivers? I think that we are all taught, whilst growing up, that the cerebellum acts like some kind of sequencer, on the one hand, so the traditional model that we think of might involve mechanical music boxes, digital counters, or ROM table lookups of the desired patterns.
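That traditional model is easy enough to caricature in a few lines: a free-running counter indexing a small ROM table, here loaded (purely for illustration) with the 1-2-3-5-4-3 right-hand figure from the piano digression above:

#include <cstdio>

int main()
{
    // The "music box" model: a counter steps through a lookup table,
    // and whatever comes out of the table is the next event.
    const int pattern[6] = {1, 2, 3, 5, 4, 3};   // scale degrees of the figure
    const int len = 6;

    for (int tick = 0; tick < 18; tick++)        // three passes through the pattern
    {
        int degree = pattern[tick % len];        // counter + table lookup, nothing more
        printf("beat %2d -> scale degree %d\n", tick, degree);
    }
    return 0;
}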

Yet I think that there is something else going on here, and as good as deep learning has become, I think that something important is missing. Now I am not going to jump on the bandwagon of whether some AI or another might have become sentient, because I think that sentience requires "feeling", and that just isn't going to happen with deterministic finite automata. No, that's not what I am getting at. I am thinking of something else, since I was reading up on Wikipedia about how perceptrons work, and how any linearly separable problem can be solved with a single-stage perceptron network, of sufficient size, that is. Yet if a logical problem involves the exclusive-or operation, that operation requires a cascaded system, whether it is based on a perceptron model, on traditional digital logic, or on TensorFlow, for that matter.
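To illustrate the point about cascading, here is a minimal sketch of my own, with hand-picked weights rather than anything learned: a single hard-threshold unit cannot compute XOR, but the two-stage arrangement AND(OR(a,b), NAND(a,b)) does:

#include <cstdio>

// One hard-threshold unit: fires iff w1*x1 + w2*x2 + bias > 0.
static int unit(int x1, int x2, double w1, double w2, double bias)
{
    return (w1 * x1 + w2 * x2 + bias > 0.0) ? 1 : 0;
}

int main()
{
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
        {
            int h1 = unit(a, b, 1.0, 1.0, -0.5);    // OR
            int h2 = unit(a, b, -1.0, -1.0, 1.5);   // NAND
            int y  = unit(h1, h2, 1.0, 1.0, -1.5);  // AND of the two hidden units
            printf("%d XOR %d = %d\n", a, b, y);
        }
    return 0;
}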

So can we do something useful by creating some kind of neuronal macro-blocks that exchange information by using sigma-delta signals?  It seems to be just the right flavor of chaos, on the one hand, and yet it seems like it might be very easy and efficient on the other.
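As a sketch of what such a signal might look like, here is a first-order sigma-delta loop of my own devising (not project code): the one-bit output stream has a pulse density that tracks the input level, which is the kind of spike-rate-like encoding I have in mind:

#include <cstdio>

int main()
{
    // First-order sigma-delta: integrate the error between the input and the
    // fed-back one-bit output.  The density of 1s then tracks the input level.
    const int N = 64;
    const double input = 0.3;        // a constant level in (-1, 1), for simplicity
    double integrator = 0.0;
    int ones = 0;

    for (int n = 0; n < N; n++)
    {
        int bit = (integrator >= 0.0) ? 1 : 0;   // one-bit quantizer
        double feedback = bit ? 1.0 : -1.0;      // "DAC": 1 -> +1, 0 -> -1
        integrator += input - feedback;          // accumulate the quantization error
        putchar(bit ? '1' : '0');
        ones += bit;
    }
    // A crude decoder: the average of the +/-1 stream approximates the input.
    printf("\ninput = %.2f, decoded = %.2f\n", input, 2.0 * ones / N - 1.0);
    return 0;
}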

Now, as discussed earlier in one of my other projects (Rubidium, I think it was), I implemented a "mostly" correct version of IEEE-754 floating-point arithmetic in C++, as part of a calculator test project, where I assumed a system that had C++-style classes but which didn't have any logical operations other than bitwise NOR and logical shifts. Thus 32-bit IEEE-754 multiplication looks something like this, and it appears to be mostly correct, even if I haven't yet implemented denormalized inputs.

real real::operator * (real arg)
{
    short _sign;
    real result;
    // Add the biased exponents and remove one copy of the bias (127).
    short exp1 = this->exp()+arg.exp()-127;
    unsigned int frac1, frac2, frac3, frac4, fracr;
    unsigned short s1, s2;
    unsigned char c1, c2;
    _sign = this->s^arg.s;           // sign of the product
    frac1 = this->f;                 // the two 23-bit fraction fields
    frac2 = arg.f;
    // Split each fraction into a high part (16 bits) and a low part (7 bits)
    // so that the partial products fit comfortably in 32-bit arithmetic.
    s1 = (frac1>>7);
    s2 = (frac2>>7);
    c1 = (frac1&0x7f);
    c2 = (frac2&0x7f);
    // (1+a)(1+b) = 1 + a + b + a*b; frac3 and frac4 approximate the a*b term,
    // dropping only the tiny low*low contribution.
    frac3 = (c1*s2+c2*s1)>>16;
    frac4 = (s1*s2)>>9;
    fracr = frac1+frac2+frac3+frac4; // a + b + a*b, scaled to 23 fraction bits
    if (fracr>0x007FFFFF)
    {
        // The significand reached 2.0 or more: renormalize and bump the exponent.
        fracr = ((fracr+(1<<23))>>1);
        exp1++;
    }
    result.dw = (fracr&0x007FFFFF)|(exp1<<23)|(_sign<<31);
    return result;
}

Now, is it possible to take an actual compiler that normally generates code for a traditional CPU and instead compile for a system built from nothing but NOR gates, or something like that? And then, can we replace the NOR gates, counters, etc., with "adventitious neuronal circuits", just to prove that it can be done, on the one hand, and perhaps to give the next generation of AI some kind of "head start"?
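As a reminder of why NOR alone is enough, here is a small sketch (illustrative only, not project code) that builds NOT, OR, AND, and XOR out of nothing but a two-input NOR primitive:

#include <cstdio>

// The only primitive allowed; everything else below is composed from it.
static int NOR(int a, int b)  { return (a || b) ? 0 : 1; }

static int NOT1(int a)        { return NOR(a, a); }
static int OR2(int a, int b)  { return NOT1(NOR(a, b)); }
static int AND2(int a, int b) { return NOR(NOT1(a), NOT1(b)); }
static int XOR2(int a, int b) { return AND2(OR2(a, b), NOT1(AND2(a, b))); }

int main()
{
    printf(" a b | NOT(a) AND OR XOR\n");
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf(" %d %d |   %d     %d   %d   %d\n",
                   a, b, NOT1(a), AND2(a, b), OR2(a, b), XOR2(a, b));
    return 0;
}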

Then I could compile MEGAHAL, or SHRDLU, or whatever, and whenever the Markov model needs to do a calculation, which in the case of MEGAHAL would be based on the "entropy of surprise", the whole thing would be running on virtual neural circuits.
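MEGAHAL's actual scoring is more involved than this, but the core "surprise" idea is just the negative log probability of each transition under the Markov model. A minimal sketch, using a made-up bigram table purely for illustration:

#include <cstdio>
#include <cmath>
#include <map>
#include <string>
#include <vector>

int main()
{
    // Hypothetical bigram counts standing in for a trained Markov model.
    std::map<std::string, std::map<std::string, int>> counts = {
        {"the", {{"cat", 3}, {"dog", 1}}},
        {"cat", {{"sat", 2}, {"ran", 2}}},
        {"sat", {{"down", 4}}}
    };

    std::vector<std::string> sentence = {"the", "cat", "sat", "down"};

    // Surprise of a sentence = sum over transitions of -log2 P(next | current).
    double surprise = 0.0;
    for (size_t i = 0; i + 1 < sentence.size(); i++)
    {
        auto &row = counts[sentence[i]];
        int total = 0;
        for (auto &kv : row) total += kv.second;
        double p = (total > 0 && row.count(sentence[i + 1]))
                       ? (double)row[sentence[i + 1]] / total
                       : 1e-6;                      // unseen transition: tiny floor probability
        surprise += -std::log2(p);
        printf("P(%s | %s) = %.3f\n", sentence[i + 1].c_str(), sentence[i].c_str(), p);
    }
    printf("total surprise = %.2f bits\n", surprise);
    return 0;
}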

Seems like it could work on a Parallax P2, with the added option of having some actual analog fun going on, just in case. Hopefully that puts a warm feeling in your motivator.

Stay Tuned.
