
Meet the New Bot, Same as the Old Bot?

A project log for Modelling Neuronal Spike Codes

Using principles of sigma-delta modulation techniques to carry out the mathematical operations that are associated with a neuronal topology

glgorman 06/11/2023 at 12:34 • 0 Comments

I saw an article online about a chatbot called ChatPDF, which automatically trains on any PDF that you feed it.  It can be found at www.chatpdf.com.  So, naturally, I had to try it with my very own prometheus.pdf, which you can find elsewhere on this site, i.e., hackaday.io, in one of my other projects.

Here are some initial results:

So I asked it about "motivators", and it cannot find a reference in that article.  Neither can it find a reference to "Art Official Intelligence", even though there is a whole log entry with that title.  It looks like it gives a canned GPT-4 answer about Eliza, and is blissfully unaware of anything about the experiments that I was doing, such as comparing how the Eliza "conjugation" method might be thought of as operating in a similar fashion to how the C pre-processor works.  It does at least get partial credit for figuring out that one of the things that I am interested in is how pre-defined macros can be used to implement at least some of the essential parts of a lexer and/or compiler.  Yet it completely misses any opportunity to discuss how this might relate to a way of doing context-free grammar, i.e., as an alternative to BISON, YACC, or EBNF.
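To make the Eliza/pre-processor parallel concrete, here is a minimal sketch (the `CONJUGATIONS` table, the `conjugate` helper, and the word list are my own illustration, not code from the project): Eliza's "conjugation" is a table of word swaps applied to the user's input, and an X-macro lets the C pre-processor generate that same kind of substitution table by pure textual expansion.

```cpp
#include <cstring>

// Eliza's "conjugation" step is a table of word swaps ("I" -> "you",
// "my" -> "your"), applied to the user's input before it is echoed back.
// The C pre-processor performs an analogous textual substitution at
// compile time; an X-macro makes the parallel explicit by letting the
// pre-processor emit the swap table itself.
#define CONJUGATIONS(X) \
    X("I",  "you")      \
    X("my", "your")     \
    X("am", "are")

struct Pair { const char* from; const char* to; };
#define AS_PAIR(a, b) { a, b },

// One macro expansion emits the whole table.
static const Pair swaps[] = { CONJUGATIONS(AS_PAIR) };

// Look a word up in the swap table; unknown words pass through unchanged.
const char* conjugate(const char* word) {
    for (const Pair& p : swaps)
        if (std::strcmp(p.from, word) == 0) return p.to;
    return word;
}
```

The same X-macro trick is a classic way to get a token enum and its matching name table out of one list, which is the sense in which pre-defined macros can stand in for parts of a hand-written lexer.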

Conclusion:  If they are using GPT-4 (3.5?) as the back end, then it would appear that GPT-4 doesn't know much of anything at all about AI, even to the point that perhaps it has no real "understanding" of the foundations laid by Chomsky and others.

Not that anyone is going to be able to lead GPT-4 to the edge of the uncanny valley any time soon - that is, so as to push it off into some abyss.  Yet, such thoughts are deliciously tempting.  I have been wanting to say something like that for a while now.  Thank you ChatPDF, for giving me a reason.

In the meantime, I uploaded an updated C++ version of MegaHal to the Code Renascence project on GitHub, where I have moved all of the functions into C++ classes or namespaces, so it will soon be possible to experiment with multiple models running simultaneously, or in separate threads, and so on.  There are still a few global static pointers to some dictionary objects and the like that I need to deal with, but otherwise, getting the code to a point where it can be run, as stated, with multiple models or with concurrent threads is maybe 99% done, as far as the essential parts of the AI engine are concerned.
