For whatever it is worth, if I were to add up the word count for every project that I have ever submitted to Hackaday, I don't quite know whether I have hit the 50,000-word mark yet, or even the 100,000-word mark.  The 100,000-word mark certainly seems doable, or even twice that, if I were to simply "crank out" some documentation for the source code, in order to get an "impressive page count".  Oops, not supposed to say "crank!"  In any case, there is easily enough material that could perhaps be reformatted into an actual "book", which could contain sections on such things as compiler design, microcontroller interfacing for AI and DSP applications, robotics control theory, music transcription theory, and the like.  Writing a book certainly seems like a good idea, especially in this day and age, when it is possible to go to some place like FedEx, or perhaps elsewhere, and order single copies of an actual book - printed, therefore, on demand.

Even if nobody buys books anymore.

Yet, there is also the newly emerging field of content creation for use with "Large Language Models", which is a very wide-ranging and dynamic frontier.  Contemporary reports suggest that Meta has an AI that can pass the national medical boards, and which needed only eight billion parameters - resulting in a roughly 3GB model that can run standalone as an app, without needing to connect to the cloud for processing.  I haven't checked to see whether there is a download for my iPhone, or for my Samsung Galaxy tablet, yet; but it seems like they are more on the right track than either OpenAI or Google Bard, in at least one respect.
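As a back-of-the-envelope sanity check on that "eight billion parameters in 3GB" claim - this is my own arithmetic, not Meta's published numbers - the size of a model on disk is just the parameter count times the bits per weight:

```python
# Rough storage arithmetic for an 8-billion-parameter model at various
# quantization widths. (My own estimate; ignores embeddings, metadata, etc.)
def model_size_gb(params, bits_per_weight):
    """Bytes needed for `params` weights at a given bit width, in GB."""
    return params * bits_per_weight / 8 / 1e9

PARAMS = 8e9
for bits in (32, 16, 8, 4, 3):
    print(f"{bits:>2}-bit weights: {model_size_gb(PARAMS, bits):5.1f} GB")
```

At 16 bits per weight the model alone would be about 16GB; you only get down near 3GB once you quantize to somewhere around 3 bits per weight - which is exactly the kind of aggressive quantization that makes on-device, no-cloud inference plausible.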

Yet if it indeed turns out that you can model a neural network in less than a thousand lines of code, and if eventually everyone ends up running the same code, then it seems likely that LLMs, in one form or another, are in some ways going to become the "new BASIC" - based on their potential to revolutionize computing, just as Microsoft BASIC, Apple II BASIC, and Commodore BASIC did in the '70s.  Even if, as we all know, BASIC was actually invented by someone else, somewhere else, in the '60s.
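To make the "less than a thousand lines" point concrete - here is a complete, if toy, feed-forward network with backpropagation, trained on XOR, in well under fifty lines.  This is a sketch of the principle, not any particular LLM's code; the layer sizes, learning rate, and iteration count are my own arbitrary choices:

```python
# A complete toy neural network: one hidden layer, sigmoid activations,
# trained by plain gradient descent on the XOR truth table.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (mean-squared-error loss)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

# After training, the four outputs approximate XOR's truth table.
print(np.round(out.ravel(), 2))
```

Scaling that up to an LLM means swapping the toy layers for transformer blocks and the four-row dataset for a few trillion tokens - but the core loop of forward pass, backward pass, and weight update really is this small.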

So meet the new bot, same as the old bot?

This will not be without some controversy - that is, if we really dig into the history of AI and look at some of the things that others tried to accomplish, and what might therefore be accomplished today: say, if the original checkers program were run on modern hardware; or if we ask, "just what was the original Eliza really capable of?", and so on.  Not that these old applications won't require modifications to take advantage of larger memory, faster CPUs, and parallelism.  Yet therein lies another murky detail: many older computer programs were copyrighted, as were video games, and every now and then even a long-thought-dead company like Atari seems to reappear, since it seems like there just might still be some interest in a platform like the 2600.  Yet, will it be hackable?  Will there be an SDK?  Will someone make a ChatGPT plugin cartridge that provides a connection to the Internet over WiFi, but pretends to be an otherwise normal 2600 or 7800 game with LOTS of bank-selected ROM, so that it can do any additional "processing" either on the cartridge or in the cloud - just because it would be fun to do, and it would be cheap!

Of course, I haven't seen the inside of the new 2600 yet, but if it were up to me, I would use a Parallax Propeller P2 as a cycle-accurate (if possible), or at least cycle-counting, 6502 emulator running on at least one cog, while supporting sprite generation, audio generation, and full HDMI output - with or without "upscaling" to 480, 720, or even 1080 modes, even when running in classic mode.  It would also provide a platform for connecting things like joysticks, VR headsets, or whatever - that is, any devices that simply need a "box" to plug into.  Say you are running an Internet-based game where you are getting streaming video over your gigabit fiber; you are still going to need a device that your keyboard or hand controllers can plug into.  And if you are familiar with how the Propeller smart pins work, you know that the same pins that you can feed audio to, or hook potentiometers up to, can also do I2C, SPI, VGA, or even HDMI.  Like I said, though, I haven't seen the insides of the new box, so I can't yet commit to how to go about the jailbreak.  Update:  I looked for some info about the 2600+, and apparently they are using the Rockchip 3128 SoC, which I don't think I have ever taken notice of until now; so go figure.  No information about a jailbreak, therefore, other than that I strongly suspect that if they put in some kind of DRM, it will be another complete failure - just like the VCS system from a couple of years ago.  Too bad, so sad.
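For anyone unfamiliar with what "cycle-counting" emulation actually means: every 6502 opcode has a documented cycle cost, and the emulator tallies those costs as it executes, so it can pace itself against real hardware - which is exactly what a P2 cog would need to do.  Here is a minimal sketch of the idea in Python, for clarity; only a handful of opcodes are implemented, whereas a real core needs all 151 documented ones, plus the page-crossing and branch-taken penalties this sketch omits:

```python
# Minimal cycle-counting 6502 sketch: each opcode carries its base cycle
# cost, and the emulator accumulates cycles as it steps.
CYCLES = {0xA9: 2, 0x8D: 4, 0xEA: 2, 0x00: 7}  # LDA #imm, STA abs, NOP, BRK

class CPU6502:
    def __init__(self, program, load_at=0x0200):
        self.mem = bytearray(65536)
        self.mem[load_at:load_at + len(program)] = program
        self.pc = load_at
        self.a = 0            # accumulator
        self.cycles = 0       # running cycle tally

    def step(self):
        op = self.mem[self.pc]; self.pc += 1
        if op == 0xA9:                      # LDA #immediate
            self.a = self.mem[self.pc]; self.pc += 1
        elif op == 0x8D:                    # STA absolute (little-endian addr)
            lo, hi = self.mem[self.pc], self.mem[self.pc + 1]
            self.pc += 2
            self.mem[lo | (hi << 8)] = self.a
        elif op == 0xEA:                    # NOP
            pass
        self.cycles += CYCLES[op]
        return op != 0x00                   # BRK halts this toy core

prog = bytes([0xA9, 0x42,        # LDA #$42
              0x8D, 0x00, 0x30,  # STA $3000
              0xEA,              # NOP
              0x00])             # BRK
cpu = CPU6502(prog)
while cpu.step():
    pass
print(hex(cpu.mem[0x3000]), cpu.cycles)  # 0x42 stored; 2+4+2+7 = 15 cycles
```

On a P2 you would write the same dispatch loop in assembly on one cog and burn off the counted cycles against the system clock, which is what makes "cycle-counting" a practical halfway house when full cycle accuracy is too expensive.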

Just thinking out loud.

Thinking along another line: in my "gearing up" entry for the 2023 contest, I created a project entitled "Using AI to create a Hollywood Script", and then, instead of asking Google Bard to "give me an outline for a reality-based television sitcom entitled - how to lose your shirt in the restaurant business", I asked for the outline of a book of the same name.  Since, as WOPR might quite possibly have said in the movie WarGames, "What's the difference?"  Ah, yes - "What is the difference indeed."  Then things got interesting: after cranking out a few thousand words based on the outline I was given, it occurred to me to try my hand at a few unsolved physics problems - because, well, why not?  So now I am addicted, as if the writing bug has bitten and left its mark.  I added the new material, 99% of which I created myself, to a training set that I am using with a version of MegaHAL, and then asked MegaHAL to solve the Yang-Mills mass gap problem.  And I got an answer!
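For those who haven't met it, MegaHAL builds Markov models of its training text - fourth-order, both forward and backward - and stitches together replies that score well against keywords in the prompt.  This sketch shows just the forward half of that idea, at order 2, to make the mechanics concrete; it is my simplification, not MegaHAL's actual source:

```python
# A second-order forward Markov babbler, in the spirit of (half of) MegaHAL.
import random
from collections import defaultdict

def train(text, order=2):
    """Map every `order`-word context to the words seen following it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def babble(chain, seed, length=12, rng=random.Random(1)):
    """Extend `seed` by repeatedly sampling a successor of the last context."""
    out = list(seed)
    for _ in range(length):
        successors = chain.get(tuple(out[-len(seed):]))
        if not successors:
            break                      # context never seen; stop babbling
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = ("splicing space time back onto itself requires a mass m "
          "splicing space time back onto the walls of the lab")
chain = train(corpus)
print(babble(chain, ("splicing", "space")))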

MegaHAL: "cut the microsoft apron strings" and
MegaHAL: get back onto itself? Splicing space-time back onto the walls
MegaHAL: of some issues like having a mass m

Well, not quite ready for Physical Review Letters, but interesting.  Very interesting!

But perhaps better than some of the offerings presently available on what is lately being described as "Linear TV".  Yet this presents a conundrum if SAG-AFTRA has its way: "material created by an AI cannot be used as source material for union work."  Then again, the studios don't care about writers in any case.  They hate writers, even more than they hate actors.  Maybe when I finish my "book" I will try to register it with the WGA, since even though most film producers don't care about "principles of compiler design", they do like things like "warp drive", "astrogator", "kryptonite", "Herculaneum", and so on - even to the point of trying to register trademarks, or whatever else they can, whenever and wherever they can.  Which can be a very murky area as far as copyright law and intellectual property are concerned.  Doctor McCoy was right in "The City on the Edge of Forever" when he proclaimed "Murderers!  Assassins!" - just for a different reason.  Was he talking about the script, the contract, the studio, or something else?

More on topic would be to ask whether a possible line of dialog like "Somewhere in the sky, even for an A.I." is copyrightable.  That is, if that particular line was generated by an A.I. for which I created the training set, as well as the prompts, and whose output was therefore based on my own interactions with that very same AI, so that I otherwise maintain complete control over it?  Likewise, what about "Pay no attention to the geometrization of space-time!" or "Somewhere in the sky, the man behind the curtain!"  Now we can spin up mud faster than the aftermath of Hurricane Hilary did at Burning Man - and not just because Harlan Ellison and Ben Bova once successfully sued over a television production that plagiarized material from their published short story entitled "Brillo".  Clearly, this sort of thing could turn into a complete disaster for OpenAI and ChatGPT - which is already being sued over more than 100,000 books that are currently under copyright.

Nonetheless, at least for the purposes of this project, since I rely in part on some material that was made available by the University of California San Diego, with respect to discussions about UCSD Pascal, I should probably point out - as I do on the GitHub repositories - that they made that material available for "non-commercial", "research", and "educational" use.  Likewise, the source that I am using for my version of MegaHAL is derived from a GPL source, which, as I read it, does not restrict me if I create "a library of graphics images using Gnomovision" - if you read some of the earliest GNU licenses, they in effect restrict only the source code and binary executables derived from "Gnomovision" itself.  That style of license historically inspired any of a number of variations on the theme, including, in spirit, what are now known as the Creative Commons licenses.

Interestingly enough, the Copyright Office has a "Notice of Proposed Rulemaking" soliciting public comments on this exact subject; and I don't know whether they, the WGA, or SAG-AFTRA have considered the case where the same author who created the training set then uses an AI based on that training set, and NOTHING else, to create another work.  Obviously, the movie Interstellar could not have been made without tools like Mathematica or MATLAB, or whatever else they might have used - not to mention the rendering software, whether THAT was based on Unity, Unreal Engine, Blender, or something else entirely.

Boycott Television.  Buy a Pencil.  Learn to Write.