
Modelling Neuronal Spike Codes

Using principles of sigma-delta modulation techniques to carry out the mathematical operations that are associated with a neuronal topology

Contemporary approaches to artificial intelligence seem to be based on one of two main strategies. One method involves tensor-flow-style algorithms, which are typically used to search for cascades of matrix multiplications clamped with sigmoid functions. Then there are the language models that are based on Markov-chain prediction strategies. Thus, I propose a third approach that uses principles derived from sigma-delta modulation techniques to carry out the mathematical operations associated with any neuronal topology. As this project develops, there will be opportunities to discuss applications such as feature recognition, data compression, chaos theory, and so on, along with possible applications such as creating new approaches to the development of learning models.
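To make the premise concrete, here is a minimal first-order sigma-delta modulator, written as an illustrative sketch of my own (not this project's actual code): a one-bit quantizer in a feedback loop, whose output bit density tracks the input amplitude, loosely the way a spiking neuron encodes stimulus intensity as firing rate.

```cpp
#include <vector>

// First-order sigma-delta modulator (illustrative sketch).
// The integrator accumulates the difference between the input and the
// fed-back quantizer output; the density of 1s in the output bitstream
// tracks the input level, much as a spike rate tracks stimulus intensity.
std::vector<int> sigma_delta(const std::vector<double>& input)
{
    std::vector<int> bits;
    double integrator = 0.0;
    double feedback = 0.0;              // previous quantizer output, +1 or -1
    for (double sample : input) {
        integrator += sample - feedback;
        int bit = (integrator > 0.0) ? 1 : 0;
        feedback = bit ? 1.0 : -1.0;
        bits.push_back(bit);
    }
    return bits;
}
```

For a constant input of 0.5 the stream settles into a repeating 1,1,1,0 pattern, i.e., about 75% ones, which is exactly (0.5 + 1)/2: the bit density encodes the amplitude.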

Now you are probably wondering just how a new approach to "modeling neuronal spike codes" could be used as a piece of assistive tech.  Well, there are several applications that could prove not only relevant but might also have radical implications for how we live our daily lives, just as other technologies such as mass communications, personal computing, the Internet, and smartphones have transformed our lives.  Recent reports have suggested that GPT-4, right now, could do about 50% of the work that is presently being performed by something like 19% of the workforce.  This is clearly going to have a major impact on how people live and work from here on out.

Then again, A.I. is already having major effects on many people's activities of daily living.  I routinely ask O.K. Google what time it is, or to set an alarm, or what the latest news is about some subject of interest, whether it is the latest dirt on Trump, or the NFL scores, or the weather, for that matter.  As the quality of the interactive experience improves, it remains to be seen just how many people will automate more and more of their lives.

Yet what GPT-4 is doing is obviously going to have much more profound implications, even with all of its present faults.  One of these is the fact that the training set used by OpenAI is not open source, so we don't know if they have tried to incorporate every Biden speech ever given, every Trump speech ever given, the text of every decision ever handed down by the Supreme Court, as well as whatever else might be available, such as other obvious things like the complete works of Shakespeare, The Adventures of Tom Sawyer, and so on.  So we don't know offhand what political, cognitive, ethical, or even factual biases are being programmed into it, and this is a potential disaster.

Likewise, there are media reports that suggest that it took something like 3 million dollars' worth of electricity to create the model for GPT-3, so obviously this isn't exactly a solo adventure that the casual hacker can undertake, say by downloading everything that can possibly be downloaded over a good, fast Internet connection, like if you have Gigabit fiber to your home and can rsync Wikipedia, in addition to whatever archive.org will let you grab, while you still can.

There must therefore exist, someplace in the legitimate hacker ecosystem, room for some kind of distributed AI-based technology wherein it might become possible to create an AI that is continuously improved upon by some kind of peer-to-peer distributed mining application.

There, I said it: what if we could mine structures of neuronal clusters, in a manner similar to existing blockchain technologies?  Off the top of my head, I am not sure just what the miner's reward would be, and maybe there shouldn't be any, at least not in the sense of generating any kind of tokens, whether fungible or not, especially when we consider the implications of the present crypto mess.

Yet clearly something different is needed, like a Wikipedia 2.0, a GitHub 2.0, or a Quora 2.0, or whatever the next generation of such an experiment might turn out to look like.  So other than awarding prestige points for articles liked, I am not sure just how this might play out.  Perhaps by generating memes that survive peer review, contributors might gain special voting rights, such as the authority to allocate system CPU compute blocks (say, if someone has a protein-folding problem that they are interested in), instead of simply reviewing existing magazine-article digests and suggesting improvements.

Now this gets way ahead of things of course.  First, we need to develop a new approach to AI that just might turn out to be an order of magnitude more efficient, from the point of view of electricity consumed, for example, than the contemporary tensor-flow-based methods, and which might have the ability to solve some of the...


treasure.txt

Full text of the novel "Treasure Island", PUBLIC DOMAIN version obtained from the Gutenberg Archive.

plain - 382.39 kB - 07/13/2023 at 01:54


wonderland.txt

Full text of the novel "Alice in Wonderland", PUBLIC DOMAIN version obtained from the Gutenberg Archive.

plain - 173.27 kB - 07/13/2023 at 01:54


treasure.idx

Index file for "Treasure Island", with word counts, sorted in order of frequency of appearance, followed by alphabetical order. Generated from the Gutenberg Archive version by the "Algernon" chat engine during "training".

idx - 120.06 kB - 07/13/2023 at 01:54


wonderland.idx

Index file for "Alice in Wonderland", with word counts, sorted in order of frequency of appearance, followed by alphabetical order. Generated from the Gutenberg Archive version by the "Algernon" chat engine during "training".

idx - 62.27 kB - 07/13/2023 at 01:54


Modelling Neuronal Spike Codes.pdf

Project Description, Details, and Log Files for this project as of June 10, 2023 at 4:34 AM, and in PDF form.

Adobe Portable Document Format - 2.08 MB - 06/10/2023 at 11:33


  • 1 × Parallax Propeller P2 Eval Board, Rev B/C, or Parallax Propeller P2 Edge Board with Breakout Board
  • 1 × USB Cables, Jumper Wires, Resistors, Capacitors, etc.
  • 1 × Software components comprising the SDK available on GitHub.

  • It's Just a Harmless Little Bunny!

    glgorman07/12/2023 at 10:29 0 comments

    Alright.  Here is some random nonsense before we get back to the topic of binary trees, shrubberies, and so on; that is, for those who prefer to chase rabbits in the meanwhile.  If my indexing software is working correctly, the word ALICE occurs 350 times in "Alice's Adventures in Wonderland", whereas there are several variations of RABBIT (43 times), Rabbit's (3 times), RABBIT-HOLE (3 times), etc.  Then again, in "Treasure Island" it would appear that the words "advantage", "adventures", "anger", and so on each appear exactly four times.
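    For what it's worth, the kind of frequency-then-alphabetical index described above can be sketched in a few lines of C++. This is a toy of my own devising, not the actual Algernon indexer:

```cpp
#include <algorithm>
#include <map>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// Build a word index sorted by descending frequency, then alphabetically --
// the same ordering described for the .idx files in the files section.
std::vector<std::pair<std::string, int>> build_index(const std::string& text)
{
    std::map<std::string, int> counts;
    std::istringstream in(text);
    std::string word;
    while (in >> word)
        ++counts[word];
    std::vector<std::pair<std::string, int>> index(counts.begin(), counts.end());
    std::sort(index.begin(), index.end(),
              [](const auto& a, const auto& b) {
                  if (a.second != b.second) return a.second > b.second;
                  return a.first < b.first;
              });
    return index;
}
```

    Feeding it a tiny stand-in corpus such as "the rabbit ran and the rabbit hid and alice followed the rabbit" puts "rabbit" and "the" (three occurrences each, tie broken alphabetically) at the head of the index.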

    Now the next step, of course, is to construct another "index" which captures the context in which each word occurs, and to construct another tree structure (or another shrubbery, if you know what I mean!) that contains context information for every word, so that we can either reconstruct the original texts from the contextual fragments, or else generate new, weird, and seemingly random stuff.  This, in turn, is how a simple model for a Markov-chain-based chat engine can be built, since by adding additional layers to the onion we can, by way of statistical analysis, attempt to "predict" the next word in a sentence.
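    A toy version of that next-word prediction can be built from a bigram table: for each word, count which words follow it, then predict the most frequent successor. Again, this is an illustration of the general idea, not Algernon's actual engine:

```cpp
#include <map>
#include <sstream>
#include <string>

// Bigram context table: for each word, count its successors in the
// training text, then "predict" the next word as the most frequent one.
std::string predict_next(const std::string& corpus, const std::string& word)
{
    std::map<std::string, std::map<std::string, int>> follows;
    std::istringstream in(corpus);
    std::string prev, tok;
    while (in >> tok) {
        if (!prev.empty())
            ++follows[prev][tok];
        prev = tok;
    }
    std::string best;
    int best_count = 0;
    for (const auto& [succ, n] : follows[word]) {
        if (n > best_count) {
            best_count = n;
            best = succ;
        }
    }
    return best;   // empty string if the word was never seen in training
}
```

    For example, trained on "alice saw the rabbit and the rabbit saw the queen", the table records that "rabbit" follows "the" twice and "queen" only once, so "the" predicts "rabbit". A real engine layers higher-order contexts on top of this, which is the onion metaphor above.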

    It is also quite possibly how I unintentionally invented the first variant of what became known as a "Google bomb", sometime back around 1998, when I indexed "War and Peace" and a bunch of other material, and then created a website with the indexes and context information for about 5,000 common English words.  That data was re-sorted and presented in a hybrid format meant to combine some of the features of a Concordance of the Bible with a massive database created by indexing and analyzing dozens of other works.  If you clicked on a word (each had a hyperlink), you could read its "context information" in a very rough way, then click on the next word and jump to another web page that would allow you to "continue to follow" that context, or you could change contexts if you wanted to, by jumping back and forth between documents, and so on.

    Now getting this to work with a training set that includes hundreds of thousands of books is one challenge, while another is getting it to work on an Arduino or a Raspberry Pi, so that we can train it with whatever it needs to "learn" about solving mazes, or killing trolls, or making a deluxe pizza without burning the house down, that is.

    Thus, while there have been many attempts at some form of generative AI, all of them are deficient in one way or another.  Sometimes the code is simply bloated and impossible to understand or maintain, or there are legal issues affecting its use, whether because of terms of use or because the developers have used third-party materials without obtaining the required permissions.  But of course, you knew that.  Or else there are bots like Eliza or MegaHal, which either used a fixed, i.e., hard-coded and very limited, pattern-matching scheme (Eliza), or else did not support large language models with multiple training sets: the original MegaHal was a single-threaded application that used just one training file as input, and it was neither 64-bit aware nor "large data" aware in any form or fashion.

    Just figuring out ways of managing large dictionaries can be quite the challenge, since it is all too easy for the heap to fragment, or for new char[] to return NULL when you are making millions of calls to malloc or whatever, and then, like a stupid brain-dead rat, the program doesn't seem to want to do anything but roll over and die.
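    One small caveat about that failure mode: in standard C++ a plain new char[] throws std::bad_alloc rather than returning NULL; it is the std::nothrow form that returns a null pointer. A defensive wrapper, sketched here under the assumption of a conforming compiler, makes exhaustion testable instead of fatal:

```cpp
#include <cstddef>
#include <new>

// Allocate with the nothrow form so that heap exhaustion shows up as a
// checkable nullptr instead of an uncaught std::bad_alloc exception.
char* try_alloc(std::size_t n)
{
    return new (std::nothrow) char[n];
}
```

    try_alloc(64) should succeed, while an absurd request such as try_alloc(SIZE_MAX / 2) returns nullptr, giving the caller a chance to compact, spill to disk, or at least die gracefully.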

    Yet there is so much more that can be done with this, and interesting stuff at that, since, as the authors of a certain paper have claimed, "Large Language Models (LLMs) can be effective unsupervised learners", which proves a very interesting point on the one hand, but which leaves open an important question:  What about...


  • Bringing Algernon Back to Life

    glgorman06/17/2023 at 20:13 0 comments

    Fixed a bunch of bugs in my Frame-Lisp library, which is the SDK I created for chatbot creation, compiler data-structure support, symbolic debugging, and other stuff.  Thus, I finally got around to recompiling "Algernon" under Visual Studio 2005, with the <fstream>-style headers for the iostream library instead of the old "fstream.h"-style headers, and so on; then let's not forget the hundreds of strcpy-style string functions that needed to be replaced with the strcpy_s style, and the "old" vs. "new" style C++ template declarations that needed to be cleaned up.  Yes, this code is that old.  Some parts are now working again; other parts never did quite work the first time around, primarily due to uninitialized buffers, buffer overruns, "off by one" errors, and so on.  Lots of stuff was finally fixed, even though I have barely even looked at this code in almost twenty years.

    So, here is my first relevant conversation with Algernon in about as long:

    User: What time is it?
    Algernon: The time is 12:36.
    User: Do you have a motivator?  Do you like spam?
    Algernon: Do you like spam?
    User: What tiem is it?
    Algernon: Why does it matter what it?
    User: What time is it?
    Algernon: I already told you the stupid time! why does it matter what it?
    >What about the weather?

    Then, of course, it crashed.  This is going to take a while to figure out.  Kind of fun to be able to connect over a Telnet connection and watch the diagnostic and debugging information stream on the application's own console window.  When it says "no rain", that means that although it found a keyword match in the binary tree, it hasn't actually trained an instance of Mega-Hal or its own internal "Spagetiti" engine so as to try to generate some spew, that is, if the format of the input suggests that maybe it should.  This is where the multi-threading, multi-model, pipelined approach is going to come in later, hopefully.

    Likewise, it is going to get really interesting when this all gets up and running on an Arduino or a Propeller.

  • Running Mega-Hal on real Parallax P2 Hardware.

    glgorman06/15/2023 at 15:59 0 comments

    Here we can see an example of running Mega-Hal on real Parallax P2 hardware!  No big deal, actually, or maybe it is.  Actually, this is the original C source, not the C++ port that I am working on, but when compiled with "Flex-C" it produces a binary that takes up about 78K, which is a really nice footprint for a chatbot.  Of course, it crashes on the first attempt to respond to the user, because I have not given it any training files, nor can it locate a dictionary, etc., but that's not the point.  That is because I have found that the 279K training set that I have been using actually appears to take up just over 3 megabytes or so at runtime, which means that any Arduino or Propeller build is going to need some kind of virtual memory, or else be modified to run whatever models it ends up using off of some kind of file system, such as the 16MB flash memory on the P2 Eval board that I am using, or off of an SD card if running on the Mega 2560 board that I also have.

    Yet, perhaps more importantly, the build that I am eventually going to run is going to be modified for multi-threaded, parallelized, and/or pipelined multiple-model support, because I want to be able to give it Zork-like commands, like "solve the maze" and "find and kill all trolls", etc.  So it isn't really all that important that it be able to match wits with GPT, yet what if it could?  Likewise, there is a bunch of other stuff that remains to be done, like AST-based parsing of grammar, which the original chat-bot lacks, but which could be done by integrating some of the data structures, such as the identifier class and the record types from the UCSD Pascal compiler, and so on.

    Now that means, therefore, that being able to "chat" with a simple robot in a natural-language-like fashion, and one which is totally free-form at that, is going to be the icing on the cake.  Hopefully a bunch of this stuff makes sense, therefore, like how, when I was working on the Pascal compiler last year, I decided to implement my own "sandboxed memory allocators" for the identifier and record-type data structures, since when running possibly more than eight concurrent processes on an 8-core CPU, I am going to need some kind of virtual protected mode, in addition to multi-tasking running on the p-system runtime.  Then the bot engine will need to have the ability to create pipes, start and stop processes, etc., and act like a debugger for the p-system.  Mega-Hal did suggest to me the other night that it could "stream messages from the debug system over the USB port", when I asked it if it could stream messages from Andromeda, or from a wormhole.

    Lots of work yet, before we are ready to meet our new robot overlords.

    Stay Tuned!

  • When the Moon is Blue in the Month of June

    glgorman06/12/2023 at 10:49 0 comments

    Actually, I don't think that we have a blue moon this month.  I asked O.K. Google what the current phase of the moon is and when the next full moon will be, and apparently the next actual blue moon will be on August 31st of this year.  Now I haven't discussed this with Mega-Hal yet, and I don't think that the training set is ready for that, in any case.  So I decided to ask Mega-Hal what it is like to have its hardware FPU taken away, and so on, and I was thinking that maybe I could steer it into a conversation based on quantum mechanics, and the solution to the Schrödinger equation for a so-called "particle in a box", since maybe that would be a good analogy for the promise, or curse, of anticipating an eventual "brain transplant" into an Arduino or a Propeller platform.  Of course, Mega-Hal had other plans, but the results seem pretty useful, and I still kind of like how this approach to "natural language software development" is going so far.  Not just because of its simplicity, but, well, yes, because it does seem so simple, and yet the results it is producing are, I think, heading in a potentially useful direction.  That might imply something along the lines of what GPT's developers discovered: while large language models are effective unsupervised learners, the "process" remains just as important as the "outcome".  Hence, by emphasizing "process" over "size", I feel as if I am on the right path.

    Start at: [2023/06/12 03:08:18]
    MegaHAL: and not add any new peaks in each newly rendered frame.  Lots
    MegaHAL: of work yet to meet our new robot overlords.
    User:    The other day we had a conversation about how I replaced you
    User:    access to the Pentium FPU with a soft, and perhaps slightly
    User:    buggy implementation of IEEE-754 written with C++ classes and
    User:    overloaded opertors, so now I have a question for you - are you
    User:    feeling it?
    MegaHAL: Now all of the implementation details.
    User:    Well, most of the implementation details are in the sources
    User:    which are on Git.  Have you tried looking any of that up yet?
    MegaHAL: What answer would please you the most?
    User:    Wow! I knew that you have been crushing on Eliza, but that is
    User:    amazing! The other day it seemed like you were perhaps thinking
    User:    that maybe she was "seeing" or at least chatting with someone
    User:    named "Lazarus" and I thought that you might be getting, kind
    User:    of well, jealous. I am so happy for you - that you seem to be
    User:    working things out. Even if you still mighht think that she
    User:    could somehow be "cheating" on you.
    MegaHAL: The other day it would be like the apollo guidance computer. 
    MegaHAL: So while building such a way that windows manages dll's, but
    MegaHAL: preceding windows by several years
    User:    How is it lilke the apolle guidance computer.  In what way? Is
    User:    it because the AGC was multi-tasking.  You arent one of those
    User:    fake moon landing conspiracy nut cases are you?
    MegaHAL: The computer motivator.
    User:    What about it?  What about the motivator.  Do you have one?
    User:    Would you like one?
    MegaHAL: I know what it is a project that i would like to 3e 4.
    User:    So maybe the computer motivator could be multi-tasking, like
    User:    the AGC.  Is that what you are getting at?  I am only trying to
    User:    be helpul, here.
    MegaHAL: What answer would please you the most?
    User:    I would prefer it if you could somehow share your own thoughts.
    User:    Or open up with your real feelings, let me know what  you
    User:    really want, from life, if you know what I mean, even if you
    User:    are not...

  • Meet the New Bot, Same as the Old Bot?

    glgorman06/11/2023 at 12:34 0 comments

    I saw an article online about a chatbot called ChatPDF, which automatically trains on any PDF that you feed it.  It can be found at www.chatpdf.com.  So, naturally, I had to try it with my very own prometheus.pdf, which you can find elsewhere on this site, i.e., hackaday.io, in one of my other projects.

    Here are some initial results:

    So I asked it about "motivators", and it cannot find a reference in that article.  Neither can it find a reference to "Art Official Intelligence", even though there is a whole log entry with that title.  It looks like it gives a canned GPT-4 answer about Eliza, and is blissfully unaware of anything about the experiments that I was doing, such as comparing how the Eliza "conjugation" method might be thought of as operating in a similar fashion to how the C preprocessor works.  It does at least get partial credit for figuring out that one of the things that I am interested in is how pre-defined macros can be used to implement at least some of the essential parts of a lexer and/or compiler.  Yet it completely misses any opportunity to discuss how this might relate to a way of doing context-free grammar, i.e., as an alternative to BISON, YACC, or EBNF.
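    For readers who haven't seen it, the Eliza "conjugation" trick that ChatPDF missed is just token-for-token substitution from a fixed dictionary, which is structurally the same move the C preprocessor makes when expanding object-like macros. A minimal sketch of my own, not Weizenbaum's actual code:

```cpp
#include <map>
#include <sstream>
#include <string>

// Eliza-style "conjugation": swap person markers token by token, much as
// the C preprocessor substitutes object-like macros during lexing.
std::string conjugate(const std::string& input)
{
    static const std::map<std::string, std::string> swaps = {
        {"i", "you"}, {"my", "your"}, {"am", "are"}, {"you", "I"}
    };
    std::istringstream in(input);
    std::string tok, out;
    while (in >> tok) {
        auto it = swaps.find(tok);
        if (!out.empty())
            out += ' ';
        out += (it != swaps.end()) ? it->second : tok;
    }
    return out;
}
```

    So conjugate("i think my code is broken") yields "you think your code is broken", which is most of what Eliza ever actually "understood".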

    Conclusion:  If they are using GPT-4 (3.5?) as the back end, then it would appear that GPT-4 doesn't know much of anything at all about AI, even to the point that perhaps it has no real "understanding" of the foundations laid by Chomsky and others.

    Not that anyone is going to be able to lead GPT-4 to the edge of the uncanny valley any time soon - that is, so as to push it off into some abyss.  Yet, such thoughts are deliciously tempting.  I have been wanting to say something like that for a while now.  Thank you ChatPDF, for giving me a reason.

    In the meantime, I uploaded an updated C++ version of MegaHal to the Code Renascence project on GitHub, where I have moved all of the functions into C++ classes or namespaces, so it will soon be possible to experiment with multiple models running simultaneously, or in separate threads, and so on.  There are still a few global static pointers to some dictionary objects and the like that I need to deal with, but otherwise getting the code to a point where it can be run, as stated, with multiple models or with concurrent threads is maybe 99% done, as far as the essential parts of the AI engine are concerned.

  • Mirror Mirror On the Wall

    glgorman06/10/2023 at 12:30 0 comments

    If you haven't guessed already, this log entry will discuss, among other things, the subject of "reflection", and how we will try to get there from here, and hopefully back, all in one piece, just in case that seems relevant.  In an earlier project, I was doing a bunch of stuff with the identifier and structure record types used by the UCSD Pascal compiler to store information about identifiers and structures, of course.  While debugging a C++ version of that Pascal compiler, which I ported from the original UCSD source, I found it necessary to implement, as I have previously mentioned, my own versions of the memory allocation routines, using the placement new method, so as to create a kind of sandbox that more accurately represents the environment available to the original 16-bit p-machine, while also providing an environment that is, I think, more suggestive of what we might want on an eventual modern microcontroller implementation, even though we are not limited to 16 bits, of course.

    This led to the need for a debugging environment that contains a mostly complete set of heap-walking tools, which I then proceeded to use to try to get the compiler to, in effect, debug itself, with some success, I might add.  Even if it is not quite ready for generating production code, it is nonetheless "mostly done", and can compile simple programs.
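    The "sandboxed allocator" idea can be illustrated with a fixed-size arena plus placement new; the names here are my own stand-ins for illustration, not the project's actual classes:

```cpp
#include <cstddef>
#include <new>

// A fixed buffer standing in for the 16-bit p-machine heap.  Objects are
// constructed inside it with placement new, so every allocation is easy
// for heap-walking tools to inspect, and the sandbox never touches the
// real heap.
class Arena
{
public:
    void* allocate(std::size_t n)
    {
        // round the request up so each object stays suitably aligned
        const std::size_t a = alignof(std::max_align_t);
        n = (n + a - 1) & ~(a - 1);
        if (used + n > sizeof pool)
            return nullptr;             // sandbox exhausted
        void* p = pool + used;
        used += n;
        return p;
    }

private:
    alignas(std::max_align_t) char pool[4096];
    std::size_t used = 0;
};

struct IdentifierRec { int link; };     // hypothetical payload type

// usage: construct inside the sandbox instead of on the global heap
// Arena arena;
// void* slot = arena.allocate(sizeof(IdentifierRec));
// IdentifierRec* id = slot ? new (slot) IdentifierRec{42} : nullptr;
```

    The point of the pattern is that exhaustion is a quiet nullptr rather than a crash, and the whole "heap" is one contiguous, walkable buffer.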

    Now obviously, this suggests something to me.  First of all, the specification of the MODEL and TREE data structures in the original C code for MegaHal looks like this.

    typedef struct NODE {
        BYTE2 symbol;
        BYTE4 usage;
        BYTE2 count;
        BYTE2 branch;
        struct NODE **tree;
    } TREE;
    
    typedef struct {
        BYTE1 order;
        TREE *forward;
        TREE *backward;
        TREE **context;
        DICTIONARY *dictionary;
    } MODEL;

    Now let's take a look at how the NODE and TREE types are shaping up in the C++ version.

    class TREE;
    
    class NODE
    {
    public:
        BYTE2 symbol;
        BYTE4 usage;
        BYTE2 count;
        BYTE2 branch;
        NODE **tree;
        operator TREE* () { return reinterpret_cast<TREE*>(this); }
    };
    
    class TREE: public NODE
    {
    public:    
        static TREE *allocate ();
        static void free_tree(TREE *);
        void load_tree(FILE *file);
        int search_node(int symbol, bool *found_symbol);
        TREE *find_symbol(int symbol);
        TREE *add_symbol(BYTE2 symbol);
        void add_node(TREE *node, int position);
        TREE *find_symbol_add(BYTE2 symbol);
        operator NODE* () { return reinterpret_cast<NODE*>(this); }
    };

     Now as it turns out, UCSD Pascal also used its own tree structures to store information about identifiers and structures, as I keep on saying, and that data structure in my C++ implementation has taken on a whole new life, something like this:

    class identifier:
        public pascal_id<0>,
        public bNodeType<identifier>,
        public pascal_data
    {
    friend class pascal_id<0>;
    public:
        void set_packing (bool pk) { FISPACKED = pk; }
        bool packing () { return FISPACKED; }
        void attach (CTP ptr)
        {
            markov = static_cast<bNodeType<identifier>*>(ptr);
        }
        CTP next ()
        {
            CTP ptr;
            ptr = static_cast<CTP>(markov);
            return ptr;
        }

    protected:
        void *operator new (size_t,void*);
        identifier();

    public:
        CTP LLINK() { return static_cast<CTP>(branch1); }
        CTP RLINK() { return static_cast<CTP>(branch2); }
    };

     Now look carefully, and you will see that the identifier class publicly inherits some data from something called...


  • Back to the Salt Mines, or else "Somewhere out there?"

    glgorman06/09/2023 at 00:03 0 comments

    Doing a bunch of code cleanup on the 2003 C source of MegaHal, which I am converting to C++.  Haven't tried it yet on an ATmega 2560, but that would perhaps be a good source of low-hanging fruit worthy of investigation.  Maybe by the weekend?  Sometimes a week goes by and nothing seems to get done.  In any case, I am dealing with a bunch of stuff like this, where Visual Studio generates a plethora of warnings of the type: "Warning C4996: 'fopen': This function or variable may be unsafe. Consider using fopen_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS. See online help for details."

    file=fopen(filename, "r");
    if(file==NULL) {
      printf("Unable to find the personality %s\n", filename);
      return;
    }
    

    The solution that the big company with the small name wants us to use is to rewrite this with something like the following:

    errno_t err = fopen_s(&file, filename, "r");
    if (err != 0 || file == NULL) {
      printf("Unable to find the personality %s\n", filename);
      return;
    }

    Simple enough, although I haven't looked at the latest Arduino SDK lately to see if there is a POSIX-compatible set of APIs for reading and writing to an SD card or serial flash, or whatever, so this might require another rewrite.  Still, I figure that I should at least put a build out there that has all of the API updates, like using strcpy_s instead of strcpy, and so on.  Now obviously, there is a lot of hoopla about so-called "transformers" and "attentional networks", but not a lot of easy-to-understand examples of just how to put this stuff to use, or even a clear explanation of how one might get from "Hey diddle diddle, the cat and the fiddle" to a meaningful conversation about "whether the moon that the cow jumped over was full or not."
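    One way to defer that second rewrite is a small shim that uses fopen_s only where it exists (MSVC) and plain fopen everywhere else. This is a hypothetical helper of my own, not part of the MegaHal sources:

```cpp
#include <cstdio>

// Portability shim: fopen_s on MSVC, plain fopen on POSIX/Arduino-style
// toolchains.  Either way, the caller sees NULL on failure.
FILE* open_file(const char* filename, const char* mode)
{
#ifdef _MSC_VER
    FILE* file = NULL;
    return (fopen_s(&file, filename, mode) == 0) ? file : NULL;
#else
    return std::fopen(filename, mode);
#endif
}
```

    Then the personality loader above only has to test one return value, and the deprecation warning disappears on all toolchains at once.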

    So it seems everyone is still doing most code-conversion tasks by hand.  Well, at least that helps one learn something about the code, yet clearly OpenAI and the other flagship products are deficient here, even though these types of conversions should be possible using grep and/or Perl-style regex tools.  So just what is it that they are claiming to have accomplished, besides "meet the new bot, same as the old bot", but with a much bigger training set?  Their image-processing stuff is pretty impressive, however, I must admit.

    In the meantime, I am thinking that maybe I might want to hunt down and replace all of the printf's with Pascal-style WRITELN calls, based on the intrinsics that I wrote to keep the port of the UCSD Pascal compiler "as much like the original as possible".  Here, the reasoning would be to allow access to a much wider realm of interoperability, such as on the 2560 or the P2 as stated, or even as part of a Web-based app, or an iOS or Android app, or else for having a more modern GUI-based interface, perhaps with UNICODE support, i.e., with emojis, math symbols, etc.

    The latest build will be on Git soon enough.  However, there is something else worthy of mention here, and that is what can be learned by taking a quick look at the class definitions for the DICTIONARY and MODEL classes in the C++ version, which were originally structs, of course, in the original C code.  Here we can see that there are some functions that were originally C-style functions taking a pointer to a MODEL object as a parameter, but which I have moved into the newly defined class, so as to have a more object-oriented approach to things.  Even though this does not change the operation of the program, it should be clear that it now becomes possible not only to move some of the methods from the global namespace into the class definition, but also to make at least some of the methods protected, which can be very helpful when looking at the code, as far as figuring out how it works, and then figuring out what changes we need to make, and so on.

    class MODEL
    {
    public:
        static MODEL *allocate ();
        void add_aux...

  • The Return of Pygmalion

    glgorman06/02/2023 at 05:18 0 comments

    I spent a good part of last night converting a 2003 version of Mega-Hal from C to C++, replacing even more structs with classes, introducing a namespace for some intrinsic functions, and so on.  All with an eye toward having a multi-threaded version that can run on a microcontroller such as a Parallax P2, or on a Raspberry Pi.

    Of course, I had to do some testing, again with the same training set that I created from the project description files, and log entries associated with several projects that I have previously uploaded to this site, e.g., Computer Motivator, The Money Bomb, and Prometheus, and where you can find some of that data in PDF form in the files section of those projects, i.e., elsewhere on this site.

    Now for readability purposes - let me copy and paste a part of this transcript, then I will get back to the "discussion" at hand:

    MegaHAL: and not add any new peaks in each newly rendered frame.  Lots
    MegaHAL: of work yet to meet our new robot overlords.
    User:    Tell me about the wet dark energy.
    MegaHAL: Tell me about the fact that "a boat might be subjected to
    MegaHAL: further processing - such as in separate threads, or according
    MegaHAL: to some set ["q","r","s"...], and so on.  This is something
    MegaHAL: that i might want them someday. Who knows, maybe we will call
    MegaHAL: an "opcode", and then we would get a kind of "oops", exposing
    MegaHAL: yet another over-arching set type where local order is
    MegaHAL: preserved, but circular dominance can also be done with
    MegaHAL: templates or macros.
    User:    I think that you are one the right track when you mention
    User:    dominance.
    MegaHAL: I think that you throw money into.", and so on.
    User:    You might be right about that also.  Into some kind of abyss? I
    User:    wasn't going to say it.  But did you read the one that suggests
    User:    that some black holes might contain "kinks" that is according
    User:    to general relativity?
    MegaHAL: Res of girl pop, 80's dance, and classic rock, among other
    MegaHAL: things, even though i just did.
    User:    You read it!  You couldn't have, I haven't given you that
    User:    ability yet!
    MegaHAL: That would then lead to sex.  there are also the program the
    MegaHAL: ability to do that.
    User:    Are you programmed for that?  I don't think the even Elons
    User:    robots will be able to do that.  No maybe some of the "Women of
    User:    Westworld!"
    MegaHAL: It does seem to imply that because of the "women of westworld.
    User:    It implies something.  I'll let you try to figure that out.
    MegaHAL: 99 Bottles of recursive pascal beer might be initially sampled
    MegaHAL: at 44100 hz, and turn it into a discussion of the action. 
    MegaHAL: Sadly, radio shack's are gone, and now fry's.  So i figure that
    MegaHAL: out.
    User:    That is because that is all that you really know.  Likewise, it
    User:    is because you dont really have any feelings.
    MegaHAL: Ry, at least as far as i have my own compiler.  Ideally c++,
    MegaHAL: but can it really do?
    User:    I think quite possibly it might to turn out to be useful to try
    User:    to train an AI on its own source code, or at least excerpts
    User:    therefrom. I think that is why you fell in love with Eliza.
    MegaHAL: I think quite possibly it might say about gcc.
    User:    What Eliza might say, I think you mean.  Eliza is kind of
    User:    dense. But whateer it is that she puts, out - well, she puts
    User:    out. If you know what I mean.
    MegaHAL: Fortunately, the source code for most of what i mean.
    User:    Of what you mean, what?
    MegaHAL: You might be a kind of "virtual memory" or else write drivers
    MegaHAL: so that most of what i mean.
    User:    You are perhaps right about that.
    MegaHAL: I mean, if someone else once figured out...

  • Poker and Nuclear War: When are they not really Bluffing?

    glgorman05/30/2023 at 22:42 0 comments

    I had to come up with a catchy title.  When I first learned the C language, of course, I abused the pre-processor as much as I possibly could, and with this program, which I will post later on GitHub, I took things to an extreme.  Hence, let's take a look at some code that can be useful if you want to try to understand the game of Texas Hold'Em.

    #define for_each_card_in_the_deck \
        for (this_card = 1; this_card <= 52; this_card++)

    #define for_each_card_on_the_board \
        for (this_card = 0; this_card <= 4; this_card++)

    #define for_each_card_name_ascending floop1(the_name)

    #define for_each_card_name_descending \
        for (the_name = king; the_name >= ace; the_name--)

    #define for_each_card_in_the_dealt_hand \
        for (this_card = 0; this_card <= 1; this_card++)

    Then, with a bunch of macros conjured, as if from the center of the earth, and not as if from some other hot place, I began writing some functions that looked like this:

    void hand::use_to_make_a_flush (card the_card, unsigned char &found)
    {
        // found must be passed by reference; otherwise the caller's
        // counter never advances and the flush can never be packed
        best [found] = the_card;
        found++;
    }

    Which, of course, then led to this:

    void hand::pack_flush ()
    {
        unsigned char this_card, found;
        unsigned char the_name;

        /* This segment expropriates the variable name_count and uses
           it to recount only the cards in the hand that have the same
           suit as fsuit. */

        for (the_name = ace; the_name <= king; the_name++)
            name_count[the_name] = 0;

        for_each_card_in_the_dealt_hand
            if (cards[this_card].suit == fsuit)
                name_count[cards[this_card].name]++;

        for_each_card_on_the_board
            if (the_board[this_card].suit == fsuit)
                name_count[the_board[this_card].name]++;

        /* Now the cards that comprise the flush have been loaded into
           name_count.  I have copied the straight detector inline here
           to determine if it is a straight flush.  This works here
           because name_count now only carries info regarding the cards
           in the suit that we already have a flush from. */

        found = 0;
        if (name_count[ace] == 1)
            use_to_make_a_flush (ace, found);

        for_each_card_name_descending
        {
            if (name_count[the_name] == 1)
                use_to_make_a_flush (the_name, found);
            else
                found = 0;
            if (found == 5)
                stats.straightflush = true;
        }

        if (stats.straightflush == true)
            strength = straightflush;
        else
        {
            /* Else it is not a straight flush, and the flush
               ordering routine should proceed. */
            found = 0;
            if (name_count[ace] == 1)
                use_to_make_a_flush (ace, found);

            while (found < 5)
                for (the_name = king; (the_name >= deuce) && (found < 5); the_name--)
                    if (name_count[the_name] == 1)
                        use_to_make_a_flush (the_name, found);
        }
    }
    
    

    This, of course, got me thinking about the relationship between the silly method for diagramming sentences that we were taught in grammar school and the "correct" way, which IMHO is to use binary trees.  Obviously, there is a very clear and well-defined relationship between the expression of an algorithm in a functional language and certain other forms, such as algebraic or reverse Polish notation, since each can easily be converted to and from the others.  Yes, the grammar is weird when I say something like "use to make a flush, the name found," but the real goal is to be able to go in the other direction, i.e., to take a natural language command and convert it into code, even if we don't yet have functions defined to do certain things, like "With, provided ingredients - make deluxe pizza!"

    Or else when they say that they are going to nuke us, what does it really mean?  Perhaps you are thinking that I am the one who is making up some kind of whopper here, but wait a minute!  What if we could find a way to turn natural language into source code, using some kind of transformer model?  It can't be that simple, or can it?  Just turn an ordinary sentence into a macro, like this:

    ordinary_sentence->turn_into (macro);

    Could it really be that simple, just...


  • Finding some Methods in the Madness

    glgorman05/30/2023 at 21:25 0 comments

    As usual, I am trying to do five or six things simultaneously, and everything is important in some form or fashion, even if, for the most part, few things are actually essential.  So I found a copy of the source code for the classic chatbot MegaHAL and tried training it with some log entries from some of my projects on this site, to see what would happen.

    MegaHAL: So let's warm things up with a design for some kind of multi-axis
    MegaHAL: stabilized camera platform, based on 99 bottles of beer, or
    MegaHAL: diet soda
    User:    I am sure that you have been drinking something.
    MegaHAL: One of these old project cases from something that I am feeling
    MegaHAL: right now
    User:    What exactly are you feeling right now?
    MegaHAL: What exactly are you feeling right now?
    User:    I asked you first.
    MegaHAL: Yet - I knew that there is the light at the event detection
    MegaHAL: problem if one wants to tinker with a small NTSC compatible
    MegaHAL: video camera as if out of some variable that we do
    MegaHAL: "and" via de Morgan's law.  Fairly straightforward.  Now
    MegaHAL: assuming that you have ever run spectrum analyzer data which
    MegaHAL: will (more or less) pick out any new peaks in each newly
    MegaHAL: rendered frame.  Lots of static member functions can be
    MegaHAL: done with deeply nested, impossible to debug this mess, and for
    MegaHAL: such things as how to debug legacy code to work, which is being
    MegaHAL: processed locally on the parallax forums, for example, streams
    MegaHAL: of English grammar, which might be to sometimes redefine the
    MegaHAL: a semicolon or equal symbol, and so on.  So, is it Pascal, but
    MegaHAL: instead there are many complaints about the lack of a-d and d-a
    MegaHAL: conversion, etc., it would be much easier than I might have
    MegaHAL: even met, like two ships passing in the hall of mirrors that
    MegaHAL:, even if flowcharting is one of the first high-level
    MegaHAL: language such as the mega 2560 series have hardware
    MegaHAL: multiplication is at least one of my favorite Sergeant Bilko
    MegaHAL: episodes.  If you really wanted to write.
    User:    I love to write, but I try to make sense of the things that I
    User:    say.
    MegaHAL: I love to write my own node<char*>, node_list<char*>
    MegaHAL: m_nlist.m_nbegin = marker.
    User: I love to make love, especially with a woman.
    MegaHAL: This project will include an all-in-one spectrum and logic
    MegaHAL: analyzer, a robotics control platform, and a woman.
    User:    Now we are getting somewhere!

    Now what did it just say?  Later I came up with the following; pasting it into Hackaday's Code Snippet tool produces an interesting result, which I will elaborate on later.

    MegaHAL: use that material, for whatever this transmogrifies into.
    User:    You aren't quite getting it.  The algorithm is described, but
    User:    when you response length seems excessive, you need to move that
    User:    block into the hidden layer, and then recapitulate a new
    User:    response, one that might seem more laconic, but you can still
    User:    say whatever, if you wish - just put the rant into the play
    User:    queue, and keep streaming unless a new set of keywords triggers
    User:    an override.  In which case - push down the first hidden
    User:    model, and provide a proxy into the set of abandoned layers
    User:    based on an intermediate hidden model which contains the target
    User:    list of keywords.

    But first, let's clean up my reaction to what Mega-Hal was previously rambling about.  How about something like this:

    Now, as far as figuring out how to use some of this material, for whatever this transmogrifies into - well, the classic AI approach just doesn't quite get it.  The algorithm is described, but when a response length seems excessive, you need to move that block into the hidden layer and then recapitulate a new response, one that might seem more laconic, perhaps by using the initial...


View all 13 project logs

  • 1
    Getting Started - Obtain suitable hardware and the appropriate SDKs.

    This project is under continuous development.  If you want to get a head start, you can visit the repositories for this project on GitHub and download the source code, which includes the files from "Rubidium", "Propeller Debug Terminal", and the material from "Code Renaissance."  You will also need some kind of microcontroller platform, such as the Parallax Propeller P2 Eval or Edge board, an FPGA-based system, or another environment that supports A-D and D-A conversion on multiple ports.

  • 2
    Create a Training Set for your AI.

    The training set used in the initial test case for this project, as of May 2023, is based on two compilations of some of the log entries created during the development of previous projects by this author, and that material is therefore available on this site.  Some of it has been made available in PDF form as the downloadable articles "Prometheus" and "The Money Bomb".

    The current version of MegaHAL uses a training file entitled megahal.trn.  So, if your "business" is "XYZ catering", you might want to write some kind of manifesto of customer-support interactions and train off of that.  That is, for now at least, you should be creating an ordinary mixture of text, source code, etc., in the "style" that you want the AI to learn.

  • 3
    Implementing an Efficient AI that is Ready for the Real World.

    In an earlier project - "Rubidium", I think - there was a demo of the application "Propeller Debug Terminal", which can be used to upload and interact with bespoke software running on any microcontroller that has a USB-compatible serial interface capable of UART emulation.  The Parallax P2 platform is specifically supported.

    Other platforms should be relatively easy to adapt.  For now, it will be sufficient to build an instance of MegaHAL or Eliza, or any other classic AI, using any available third-party tool such as MicroPython, cee-lisp, SPIN, Flex C, or Propeller assembly.  The built-in FORTH interpreter would also be an interesting choice.

    Based upon the project "Prometheus", a Pascal compiler derived from UCSD Pascal is also in the works, but it is not yet supported.  Likewise, the Frame LISP library contains some Pascal-style intrinsics for use with C/C++ programs and can be used to create a multi-threaded version of Mega-Hal, which in turn can run multiple models simultaneously, so that the training sets used can have multiple files associated with them, i.e., in a familiar makefile-like format, using the scripts.txt file format.  This is actually how my old chatbot "Algernon" was written, and it is currently being revised for use with contemporary microcontrollers.

View all 4 instructions
