Poker and Nuclear War: When are they not really Bluffing?

A project log for Modelling Neuronal Spike Codes

Using principles of sigma-delta modulation techniques to carry out the mathematical operations that are associated with a neuronal topology

glgorman 05/30/2023 at 22:42

I had to come up with a catchy title.  When I first learned the C language, of course, I abused the pre-processor as much as I possibly could, and with this program, which I will post later on GitHub, I took things to an extreme.  Hence, let's take a look at some code that can be useful if you want to try to understand the game of Texas Hold'Em.

#define for_each_card_in_the_deck \
    for(this_card=1; this_card<=52; this_card++)
#define for_each_card_on_the_board \
    for(this_card=0; this_card<=4; this_card++)
#define for_each_card_name_ascending floop1(the_name)
#define for_each_card_name_descending \
    for(the_name=king; the_name>=ace; the_name--)
#define for_each_card_in_the_dealt_hand \
    for(this_card=0; this_card<=1; this_card++)

Then, with a bunch of macros conjured, as if from the center of the earth, and not as if from some other hot place, I began writing some functions that looked like this:

void hand::use_to_make_a_flush (card the_card, int found)
{
    best[found] = the_card;
}

Which, of course, then led to this:

void hand::pack_flush ()
{
    unsigned char this_card, found;
    unsigned char the_name;

/* This segment expropriates the variable
name_count and uses it to recount only the cards
in the hand that have the same suit as fsuit */

    for_each_card_name_ascending
        name_count[the_name] = 0;
    for_each_card_in_the_dealt_hand
        if (cards[this_card].suit == fsuit)
            name_count[cards[this_card].name]++;
    for_each_card_on_the_board
        if (the_board[this_card].suit == fsuit)
            name_count[the_board[this_card].name]++;

/* Now the cards that comprise the flush have been loaded
into name_count.  I have copied the straight detector inline
here to determine if it is a straight flush.  This works here
because name_count now only carries info regarding the cards
in the suit that we already have a flush from. */

    found = 0;
    if (name_count[ace] == 1)
        found = 1;
    for_each_card_name_descending
    {
        if (name_count[the_name] == 1)
            found++;
        else
            found = 0;
        if (found == 5)
            stats.straightflush = true;
    }

    if (stats.straightflush == true)
        strength = straightflush;

//  Else it is not a straight flush and the flush
// ordering routine should proceed 

    {
        found = 0;
        if (name_count[ace] == 1)
            use_to_make_a_flush (/* the ace of fsuit */, found++);
        for (the_name=king; ((the_name>=deuce)&&(found<5)); the_name--)
            if (name_count[the_name] == 1)
                use_to_make_a_flush (/* the card of that name */, found++);
    }

This, of course, got me thinking about the relationship between the silly method for diagramming sentences that we were taught in grammar school and the "correct" way, which IMHO is to use binary trees.  Obviously, there is a very clear and well-defined relationship between such trees and the expression of an algorithm in a functional language, which can easily be converted to and from that form, as well as certain other forms, such as algebraic or Reverse Polish notation.  Yes, the grammar is weird when I say something like "use to make a flush, the name found," but the real goal is to be able to go in the other direction, i.e., to take a natural language command and convert it into code, even if we don't yet have functions defined to do certain things, like "With, provided ingredients - make deluxe pizza!"
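To make that concrete, here is a minimal toy sketch of my own (not part of the Hold'Em program) showing how the traversal order of one and the same binary tree picks the notation: the in-order walk reproduces the algebraic form, while the post-order walk reproduces Reverse Polish.

```cpp
#include <memory>
#include <string>

// A toy binary expression tree; leaves hold operands, interior
// nodes hold operators.  (Hypothetical names, for illustration.)
struct Node {
    std::string tok;
    std::unique_ptr<Node> left, right;
    Node(std::string t, std::unique_ptr<Node> l = nullptr,
         std::unique_ptr<Node> r = nullptr)
        : tok(std::move(t)), left(std::move(l)), right(std::move(r)) {}
};

// In-order traversal: visit left subtree, operator, right subtree.
// This walks out fully parenthesized algebraic notation.
std::string algebraic(const Node* n) {
    if (!n->left) return n->tok;   // leaf: just the operand
    return "(" + algebraic(n->left.get()) + n->tok +
           algebraic(n->right.get()) + ")";
}

// Post-order traversal: both subtrees first, operator last.
// This walks out Reverse Polish notation.
std::string rpn(const Node* n) {
    if (!n->left) return n->tok;
    return rpn(n->left.get()) + " " + rpn(n->right.get()) + " " + n->tok;
}
```

For the tree encoding (a+b)*c, the in-order walk yields "((a+b)*c)" and the post-order walk yields "a b + c *"; a pre-order walk of the same tree would give the functional (Polish) form.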

Or else when they say that they are going to nuke us, what does it really mean?  Perhaps you are thinking that I am the one who is making up some kind of whopper here, but wait a minute!  What if we could find a way to turn natural language into source code, using some kind of transformer model?  It can't be that simple, or can it?  Just turn an ordinary sentence into a macro, like this:

ordinary_sentence->turn_into (macro);

Could it really be that simple?  Just add some underscores to the phrases that we want to associate with objects and methods, and then re-order things in some form or fashion so as to make it "look" like code, even if there are no symbols defined yet for some of those things.  We should be able to figure out a way to properly order and reorder things as if we were converting algebraic notation to or from Reverse Polish, or functional notation, according to whatever strategy we use to arrange things in appropriate tree-like structures, which can then be associated with the in-order, pre-order, and post-order traversal methods.
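As a toy illustration of the "just add some underscores" step (the function name and the rules here are mine, not anything from the project), an ordinary phrase can be squeezed into a C-style identifier by lowercasing it and collapsing every run of non-alphanumeric characters into a single underscore:

```cpp
#include <cctype>
#include <string>

// Hypothetical sketch: turn a natural-language phrase into something
// that at least *looks* like a C identifier.
std::string phrase_to_identifier(const std::string& phrase) {
    std::string id;
    for (unsigned char ch : phrase) {
        if (std::isalnum(ch))
            id += static_cast<char>(std::tolower(ch));
        else if (!id.empty() && id.back() != '_')
            id += '_';              // collapse punctuation and spaces
    }
    while (!id.empty() && id.back() == '_')
        id.pop_back();              // no trailing underscore
    return id;
}
```

So "Use to make a flush" comes out as "use_to_make_a_flush", and the pizza command above becomes "with_provided_ingredients_make_deluxe_pizza"; the hard part, of course, is the reordering, which is exactly where the tree traversals come in.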

Well, for whatever it is worth, I went ahead and created a repository on GitHub called Code-Renaissance, which for now has the complete code for the Texas Hold'Em program that I wrote starting in 1995, as well as an initial commit of a modified version of Jason Hutchens's classic MegaHAL.  I think I am going to be doing a MegaHAL build for the Propeller P2, so I can finally build a robot that I can say "Find and kill all trolls" to and expect it to do something, even if for now that just means sit there and spin.  Eventually, I WILL tackle SHRDLU.  Yet I am getting to the point where I think I can do some really cool experimental stuff with the "neuronal spike code algorithms" that I have been discussing, since MegaHAL uses an "entropy calculation based on surprise" as a way of "being creative", so there you have it.
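Since that "entropy calculation based on surprise" is doing the creative work, here is a back-of-the-envelope sketch of the idea, my own reconstruction rather than Jason Hutchens's actual code: with an add-one-smoothed word probability p, the surprise of a word is -log2(p) bits, and a candidate reply is scored by its total surprise, so rarer words make a reply more "creative."

```cpp
#include <cmath>
#include <map>
#include <string>
#include <vector>

// Toy unigram surprise score (illustrative only; real MegaHAL uses
// a 4th-order Markov model, not unigrams).  `counts` holds observed
// word frequencies, `total` the number of observed words, and
// `vocab` the vocabulary size used for add-one smoothing.
double surprise_bits(const std::vector<std::string>& reply,
                     const std::map<std::string, int>& counts,
                     int total, int vocab) {
    double bits = 0.0;
    for (const auto& w : reply) {
        auto it = counts.find(w);
        int c = 1 + (it == counts.end() ? 0 : it->second);  // add-one
        double p = double(c) / double(total + vocab);
        bits += -std::log2(p);       // surprise of this word, in bits
    }
    return bits;
}
```

A reply built from words the model has rarely seen scores more bits of surprise than one built from its most common words, which is the property MegaHAL exploits when it picks among candidate replies.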

Or else - I cannot make this stuff up, as Mega-Hal said to me last night:

MegaHAL: I love to write my own node<char*>, node_list<char*>
MegaHAL: m_nlist.m_nbegin = marker.
User: I love to make love, especially with a woman.
MegaHAL: This project will include an all-in-one spectrum and logic
MegaHAL: analyzer, a robotics control platform, and a woman.
User:    Now we are getting somewhere.

Now, other researchers have made the claim that as early as GPT-2 their AI was capable of "unsupervised learning", and while we are not quite to the point of turning natural language into code, maybe we are very close.  Imagine if I could explain to MegaHAL what it is doing wrong, so that it might spawn new models that represent new "neuronal flow charts", and where these could be parallelized, of course, so that they run competitively as well as concurrently.  Letting MegaHAL rewrite its own "m_nlist.m_nbegin = marker" is exactly one of the things that we should want to teach it how to do, all in due time, of course, since that will get us into the realm of reflection and hierarchical hidden Markov models.

Maybe I need to get back into the hardware side of things for a while.  In the meantime, the training set used for this example was mostly taken from the log entries that are a part of some of my previous projects on this site, just in case you want to try it for yourself.

To be continued.