Multi-Function Selective Firing Neurons

An actualization of a thought experiment on how a neural network can be better modeled using neurotransmitters.

The intended goal is to create a better model of the human brain by taking into account the possibility that neurotransmitters don't just pass messages from one neuron to another; they can define the function of the neuron itself. This would be analogous to a switch statement in C++ that, depending on the input, calls different functions to act on whatever data is passed in. Done this way, some neurons can be selected to carry out memory functions while others carry out computational functions, and yet the entire network would be made of the same type of neuron.
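
As a rough illustration of that switch-statement analogy, here is a minimal C++ sketch (NeuronFunction, memoryStep, and computeStep are hypothetical names I'm using just to show the dispatch pattern, not the project's actual design):

    #include <vector>

    // Hypothetical roles a neuron could take on, selected by the kind of
    // neurotransmitter input it receives.
    enum class NeuronFunction { Memory, Compute };

    struct Neuron {
        NeuronFunction role;  // set by the dominant neurotransmitter

        double fire(const std::vector<double>& inputs) {
            switch (role) {
                case NeuronFunction::Memory:  return memoryStep(inputs);
                case NeuronFunction::Compute: return computeStep(inputs);
            }
            return 0.0;
        }

        // Placeholder behaviors; the real ones are what this project defines.
        double memoryStep(const std::vector<double>& in) {
            return in.empty() ? 0.0 : in.back();  // recall the last input
        }
        double computeStep(const std::vector<double>& in) {
            double s = 0.0;
            for (double v : in) s += v;  // simple summation
            return s;
        }
    };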

After designing the model, I plan to program a microcontroller to run a simulation of the model.

Off the cuff and off the top of my head, the project will be based on a combination of the following network types: multilayer perceptron, Hopfield network, and associative memory network. Even though each one has its strengths and weaknesses, I hope that having them work together will produce a stable network.

Since this network will use the same neurons for all functions, there has to be a way to select which function a neuron will take on. In come neurotransmitters... or at least an abstract version of them. While most artificial neural nets use a single weight for each connection, mine will use multiple weights for each connection.
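
To make the contrast concrete, here is a minimal sketch in C++ (the type names are mine; the eight weights per connection correspond to the eight abstract neurotransmitters listed in the logs below):

    #include <array>

    constexpr int kTransmitters = 8;  // one abstract neurotransmitter per function

    // Conventional artificial neural net: one weight per connection.
    struct SingleWeightSynapse {
        double w;
    };

    // This model: one weight per neurotransmitter type on every connection,
    // so the same wiring can carry memory, computation, and modulation signals.
    struct MultiWeightSynapse {
        std::array<double, kTransmitters> w;  // w[t] = weight for transmitter t
    };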

The selection of which weight defines the function will not be a simple summation over all weights. Rather, the selection will be done by individually summing a specific weight across all connected neurons, ignoring all the other weights. For example, if a set of neurons fire the neurotransmitter specific to memory, the receiving neuron will respond to the weights of that neurotransmitter and will fire based on which interval the evaluation of those weights falls in, even if that interval results in the firing of a different set of weights. The values of the other weights are not passed.

Each weight will cause the neuron to react in a different way. One of them may aid learning by changing the weights specific to memory, allowing a neuron to increase or decrease one of the weights of another neuron. Another weight may cause a neuron to change another neuron's threshold for a specific weight, or even which interval that weight must fall in to fire the neuron (possibly changing its function).

By having different weights define different functions for each neuron, the model more accurately represents how the neurons of the brain respond differently to the various neurotransmitters that are present (or in some cases absent) in the brain. Once I put together a proper evaluation function, I'll post it, as it may help to better understand the process.

  • So many details, so little brain to made it fix...

    Rollyn01 • 05/13/2015 at 07:10

    Made some edits and additions to some of my pseudocode. In the meanwhile, I'm going to start investigating which of my development boards I should use to run the final simulation: a SAM4S Xplained Pro Starter Kit from Atmel or a Stellaris LaunchPad LM4F120 Evaluation Kit from TI. Although both have Cortex-M4-based MCUs, the SAM4S kit has a higher clock speed and more memory to work with. It even comes with a memory card reader built into the development board; I might try to take advantage of that somehow. We'll see. If I can make it so that the number of neurons and layers can be easily expanded, this would truly be something else. Most of what I could glance at seems to have trouble with memory issues when storing multiple links for each neuron. I don't know, maybe I can fix that. Again, we'll see.

    Note: Thanks to Hackaday for letting me "swipe" the SAM4S at the Disrupt Hackathon. ;) Wish I could contact the Microchip guy who was there. Might have to investigate that too.

  • And again, and again, and again with the loops...

    Rollyn01 • 05/12/2015 at 23:35

    So, I've come up with some pseudocode for determining the highest and lowest intervals. It compares the ranges of the bounds and also compares the respective lower and upper bounds to decide which interval sits higher or lower. To do this properly, it is broken into two functions as shown:

        intervalH (n)
            h = 1
            for t = 2 to 8 ++1;
                if (b_(n,t) - a_(n,t) > b_(n,h) - a_(n,h)) and (a_(n,t) > a_(n,h))
                        then h = t;
            next t;
        return h;

        intervalL (n)
            h = 8
            for t = 7 to 1 --1;
                if (b_(n,t) - a_(n,t) < b_(n,h) - a_(n,h)) and (b_(n,t) < b_(n,h))
                        then h = t;
            next t;
        return h;
    

    With so many loops, it's no wonder most designers of neural networks go for the single-weight system. As simple as this all seems, I can already tell that all of this will eat up a lot of processing time just to do one iteration of forward propagation, not to mention what it will take for the many iterations needed for the network to learn or do anything. This will not discourage me, though. I will still press on. As I said before, it is only an abstract model that can, with time, be optimized to overcome such challenges. However, if the first steps are not taken, no one will get anywhere.
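
    For reference, a fairly direct C++ translation of the two functions above might look like this (a sketch, assuming each neuron's bounds live in zero-indexed arrays a[t] and b[t]):

        #include <array>

        constexpr int kTransmitters = 8;
        using Bounds = std::array<double, kTransmitters>;  // one bound per transmitter

        // Highest interval: widest range whose lower bound also beats the best so far.
        int intervalH(const Bounds& a, const Bounds& b) {
            int h = 0;
            for (int t = 1; t < kTransmitters; ++t)
                if (b[t] - a[t] > b[h] - a[h] && a[t] > a[h]) h = t;
            return h;
        }

        // Lowest interval: narrowest range whose upper bound is also below the best so far.
        int intervalL(const Bounds& a, const Bounds& b) {
            int h = kTransmitters - 1;
            for (int t = kTransmitters - 2; t >= 0; --t)
                if (b[t] - a[t] < b[h] - a[h] && b[t] < b[h]) h = t;
            return h;
        }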

  • A picture is worth a thousand code lines...

    Rollyn01 • 05/12/2015 at 07:15

    Well, I found a picture to help give a visual. It was initially an .svg file, but I was able to convert it to a .png file for posting. In keeping with proper manners, the relevant info on the picture is as follows:

    "http://commons.wikimedia.org/wiki/File:Colored_neural_network.svg#/media/File:Colored_neural_network.svg">Colored neural network" by Glosser.ca - Own work, Derivative of File:Artificial neural network.svg. Licensed under http://creativecommons.org/licenses/by-sa/3.0">CC BY-SA 3.0 via Wikimedia Commons.

  • I pseudo want to code, it's too hard...

    Rollyn01 • 05/11/2015 at 18:53

    This is the best I can come up with for now. My scattered brain and I can't seem to get it together to come up with something better. Oh well, this is only the beginning. I hope to have a method to determine the highest and lowest intervals that I can sneak in there. This is just the main bulk of how the neurons will be activated by the weights of their connections. How this will all come together is another question, but I'm excited that I've made it this far.

    A simplified pseudo-code for the averaging and activation functions will be something close to the following:

    i = neurons being read (preceding column); j = neurons reading (current column);
    x = first neuron in a column; y = last neuron in a column;
    for j = x to y ++1;
        A_j = 0; F_t = 0 for all t;
        for t = 1 to 8 ++1;
            h1 = 0; h2 = 0; ht = 0;
            for i = x to y ++1;
                h1 = h1 + A_i*W_(i,t);
                h2 = h2 + A_i;
            next i;
            if h2 == 0 then ht = 0 else ht = h1/h2;
            if a_(j,t) ≤ ht and ht ≤ b_(j,t) then F_t = 1 and A_j = 1;
        next t;
        for t = 1 to 8 ++1;
            if F_t = 1 then
                for p = 1 to 8 ++1;
                    if F_p = 1 then change(t, p, j);
                next p;
        next t;
    next j;
    
    change(t, p, j)
        switch (t);
        case
            (t = 1) : b_(j,p) = b_(j,p) + 1;
            (t = 2) : b_(j,p) = b_(j,p) - 1;
            (t = 3) : a_(j,p) = a_(j,p) + 1;
            (t = 4) : a_(j,p) = a_(j,p) - 1;
            (t = 5) : W_(j,intervalH(j)) = W_(j,intervalH(j)) + 1;
            (t = 6) : W_(j,intervalH(j)) = W_(j,intervalH(j)) - 1;
            (t = 7) : W_(j,intervalL(j)) = W_(j,intervalL(j)) + 1;
            (t = 8) : W_(j,intervalL(j)) = W_(j,intervalL(j)) - 1;
        break;
    return
    

    Note: I was pleasantly surprised that the code snippet function recognized my pseudocode as VBScript. I was just throwing things together; who knew?

    Edit: Updated the code to reflect the use of functions that can determine the higher and lower intervals.

    Edit: Updated the code to temporarily deactivate the neurons while they read their input of weights. This makes sure they will only activate when their intervals are triggered, rather than flipping their status on each individual weight check. Even if a neuron was active before and is switched off here, a triggered interval will return it to its active state, because the check reads the activation status of neurons in the preceding layer. So, as long as it was active before, it will go back to its active state, provided nothing has changed that would cause it to deactivate (all of the intervals failing to trigger).
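
    For anyone who prefers real code to pseudocode, here is one possible C++ rendering of the averaging/activation routine and change() above. It's only a sketch under my reading of the pseudocode (Neuron, propagate, and kT are names I made up), not a final implementation:

        #include <array>
        #include <vector>

        constexpr int kT = 8;  // number of abstract neurotransmitters

        struct Neuron {
            std::array<double, kT> W{};  // W[t]: weight emitted for transmitter t
            std::array<double, kT> a{};  // a[t]: lower bound of interval t
            std::array<double, kT> b{};  // b[t]: upper bound of interval t
            bool active = false;         // A_n
        };

        // Highest interval: widest range with the larger lower bound.
        int intervalH(const Neuron& n) {
            int h = 0;
            for (int t = 1; t < kT; ++t)
                if (n.b[t] - n.a[t] > n.b[h] - n.a[h] && n.a[t] > n.a[h]) h = t;
            return h;
        }

        // Lowest interval: narrowest range with the smaller upper bound.
        int intervalL(const Neuron& n) {
            int h = kT - 1;
            for (int t = kT - 2; t >= 0; --t)
                if (n.b[t] - n.a[t] < n.b[h] - n.a[h] && n.b[t] < n.b[h]) h = t;
            return h;
        }

        // Apply the function of transmitter t to triggered interval p of neuron j
        // (cases follow the table in "When one neuron talks to another...",
        // shifted to zero-based indices).
        void change(int t, int p, Neuron& j) {
            switch (t) {
                case 0: j.b[p] += 1; break;             // t=1: increase b
                case 1: j.b[p] -= 1; break;             // t=2: decrease b
                case 2: j.a[p] += 1; break;             // t=3: increase a
                case 3: j.a[p] -= 1; break;             // t=4: decrease a
                case 4: j.W[intervalH(j)] += 1; break;  // t=5: raise weight of highest interval
                case 5: j.W[intervalH(j)] -= 1; break;  // t=6: cut weight of highest interval
                case 6: j.W[intervalL(j)] += 1; break;  // t=7: raise weight of lowest interval
                case 7: j.W[intervalL(j)] -= 1; break;  // t=8: cut weight of lowest interval
            }
        }

        // One forward step from column `prev` into column `curr`.
        void propagate(const std::vector<Neuron>& prev, std::vector<Neuron>& curr) {
            for (Neuron& j : curr) {
                j.active = false;          // deactivate while reading inputs
                std::array<bool, kT> F{};  // F[t]: was interval t triggered?
                for (int t = 0; t < kT; ++t) {
                    double h1 = 0, h2 = 0;
                    for (const Neuron& i : prev)          // only active neurons are read
                        if (i.active) { h1 += i.W[t]; h2 += 1; }
                    double ht = (h2 == 0) ? 0 : h1 / h2;  // average for transmitter t
                    if (j.a[t] <= ht && ht <= j.b[t]) { F[t] = true; j.active = true; }
                }
                // Every triggered transmitter applies its function to every
                // triggered interval of the same neuron.
                for (int t = 0; t < kT; ++t)
                    if (F[t])
                        for (int p = 0; p < kT; ++p)
                            if (F[p]) change(t, p, j);
            }
        }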

  • But I hate math...

    Rollyn01 • 05/11/2015 at 05:33

    As for the equation that will be used to determine whether a neurotransmitter triggers one of the intervals in a neuron, it will be as follows:

    f(t) = \frac{\sum_n A_n W_{n,t}}{\sum_n A_n}

    where the numerator is the sum of the weights of the specific neurotransmitter (t) from the neurons that are active and connected, while the denominator is the count of active neurons (n ranging over the connected neurons, with A_n = 1 when active). This value is compared with the corresponding interval to see if it is within the range to trigger that particular action and render the neuron active. It is just an averaging function. It might need adjusting later, but for now it should suffice for what I'm trying to accomplish. Hopefully, I'll have a flowchart and some pseudocode to post later.
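
    As a quick sanity check with made-up numbers: if exactly three connected neurons are active (A_n = 1) and their weights for transmitter t are 10, 20, and 30, then

    f(t) = \frac{10 + 20 + 30}{3} = 20

    and the receiving neuron triggers on transmitter t only if 20 falls inside the corresponding interval [a_t, b_t].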

    Note: Special thanks goes to @M. Bindhammer for helping me correct the equation's presentation.

  • Cake? One layer or five?

    Rollyn01 • 05/11/2015 at 05:18

    Well, as mentioned before, this project is meant to combine different types of neural networks into one single framework. However, the overall structure will be based on the multilayer perceptron. This topology relies on different layers of neurons, typically visualized as vertical columns, where each neuron in a column is connected to every neuron of the adjacent column. The first perceptron had only two layers: one for input, one for output. More layers were added later so the network could deal with larger, more complex problems (voice analysis and filtering, data mining, etc.), with the additional layers being named "hidden" layers.

    I have settled on three "hidden" layers (with 10 neurons each), plus two layers that will serve as input and output. My project will not be using backpropagation. Any difference in actuation will be fed forward through the network from the receptors (input layer) until it reaches the actuators (output layer), creating a feedback loop. As such, and to keep it a bit simple, there will be twenty receptors and five actuators. The receptors will only have their activation status changed, allowing their weights to be read by other neurons; intervals will not be necessary for them, since intervals are not what activate the receptors. Meanwhile, the actuators will only have intervals that can be triggered to determine whether the actuator is activated or not. They will not possess any weights themselves, as they are the end point of the chain, and only the activation status matters, as with the receptors.
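
    For concreteness, that topology could be pinned down in code as simply as this (a sketch; kLayerSizes is just a name I made up for the sizes settled on above):

        #include <array>

        // 20 receptors -> three hidden columns of 10 neurons each -> 5 actuators.
        constexpr std::array<int, 5> kLayerSizes = {20, 10, 10, 10, 5};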

  • When one neuron talks to another...

    Rollyn01 • 05/08/2015 at 20:44

    Well, I started on how to describe some of the behavior of the neurons on an individual level. Here's the current scoop.

    Each neuron will have a list of variables for use in carrying out the propagation of an input signal.

    W_{n,t};\quad [a_{n,t}, b_{n,t}];\quad A_n

    where:

    • W_{n,t} is the weight of neurotransmitter (t) of a specific neuron (n)
    • [a_{n,t}, b_{n,t}] is the interval the evaluated input has to fall in for the neuron to trigger a specific function
    • A_n is the status of neuron (n), where one means it's active and zero means it's inactive

    List of neurotransmitters (weights), their initial intervals, and the functions they select:

    Weight | Initial interval | Function taken by neuron
    t=1    | a=0, b=100       | Increases b of triggered interval
    t=2    | a=0, b=100       | Decreases b of triggered interval
    t=3    | a=0, b=100       | Increases a of triggered interval
    t=4    | a=0, b=100       | Decreases a of triggered interval
    t=5    | a=0, b=100       | Increases weight of highest interval
    t=6    | a=0, b=100       | Decreases weight of highest interval
    t=7    | a=0, b=100       | Increases weight of lowest interval
    t=8    | a=0, b=100       | Decreases weight of lowest interval

    A neuron will be active only if one or more of its intervals are triggered. When active, its weights can be read by the other neurons connected to it; the weights of inactive neurons are ignored. While the upper and lower bounds of an interval can take on negative values (e.g., a=-100, b=-1), the weights themselves cannot. This is to be more consistent with the notion that a neurotransmitter in biological organisms can either be present in a certain amount or not present at all. It can also help to properly model how certain neurons won't activate even when there's a high amount of a particular neurotransmitter (as with medication that is ineffective in correcting a chemical imbalance).
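
    That non-negativity rule is easy to enforce wherever a weight gets adjusted; a minimal guard in C++ (adjustWeight is a hypothetical helper, not part of the pseudocode above) could look like:

        #include <algorithm>

        // Apply a change to a weight, but never let it drop below zero:
        // a neurotransmitter can be present in some amount or absent, never negative.
        double adjustWeight(double w, double delta) {
            return std::max(0.0, w + delta);
        }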

    That's all I have for now. How the neuron determines the highest/lowest interval or evaluation function to match against the list of intervals hasn't been done yet, but hopefully it will be soon.



Discussions

Bruce Land wrote 10/21/2015 at 11:32

I think that this type of neuron interaction is very likely to occur in real life. There are so-called 'neuromodulators' which overlap with transmitters and change cell function in very complex ways.


Rollyn01 wrote 10/21/2015 at 14:18

It's nice to see that you got my message. Thank you. However, it was my other project that I wanted you to look over: https://hackaday.io/project/8137-prime-factor-reduction-seive. You've already provided a lot of excellent info for this one.


Bruce Land wrote 05/11/2015 at 11:26

The FPGA model partly simulates glutamate neurotransmitter action. There are at least a dozen neurotransmitters and probably over 100 different neurotransmitter receptors. There are nonlinear conductance increasing and decreasing transmitters, as well as modulators which increase/decrease the effectiveness of other transmitters.  


Rollyn01 wrote 05/11/2015 at 15:21

Some of them act as catalysts to others? That would make sense. The Hopfield network was designed to replicate low-energy optimization functions of neural nets. I've often wondered whether this was structurally based, or whether catalysis was also involved in some way. Good to know.


Rollyn01 wrote 05/11/2015 at 01:00

Those projects look very nice. As for the excitation values typically used, I've often wondered why they should only be +1 or -1. I can see how it would simplify the network, but aside from that, does such a model lend itself to truly mimicking how neurons function in the presence of neurotransmitters? The aim of my project is to include, in abstract form, the function of neurotransmitters. This would help others better gauge how the interaction between neurotransmitters and neurons can give rise to certain behaviors of the system as a whole (the human mind). Or, at least, I'm hoping that's what it can lead to. I'm not a neurologist (I barely know anything about human anatomy), so I can't bring that knowledge into it. I just hope it can help start others on the way to doing so.

