scalable hardware for neural network system


  • quick projects

    3drobert · 04/07/2022 at 21:22 · 0 comments

    I have some delay with this project because I'm still preparing my new cave and tools. And working at my day job :)

    I have also finished some quick projects along the way:

    - a tester to measure kilovolts

    - a controller for the shaker and the resin scatterer

    - revisiting BJT amplifiers (pre and power) a little, and learning about speakers, triodes, and pentodes

    - welded a frame to mount a three-phase motor and a grinder

  • Move finished

    3drobert · 03/08/2022 at 17:26 · 0 comments

    Finally in my new house/laboratory, but one week without internet thanks to the internet companies (Movistar, at the moment).

  • LTP & LTD

    3drobert · 02/17/2022 at 18:32 · 0 comments

    I'm still a neophyte about LTP and I'm currently reading and investigating. This is for things other than backprop and the robot part I currently have; it has no relation to those "for the moment".

    About LTP: I previously commented about a filter on the weights, but after reading a bit more, what I said was not well oriented.

    Long-term potentiation (LTP) happens with high-frequency pre- & post-synaptic activity.

    Long-term depression (LTD) happens with low-frequency pre- & post-synaptic activity.

    Simply put, I will need to detect high and low pre/post activity and its frequency value:

    10 Hz = 0.1 = LTD

    100 Hz = 1.0 = LTP

    Then I only multiply this delta value into the weighted input (value += inputNeuron * weight * 0.1) for a controlled value propagation, like momentum or a capacitor (low farads = the voltage rises more quickly = LTP; high farads = the voltage rises more slowly = LTD). I think I'll start at 0.5... and let it drift back to 0.5 "with time" :)
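    As a rough sketch, the frequency-gated update above could look like this (the function names, and the linear 10 Hz → 0.1 / 100 Hz → 1.0 mapping with clamping, are my illustrative assumptions, not the project's actual code):

```python
def ltp_ltd_delta(freq_hz, f_min=10.0, f_max=100.0):
    """Map coincident pre/post-synaptic firing frequency to [0.1, 1.0]."""
    f = min(max(freq_hz, f_min), f_max)   # clamp to the LTD..LTP band
    return f / f_max                      # 10 Hz -> 0.1 (LTD), 100 Hz -> 1.0 (LTP)

def propagate(input_neuron, weight, freq_hz, value=0.0):
    """Accumulate like a capacitor: higher activity frequency charges faster."""
    return value + input_neuron * weight * ltp_ltd_delta(freq_hz)

print(ltp_ltd_delta(10))         # 0.1 -> LTD
print(ltp_ltd_delta(100))        # 1.0 -> LTP
print(propagate(1.0, 0.5, 100))  # 0.5
```

    The drift of the delta back toward 0.5 over time, mentioned above, could be layered on top of this.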

    I will use this thread for updates about this. :D

  • manual plasticity

    3drobert · 02/13/2022 at 05:43 · 0 comments

    For future experiments on the way to achieving some LTP system.

    Neurons can join or leave dynamically according to things I'm going to test in this mode...

    Only hidden neurons can "explore"; when a neuron is allowed to join and it is near some input neuron, it acts as a "parent", otherwise it acts as a child.

    As a note: I have an ancient GT220 (3 displays)...

    Maybe I can add some value, like a band-pass filter, that adapts over time and displaces itself toward the value/range used most habitually (according to amplitude, variations over time...), to later be multiplied with a neuron output or weight.

    I think storing this value alongside the weight is more adequate if it only updates when pre-synaptic and post-synaptic activity happen at the same time, something only detectable through their relation. Such updates would create a kind of complete route from some input value to some output, sensitizing the net to a past/specific input type.

    The value then stays frozen, amplifying only the specific range of input values that was used most, so it works as a type of memory for a specific input or task, without needing to disconnect weights for a new task, and with much less effect on other neurons whose weights are trained for multiple tasks.

    The thickness would narrow over time, making displacement ever more difficult, which allows more refinement, force, contrast, and differentiation for that route or input state. It also allows putting new neurons on the net for this or other tasks (with wide thickness and starting at low amplitude), because the differentiated neuron only accepts specific weight values and is transparent to other values: its thickness is already short, amplifying only where gradient descent led it.

    I also want to test what happens if the neuron detects these differentiated weights and then holds its output for longer according to how many of them there are.
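    A minimal sketch of that idea, under my own assumed names and parameters (none of this is the project's code): each weight carries a band-pass gate whose center drifts toward the input values seen during coincident pre/post activity, and whose thickness narrows over time, so the weight ends up amplifying only its habitual range and staying transparent elsewhere:

```python
class BandPassWeight:
    """A weight with an adaptive band-pass gate (illustrative sketch)."""

    def __init__(self, weight, center=0.5, width=1.0):
        self.weight = weight
        self.center = center   # input value/range the weight is tuned to
        self.width = width     # "thickness"; narrows as the weight specializes

    def gate(self, x):
        """Transmission factor: 1.0 at the center, falling to 0.0 outside the band."""
        return max(0.0, 1.0 - abs(x - self.center) / self.width)

    def update(self, x, coincident, rate=0.05, shrink=0.99, min_width=0.05):
        # Only adapt when pre- and post-synaptic activity happen together.
        if coincident:
            self.center += rate * (x - self.center)          # drift toward habitual value
            self.width = max(min_width, self.width * shrink)  # narrow the thickness

    def forward(self, x):
        return x * self.weight * self.gate(x)

w = BandPassWeight(weight=1.0)
for _ in range(200):
    w.update(x=0.8, coincident=True)
# The gate is now narrow and centered near 0.8; inputs far from it pass ~0.
```

    With a narrow band, inputs outside the learned range are multiplied by a near-zero gate, so the differentiated weight barely disturbs neurons trained for other tasks.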

  • allowing to load stored experiences

    3drobert · 02/11/2022 at 21:03 · 0 comments

    To test some models faster, I can now load experiences from a file that saved the last experiences with the same input.

    Plus some UI organization.

    It looks like this: [screenshot]

  • specification updates

    3drobert · 02/09/2022 at 01:41 · 0 comments

    More specification information about how gbrain & grel work.

  • training continue

    3drobert · 02/08/2022 at 00:54 · 0 comments

    While I'm training I'm adding new things, trying not to spoil the training. Now I can set any layer to be used as input or output and keep receiving or sending over TCP. I can also create new layers, connect them...

    Now I'm showing the inference for all channels while learning is on: 5 experiences injected at the same time (×2 = batch of 10).
    The right black margin is the error obtained for each channel (not visible here because the error is low).

    Here only one channel is used to perform a single inference. The other channels show some output, but only because they are receiving from the bias neuron.

    Then I saw that the other channels weren't getting the error and their batch was a mess. Fixed up too.

    And looking at this last one I spotted another big problem, now fixed too :)

  • updates

    3drobert · 02/07/2022 at 03:28 · 0 comments

    Many more enhancements, important corrections, fixed oversights, refactorings... plus realizing that another thing I needed was a long training run, especially watching the reward plot.

    For the moment it's looking good, and it seems to learn correctly instead of unlearning :D

  • Reward system enhancements

    3drobert · 02/02/2022 at 19:05 · 0 comments

    Detecting two triangle shapes to get a line, and being able to perform a dot product to get the angle.
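    The angle step can be sketched like this (a generic dot-product formula with assumed vector names; the OpenCV triangle-detection part is omitted):

```python
import math

def angle_between(u, v):
    """Angle in degrees between two 2-D direction vectors, via the dot product."""
    dot = u[0] * v[0] + u[1] * v[1]
    cos_theta = dot / (math.hypot(*u) * math.hypot(*v))
    # Clamp against tiny floating-point overshoot before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_theta))))

print(angle_between((1, 0), (0, 1)))  # close to 90.0
```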

  • Releasing OpenCV windows

    3drobert · 02/02/2022 at 16:01 · 0 comments

    OpenCV images & controls rendered inside an OpenGL context.

View all 89 project logs


dearuserhron wrote 09/14/2021 at 21:17

The idea of using multiple small MCUs instead of one big one was always on my mind. The hardest part is getting them to talk to each other.


3drobert wrote 09/14/2021 at 21:40

I try to divide the work as much as possible, so the master only has to listen for each slave's associated sensor data and send it all via WiFi to the PC, then receive the action, also via WiFi, and hand it to the slave that has that action associated with it.

One of each slave's 3 MCUs is there just to coordinate the other two: one MCU for the gyroscope and the other for the servo. That way the information is quickly available and flows without much of a jam from calculations when the master requests things from the slaves.
I have also activated the PLL to reach 48 MHz.
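A toy sketch of that division of labor (all class and attribute names are mine, purely illustrative; the real system runs on MCUs over WiFi/TCP, not Python): each slave owns exactly one sensor or one actuator, and the master only gathers readings and routes actions:

```python
class Slave:
    """One node owning a single sensor or a single actuator."""

    def __init__(self, sensor=None, actuator=None):
        self.sensor = sensor      # e.g. "gyro"
        self.actuator = actuator  # e.g. "servo"

    def read(self):
        # Placeholder for an MCU polling its dedicated sensor.
        return (self.sensor, 0.0)

class Master:
    """Gathers sensor readings and routes actions; does no heavy computation."""

    def __init__(self, slaves):
        self.slaves = slaves

    def gather(self):
        # Collect one reading per sensing slave (would go out over WiFi to the PC).
        return [s.read() for s in self.slaves if s.sensor]

    def dispatch(self, target, action):
        # Route an action from the PC to the slave owning that actuator.
        for s in self.slaves:
            if s.actuator == target:
                return f"{target} <- {action}"
        raise KeyError(target)

m = Master([Slave(sensor="gyro"), Slave(actuator="servo")])
```

Keeping each MCU's job this narrow is what lets the data stay fresh: no node ever blocks on another's computation.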


