I'm still a neophyte about LTP and I'm currently reading and investigating. This is for other things, different from backprop and from the robot part I currently have; it has no relation to them "for the moment".
About LTP, I was commenting about a filter on weights, but after reading a bit more, what I said wasn't well oriented.
Long term potentiation happens on high frequency pre & post synaptic activity.
Long term depression happens on low frequency pre & post synaptic activity.
Simply, I will need to detect high and low pre/post activity and its frequency value:
10Hz = 0.1 = LTD
100Hz = 1.0 = LTP
then only multiply this delta value by the weight value (value += inputNeuron * weight * 0.1) for a controlled value propagation, like momentum or a capacitor (low farads = voltage increases more quickly = LTP; high farads = voltage increases more slowly = LTD), starting at 0.5 I think... and getting back to 0.5 "with time" :)
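A minimal sketch of that idea, assuming the pre/post activity frequency is already normalized to 0..1 as above; the function names and the decay rate are my own assumptions, not anything from a real implementation:

```python
# Sketch of frequency-gated plasticity: the normalized pre/post activity
# frequency (0.1 ~ 10 Hz -> LTD, 1.0 ~ 100 Hz -> LTP) pulls an efficacy
# value up or down, and the efficacy slowly leaks back toward the resting
# 0.5, like a capacitor charging and discharging.

REST = 0.5    # resting efficacy, the starting point
DECAY = 0.01  # how fast efficacy returns to REST "with time"

def step(value, input_neuron, weight, efficacy, freq_norm):
    """One update: propagate scaled by efficacy, then move efficacy
    toward the frequency signal and let it leak back to REST."""
    value += input_neuron * weight * efficacy * 0.1  # controlled propagation
    efficacy += (freq_norm - efficacy) * 0.1         # charge toward LTP/LTD level
    efficacy += (REST - efficacy) * DECAY            # slow leak back to 0.5
    return value, efficacy

# Example: sustained high-frequency activity (1.0) drives efficacy above 0.5,
# so the same weight propagates more than it would under low-frequency (0.1) activity.
value, eff = 0.0, REST
for _ in range(50):
    value, eff = step(value, input_neuron=1.0, weight=0.8, efficacy=eff, freq_norm=1.0)
```

The leak term is what makes it "get back to 0.5 with time": with no activity signal, efficacy drifts back to rest instead of staying potentiated forever.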
for future experiments on the way to achieving some LTP system.
Neurons can join or leave dynamically according to things I'm going to test in this mode...
Only hidden neurons can "explore"; when one is allowed to join and it is near some input neuron, it acts as a "parent", otherwise it acts as a child.
As a note: I have an ancient GT220 (3 displays)...
Maybe I can add some value like a band-pass filter that adapts over time and gets displaced toward the value/range most habitually used (according to amplitude, variations over time...), to multiply afterwards with a neuron output or weight.
I think storing this value along with the weight is more adequate if it only updates when presynaptic and postsynaptic activity happen at the same time, something only detectable through their relation. This value update would create some kind of complete route from some input value to some output, sensitizing the net for a past/specific input type.
It stays frozen, only amplifying the specific range of input values that was most used, making the function act like some type of memory for a specific input or task without needing to disconnect weights for a new task, and it would have much less effect over other neurons whose weights are already trained for multiple tasks.
The thickness would narrow over time, making displacement ever more difficult, which allows more refinement, force, contrast and differentiation for that route or input state. This allows putting new neurons on the net for this or other tasks (with wide thickness and starting with low amplitude), because the differentiated neuron only accepts specific weight values and is transparent for other values, since the thickness is already short, magnifying only where gradient descent led it.
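A toy version of that band-pass idea, assuming the gate is a Gaussian over the incoming value whose center drifts toward habitually seen values while its width ("thickness") narrows and its amplitude grows; all the constants and names here are my own guesses for illustration:

```python
import math

class BandPassGate:
    """Gate that starts wide with low amplitude, then drifts its center
    toward the habitually used input range and narrows over time, ending
    up amplifying only one specific range while staying nearly
    transparent (close to zero) for everything else."""

    def __init__(self, center=0.0, width=2.0, amplitude=0.2):
        self.center = center
        self.width = width          # "thickness" of the pass band
        self.amplitude = amplitude

    def gate(self, x):
        # Gaussian response: maximal at the center, falling off with distance.
        return self.amplitude * math.exp(-((x - self.center) / self.width) ** 2)

    def adapt(self, x, rate=0.05, narrow=0.995, grow=1.01):
        # Drift the center toward the observed value, narrow the band,
        # and raise the amplitude: differentiation/refinement over time.
        self.center += rate * (x - self.center)
        self.width = max(self.width * narrow, 0.05)
        self.amplitude = min(self.amplitude * grow, 1.0)

g = BandPassGate()
for _ in range(500):
    g.adapt(0.7)         # input value used habitually
resp_in = g.gate(0.7)    # inside the refined band: amplified
resp_out = g.gate(-1.5)  # outside the band: nearly transparent
```

A fresh neuron would start like `BandPassGate()` (wide, low amplitude) and barely disturb the trained ones, while the differentiated gate ends up responding only around its learned range.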
I also want to test what happens if these differentiated weights are detected by the neuron, which could then hold its output for longer according to their quantity.
While I'm training I'm adding new things, trying not to spoil the training. Now I can set any layer to be used as input or output and continue receiving or sending by TCP. Also I can create new layers, connect them...
Now I'm showing the inference for all channels when learning is on. 5 experiences injected at the same time (×2 = batch of 10). The right black margin is the obtained error for each channel (not appreciable because the error is low).
Here only one channel is used to perform a single inference. The other channels show some output, but that's because they are receiving from the bias neuron.
Many more enhancements, important corrections, oversights, refactorings... and realizing that another thing I needed is a long training run, especially to watch the reward plot.
For the moment it's looking good, and it seems to learn correctly instead of unlearning :D
I try to divide the work as much as possible, so that the master only has to listen to the data of its associated sensor from the slaves and send it all via WiFi to the PC, then receive the action, also via WiFi, and indicate it to the slave that has that action associated with it.
Even one of the 3 MCUs of the slaves is just there to coordinate the other two, which are one MCU for the gyroscope and the other MCU for the servo, so that the information is quickly available and flows without much of a jam in calculations when the master requests things from the slaves. I have also activated the PLL to reach 48 MHz.
The idea of using multiple small MCUs instead of a big one was always on my mind. The hardest part is getting them to talk to each other.
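For the talking part, here is a sketch of how the packets could look on the PC side, assuming a hypothetical little-endian binary format (sensor id plus gyroscope readings one way, actuator id plus servo angle the other); the real firmware format is surely different:

```python
import struct

# Hypothetical wire format, little-endian, no padding:
#   sensor packet:  <Bhhh>  sensor id (uint8), gyro x/y/z (int16 each)
#   action packet:  <Bh>    actuator id (uint8), servo angle in tenths of a degree (int16)

def parse_sensor(packet: bytes):
    """Unpack a sensor packet forwarded by the master over WiFi."""
    sensor_id, gx, gy, gz = struct.unpack('<Bhhh', packet)
    return sensor_id, (gx, gy, gz)

def pack_action(actuator_id: int, angle_deg: float) -> bytes:
    """Pack an action for the master to route to the matching slave."""
    return struct.pack('<Bh', actuator_id, int(round(angle_deg * 10)))

# Round-trip example with a fake gyro reading and a servo command.
raw = struct.pack('<Bhhh', 1, 120, -45, 30)
sid, gyro = parse_sensor(raw)
action = pack_action(2, 35.5)
```

Keeping the packets tiny and fixed-size like this is one way to let the slaves answer the master quickly without parsing overhead, which matches the goal of information flowing without jams.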