Here, you can witness my meanderings regarding so-called Neural Networks. From a position of knowing almost nothing about them, I hope to expand my knowledge through research and experimentation, to reach the stage where I can take part in a Kaggle object-inference competition and live amongst the Gods, or Gladiators, of these networks!
After messing about with some networks for a couple of months in an attempt to design computer vision for machines, I started to notice a few recurring topics and buzzwords such as 'prototxt' and 'weights'.
Curiosity has now got the better of me, and I worked out that the various prototxt files associated with a network describe its structure in reasonably simple, human-readable terms, e.g.:
I presume that this is a Python layer, though I'm not sure; it does not matter for now.
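For anyone else trying to decode these files: a Python layer in a Caffe prototxt generally looks something like the fragment below. The names here are purely illustrative, invented for the example, not taken from any real network:

```protobuf
layer {
  name: "my_layer"        # illustrative name
  type: "Python"
  bottom: "data"          # input blob(s)
  top: "output"           # output blob(s)
  python_param {
    module: "my_module"   # a .py file on the PYTHONPATH
    layer: "MyLayer"      # class implementing setup/reshape/forward/backward
  }
}
```

The `bottom` and `top` fields are what wire layers together: a layer consumes the blobs named in `bottom` and produces the ones named in `top`.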
The fantastic Nvidia DIGITS software can print out a fancy graphical representation of the whole network, and, starting with the renowned bvlc_googlenet.caffemodel, I thought I'd try to hack it and learn something through experimentation.
One of the first things I looked for was symmetry and repetition, with the desire to simplify what initially looks very complicated. I noticed that the above layer describes a 'link' between other blocks of layers, which seem to repeat themselves about six times:
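My guess (and it is only a guess) is that this kind of 'link' layer is one of the Concat layers that joins the parallel branches of an inception block back together before the next block starts. In the googlenet prototxt they look roughly like this:

```protobuf
layer {
  name: "inception_3a/output"
  type: "Concat"
  bottom: "inception_3a/1x1"       # the four parallel branches...
  bottom: "inception_3a/3x3"
  bottom: "inception_3a/5x5"
  bottom: "inception_3a/pool_proj"
  top: "inception_3a/output"       # ...concatenated into one blob
}
```

Since each inception block ends with one of these, counting them is a quick way to count the repeating blocks.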
...... in the massive bvlc_googlenet network:
...... and in this way I managed to simplify it, by removing what looked like about six large blocks of repeating layers, to this:
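Doing this kind of surgery by hand in a text editor gets tedious, so here is a little script along the lines of what could automate it. This is a sketch, assuming standard prototxt brace syntax; the layer names in it are made up for illustration:

```python
import re

# A layer definition starts with `layer {` (possibly with whitespace).
LAYER_START = re.compile(r'\blayer\s*\{')
NAME_FIELD = re.compile(r'name:\s*"([^"]+)"')

def remove_layer_blocks(prototxt, name_prefix):
    """Drop every `layer { ... }` block whose name starts with name_prefix.

    Matches braces by hand while walking the text, because layer
    definitions can contain nested blocks such as convolution_param { ... }.
    """
    out = []
    i = 0
    while i < len(prototxt):
        m = LAYER_START.match(prototxt, i)
        if not m:
            out.append(prototxt[i])  # text between layer blocks: keep as-is
            i += 1
            continue
        # Scan forward to the brace that closes this layer block.
        depth, j = 1, m.end()
        while j < len(prototxt) and depth:
            if prototxt[j] == '{':
                depth += 1
            elif prototxt[j] == '}':
                depth -= 1
            j += 1
        block = prototxt[m.start():j]
        name = NAME_FIELD.search(block)
        if not (name and name.group(1).startswith(name_prefix)):
            out.append(block)  # keep blocks whose name doesn't match
        i = j
    return ''.join(out)
```

One thing this deliberately does not do is rewire the `bottom:`/`top:` references of the layers that came after the removed block, so the network will usually complain until those are patched up by hand as well.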
...... And looking at this diagram very carefully, there's still one big block that repeats, which should also be removable. I tried removing it, but unfortunately it gave this error: