I made a simple USBTCP class that can be used without the neural network, while still using the OpenCV part.
I can also select the capture source for the OpenCV image (desktop or webcam) and choose which communication channel to use (USB or TCP).
For example... I can use the NGBrain class in OpenCV desktop mode and receive data over TCP from Blender about the number of sensors/actions, in order to create the respective neurons and get their data. Then I send the actions back to Blender over TCP too.
Or I can use the NGBrain class in OpenCV webcam mode and receive data over USB from the master MCUs about the number of sensors/actions, create the respective neurons and get their data, then send the actions to the MCUs over USB too.
It can also be OpenCV webcam mode with TCP... or OpenCV desktop mode with USB...
With the USBTCP class I don't receive data (over TCP or USB) to create neurons or run the neural network. It takes the image from OpenCV in desktop or webcam mode and receives/sends over TCP or USB using deterministic code.
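A rough sketch of how that source/channel selection could look. The function names (`open_capture`, `open_channel`), the `mss` library for desktop capture, and the pyserial dependency for the USB link are my own assumptions for illustration, not the project's actual code:

```python
# Hypothetical selection helpers. Imports are deferred so each mode only
# needs its own dependency (OpenCV, mss, pyserial) to be installed.

def open_capture(source: str):
    """Return a frame grabber for 'webcam' or 'desktop' mode."""
    if source == "webcam":
        import cv2                      # OpenCV
        return cv2.VideoCapture(0)      # first attached camera
    if source == "desktop":
        import mss                      # assumed screen-grab library
        return mss.mss()                # desktop/screen grabber
    raise ValueError(f"unknown capture source: {source}")

def open_channel(kind: str, **kw):
    """Return a byte channel for 'tcp' or 'usb' communication."""
    if kind == "tcp":
        import socket
        return socket.create_connection((kw["host"], kw["port"]))
    if kind == "usb":
        import serial                   # pyserial, for the MCU serial link
        return serial.Serial(kw["port"], kw.get("baud", 115200))
    raise ValueError(f"unknown channel: {kind}")
```

Deferring the imports keeps the class usable on machines that only have one of the two stacks installed (e.g. a Jetson with OpenCV but no screen-grab library).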
I'm already using the USBTCP class to receive the OpenCV webcam image on a Jetson Nano and then move the two wheels of a car (the Jetson rides on top) to follow a moving double-triangle marker.
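The deterministic follow logic can be sketched as a simple differential-drive steering rule: given the marker's horizontal position in the frame, speed one wheel up and slow the other down so the car turns toward it. The function name and gains here are illustrative, not the project's actual code:

```python
def wheel_speeds(marker_x: float, frame_width: int,
                 base: float = 0.5, gain: float = 1.0):
    """Differential drive: steer toward marker_x (in pixels).

    Returns (left, right) wheel speeds in [0, 1].
    """
    # Normalized offset in [-1, 1]: negative = marker left of center.
    offset = (marker_x - frame_width / 2) / (frame_width / 2)
    left = base + gain * offset * base    # faster left wheel turns right
    right = base - gain * offset * base   # slower right wheel turns right
    clamp = lambda v: max(0.0, min(1.0, v))
    return clamp(left), clamp(right)
```

With the marker centered the wheels run at equal speed; as it drifts toward an edge of the frame, the speed difference grows proportionally.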
Now I'm going to use the Jetson Python code to run detectNet, so the car chases cats (or other targets) instead of the double triangle. Like M3GAN ñ_ñ
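Swapping the marker for detectNet mostly means replacing the detection step and keeping the steering the same. A hedged sketch of the target-picking part: on the Jetson, detections would come from jetson-inference (`net = detectNet("ssd-mobilenet-v2"); dets = net.Detect(img)`, each detection carrying a class, a confidence, and a center), but here I model each one as a plain `(label, confidence, center_x)` tuple so the selection logic stays hardware-independent. The function name and tuple layout are my own assumptions:

```python
def pick_target(detections, wanted="cat"):
    """Return center_x of the most confident `wanted` detection, or None.

    `detections` is a list of (label, confidence, center_x) tuples, as
    would be extracted from detectNet results on the Jetson.
    """
    best = None
    for label, conf, cx in detections:
        if label == wanted and (best is None or conf > best[0]):
            best = (conf, cx)
    return None if best is None else best[1]
```

The returned center x would then feed the same wheel-steering rule used for the double-triangle marker; returning `None` (no cat in frame) could stop the car or trigger a search spin.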