I made a remote control for the vehicle using WebRTC.
I have a website hosted on Glitch where I'm running NodeJS+WebRTC.
The first user that enters the website application acts as the WebRTC server and has the controls to drive the client vehicle.
The Jetson in the vehicle has a WiFi adapter connected, and this adapter is connected to mobile ISP data.
When the Jetson verifies it has an internet connection, it opens Chromium and loads the Glitch website as the second user (client). Then it makes the P2P pair with the server, shares the webcam connected to the Jetson, and then activates WebUSB so Chromium can communicate with the Microchip MCU over USB.
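The Jetson-side bootstrap is roughly this idea (a minimal sketch; the Glitch URL is a placeholder and the Chromium flags are just what I'd expect to need, the real details may differ):

```python
# Minimal sketch of the Jetson-side bootstrap. The URL is a placeholder
# (not the real Glitch project) and the Chromium flags are an assumption.
import subprocess
import time
import urllib.request

GLITCH_URL = "https://my-project.glitch.me"  # placeholder

def has_internet(timeout=3):
    """True once the mobile ISP link can actually reach the website."""
    try:
        urllib.request.urlopen(GLITCH_URL, timeout=timeout)
        return True
    except OSError:
        return False

# Wait for the WiFi adapter / mobile data to come up
while not has_internet():
    time.sleep(5)

# Open the site as the second user (client); WebRTC and WebUSB run inside the page
subprocess.Popen([
    "chromium-browser",
    "--kiosk",  # fullscreen, no browser UI
    GLITCH_URL,
])
```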
The server can preview the client's webcam and send commands to it through the WebRTC DataChannel.
Now the vehicle has the following functions:
- Automated driving with detectNet, pursuing silhouettes.
- Short-distance remote control with two ESP8266 modules (access point in the remote control and station in the vehicle).
- Long-distance remote control with WebRTC over the mobile ISP data.
Now I need to see how I'm going to attach the batteries (which I don't have yet) :)
If some day we are locked down again (I hope that doesn't happen) I will be able to see what happens in the street xDDD
Remote control ready and set to RC or USB modes only. I figured I could drive a big vehicle while sitting on it using the remote control directly, so I don't need two separate manual controls.
I made a simple USBTCP class to use without the neural network but still using the OpenCV part.
I can also select the capture source for the OpenCV image (desktop or webcam) and select which communication channel to use (USB or TCP).
For example... I can use the NGBrain class with OpenCV desktop mode and receive data by TCP from Blender about the number of sensors/actions, to create the respective neurons and get their data. Then I send the actions back to Blender by TCP too.
Or I can use the NGBrain class with OpenCV webcam mode and receive data by USB from the master MCUs about the number of sensors/actions, to create the respective neurons and get their data. Then I send the actions to the MCUs by USB too.
It can also be OpenCV webcam mode and TCP... or OpenCV desktop mode and USB...
With the USBTCP class I don't receive data (TCP or USB) to create neurons or use the neural network. It gets the image from OpenCV desktop or webcam mode and receives/sends over TCP or USB with deterministic code.
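The idea of the source/channel selection is more or less this (a simplified sketch; the device paths, ports and the desktop-capture method here are placeholders, not the real code):

```python
# Simplified sketch of the capture-source / channel selection idea.
# Device paths, ports and the desktop-capture method are placeholders.
import cv2
import numpy as np
import serial          # pyserial, for the USB (serial) channel
import socket
from mss import mss    # one way to grab the desktop as an OpenCV image

def open_capture(source):
    """Return a function that yields one BGR frame per call."""
    if source == "webcam":
        cam = cv2.VideoCapture(0)
        return lambda: cam.read()[1]
    elif source == "desktop":
        sct = mss()
        monitor = sct.monitors[1]
        return lambda: cv2.cvtColor(np.array(sct.grab(monitor)), cv2.COLOR_BGRA2BGR)

def open_channel(kind):
    """Return (send, recv) callables for USB (serial) or TCP."""
    if kind == "usb":
        port = serial.Serial("/dev/ttyACM0", 115200, timeout=0.1)
        return port.write, port.readline
    elif kind == "tcp":
        sock = socket.create_connection(("127.0.0.1", 5000))  # e.g. the Blender side
        return sock.sendall, lambda: sock.recv(1024)

# Any combination works: webcam+USB, webcam+TCP, desktop+USB, desktop+TCP
grab = open_capture("webcam")
send, recv = open_channel("usb")
```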
I'm already using the USBTCP class to receive the OpenCV image from the webcam connected to the Jetson Nano and then move the two wheels of a car (the Jetson sits on top) to follow a double-triangle marker that moves.
Now I'm going to use the Jetson Python code to run detectNet so the car chases cats (or other things) instead of the double triangle. Like a M3GAN ñ_ñ
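Something like this is the plan for the chasing part (a rough sketch; the model, the class filter and the steering rule are placeholders, not the final code):

```python
# Rough sketch of the chase idea with jetson-inference; model name,
# class filter and steering rule are placeholders.
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("/dev/video0")  # the USB webcam on the Jetson

while True:
    img = camera.Capture()
    detections = net.Detect(img)
    # Keep only the class we want to chase, e.g. "cat"
    targets = [d for d in detections if net.GetClassDesc(d.ClassID) == "cat"]
    if not targets:
        continue  # nothing to chase; could also send a "stop" command here
    target = max(targets, key=lambda d: d.Area)
    # Steer toward the horizontal center of the detection, like with the marker
    error = target.Center[0] - img.width / 2
    command = b"L" if error < 0 else b"R"
    # Send the command to the MCU the same way USBTCP does (placeholder)
    # usb_port.write(command)
```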
I try to divide the work as much as possible so that the master only has to listen for the data of each associated sensor from the slaves and send it all via WiFi to the PC, then receive the actions, also via WiFi, and pass each one to the slave that has that action associated with it.
One of the 3 slave MCUs is even there just to coordinate the other two, which are one MCU for the gyroscope and one MCU for the servo. That way the information is quickly available and flows without much of a jam in the calculations when the master requests things from the slaves. I have also activated the PLL to reach 48 MHz.
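The PC side of that WiFi link is basically this idea (a rough sketch; the port and the one-line-per-packet message format are made up just to show the flow):

```python
# Rough sketch of the PC side of the master <-> PC WiFi link.
# The port and the message format (one line per packet) are placeholders.
import socket

HOST, PORT = "0.0.0.0", 6000

def decide_action(sensor_idx, value):
    """Placeholder: here the neural network (or deterministic code) would run."""
    return (sensor_idx, value)

def handle(conn):
    buf = b""
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            buf += data
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                # e.g. "S,2,123" = sensor 2 reports value 123 (made-up format)
                kind, idx, value = line.decode().split(",")
                action = decide_action(int(idx), int(value))
                # e.g. "A,1,90" = the slave that owns action 1 should move to 90
                conn.sendall(f"A,{action[0]},{action[1]}\n".encode())

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, _ = srv.accept()  # the master MCU connects over WiFi
    handle(conn)
```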
The idea of using multiple small MCUs instead of a big one was always on my mind. The hardest part is getting them to talk to each other.