Old laptop wifi was a fail.
Macbook wifi was a fail.
For something which comes nowhere close to a 5 year old laptop, it's impressively packed with 0201's.
The HDMI, wifi, & servos actually fit into all the USB ports with some bending.
Back to problematic USB dongles & a phone interface it was. This also meant the beloved IR remote was useless. This would require yet another enclosure for the battery, confuser & USB dongles. The enclosure would need a phone holder. It probably needs a fan.
The RTL8188 driver on the jetson manages to stay connected, but still drops out for long periods when it's more than a foot away. Like all wifi drivers, if it's not connected it eventually powers down & has to be restarted.
Wifi rapidly degrades beyond a few feet, probably because of the high 2 megabit bitrate. UDP is essential, but the wifi is so bad it might be necessary to use USB tethering. Restarting the app brings it back, but recreating the socket does not. Android might restrict wifi usage in a number of ways.
The MACROSILICON HDMI dongle is able to output 640x480 raw YUV, which kicks the framerates up a notch. It only uses 25% of 1 core so overclocking would make no difference. Compression of JPEG frames for the wifi has to be done in a thread. The HDMI output has to be kept in YUV outside of the GPU. The neural network needs RGB, so the YUV to RGB conversion has to be done in the GPU. This arrangement got the 256x144 model from 6.5 up to 8fps. This stuff definitely needs to be in a library to be shared with truckcam.
A key step in CUDA programming is dumping the error code when your CUDA functions return all 0's.
cudaDeviceSynchronize();
cudaError_t error = cudaGetLastError();
if(error != cudaSuccess) printf("%s\n", cudaGetErrorString(error));
The most common error is 'too many resources requested for launch', which means the kernel needs more registers than the requested block size allows. The goog spits out garbage for this, but lions got it to work by reducing the blockSize argument. This increases the gridSize argument.
Portrait mode is best done by loading 2 models simultaneously. A 160x240 body_25 was created for portrait mode. This runs at 7fps because it has slightly more neurons. It does much better than stretching 256x144. The resident set size when loading 1 model is 1 gig & 2 models is 1.4 gig.
All 4 modes need to be paw debugged. Screen dimensions on the Moto G Pure are different from the Wiki Ride 3. The mane trick is the neural network scans the center 3:2 in portrait mode while it scans the full 16:9 in landscape mode. It might be easier if all the keypoints were scaled to 0-1 instead of pixel coordinates.
That leaves bringing up the servo head with the jetson. The good news is this app is pretty solid after 3 years.
There is a person outline detector which might give better results. Testing different models is long & hard. The entire program has to be rewritten. They have to detect sideways & overlapping animals.