
Build and Deploy a Custom Mobile SSD Model for Raspberry Pi

A project log for Ai Equiped Wasp (and Asian Hornet) Sentry Gun

A powerful laser guided by cameras will vaporize these pests in flight. Hopefully.

Capt. Flatus O'Flaherty ☠ • 01/13/2019 at 14:22

This was a long and tedious three-stage process: building a specialised MobileNet-SSD version of caffe, converting the resulting trained model to Intel OpenVino format, and then deploying it on the Raspberry Pi. Fortunately, the results were 'not too bad', although far too slow to guide anything other than a laser on a gimbal whilst hoping that the wasp remains stationary long enough to zap it. Certainly not fast enough to zap them 'in flight'.

https://github.com/chuanqi305/MobileNet-SSD 

Step 1 is to clone the particular flavour of caffe needed to train MobileNet-SSD:


$ sudo apt-get update && sudo apt-get upgrade
$ sudo apt install git
$ git clone https://github.com/weiliu89/caffe.git
$ cd caffe
$ git checkout ssd

Next is to configure the Makefile.config file. Since I was using the Jetson TX2 for training, mine looked like this:

Makefile.config:

INCLUDE_DIRS += /usr/include/hdf5/serial/
LIBRARY_DIRS += /usr/lib/aarch64-linux-gnu/hdf5/serial
CUDA_ARCH := -gencode arch=compute_30,code=sm_30 \
             -gencode arch=compute_35,code=sm_35 \
             -gencode arch=compute_50,code=sm_50 \
             -gencode arch=compute_52,code=sm_52 \
             -gencode arch=compute_61,code=sm_61 \
             -gencode arch=compute_62,code=sm_62 \
             -gencode arch=compute_62,code=compute_62

BLAS := atlas
# BLAS := open

After renaming Makefile.config.example to Makefile.config, I installed a whole load of dependencies:

$ sudo apt-get install libboost-system-dev libboost-thread-dev libgflags-dev libgoogle-glog-dev libhdf5-serial-dev libleveldb-dev liblmdb-dev libopencv-dev libsnappy-dev python-all-dev python-dev python-h5py python-matplotlib python-numpy python-opencv python-pil python-pip python-pydot python-scipy python-skimage python-sklearn

$ sudo apt-get install python-setuptools
$ sudo apt-get install autoconf automake libtool curl make g++ unzip
$ sudo apt-get install protobuf-compiler

$ export PROTOBUF_ROOT=~/protobuf
$ git clone https://github.com/google/protobuf.git $PROTOBUF_ROOT -b '3.2.x'
$ cd $PROTOBUF_ROOT
$ ./autogen.sh
$ ./configure
$ make "-j$(nproc)"
$ sudo make install
$ sudo ldconfig
$ cd $PROTOBUF_ROOT/python
$ sudo python setup.py install --cpp_implementation

$ sudo apt-get install --no-install-recommends build-essential cmake git gfortran libatlas-base-dev libboost-filesystem-dev libboost-python-dev

$ sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libboost-all-dev libhdf5-serial-dev \
libgflags-dev libgoogle-glog-dev liblmdb-dev protobuf-compiler


x86_64 desktop only:

$ sudo ln -s /usr/lib/x86_64-linux-gnu/libhdf5_serial.so.10.1.0 /usr/lib/x86_64-linux-gnu/libhdf5.so
$ sudo ln -s /usr/lib/x86_64-linux-gnu/libhdf5_serial_hl.so.10.0.2 /usr/lib/x86_64-linux-gnu/libhdf5_hl.so

Then I set some paths:

$ export CAFFE_ROOT=/home/tegwyn/caffe
$ export PYTHONPATH=/home/tegwyn/caffe/python:$PYTHONPATH

… Which I had to perform every bash session, as I was too lazy to write it into .bashrc.

Jetson TX2 ARM only: 

$ sudo ~/jetson_clocks.sh && sudo nvpmodel -m0
$ sudo ln -s /usr/lib/aarch64-linux-gnu/libhdf5_serial.so.10.1.0 /usr/lib/aarch64-linux-gnu/libhdf5.so
$ sudo ln -s /usr/lib/aarch64-linux-gnu/libhdf5_serial_hl.so.10.0.2 /usr/lib/aarch64-linux-gnu/libhdf5_hl.so

Next, actually build and test this flavour of caffe:

$ cd && cd caffe && make -j8
# Make sure to include $CAFFE_ROOT/python to your PYTHONPATH.
$ make py
$ make test -j8
# (Optional)
$ make runtest -j8

Testing can go on for up to an hour, and my run eventually failed when it tried to process 50 boxes simultaneously, which overloaded the system! Otherwise it was all OK.

Now we can download some image data to test the ability to train the MobileNet-SSD network:

$ mkdir -p /home/tegwyn/data && cd /home/tegwyn/data
$ wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
$ wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
$ wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar
# Extract the data.
$ cd /home/tegwyn/data && tar -xvf VOCtrainval_11-May-2012.tar
$ tar -xvf VOCtrainval_06-Nov-2007.tar
$ tar -xvf VOCtest_06-Nov-2007.tar

Lastly, create a database and then start to train:

Create database:
$ cd $CAFFE_ROOT
$ ./data/VOC0712/create_list.sh
$ ./data/VOC0712/create_data.sh

Train and evaluate:
$ cd $CAFFE_ROOT
$ python examples/ssd/ssd_pascal.py

Next, swap out the images from above for the wasp images and labels, and train. This was not as easy as it might seem, as there are 21 different categories and I only wanted one. Eventually, after a lot of hacking about, I got it to work, but I can't remember the exact procedure :(. Also, the Python script (examples/ssd/ssd_pascal.py) had to be edited to stop it attempting to calculate mAP to test the accuracy. Batch and accumulation sizes were both set to 16, which made the most of the Jetson's memory.
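
One part of that hacking is cutting the labelmap down to a single class: create_data.sh maps class names to integer labels via data/VOC0712/labelmap_voc.prototxt, so a wasps-only dataset needs just the background entry plus one class. A rough sketch (the item format is copied from the stock VOC labelmap; the 'wasp' name and output filename here are illustrative, not necessarily what I used):

```python
# A minimal labelmap for a one-class (wasp) dataset, in the format of
# data/VOC0712/labelmap_voc.prototxt. Label 0 must remain the background
# class; the "wasp" name and output filename are illustrative.
labelmap = """item {
  name: "none_of_the_above"
  label: 0
  display_name: "background"
}
item {
  name: "wasp"
  label: 1
  display_name: "wasp"
}
"""

with open("labelmap_wasp.prototxt", "w") as f:
    f.write(labelmap)
```

The data-prep scripts (and the labelmap path referenced near the top of ssd_pascal.py) then need pointing at this file instead of the 21-class VOC one.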

Step 2: Use the Model Optimizer script in OpenVino to convert from caffe format to a .bin and .xml pair (https://software.intel.com/en-us/articles/OpenVINO-Using-Caffe):

cd /opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/

python3 mo_caffe.py --data_type=FP16 --input_model /home/tegwyn/Desktop/SSD_300_WASP01/VGG_VOC0712_SSD_300x300_iter_20211.caffemodel

[ SUCCESS ] XML file: /opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/./VGG_VOC0712_SSD_300x300_iter_20211.xml
[ SUCCESS ] BIN file: /opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/./VGG_VOC0712_SSD_300x300_iter_20211.bin 

Step 3: Deploy on the Raspberry Pi (https://software.intel.com/en-us/articles/OpenVINO-Install-RaspberryPI):

This step was actually relatively easy and the software installed nice and slick.

sed -i "s|/home/pi/inference_engine_vpu_arm/|$(pwd)/inference_engine_vpu_arm|" inference_engine_vpu_arm/bin/setupvars.sh

source <OVINO_INSTALLDIR>/bin/setupvars.sh

Edit the bash.bashrc file with the vi editor so that setupvars.sh is sourced in every bash session:

$ sudo vi /etc/bash.bashrc

Add source /home/pi/inference_engine_vpu_arm/bin/setupvars.sh to the bottom.

I then downloaded the Intel pretrained models to /home/pi/intel_models/ (~350 MB).

To run the interactive face detection demo (face, age/gender, emotion and head-pose models) using the Neural Compute Stick 2:

cd /home/pi/inference_engine_vpu_arm/deployment_tools/inference_engine/samples/build/armv7l/Release
./interactive_face_detection_demo -d MYRIAD -m /home/pi/intel_models/face-detection-retail-0004/FP16/face-detection-retail-0004.xml -d_ag MYRIAD -m_ag /home/pi/intel_models/age-gender-recognition-retail-0013/FP16/age-gender-recognition-retail-0013.xml -d_em MYRIAD -m_em /home/pi/intel_models/emotions-recognition-retail-0003/FP16/emotions-recognition-retail-0003.xml -d_hp MYRIAD -m_hp /home/pi/intel_models/head-pose-estimation-adas-0001/FP16/head-pose-estimation-adas-0001.xml 

  ............ WORKS !!!!

I then built the object_detection_sample_ssd sample and ran it on one .bmp image of a wasp:

cd /home/pi/inference_engine_vpu_arm/deployment_tools/inference_engine/samples/build/armv7l/Release
./object_detection_sample_ssd -d MYRIAD -i /home/pi/Desktop/SSD_300_WASP01/test_images/wasp02.bmp -m /home/pi/Desktop/SSD_300_WASP01/./VGG_VOC0712_SSD_300x300_iter_20211.xml   

 The result was a new 'inferred' .bmp image and the following text output:

[0,1] element, prob = 1    (8.82812,228.438)-(205.312,409.375) batch id : 0 WILL BE PRINTED!
[1,1] element, prob = 1    (163.75,202.812)-(362.812,420.312) batch id : 0 WILL BE PRINTED!
[2,1] element, prob = 1    (390.938,292.031)-(626.562,505.625) batch id : 0 WILL BE PRINTED!
[3,1] element, prob = 1    (160.625,386.25)-(382.188,623.125) batch id : 0 WILL BE PRINTED!
[4,1] element, prob = 0.999023    (7.8125,372.188)-(173.75,577.188) batch id : 0 WILL BE PRINTED!
[5,1] element, prob = 0.99707    (282.188,120.078)-(492.188,343.438) batch id : 0 WILL BE PRINTED!
[6,1] element, prob = 0.161255    (210.312,148.359)-(433.438,392.812) batch id : 0
[7,1] element, prob = 0.152832    (165,251.562)-(284.688,350.938) batch id : 0
[8,1] element, prob = 0.0758667    (230.781,305.625)-(328.75,431.25) batch id : 0
[9,1] element, prob = 0.0470276    (251.25,177.5)-(699.375,473.125) batch id : 0
[10,1] element, prob = 0.0458679    (2.1875,287.5)-(180.625,483.75) batch id : 0
[11,1] element, prob = 0.0388184    (219.062,294.375)-(510.938,445.625) batch id : 0
[12,1] element, prob = 0.0379333    (1.09375,404.062)-(52.1094,500.938) batch id : 0
[13,1] element, prob = 0.0323792    (89.0625,223.75)-(275.312,406.875) batch id : 0
[14,1] element, prob = 0.0202637    (295,202.969)-(451.25,503.75) batch id : 0
[15,1] element, prob = 0.0196075    (71.4844,375.938)-(289.062,617.812) batch id : 0
[16,1] element, prob = 0.0182953    (232.344,380)-(701.25,651.875) batch id : 0
[17,1] element, prob = 0.0160522    (393.75,131.875)-(618.125,342.5) batch id : 0
[18,1] element, prob = 0.0159302    (142.969,233.75)-(244.531,336.25) batch id : 0
[19,1] element, prob = 0.0155487    (288.125,263.438)-(371.875,375.938) batch id : 0
[20,1] element, prob = 0.0140076    (-69.6875,386.562)-(381.25,660.625) batch id : 0
[21,1] element, prob = 0.0131454    (154.375,432.5)-(478.75,731.875) batch id : 0
[ INFO ] Image out_0.bmp created!
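
For guiding the gimbal later on, the interesting lines are the confident ones. A quick sketch of filtering the sample's text output down to high-probability boxes and their centre points (the line format is taken from the output above; the 0.5 threshold and centre-point aiming are my own choices, not part of the sample):

```python
import re

# Matches "prob = <p>    (<x1>,<y1>)-(<x2>,<y2>)" in the sample's output.
DETECTION = re.compile(
    r"prob = ([\d.]+)\s+\(([-\d.]+),([-\d.]+)\)-\(([-\d.]+),([-\d.]+)\)")

# A few lines copied verbatim from the output above, for illustration.
sample_lines = [
    "[0,1] element, prob = 1    (8.82812,228.438)-(205.312,409.375) batch id : 0 WILL BE PRINTED!",
    "[5,1] element, prob = 0.99707    (282.188,120.078)-(492.188,343.438) batch id : 0 WILL BE PRINTED!",
    "[6,1] element, prob = 0.161255    (210.312,148.359)-(433.438,392.812) batch id : 0",
]

detections = []
for line in sample_lines:
    m = DETECTION.search(line)
    if m:
        prob, x1, y1, x2, y2 = (float(g) for g in m.groups())
        if prob >= 0.5:  # illustrative confidence threshold
            # Centre of the box - the point a turret would aim at.
            detections.append((prob, ((x1 + x2) / 2, (y1 + y2) / 2)))

for prob, centre in detections:
    print(prob, centre)
```

Only the two boxes above the threshold survive; the low-probability clutter at the bottom of the list gets dropped.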

Success! (After several days work)
