
4. Testing The Machine Learning Model With OpenMV

A project log for Smart Insect Repellent System

Using Machine Learning To Detect And Repel Dangerous Insects

Guillermo Perez Guillen · 04/27/2023 at 06:57

In this chapter I will show you the procedure to use OpenMV when we are going to test the machine learning model created with Edge Impulse on our Nicla Vision board.

Build the Firmware

Since the Nicla Vision has no SD card slot to store the model file, we need to build the machine learning model into the firmware and load it from flash. To do so, go to https://github.com/openmv/openmv and fork the repository.


Rename the machine learning model and the label file downloaded from Edge Impulse; in my case I renamed them to bee_or_spider.tflite and bee_or_spider.txt, respectively.


In your fork, replace the built-in machine learning model under src/lib/libtf/models with the model you downloaded from Edge Impulse Studio. Commit the files and push the commit to your fork; GitHub Actions will build a new firmware automatically.


You can inspect the build process under "Actions". Once the firmware for NICLAV has been built, you can download it from the firmware link.


Flash the Firmware

We can now return to OpenMV and flash the new firmware to the Nicla Vision.


Put the Nicla Vision in bootloader mode by double-clicking the reset button; the green LED will start flashing. Then click the Connect button in the IDE, and the dialogue asking to load new firmware will open.


Click OK, navigate to the .bin file produced in the previous step, and click Run.


The Nicla Vision will be flashed with the new firmware, which includes the Edge Impulse model.


In my case I uploaded my best machine learning model, which is 1.81 MB in size. In other words, I am using 90.5% of the flash memory of the Nicla Vision.
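Before writing the full script, you can quickly confirm from the OpenMV IDE that the model really made it into the firmware. This is just a minimal sketch, assuming the built-in model is named bee_or_spider_v2 as in the script further below:

# Quick sanity check: load the model compiled into the firmware
# (assumes the built-in model name used in the main script below)
import tf

labels, net = tf.load_builtin_model('bee_or_spider_v2')
print("Labels:", labels)  # should list the classes exported from Edge Impulse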

Run the Script 

The next step is to write a Python script in OpenMV that controls the Nicla Vision camera and uses the tf module to classify the image stream and detect our target objects. The video stream is just a series of image frames; each frame is passed to the TensorFlow model, which classifies it and returns a confidence score for every class. The complete classification script is as follows:

# AUTHOR: GUILLERMO PEREZ GUILLEN

import sensor, image, time, os, tf, pyb

redLED = pyb.LED(1) # built-in red LED
greenLED = pyb.LED(2) # built-in green LED

sensor.reset()                         # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565)    # Set pixel format to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)      # Set frame size to QVGA (320x240)
sensor.set_vflip(True)
sensor.set_hmirror(True)
sensor.set_windowing((240, 240))       # Set 240x240 window.
sensor.skip_frames(time=2000)          # Let the camera adjust.

labels, net = tf.load_builtin_model('bee_or_spider_v2')  # model and labels built into the firmware
found = False

def flashLED(led): # Indicate with LED when target is detected
    global found    # refer to the module-level flag, not a new local variable
    found = True
    led.on()
    pyb.delay(2000) # keep the LED on for 2 seconds
    led.off()
    found = False   # cleared again, so the main loop keeps scanning

clock = time.clock()

while not found:
    clock.tick()
    img = sensor.snapshot()
    for obj in tf.classify(net, img, min_scale=1.0, scale_mul=0.8, x_overlap=0.5, y_overlap=0.5):
        print("**********nPredictions at [x=%d,y=%d,w=%d,h=%d]" % obj.rect())
        img.draw_rectangle(obj.rect())
        predictions_list = list(zip(labels, obj.output()))
        for i in range(len(predictions_list)):
            confidence = predictions_list[i][1]
            label = predictions_list[i][0]
            print("%s = %f" % (label, confidence))
            if confidence > 0.8:
                if label == "bee":
                    print("It's a BEE")
                    img.draw_string(5, 12, label)
                    flashLED(greenLED)
                if label == "spider":
                    print("It's a SPIDER")
                    img.draw_string(5, 12, label)
                    flashLED(redLED)

    print(clock.fps(), "fps")

How does it work?

  1. The bee, spider, and unknown prediction scores are printed through the serial port;
  2. A bee or a spider is only reported when its confidence score is greater than 0.8;
  3. If the camera detects a bee, the green LED lights up for 2 seconds and the message "It's a BEE" is printed on the serial port; and
  4. If the camera detects a spider, the red LED lights up for 2 seconds and the message "It's a SPIDER" is printed on the serial port.
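To make item 2 a bit more concrete, here is a small, self-contained sketch of the thresholding logic; best_prediction is a hypothetical helper name of mine, not part of the script above or of the OpenMV API:

# Illustrative helper: pair each label with its score and keep the most
# confident result above the threshold (hypothetical, for explanation only).
def best_prediction(labels, scores, threshold=0.8):
    best = None
    for label, confidence in zip(labels, scores):
        if confidence > threshold and (best is None or confidence > best[1]):
            best = (label, confidence)
    return best  # e.g. ('bee', 0.93), or None if nothing is confident enough

# Example with made-up scores:
print(best_prediction(["bee", "spider", "unknown"], [0.93, 0.04, 0.03]))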

Test

Below I show you an image capture when the camera has detected a bee.


Also, below I show you an image capture when the camera has detected a spider.


Below I show you the tests carried out with my model created with Edge Impulse and OpenMV. As you can see, I did the tests with images of a bee and a spider printed on a cardboard card.
