
Solution #1: ADAS - CAS - Tweaking for Indian Conditions

A project log for Multi-Domain Depth AI Usecases on the Edge

SLAM, ADAS-CAS, Sensor Fusion, Touch-less Attendance, Elderly Assist, Monocular Depth, Gesture & Security Cam with OpenVINO, Math & RPi

Anand Uthaman • 10/24/2021 at 16:56

After trying out the assembled gadget on vehicles and people on the road, I wanted to tweak the solution for Indian conditions. The Indian traffic conundrum is so unique that it demands custom solutions. To start with, we need to train object detection models on Indian vehicles such as trucks, tempos, vans, autos, cycle rickshaws, etc.

Further, to enhance the smart surround view, we need to train the model on Indian traffic signs and signboards to give more meaningful driver-assist warnings on Indian roads. It's also a common sight in India for animals like cows, pigs, buffaloes, goats, and dogs to cross roads and highways, so it's beneficial to detect them as well.

For the PoC, see the output of an SSD-MobileNet model trained to distinguish Indian traffic signs from Indian signboards. The detected traffic sign can then be passed to a second-stage classifier to decipher its exact meaning.

[Figure: trafficSign.gif] SSD-MobileNet model classifying Indian Traffic Signs (yellow bbox) vs. Sign Boards (green bbox)
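As a sketch of that second stage, the detected sign crop can be fed to a classifier exported to OpenVINO IR. The model paths, device target, and helper name below are assumptions for illustration (the IR is assumed to embed any mean/scale preprocessing), not the project's actual code:

import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="sign_classifier.xml", weights="sign_classifier.bin")
exec_net = ie.load_network(network=net, device_name="CPU")
input_blob = next(iter(net.input_info))
output_blob = next(iter(net.outputs))
_, _, h, w = net.input_info[input_blob].input_data.shape

def classify_sign(frame, bbox):
    # Crop the detected sign and run the second-stage classifier on it
    xmin, ymin, xmax, ymax = bbox
    crop = frame[ymin:ymax, xmin:xmax]
    blob = cv2.resize(crop, (w, h)).transpose((2, 0, 1))[np.newaxis, ...]
    probs = exec_net.infer(inputs={input_blob: blob})[output_blob].squeeze()
    return int(np.argmax(probs)), float(np.max(probs))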

The annotated Indian Traffic Sign dataset is provided by Datacluster Labs, India. They are yet to finish annotating the "Indian Vehicles" dataset, so making this gadget tailor-made for India is just a matter of training time.

To find the ROIs in each frame, we used SSD MobileNet trained on COCO, with the detections filtered down to the object classes of interest. If you only need to detect people and vehicles, you can also use a model trained on just those classes for better speed and accuracy. More importantly, the core task of custom object training and its deployment on IoT devices and Android mobiles is handled in depth in Solution #5.
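A minimal sketch of that filtering step, assuming the standard [1, 1, N, 7] SSD output layout used by OpenVINO detection models and the 90-class COCO label IDs:

# COCO label IDs for people and common vehicle classes
PERSON_VEHICLE_IDS = {1, 2, 3, 4, 6, 8}  # person, bicycle, car, motorcycle, bus, truck

def filter_detections(detections, conf_threshold=0.5):
    # Each row: [image_id, label, conf, xmin, ymin, xmax, ymax]
    rois = []
    for det in detections[0][0]:
        label, conf = int(det[1]), float(det[2])
        if conf > conf_threshold and label in PERSON_VEHICLE_IDS:
            rois.append((label, conf, det[3], det[4], det[5], det[6]))
    return rois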

The output of this model is sent from Node 1 to Node 2, where the LiDAR-camera sensor fusion happens, which in turn pushes a message to Node 3. For the system to function, the three MQTT nodes should work in tandem, orchestrated by MQTT messages published and subscribed on their respective topics.
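Before the fusion handler below, here is a minimal sketch of the publisher side on Node 1. The broker IP and the topic name "object/attributes" are hypothetical placeholders (the actual topic is not shown in this log), but the payload layout matches what the Node 2 handler parses: label|theta_min|theta_max|timestamp.

import time
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("192.168.1.10", 1883)  # hypothetical broker IP

def publish_object(label, theta_min, theta_max):
    # Same wall-clock arithmetic that Node 2 uses for its staleness check
    now = time.localtime()
    timestamp = now.tm_min * 60 + now.tm_sec
    payload = "{}|{}|{}|{}".format(label, theta_min, theta_max, timestamp)
    client.publish("object/attributes", payload)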

import time

# Sensor fusion happens at Node 2
def on_message(client, userdata, msg):

    word = msg.payload.decode()

    # objAttributes contains label, theta min, theta max,
    # and timestamp, separated by |
    objAttributes = word.split('|')

    # Ignore stale messages older than a second
    now = time.localtime()
    if now.tm_min * 60 + now.tm_sec - int(objAttributes[3]) >= 1:
        return

    theta1 = float(objAttributes[1])
    theta2 = float(objAttributes[2])

    # Map the camera angles into the LiDAR's angular frame (fixed offset)
    # and query the LiDAR scan for the object distance (defined elsewhere)
    dist = getObjectDistance(int(theta1) + 90 + 59, int(theta2) + 90 + 59)

    # Convert distance from mm to metres
    dist = round(dist / 1000.0, 1)

    # Mid-angle of the object, i.e. its direction relative to the camera
    theta_mid = int((theta1 + theta2) / 2)

    # If the object is nearer than 2 m, announce an alert!
    # The hue value is passed on MQTT: 0.0 = Red, 0.3 = Green
    if dist < 2.0:
        announceText = "ALERT ALERT "
        client.publish("object/flashlight", "0.0")
    else:
        announceText = ""
        client.publish("object/flashlight", "0.3")
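On the receiving end, Node 3 subscribes to the hue topic and drives the visual indicator. A minimal sketch, assuming a hypothetical broker IP and a hypothetical set_led_color() driver call for whatever light hardware is attached:

import colorsys
import paho.mqtt.client as mqtt

def on_flashlight(client, userdata, msg):
    # Convert the received hue (0.0 = red alert, 0.3 = green OK) to RGB
    hue = float(msg.payload.decode())
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    set_led_color(r, g, b)  # hypothetical LED driver call

node3 = mqtt.Client()
node3.connect("192.168.1.10", 1883)  # hypothetical broker IP
node3.subscribe("object/flashlight")
node3.message_callback_add("object/flashlight", on_flashlight)
node3.loop_forever()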
