Soln #3: Touchless Attendance - Assembly and Initial Setup

A project log for Multi-Domain Depth AI Usecases on the Edge

SLAM, ADAS-CAS, Sensor Fusion, Touch-less Attendance, Elderly Assist, Monocular Depth, Gesture & Security Cam with OpenVINO, Math & RPi

Anand Uthaman, 10/25/2021 at 06:14

First, I assembled the system as shown below. During the initial setup, the system built an image database of known persons. During registration, the person's facial landmarks are detected and an affine transformation is applied to obtain a frontal view. These aligned images are saved and later compared to identify the person.
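The eye-landmark alignment step can be sketched as below. This is a minimal illustration, not the project's actual code: the canonical eye coordinates and the helper name are assumptions, and in practice the resulting 2x3 matrix would be passed to `cv2.warpAffine` to produce the frontal crop.

```python
import numpy as np

# Assumed canonical (frontal) eye positions in the output crop.
CANON_LEFT_EYE = np.array([30.0, 40.0])
CANON_RIGHT_EYE = np.array([70.0, 40.0])

def eye_alignment_matrix(left_eye, right_eye):
    """Return a 2x3 similarity transform (rotation + scale + translation)
    that maps the detected eye landmarks onto the canonical positions."""
    left_eye = np.asarray(left_eye, dtype=float)
    right_eye = np.asarray(right_eye, dtype=float)
    src_vec = right_eye - left_eye
    dst_vec = CANON_RIGHT_EYE - CANON_LEFT_EYE
    # Scale and rotation that carry the detected eye vector onto the canonical one
    scale = np.linalg.norm(dst_vec) / np.linalg.norm(src_vec)
    angle = np.arctan2(dst_vec[1], dst_vec[0]) - np.arctan2(src_vec[1], src_vec[0])
    cos_a, sin_a = scale * np.cos(angle), scale * np.sin(angle)
    rot = np.array([[cos_a, -sin_a],
                    [sin_a,  cos_a]])
    # Translation that pins the left eye to its canonical position
    trans = CANON_LEFT_EYE - rot @ left_eye
    return np.hstack([rot, trans[:, None]])  # shape (2, 3), cv2.warpAffine-compatible
```

For example, a face tilted 45 degrees (eyes at (20, 60) and (60, 20)) is rotated and scaled so both eyes land exactly on the canonical row.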

Assembled Gadget: RPi with LiDAR and NCS2 on battery
The face recognition models, optimized with OpenVINO, are deployed to the RPi, which is integrated with a Pi Camera and LiDAR. If the person is identified and is near the door, a 'door open' event is triggered. If someone is near the door but not recognized, a message is pushed to the security guard's mobile. These two outcomes are simulated by flashing green and red lights respectively on a Pimoroni Blinkt!, controlled using MQTT messages.
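The green/red decision and the MQTT message it produces could look like the sketch below. The topic name and payload format are assumptions for illustration; the commented-out publishing call shows how the message would reach the Blinkt! listener via paho-mqtt.

```python
import json

LIGHT_TOPIC = "door/light"  # hypothetical MQTT topic for the Blinkt! listener

def light_message(recognized, near_door):
    """Decide which Blinkt! colour to publish for a detection event.

    Returns a JSON payload string, or None when nobody is at the door.
    """
    if not near_door:
        return None  # person is too far from the door: no event
    colour = "green" if recognized else "red"
    return json.dumps({"colour": colour})

# Publishing side (requires the paho-mqtt package; shown for context):
# import paho.mqtt.publish as publish
# msg = light_message(recognized=True, near_door=True)
# if msg:
#     publish.single(LIGHT_TOPIC, msg, hostname="raspberrypi.local")
```

Keeping the decision logic separate from the MQTT transport makes it easy to test and to swap the simulated lights for a real door actuator or a push notification later.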

To avoid repeated triggers, the message is published only when the same person has not been seen in the last 'n' frames. This is implemented with a double-ended queue that stores the identities from recent frames. If the person is identified, a greeting message is played via the eSpeak text-to-speech synthesizer; the voice configuration was set up on the Pi beforehand.