GuardMyPi is a home security system centred around the Raspberry Pi 3. It utilises the Pi NoIR Camera to monitor a room or entry point in a house. Facial recognition software distinguishes between household members (including pets) and intruders. If an intruder is detected, a notification is sent to the user via a web application. All image processing is done using OpenCV.
With a week left in our project, we have managed to publish a pre-release of our code! It includes all the files you need to try GuardMyPi out for yourself. All required code can be found in the usual src folder!
Smile and wave! Another way to unlock the system as you enter your residence is gesture detection. Recognising a hand waving or held up to the camera is a method we can integrate into our code so the system knows all is well! We are also considering using it in the locking function of our code (coming shortly!).
It's been a mad couple of weeks! Nonetheless, here is a quick update on our facial recognition software.
While it has been tricky to refine, we have made a lot of progress using facial recognition as one of the methods (other ways coming up!) to unlock the device when entering the premises. Using Haar cascades for face detection and a trained FisherFace algorithm, we have begun to refine the facial recognition process in our system. Here is a quick screen grab from the latest test of our code.
An essential part of the GuardMyPi project is facial recognition, allowing the system to distinguish between residents and intruders. The first method being tested uses a Haar cascade, a machine-learning-based approach in which the classifier is trained on many positive and negative photos. A more relaxed training run (see left; run time: 30 minutes) allows the classifier more false positives during the training stages and makes for quicker model testing. Conversely, a stricter training run (see right; run time: 72+ hours) should ideally produce a model with very few false alarms, and is the one we would implement in our final solution.
A glimpse of our human detection software in action! Using "You Only Look Once" (YOLO) pre-trained object detection models, we are able to detect when a human has entered the frame from a variety of distances and angles.