Intelligent Door Lock (Alexa & Face Recognition)

An open-source intelligent vision door lock that can recognize guests, greet them by name, notify owner & be controlled by Alexa

Security and accessibility are the main concerns in today's world. We all try to keep our houses secure, and at the same time we want our home devices to be smarter and accessible even from a remote location. Imagine a guest is waiting at your front door while you are away from home, and you want to let them in. Or you are doing an important task at your desk and want to know who came to the front door without leaving your seat. With this project you can find out who is there and let your guest in just by asking Alexa.

This is an open-source, intelligent lock that can notify and recognize a guest, greet the guest by name, introduce the guest to the owner, and even remember an unknown visitor for next time. The lock is operated through Alexa via a custom Alexa skill developed for this project. Using the skill, the owner can find out who the guest is and welcome them inside without leaving the seat!

Though several smart video door locks are available on the market, they are not open source and are quite expensive. I love open source and hope you do too, and as a maker I always like to build my own stuff. After a few days of research I made this smart lock + video doorbell and open-sourced it. Following this project, you will be able to build an intelligent video door lock/doorbell on your own and modify it to your liking, all at a very low cost. The only expensive parts are the Raspberry Pi and the camera module, and the total will not exceed 60 USD.

The device is operated by Alexa, and an open-source Alexa skill was developed for this project. My skill is live in the Amazon store (Skill ID: amzn1.ask.skill.4ba64998-cb8f-461d-8712-16c5dfcfc9d3).

By following this guide, you can build your own skill as well.

Before going into the detailed instructions, let me explain how it works. I call this device the Intelligent Door Lock. It is built from a Raspberry Pi with the official camera module, plus an Arduino with a servo motor for controlling the lock. The Arduino is optional; you can drive the servo from the Raspberry Pi's GPIO instead. For this demo project I used a 3D-printed servo-controlled lock, but for practical use you can replace it with an electric solenoid lock.

When a guest comes to your door and presses the calling button, the Raspberry Pi performs three tasks:

  • It takes a picture of the guest and uploads it to an AWS S3 bucket; the bucket then triggers an SNS notification to a specific topic.
  • It sends an email with the photo to the house owner.
  • It sends a greeting text to AWS Polly and plays the audio returned by Polly to greet the guest by name.

After getting the notification from AWS SNS or the email, the house owner can ask Alexa to introduce the guest by invoking the custom skill "Door Guard" and saying:

Alexa, ask the door guard who is at the front door? or

Alexa, ask the door guard who came?

Alexa triggers a Lambda function and the Lambda function does the following jobs:

  • Reads the image uploaded to the S3 bucket.
  • Sends a face search request for the image to AWS Rekognition.
  • After Rekognition returns the face matches, looks up the name in AWS DynamoDB and returns it to Alexa if found.

Alexa gives the name to the house owner, who can then ask Alexa to open the door for the guest. In that case, Lambda sends an open-door command to a specific AWS IoT topic. The Raspberry Pi receives the command and forwards it to the Arduino over the serial port, and the Arduino controls the lock accordingly. The following block diagram can help for better understanding.
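The final step, where Lambda publishes the open-door command to AWS IoT, might look roughly like this. The topic name, message format, and region are assumptions for illustration, and the Alexa request/response plumbing is omitted.

```python
import json

def open_door_payload(guest_name):
    """Build the command message; this JSON shape is an assumed
    format, not the project's documented one."""
    return json.dumps({"command": "open", "guest": guest_name})

def publish_open_door(guest_name, topic="door/lock", region="eu-west-1"):
    """Publish the open-door command to an AWS IoT topic.
    boto3 is imported lazily so the payload helper runs anywhere."""
    import boto3
    client = boto3.client("iot-data", region_name=region)
    client.publish(topic=topic, qos=1, payload=open_door_payload(guest_name))
```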

Work Flow

  • Preparing Raspberry Pi (installing required libraries)
  • Writing program for Raspberry Pi (capturing an image on button press, uploading the image to S3, sending email to the owner, receiving messages from the MQTT broker, greeting the guest, sending a control signal to the Arduino)
  • Setting AWS Services (AWS S3 Bucket, AWS DynamoDB, AWS Lambda, AWS SNS, AWS Rekognition)
  • Writing program for uploading images of known persons and storing the face index in the DynamoDB table.
  • Making Custom Alexa Skill and writing code for Lambda function.
  • Writing code for Arduino.
  • Connecting all the hardware.
  • Testing & Debugging.

Short Demo Video

raspberry-pi-circuit_LitLZEPs22.png

Connecting button and speaker with Raspberry Pi

Portable Network Graphics (PNG) - 135.26 kB - 08/05/2021 at 07:53


upload_multiple_image_with_name_py.py

Use this code snippet to upload multiple images with full names to S3 Bucket.

py - 615.00 bytes - 08/05/2021 at 07:51


index_face_and_store_db_py.py

Use this code to index a face from S3 Bucket and store the index in DynamoDB with the full name.

py - 1.60 kB - 08/05/2021 at 07:51


capture_n_upload_py.py

This code snippet captures a photo of the guest automatically when the calling button is pressed, uploads the photo to the S3 bucket, and sends a notification to the house owner.

py - 1.07 kB - 08/05/2021 at 07:51


arduino_door_guard_ino.ino

This Arduino sketch receives the command for controlling the lock over the USB serial cable. A servo motor is used to drive a 3D-printed lock.

ino - 1.59 kB - 08/05/2021 at 07:50



  • 1 × Raspberry Pi 3 or 4 (Raspberry Pi Zero, Pi 1, and Pi 2 should also work but are untested)
  • 1 × Raspberry Pi Camera Module
  • 1 × Servos (Tower Pro MG996R)
  • 1 × Arduino UNO (optional; you can use Raspberry Pi GPIO directly instead)
  • 1 × Speaker: 4W, 8 ohms



  • 1
    Setting up the Raspberry Pi

    Prepare your Raspberry Pi with the latest Raspbian operating system and get ready to do some programming. If you are new to Raspberry Pi, read this getting-started guide. You can plug a mouse, keyboard, and monitor into your Pi, or access it over SSH with a client like PuTTY. To learn how to connect with PuTTY, you may read this tutorial.

    Install the Python serial module using the command:

    sudo apt-get install python-serial
    

    Install the AWS IoT SDK using the following command:

    sudo pip install AWSIoTPythonSDK
    

    Details of AWSIoTPythonSDK are here.
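On the Pi, this SDK is used to subscribe to the door topic and forward the owner's command to the Arduino. A minimal sketch, assuming the topic name, serial port, and a single-character command protocol ('o'/'c') — adjust all three to your setup. The SDK and pyserial are imported lazily so the parsing helper runs without them installed.

```python
import time

def lock_command(payload):
    """Map an incoming MQTT payload to the one-byte command the
    Arduino sketch is assumed to expect ('o' = open, 'c' = close)."""
    text = payload.decode("utf-8").strip().lower()
    if "open" in text:
        return b"o"
    if "close" in text:
        return b"c"
    return None

def run(endpoint, root_ca, private_key, cert,
        topic="door/lock", serial_port="/dev/ttyACM0"):
    """Subscribe to the door topic and forward commands over serial."""
    import serial
    from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

    arduino = serial.Serial(serial_port, 9600)

    def on_message(client, userdata, message):
        # Translate the MQTT payload and pass it on to the Arduino
        cmd = lock_command(message.payload)
        if cmd:
            arduino.write(cmd)

    mqtt = AWSIoTMQTTClient("doorLockPi")
    mqtt.configureEndpoint(endpoint, 8883)
    mqtt.configureCredentials(root_ca, private_key, cert)
    mqtt.connect()
    mqtt.subscribe(topic, 1, on_message)
    while True:
        time.sleep(1)  # keep the process alive for incoming messages
```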

  • 2
    Installing & Configuring AWS CLI

    The AWS Command Line Interface (CLI) is a unified tool that allows you to control AWS services from the command line. The AWS CLI lets you create AWS resources without using a GUI. If you already have pip and a supported version of Python (included with the latest Raspbian OS), you can install the AWS CLI with the following command:

    pip install awscli --upgrade --user
    

    You need to configure the AWS CLI with your Access Key ID, Secret Access Key, AWS region name, and command output format before getting started with it.

    Follow this tutorial for completing the whole process.

  • 3
    Setting up Amazon S3 Bucket, Amazon Rekognition and Amazon DynamoDB

    Amazon Rekognition is a sophisticated deep learning-based service from Amazon Web Services (AWS) that makes it easy to add powerful visual search and discovery to your own applications. With Rekognition using simple APIs, you can quickly detect objects, scenes, faces, celebrities, and inappropriate content within images. Amazon Rekognition also provides highly accurate facial analysis and facial recognition. You can detect, analyze, and compare faces for a wide variety of user verification, cataloging, people counting, and public safety use cases.

    Amazon Rekognition is based on the same proven, highly scalable, deep learning technology developed by Amazon’s computer vision scientists to analyze billions of images and videos daily, and requires no machine learning expertise to use. Amazon Rekognition is a simple and easy-to-use API that can quickly analyze any image or video file stored in Amazon S3.

    Amazon Rekognition can store information about detected faces in server-side containers known as collections. You can use the facial information stored in a collection to search for known faces in images, stored videos, and streaming videos. Amazon Rekognition supports the IndexFaces operation, which you can use to detect faces in an image and persist information about facial features detected into a collection.

    The face collection is the primary Amazon Rekognition resource; each face collection you create has a unique Amazon Resource Name (ARN). You create each face collection in a specific AWS Region in your account.

    We start by creating a collection within Amazon Rekognition. A collection is a container for persisting faces detected by the IndexFaces API. You might choose to create one container to store all faces or create multiple containers to store faces in groups. You can use AWS CLI to create a collection or use the console. For AWS CLI, you can use the following command:

    aws rekognition create-collection --collection-id guest_collection --region eu-west-1
    

    The above command creates a collection named guest_collection.

    The user or role that executes the commands must have permissions in AWS Identity and Access Management (IAM) to perform those actions. AWS provides a set of managed policies that help you get started quickly. For our example, you need to apply the following minimum managed policies to your user or role:

    • AmazonRekognitionFullAccess
    • AmazonDynamoDBFullAccess
    • AmazonS3FullAccess
    • IAMFullAccess

    Next, we create an Amazon DynamoDB table. DynamoDB is a fully managed cloud database that supports both document and key-value store models. In our example, we’ll create a DynamoDB table and use it as a simple key-value store to maintain a reference of the FaceId returned from Amazon Rekognition and the full name of the person.

    You can use either the AWS Management Console, the API, or the AWS CLI to create the table. For the AWS CLI, use the following command:

    aws dynamodb create-table --table-name guest_collection \
    --attribute-definitions AttributeName=RekognitionId,AttributeType=S \
    --key-schema AttributeName=RekognitionId,KeyType=HASH \
    --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1 \
    --region eu-west-1
    

    For the IndexFaces operation, you can provide the images as bytes or make them available to Amazon Rekognition inside an Amazon S3 bucket. In our example, we upload the images of the known guests to an Amazon S3 bucket.

    Again, you can create a bucket either from the AWS Management Console or from the AWS CLI. Use the following command:

    aws s3 mb s3://guest-images --region eu-west-1
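The overview mentioned that the S3 bucket triggers an SNS notification when a snapshot lands in it; that hookup is not shown in these instructions. A rough sketch of how it could be wired with Boto3 — the topic ARN and key prefix are placeholders, and the SNS topic's access policy must already allow S3 to publish to it:

```python
def sns_notification_config(topic_arn, prefix="guests/"):
    """Build an S3 notification configuration that publishes to SNS
    whenever an object is created under the given key prefix."""
    return {
        "TopicConfigurations": [{
            "TopicArn": topic_arn,
            "Events": ["s3:ObjectCreated:*"],
            "Filter": {"Key": {"FilterRules": [
                {"Name": "prefix", "Value": prefix},
            ]}},
        }]
    }

def attach_notification(bucket, topic_arn):
    """Attach the notification config to the bucket (lazy boto3 import)."""
    import boto3
    boto3.client("s3").put_bucket_notification_configuration(
        Bucket=bucket,
        NotificationConfiguration=sns_notification_config(topic_arn),
    )
```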
    

    Although all the preparation steps were performed from the AWS CLI, we still need to create an IAM role that grants our function the rights to read the objects from Amazon S3, initiate the IndexFaces operation of Amazon Rekognition, and create entries in our Amazon DynamoDB key-value store mapping each face to the person's full name.

    To grant that access, use the file access-policy.json:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "logs:CreateLogGroup",
                    "logs:CreateLogStream",
                    "logs:PutLogEvents"
                ],
                "Resource": "arn:aws:logs:*:*:*"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject"
                ],
                "Resource": [
                    "arn:aws:s3:::bucket-name/*"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "dynamodb:PutItem"
                ],
                "Resource": [
                    "arn:aws:dynamodb:aws-region:account-id:table/family_collection"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "rekognition:IndexFaces"
                ],
                "Resource": "*"
            }
        ]
    }
    

    For the access policy, ensure you replace aws-region, account-id, and the actual name of the resources (e.g., bucket-name and family_collection) with the name of the resources in your environment.

    Now, attach the access policy to the role using the following command.

    aws iam put-role-policy --role-name LambdaRekognitionRole --policy-name LambdaPermissions --policy-document file://access-policy.json
    

    We have almost finished configuring our AWS environment. We can now upload our images to Amazon S3 to seed the face collection. For this example, we again use a small piece of Python code that iterates through a list of items containing the file location and the name of the person in each image.

    Before running the code you need to install Boto3. Boto3 is the Amazon Web Services (AWS) Software Development Kit (SDK) for Python, which allows Python developers to write software that makes use of services like Amazon S3 and Amazon EC2. You can find the latest documentation, including a list of supported services, at Read the Docs.

    Install the Boto3 library using the following command:

    sudo pip install boto3
    

    Now, run the following Python code to upload the images to the S3 bucket. Before running it, make sure all the images and the Python file are in the same directory.

    import boto3

    s3 = boto3.resource('s3')

    # List of (file name, full name) pairs to upload for indexing
    images = [('afridi.jpg', 'Shahid Afridi'),
              ('sakib.jpg', 'Sakib Al Hasan'),
              ('kohli.jpg', 'Birat Kohli'),
              ('masrafi.jpg', 'Mashrafe Bin Mortaza'),
              ('ganguli.jpg', 'Sourav Ganguly')]

    # Upload each image to S3 with the full name stored as object metadata
    for image in images:
        with open(image[0], 'rb') as file:
            obj = s3.Object('taifur12345bucket', image[0])
            obj.put(Body=file, Metadata={'FullName': image[1]})
    

    Now, add the face index to AWS DynamoDB with the full name for every image using the following Python code.

    import boto3

    BUCKET = "taifur12345bucket"
    KEY = "sample.jpg"
    IMAGE_ID = KEY  # S3 key as ImageId
    COLLECTION = "family_collection"

    dynamodb = boto3.client('dynamodb', "eu-west-1")
    s3 = boto3.client('s3')

    # Note: you have to create the collection first!
    # rekognition.create_collection(CollectionId=COLLECTION)

    def update_index(tableName, faceId, fullName):
        # Store the Rekognition FaceId -> full name mapping in DynamoDB
        dynamodb.put_item(
            TableName=tableName,
            Item={
                'RekognitionId': {'S': faceId},
                'FullName': {'S': fullName}
            }
        )

    def index_faces(bucket, key, collection_id, image_id=None,
                    attributes=(), region="eu-west-1"):
        rekognition = boto3.client("rekognition", region)
        response = rekognition.index_faces(
            Image={
                "S3Object": {
                    "Bucket": bucket,
                    "Name": key,
                }
            },
            CollectionId=collection_id,
            ExternalImageId="taifur",
            DetectionAttributes=attributes,
        )
        if response['ResponseMetadata']['HTTPStatusCode'] == 200:
            faceId = response['FaceRecords'][0]['Face']['FaceId']
            print(faceId)
            # S3 lowercases user metadata keys, hence 'fullname'
            ret = s3.head_object(Bucket=bucket, Key=key)
            personFullName = ret['Metadata']['fullname']
            print(personFullName)
            update_index('taifur12345table', faceId, personFullName)
        return response['FaceRecords']

    for record in index_faces(BUCKET, KEY, COLLECTION, IMAGE_ID):
        face = record['Face']
        print("Face ({}%)".format(face['Confidence']))
        print("  FaceId: {}".format(face['FaceId']))
        print("  ImageId: {}".format(face['ImageId']))
    

    Once the collection is populated, we can query it by passing in other images that contain faces. Using the SearchFacesByImage API, you need to provide at least two parameters: the name of the collection to query, and the reference to the image to analyze. You can provide a reference to the Amazon S3 bucket name and object key of the image, or provide the image itself as a byte stream.

    In the Lambda function, I used code to search for the face by taking the image from the S3 bucket. In response, Amazon Rekognition returns a JSON object containing the FaceIds of the matches, and the face ID is then used to retrieve the full name from DynamoDB.
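The Lambda listing itself is not reproduced on this page; a minimal sketch of that search-and-lookup step, reusing the collection and table names from the earlier listings (the similarity threshold is an assumption), might look like this:

```python
def best_face_id(search_response, min_similarity=80.0):
    """Return the FaceId of the closest match above the similarity
    threshold, or None if nothing in the collection matched."""
    matches = [m for m in search_response.get("FaceMatches", [])
               if m["Similarity"] >= min_similarity]
    if not matches:
        return None
    return max(matches, key=lambda m: m["Similarity"])["Face"]["FaceId"]

def lookup_guest_name(bucket, key, collection="family_collection",
                      table="taifur12345table", region="eu-west-1"):
    """Search the snapshot against the face collection, then map the
    matched FaceId to a full name in DynamoDB (lazy boto3 import)."""
    import boto3
    rekognition = boto3.client("rekognition", region)
    response = rekognition.search_faces_by_image(
        CollectionId=collection,
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
    )
    face_id = best_face_id(response)
    if face_id is None:
        return None
    item = boto3.client("dynamodb", region).get_item(
        TableName=table, Key={"RekognitionId": {"S": face_id}})
    return item["Item"]["FullName"]["S"] if "Item" in item else None
```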

