SafeBud offers five main features:
- Battery-powered mobile system
  - 2500mAh Li-Poly battery for maximum uptime
  - PIR sensor to trigger system wakeup
  - Low-power modes allow the system to run completely unplugged
- Facial detection
  - AWS processes images with OpenCV for facial and body detection
- Dynamic tracking
  - Pan-tilt base follows detected objects using PWM
- WiFi connection
  - WiFi chip uploads pictures to the cloud, so the video stream can be viewed from anywhere
- Real-time alert system
  - Twilio API sends a text message whenever an intruder is detected
SafeBud uses the ATSAM4S8B as its central microcontroller, processing incoming images and sensor data. Since the chip has no embedded camera or WiFi, we integrated an AMW136 for WiFi communication and an OV2640 to capture images.
SafeBud starts in a default low-power mode and relies on a sensor trigger to wake the camera into active filming mode. For the trigger we use a PIR (passive infrared) sensor, since it is very effective at detecting human movement while ignoring non-human activity. Specifically, we used Adafruit's PIR sensor, which comes pre-mounted on a breakout board and outputs a digital signal, so we do not need an ADC to process the sensor data. The digital signal from the PIR sensor triggers an interrupt, waking up the microcontroller and the rest of the peripherals. Once the sensor signal has stayed low for a set amount of time, the microcontroller returns to sleep mode to save power. While in sleep mode, all pins are configured as inputs, the USB clock is disabled, and the chip's clock is slowed; all of these settings are restored when the microcontroller wakes up. We are currently working on putting the camera into power-down mode through an inverter, since the camera's power-down pin requires a high signal and the microcontroller cannot drive a strong enough signal while it is in sleep mode.
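The wake/sleep cycle above reduces to a small state machine: wake on a PIR high, then return to sleep once the signal has stayed low for a timeout. Below is a minimal sketch of that logic in plain Python; the timeout value and the class interface are illustrative, not the actual firmware:

```python
SLEEP_TIMEOUT_S = 10.0  # assumed idle window before re-entering sleep


class WakeController:
    """Tracks PIR activity and decides when the system sleeps or wakes."""

    def __init__(self, timeout_s=SLEEP_TIMEOUT_S):
        self.timeout_s = timeout_s
        self.awake = False
        self.last_high = None

    def on_pir_sample(self, level, now_s):
        """Feed a PIR level (0/1) and current time; return awake state."""
        if level:
            self.last_high = now_s
            self.awake = True   # interrupt fires: wake MCU and peripherals
        elif self.awake and self.last_high is not None:
            if now_s - self.last_high >= self.timeout_s:
                self.awake = False  # signal low long enough: back to sleep
        return self.awake
```

In the real firmware the wake happens in an interrupt handler rather than by polling, but the timeout decision is the same.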
Measured current consumption of the microcontroller itself is around 3.2 mA, dropping to roughly 2.2 mA in sleep mode, which matches the figures in the datasheet once differences in input voltage (3.7V in our device versus 3.3V in the datasheet) and temperature are accounted for. According to the datasheet, the microcontroller can consume up to roughly 7.5 mA, so sleep mode alone saves a decent amount of power. The WiFi chip can be put into power-down mode by sending it a command from the microcontroller, which cuts its consumption dramatically (to about 0.77 mA).
With a 2500mAh battery, the active WiFi chip drawing around 5.7 mA, and the camera drawing 25-50 mA, we can expect approximately 30 hours of usage even without putting the OV2640 and AMW136 into power-down modes.
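As a sanity check, runtime is just capacity divided by total draw. Using only the worst-case currents quoted above (MCU 3.2 mA, WiFi 5.7 mA, camera 50 mA) gives a somewhat higher figure than 30 hours; the ~30 hour estimate leaves headroom for draws not itemized here, such as the servos:

```python
CAPACITY_MAH = 2500.0

# Worst-case active currents quoted in the text (mA)
draws_ma = {"mcu": 3.2, "wifi": 5.7, "camera": 50.0}

total_ma = sum(draws_ma.values())   # 58.9 mA
hours = CAPACITY_MAH / total_ma     # ~42 h from the listed draws alone
```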
For the camera, we connected one of the microcontroller's free GPIOs to the camera's power-down (PWDN) pin. When the PWDN pin receives a high signal, the OV2640 enters sleep mode and draws only about 15uA. However, since we are putting the microcontroller into sleep mode as well, we utilized an inverter: the microcontroller outputs a high signal while active and a low signal while asleep, so the camera is in hardware power-down exactly when the microcontroller is not running, and vice versa.
We used OpenCV's Haar-Cascade classifiers to detect faces and profiles in the camera's video stream. These classifiers come pre-trained and return the detected regions of interest as a list of coordinates. Because the ATSAM4S8B is not powerful enough to run OpenCV, we offloaded the workload to AWS.
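The classifier returns zero or more (x, y, w, h) rectangles per frame, and tracking needs a single target, so some selection rule is required. A simple, illustrative choice (not necessarily what our server does) is to track the largest detection and report its center:

```python
def pick_target(detections):
    """Pick the largest (x, y, w, h) rectangle and return its center.

    `detections` mimics the list of regions a Haar-cascade classifier
    returns; an empty list means no face or profile was found.
    """
    if not detections:
        return None
    x, y, w, h = max(detections, key=lambda r: r[2] * r[3])
    return (x + w // 2, y + h // 2)


# Two detections: the 80x80 rectangle wins over the 40x40 one.
target = pick_target([(10, 10, 40, 40), (100, 60, 80, 80)])  # (140, 100)
```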
We launched an EC2 instance and hosted a Flask server through Apache's mod_wsgi. The Flask server takes in images through POST requests, runs the classifiers on them, streams the images to a website, and returns the detected coordinates to the embedded system. The server also uses Twilio's API to send the user a text message each time an intruder is detected.
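The exact wire format of the coordinates returned to the embedded system is not spelled out above; one plausible shape, sketched here with only the standard library, is a small JSON payload carrying the detected rectangles (the "intruder" and "regions" keys are assumptions for illustration):

```python
import json


def build_response(detections):
    """Serialize classifier hits into a JSON reply for the device.

    The schema used here is illustrative only, not the server's
    actual response format.
    """
    return json.dumps({
        "intruder": bool(detections),
        "regions": [{"x": x, "y": y, "w": w, "h": h}
                    for (x, y, w, h) in detections],
    })


reply = build_response([(120, 40, 64, 64)])
```

A fixed, compact format like this keeps the parsing burden on the microcontroller small.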
In order to send images from our embedded system to our Flask server, we used the AMW136 WiFi chip and the ZentriOS Command API to send HTTP POST requests. In each request, we write the image as a hex byte array and set its content length. The Flask server receives the image data, saves it as a JPEG, and loads it using OpenCV's imread() function.
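A request like the one described can be composed before being written out over the WiFi chip's command interface. The sketch below assembles the raw HTTP text with the standard library only; the host, path, content type, and hex encoding convention are assumptions for illustration, not the exact request our firmware builds:

```python
import binascii


def build_post(image_bytes, host="example.com", path="/upload"):
    """Build a raw HTTP POST whose body is the image as hex text."""
    body = binascii.hexlify(image_bytes).decode("ascii")
    headers = (
        f"POST {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Content-Type: text/plain\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n"
    )
    return headers + body


req = build_post(b"\xff\xd8\xff")  # first bytes of a JPEG file
```

Hex encoding doubles the payload size but keeps the body printable, which is convenient when pushing data through a text command interface.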
In order to track intruders as they move, we place our system on a pan-tilt base driven by two micro-servo motors, which gives us a range of approximately 160 degrees horizontally and 60 degrees vertically. The servos are driven by a PWM signal, and changing the duty cycle changes the position they hold. Each time our server detects an intruder and returns coordinates, we convert those coordinates into a duty-cycle shift that centers the pan-tilt base on the region of interest, then update the PWM signal. When the system goes back into sleep mode, the pan-tilt base resets to its rest position.
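The coordinate-to-duty-cycle conversion can be sketched with simple proportions. The servo timing below (50 Hz PWM with a 1-2 ms pulse spanning the base's 160-degree pan range), the frame width, and the field of view are typical values assumed for illustration, not measurements from our build:

```python
FRAME_W = 320          # assumed image width in pixels
PAN_RANGE_DEG = 160.0  # pan-tilt base's horizontal range
PERIOD_MS = 20.0       # 50 Hz servo PWM period
MIN_PULSE_MS, MAX_PULSE_MS = 1.0, 2.0  # typical servo pulse endpoints


def duty_shift(target_x, fov_deg=60.0):
    """Duty-cycle change (in %) that pans toward a detected x coordinate.

    Maps the pixel offset from frame center to degrees via an assumed
    horizontal field of view, then to a pulse-width change as a
    fraction of the PWM period.
    """
    offset_px = target_x - FRAME_W / 2
    offset_deg = offset_px / FRAME_W * fov_deg
    pulse_delta_ms = (MAX_PULSE_MS - MIN_PULSE_MS) * offset_deg / PAN_RANGE_DEG
    return pulse_delta_ms / PERIOD_MS * 100.0  # percent of PWM period


shift = duty_shift(240)  # target right of center -> positive shift
```

A target at the frame center yields a zero shift, so the base holds still once the intruder is centered.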