Elephant AI

a system to prevent human-elephant conflict by detecting elephants using machine vision, and warning humans and/or repelling elephants

The conflict between humans and elephants in countries such as India, Sri Lanka, and Kenya claims many hundreds of human and elephant lives per year. These negative interactions arise when humans meet elephants on their trails, when elephants raid fields for food, and when elephants try to cross railways. Machine vision and automated deterrence can mitigate such conflict.

INTRODUCTION AND GOALS

This is an evolution of my 'Automated Elephant-detection system', which was a semi-finalist in the Hackaday Prize 2016. The current project differs substantially: it uses more advanced machine-vision techniques, replaces the RF communication and village base stations with 4G/3G/EDGE/GPRS on each elephant-detection device, and adds elephant-deterrence devices to eliminate interaction between humans and elephants wherever possible.

So, let's get to the primary goals of Elephant AI:

  • Eliminate contact between humans and elephants
  • Protect elephants from injury and death
  • Protect humans from injury and death

How will the Elephant AI accomplish these goals?

  • Detect elephants as they move along their regular paths. These paths have been used by elephants for many years (perhaps centuries) and often cut through areas now used by humans. Humans will be warned that elephants are moving on the paths so they can stay away or move with caution.
  • Detect elephants as they leave forested areas to raid human crop fields. At this point, elephant-deterrence devices will attempt to automatically scare the elephants away, using sounds of animals they dislike (e.g. bees and tigers, and human voices in the case of Maasai people in Kenya/Tanzania), and perhaps by firing chilli balls into the paths of the elephants from compressed-air guns.
  • Detect elephants before they stray onto railway lines. This can be done via a combination of machine-vision techniques and more low-tech IR (or laser) break-beam sensors (see the sketch after this list). Train drivers can be alerted to slow down and stop before hitting elephants that are crossing.
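
To give an idea of the break-beam part, here is a minimal sketch of polling such a sensor on a Raspberry Pi, assuming the receiver's output is wired to BCM pin 17 (a hypothetical wiring choice) and pulls low when the beam is broken:

    # Minimal sketch: polling an IR break-beam receiver on a Raspberry Pi.
    # BCM pin 17 is a hypothetical wiring choice; the receiver is assumed
    # to pull the pin low when the beam is broken.
    import time
    import RPi.GPIO as GPIO

    BEAM_PIN = 17

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(BEAM_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

    try:
        while True:
            if GPIO.input(BEAM_PIN) == GPIO.LOW:
                print("Beam broken - possible elephant on the line!")
                # here we would trigger the machine-vision check / driver alert
                time.sleep(5)  # don't re-trigger immediately
            time.sleep(0.1)
    finally:
        GPIO.cleanup()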

Just how bad is it for humans and elephants to interact? This video, shot several months ago in India, gives some idea. It is really bad indeed: it causes great stress to elephants, and puts both elephants and humans at risk of injury or death.

That's why Elephant AI wants to take human-elephant interaction out of the equation entirely!

HARDWARE SETUP

We need a daylight camera (IR-filtered) and a night camera (NoIR, i.e. no IR filter, plus an IR illumination array), since elephants need to be detected 24 hours per day! In my original project I completely forgot about this, then decided to multiplex two cameras to one Raspberry Pi. It was actually cheaper and easier to use two Raspberry Pis, each with its own camera. Night-time and daytime classification of elephant images each need their own trained object detector anyway, so I don't think it's such a bad solution (for now).
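
As a rough sketch of the capture side on either Pi (using the standard picamera library; the resolution and capture interval here are placeholder values, not the project's final settings):

    # Sketch: periodic capture on one of the two Pis (daytime unit shown).
    # Resolution and capture interval are placeholder values.
    import time
    import picamera

    with picamera.PiCamera() as camera:
        camera.resolution = (1280, 720)
        time.sleep(2)  # let exposure and white balance settle
        while True:
            camera.capture('/home/pi/frames/latest.jpg')
            time.sleep(10)  # frames are handed to the detector between captures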

METHODS FOR IMAGE CLASSIFICATION

This is the main part of the project. In my original automated elephant-detection project I'd envisaged just comparing histograms!! Or, failing that, I'd try feature-matching with FLANN. Both of these proved to be completely rubbish at detecting elephants! I tried Haar cascades too, but these had lots of false positives and literally took several weeks to train!

Initially with ElephantAI I worked on an object detector using Histogram of Oriented Gradients (HOG) and a linear Support Vector Machine (SVM). That had promising results, giving only 26% false positives with a dataset consisting of 350 positive elephant images and 2000 negative non-elephant images (see https://hackaday.io/project/20448-elephant-ai/log/57399-4-result-for-object-detector-using-histogram-of-oriented-gradients-hog-and-linear-support-vector-machines-svm), and I expected improved results with larger datasets. And I got them: a 16% false-positive rate with 330 positive elephant images and 3500 negative non-elephant images (see result #5 below).
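
For readers unfamiliar with the technique, here is a minimal generic sketch of HOG + linear SVM training using scikit-image and scikit-learn; the directory names, window size, and HOG parameters are illustrative, and the project's actual code (which uses its own HOG helper class) is in the logs below:

    # Generic HOG + linear SVM training sketch (not the project's exact code).
    # Directory names, window size, and HOG parameters are illustrative.
    import cv2
    from imutils import paths
    from skimage.feature import hog
    from sklearn.svm import LinearSVC

    def describe(gray):
        # 9 orientations, 8x8 cells, 2x2 blocks are common HOG defaults
        return hog(gray, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), block_norm='L2-Hys')

    data, labels = [], []
    for label, folder in ((1, 'elephants'), (-1, 'not_elephants')):
        for path in paths.list_images(folder):
            gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
            gray = cv2.resize(gray, (128, 96))  # fixed detection window size
            data.append(describe(gray))
            labels.append(label)

    model = LinearSVC(C=0.01)
    model.fit(data, labels)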

At present, I am working on differentiating between types of elephants, using deep convolutional neural networks for image classification rather than the classical machine-vision techniques I had previously employed. This is important because different types, or classes, of elephants will exhibit different behaviours! Some aggressive, some defensive,...


  • 2 × Raspberry Pi 3 Model B [detection device]
  • 1 × Raspberry Pi Camera Module v2 (8MP) Standard [detection device] daytime usage [£29]
  • 1 × Raspberry Pi Camera Module v2 (8MP) NoIR [detection device] night-time usage (no IR filter) [£29]
  • 1 × Lithium Ion Polymer Battery - 3.7v 2500mAh
  • 1 × PowerBoost 500 Basic - 5V USB Boost @ 500mA from 1.8V+

View all 10 components

  • elephant-deterrence devices: parts for audio-only Pi Zero version

    Neil K. Sheridan, 06/01/2017 at 21:23

    So, to keep costs down, I'm trying the Pi Zero at the moment.

    • Raspberry Pi Zero W with Soldered Header / Male Header - 40 Pin (2 x 20) - Straight [$19.37]
    • JustBoom Amp Zero pHAT, maximum output 30W (RMS) / 55W (peak) [$33.21]
    • SD card
    • Adafruit FONA 808 - Mini Cellular GSM + GPS Breakout [£50]
    • Slim Sticker-type GSM/Cellular Quad-Band Antenna - 3dBi uFL [£3.50]
    • Visaton Cabinet Speaker, 30W nom, 50W max, 8Ω [£21.06x2]
    • SIM
    • Sealed enclosures for speakers

    Details for JustBoom Amp Zero speaker power requirements:

    • Speaker impedance: 8Ω
    • Amp gain: 26dB
    • Supply voltage: 24V
    • Peak power: 55W
    • Max power: 30W
    • Supply power: 75W

    So that will be awkward for batteries! And I don't even know if that will be loud enough!
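
    As a rough back-of-the-envelope check of just how awkward: at the quoted 75W supply power, a 24V supply must source about 75W / 24V ≈ 3.1A, and the 3.7V 2500mAh cell in the components list stores only about 3.7V × 2.5Ah ≈ 9.25Wh, so it could sustain full output for roughly 9.25Wh / 75W ≈ 7 minutes, before even counting boost-converter losses. So this needs a much bigger battery, a lower-power amp, or brief duty-cycled playback.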

  • Updates for GSM/3G/4G comms for Raspberry Pi

    Neil K. Sheridan, 05/15/2017 at 19:54

    I've been trying the Huawei E3272 and E3276 USB dongles...

    Anyway, some other hardware ideas I've come across and will investigate:

    • PiAnyWhere 4G & LTE Hat for the Raspberry Pi https://www.pianywhere.com/product/pianywhere-4g-raspberry-pi-hat/ [£160]

    • PiAnywhere 3G https://www.pianywhere.com/product/pianywhere-3g-raspberry-pi-hat/ [£100]

    • PiAnywhere GSM https://www.pianywhere.com/product/pianywhere-gsm-raspberry-pi-hat/ [£60]

    • 4G + GPS Shield for Raspberry Pi – LE910 (4G / 3G / GPRS / GSM / GPS / LTE / WCDMA / HSPA+) https://www.cooking-hacks.com/documentation/tutorials/4g-gps-lte-wcdma-hspa-3g-gprs-shield-arduino-raspberry-pi-waspmote-tutorial/ [€444] <- this one is pretty great because of the amount of software they've already written, so it's super-easy to use! But very expensive!

    Note that I'm only using 4G or 3G for debugging with prototypes; the final elephant detectors don't need to upload their images. They only need GSM to communicate with the deterrence devices and with local people via SMS.
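
    For the SMS side, a minimal sketch using standard GSM AT commands over a serial-connected modem such as the FONA (the port, baud rate, and phone number are placeholders):

    # Sketch: send an SMS alert over a serial GSM modem using AT commands.
    # Port, baud rate, and recipient number are placeholder values.
    import time
    import serial

    modem = serial.Serial('/dev/ttyUSB0', 9600, timeout=5)
    modem.write(b'AT+CMGF=1\r')               # switch to text mode
    time.sleep(0.5)
    modem.write(b'AT+CMGS="+10000000000"\r')  # recipient (placeholder)
    time.sleep(0.5)
    modem.write(b'Elephants detected nearby. Please stay clear.')
    modem.write(bytes([26]))                  # Ctrl-Z terminates the message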

  • Alternatives to Raspberry-Pi cameras

    Neil K. Sheridan, 05/11/2017 at 19:01

    I was thinking today about changing to GoPro cameras instead. You can connect these to the Raspberry Pi using an HDMI-CSI bridge (e.g. https://auvidea.com/b100-hdmi-to-csi-2-bridge/).

    There's a video here about using the GoPro Hero for computer vision with the Raspberry Pi:

    And you can also remove the IR-filtered lens from the GoPro Hero 3 or 4 and replace it with a non-IR-filtered lens for night vision, as shown in this video:

    The GoPro Hero 3 is around £130, whilst the Hero 4 is around £400. So this approach makes for more expense, but it's worth testing!

  • elephant-deterrence devices

    Neil K. Sheridan, 05/11/2017 at 18:34

    The elephant-detection devices will notify the deterrence devices to begin their deterrence scripts, either via Bluetooth or via GPRS/3G/4G modems. The scripts will contain instructions for playing audio from pseudo-randomly selected bee or tiger .ogg files stored on the memory cards. A sketch of that playback step is below.
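
    Here is a minimal sketch of the playback step, assuming pygame's mixer (which handles .ogg) and a hypothetical sounds/ directory on the SD card:

    # Sketch: play a pseudo-randomly chosen deterrence sound.
    # The sounds/ directory layout is hypothetical.
    import glob
    import random
    import time
    import pygame

    pygame.mixer.init()
    sound_file = random.choice(glob.glob('sounds/*.ogg'))  # bees, tigers, voices
    pygame.mixer.music.load(sound_file)
    pygame.mixer.music.play()
    while pygame.mixer.music.get_busy():
        time.sleep(0.1)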

    Thoughts on using more active approaches

    I did mention earlier the idea of firing chilli balls to deter elephants. There is some research on this published:

    Repelling elephants with a chilli pepper gas dispenser: field tests and practical use in Mozambique, Zambia and Zimbabwe from 2009 to 2013 (Le Bel et al., 2015)

    Here they fired ping pong ball projectiles containing chilli pepper using a pneumatic weapon. The idea has been tried by others too!

    So if I used this approach, I would mount a pneumatic weapon on a rotating platform. I'd add a camera and take real-time video once elephants were present (per notification from the elephant-detection devices). I'd track their motion using the camera, adjust the position of the rotating platform, and thus the weapon, and then fire the chilli balls at the presumed location of the elephants!
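
    A sketch of that tracking step with OpenCV background subtraction; the mapping from pixel offset to a platform rotation command is left as a placeholder:

    # Sketch: track the largest moving blob and derive a pan offset
    # for the rotating platform. The angle mapping is a placeholder.
    import cv2

    cap = cv2.VideoCapture(0)
    subtractor = cv2.createBackgroundSubtractorMOG2()

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        result = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        contours = result[-2]  # works across OpenCV 3/4 return signatures
        if contours:
            largest = max(contours, key=cv2.contourArea)
            x, y, w, h = cv2.boundingRect(largest)
            offset_px = (x + w / 2) - frame.shape[1] / 2
            # map offset_px to a platform rotation command here (placeholder)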

    Drawbacks are that chilli balls are quite large, so you would need a large barrel for the pneumatic weapon. That means it would need a large reservoir of compressed air plus a compressor! Not to mention I am unsure how it could be automatically reloaded.

  • Elephant Behaviours

    Neil K. Sheridan, 05/10/2017 at 19:20

    One of the important aspects of this project is that we're not just saying "hey, there is an elephant": we are detecting which class of elephant is present! Thus, we can infer from our imagery which type of behaviour is likely to be exhibited.

    1. Bull Elephants during Musth

    These are the most dangerous and aggressive elephants to encounter! In Asian bull elephants, serum testosterone rises from a pre-musth median of 35.16ng/mL to a musth median of 63.88ng/mL. This, and other physiological changes, impact their behaviour: "behavior by males in musth is often abnormal, and can be bizarre and/or extremely aggressive" (Rasmussen et al., 1999). In addition, during musth, temporal gland secretions were found to contain increased acetone and other ketones. It is speculated that temporal gland swellings during musth may cause the elephants severe pain. Musth lasts for ~16 weeks in Asian elephants (Rajaram, 2006).

    These temporal gland secretions result in a thick tar-like deposit on the side of the elephant's head, which we can attempt to identify from imagery! Thus providing a warning that extremely dangerous elephants are nearby!

    [Image: By Yathin S Krishnappa - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=21406707 ]

    A bull elephant (African) is thought to come into musth for the first time in his late teens to early twenties. Elephants continue to undergo this period until their early 60s. However, some studies have concluded that the presence of older males will to some degree attenuate the aggressive behaviour exhibited by younger males during musth (Slotow et al., 2000).

    Additionally, bull elephants exhibit a distinct vocalisation during their musth period. This is called the musth rumble, and it has a "pulsated 'put-put-put' or 'glug-glug-glug' quality, like water gurgling through a deep tunnel" (Elephant Voices). You can listen to examples at Elephant Voices.

    So we have a visual and auditory method of detecting elephants in the dangerous musth period!

    2. Female elephants with calves

    3. Tuskers

    Since both sexes of African elephant have tusks, there doesn't seem much point in looking for these! However, in Asian elephants we can look for tusks, as only males have them (females sometimes have very small tushes). So we can use tusk yes/no to determine sex in Asian elephants.

    4. Lone elephants

    5. Herds of bulls

    6. Herds of females

    7. Mixed herds

  • Image classification with TensorFlow using Inception trained on ImageNet 2012 dataset

    Neil K. Sheridan, 05/03/2017 at 18:45

    Today, I thought I'd try using TensorFlow with the Inception model already pre-trained using the ImageNet 2012 competition dataset! Just to see what kind of results I got for some of my elephants!

    I just used the classify_image.py code (licensed under http://www.apache.org/licenses/LICENSE-2.0) and the Inception v3 model. You can find the code here.
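
    For reference, running the stock script on one image is just (the image path here is a placeholder):

    $ python classify_image.py --image_file my_elephant.jpg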

    So here we go:

    Image 1: Indian Elephant 96.653, Tusker 0.014, African Elephant 0.002

    Image 2: African Elephant 0.453, Indian Elephant 0.325, Tusker 0.152

    Image 3: Indian Elephant 0.689, Tusker 0.160, African Elephant 0.125

  • #5 result for object detector using Histogram of Oriented Gradients (HOG) and Linear Support Vector Machines (SVM)

    Neil K. Sheridan, 05/01/2017 at 20:02

    So this is my largest-scale training run for the classical machine-vision object detector using HOG + SVMs.

    Method

    I used 3500 non-elephant images for the negative training set. This consisted entirely of animals and people (e.g. rhinos ×45, sloth bears ×95, Indian cow breeds ×41, tigers ×210, water buffalo ×210, ducks, people, etc.). I used 330 elephant images for the positive training set. I removed the elephant bums! The images were all cropped to 256 width × height with aspect ratio preserved. This may have caused problems, which I will examine later. I did hard-negative mining with 50 images.

    Results

    I've used n=50 for the false-positive testing image set. It's the same set as I used in test #4, with a selection of animals I thought likely to be present in the same environment as elephants. The false-positive result was 16%. That's much better than my previous 26%!!

    I've only done n=10 for the false-negative testing image set so far. But I'm at 0% so far! Really great, especially considering I think the sliding-window size was a bit off, and the hard-negative mining was lower than I'd planned.

    Update:

    Unfortunately it hasn't been as successful as I hoped! Once I increased the number of elephant images for the false-negative testing, I got an awful lot of false negatives!! It looks like this might be because the positive images were resized badly and the sliding window was changed to an inappropriate size!

    [Images: false-positive testing set with the false-positives marked with red crosses]

  • Retraining TensorFlow Inception v3 using TensorFlow-Slim (Part 2)

    Neil K. Sheridan, 04/29/2017 at 19:06

    In this experiment I will not be using flowers, but elephants! I'm going to use 5 classes of elephants: baby elephants, elephant groups no babies, elephant groups with babies, lone female elephants, lone male elephants. I'll just start with 100 images for each class. So that's 500 images in the dataset. I'll take 80% for training and 20% for validation.

    Protocol for experiment:

    1. Convert the dataset to TensorFlow's native TFRecord format, where each TFRecord contains a TF-Example protocol buffer. First we need to place the images in the following directory structure: "data_dir/label_0/image0.jpeg", "data_dir/label_1/image0.jpeg", etc. Then we can convert using a modified version of build_image_data.py (the original is written by Google and licensed under http://www.apache.org/licenses/LICENSE-2.0). This is the original code on GitHub.
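
    The conversion step would look something like this (flags as in Google's build_image_data.py; the paths and shard counts are placeholders, and the shard counts must be divisible by num_threads):

    $ python build_image_data.py \
        --train_directory=/tmp/elephants/train \
        --validation_directory=/tmp/elephants/validation \
        --output_directory=/tmp/elephants \
        --labels_file=/tmp/elephants/labels.txt \
        --train_shards=2 \
        --validation_shards=2 \
        --num_threads=2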

    2. So we should have several TFRecord files created now!

    3. Next we define a Slim Dataset. This stores pointers to the data files, as well as various other pieces of metadata: class labels, the train/test split, and how to parse the TFExample protos. Here is the TF-Slim dataset descriptor being used with the TF-Slim DatasetDataProvider code:

    import tensorflow as tf
    from datasets import elephants
    
    slim = tf.contrib.slim
    
    # Selects the 'validation' dataset.
    dataset = elephants.get_split('validation', DATA_DIR)
    
    # Creates a TF-Slim DataProvider which reads the dataset in the background
    # during both training and testing.
    provider = slim.dataset_data_provider.DatasetDataProvider(dataset)
    [image, label] = provider.get(['image', 'label'])
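
    The elephants dataset module imported above would be modeled on TF-Slim's flowers.py; a trimmed sketch is below (the file pattern and per-split sample counts are my assumptions, matching the 80/20 split of 500 images):

    # Sketch of datasets/elephants.py, modeled on TF-Slim's flowers.py.
    # File pattern and sample counts are assumptions for this project.
    import os
    import tensorflow as tf

    slim = tf.contrib.slim

    _FILE_PATTERN = 'elephants_%s_*.tfrecord'
    SPLITS_TO_SIZES = {'train': 400, 'validation': 100}
    _NUM_CLASSES = 5

    def get_split(split_name, dataset_dir):
        file_pattern = os.path.join(dataset_dir, _FILE_PATTERN % split_name)
        keys_to_features = {
            'image/encoded': tf.FixedLenFeature((), tf.string, default_value=''),
            'image/format': tf.FixedLenFeature((), tf.string, default_value='jpeg'),
            'image/class/label': tf.FixedLenFeature([], tf.int64),
        }
        items_to_handlers = {
            'image': slim.tfexample_decoder.Image(),
            'label': slim.tfexample_decoder.Tensor('image/class/label'),
        }
        decoder = slim.tfexample_decoder.TFExampleDecoder(
            keys_to_features, items_to_handlers)
        return slim.dataset.Dataset(
            data_sources=file_pattern,
            reader=tf.TFRecordReader,
            decoder=decoder,
            num_samples=SPLITS_TO_SIZES[split_name],
            items_to_descriptions={'image': 'An elephant image.',
                                   'label': 'One of the 5 elephant classes.'},
            num_classes=_NUM_CLASSES)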

    4. Downloading the Inception v3 checkpoint. Modify later for Inception v4 instead! (see https://github.com/tensorflow/models/tree/master/slim)

    $ CHECKPOINT_DIR=/tmp/checkpoints
    $ mkdir ${CHECKPOINT_DIR}
    $ wget http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz
    $ tar -xvf inception_v3_2016_08_28.tar.gz
    $ mv inception_v3.ckpt ${CHECKPOINT_DIR}
    $ rm inception_v3_2016_08_28.tar.gz
    5. Now we can retrain from the checkpoint we downloaded, using train_image_classifier.py. See https://github.com/tensorflow/models/blob/master/slim/train_image_classifier.py for the code.

    $ DATASET_DIR=/tmp/elephants
    $ TRAIN_DIR=/tmp/elephants-models/inception_v3
    $ CHECKPOINT_PATH=/tmp/my_checkpoints/inception_v3.ckpt
    $ python train_image_classifier.py \
        --train_dir=${TRAIN_DIR} \
        --dataset_dir=${DATASET_DIR} \
        --dataset_name=elephants \
        --dataset_split_name=train \
        --model_name=inception_v3 \
        --checkpoint_path=${CHECKPOINT_PATH} \
        --checkpoint_exclude_scopes=InceptionV3/Logits,InceptionV3/AuxLogits \
        --trainable_scopes=InceptionV3/Logits,InceptionV3/AuxLogits

    6. Next is to evaluate performance using eval_image_classifier.py. See https://github.com/tensorflow/models/blob/master/slim/eval_image_classifier.py for the code. (Note the dataset_name must be our elephants dataset here, not the imagenet one from the tutorial.)

    $ CHECKPOINT_FILE=${CHECKPOINT_DIR}/inception_v3.ckpt  # example
    $ python eval_image_classifier.py \
        --alsologtostderr \
        --checkpoint_path=${CHECKPOINT_FILE} \
        --dataset_dir=${DATASET_DIR} \
        --dataset_name=elephants \
        --dataset_split_name=validation \
        --model_name=inception_v3
    

    7. Next is to feed in a single image! See https://www.tensorflow.org/versions/master/tutorials/image_recognition, but I haven't got that far yet!

  • Testing the object detector using HOG + SVMs (video)

    Neil K. Sheridan, 04/28/2017 at 19:12

    As you can see, it takes around 1 minute to detect the elephant in this image, using an i5.
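
    Most of that minute is the exhaustive image-pyramid and sliding-window search that the HOG + SVM detector performs; a sketch of that loop is below (the scale factor, step size, and window size are illustrative):

    # Sketch of the image-pyramid + sliding-window loop that dominates
    # detection time. Scale, step, and window sizes are illustrative.
    import cv2

    def pyramid(image, scale=1.5, min_size=(96, 122)):
        # yield the image at progressively smaller scales
        yield image
        while True:
            w = int(image.shape[1] / scale)
            h = int(image.shape[0] / scale)
            if h < min_size[0] or w < min_size[1]:
                break
            image = cv2.resize(image, (w, h))
            yield image

    def sliding_window(image, step, window):
        # yield every (x, y, patch) position of the detection window
        for y in range(0, image.shape[0] - window[1] + 1, step):
            for x in range(0, image.shape[1] - window[0] + 1, step):
                yield (x, y, image[y:y + window[1], x:x + window[0]])

    # every window at every pyramid level gets HOG-described and scored
    # by the SVM, which is why CPU-only detection takes on the order of
    # a minute per image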

  • Summary of python code for Object Detector using Histogram of Oriented Gradients (HOG) and Linear Support Vector Machines (SVM)

    Neil K. Sheridan, 04/28/2017 at 18:29

    Dependencies:

    from __future__ import print_function
    from sklearn.feature_extraction.image import extract_patches_2d
    from sklearn.svm import SVC
    from imutils import paths
    from scipy import io
    import numpy as np
    import random
    import cv2
    import cPickle  # Python 2; use pickle on Python 3

    # Note: HOG, dataset, and conf below are the project's own helpers
    # (HOG descriptor wrapper, dataset I/O module, and JSON config dict),
    # which are not shown in this summary.
    

    1. Extracting features

    # init HOG descriptor (HOG is the project's helper wrapper)
    hog = HOG(orientations=conf["orientations"], pixelsPerCell=tuple(conf["pixels_per_cell"]),
        cellsPerBlock=tuple(conf["cells_per_block"]), normalize=conf["normalize"])
    data = []
    labels = []

    # TRAINING IMAGES
    # collect the paths to the training (positive elephant) images
    pos_paths = list(paths.list_images(conf["positive_images"]))
    print("1/3: Processing training images")

    for (i, pos_path) in enumerate(pos_paths):
        # load training image and convert to grayscale
        image = cv2.imread(pos_path)
        train_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        # resize to the detection window size
        train_image = cv2.resize(train_image, (122, 96), interpolation=cv2.INTER_AREA)
        # optionally augment with a horizontal flip
        train_image_list = (train_image, cv2.flip(train_image, 1)) if conf["use_flip"] else (train_image,)

        for train_image in train_image_list:
            # extract features from train_image and add to the lists
            features = hog.describe(train_image)
            data.append(features)
            labels.append(1)

    # NEGATIVE IMAGES
    neg_paths = list(paths.list_images(conf["negative_images"]))
    print("2/3: Processing negative images")

    for i in np.arange(0, conf["num_negative_images"]):
        # randomly select a negative image
        image = cv2.imread(random.choice(neg_paths))
        if image is None:
            # cv2.cvtColor raises error (-215) scn == 3 || scn == 4
            # if you pass it a bad input image, so skip unreadable files
            continue
        image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        # extract random patches as negative (distraction) windows
        patches = extract_patches_2d(image, tuple(conf["window_dim"]),
            max_patches=conf["num_distractions_per_image"])

        for patch in patches:
            # extract features from each patch and update the lists
            features = hog.describe(patch)
            data.append(features)
            labels.append(-1)

    # SAVING DATA FILE
    print("3/3: Saving file")
    dataset.dump_dataset(data, labels, conf["features_path"], "features")
    

    2. Training classifier

    print("1/2: training classifier...")
    model = SVC(kernel="linear", C=conf["C"], probability=True, random_state=22)
    model.fit(data, labels)
    
    # save classifier to cpickle file
    print("2/2: saving classifier to cpickle file")
    f = open(conf["classifier_path"], "w")
    f.write(cPickle.dumps(model))
    f.close
    

    See http://scikit-learn.org/stable/tutorial/basic/tutorial.html for information on using scikit-learn.
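
    To use the saved classifier later, a minimal sketch (hog.describe is the project's helper from the feature-extraction step above, and window_gray stands for a grayscale candidate window from the sliding-window search):

    # Sketch: reload the pickled SVM and score one candidate window.
    import cPickle

    with open(conf["classifier_path"], "rb") as f:
        model = cPickle.loads(f.read())

    features = hog.describe(window_gray)
    # classes_ are sorted (-1, 1), so column 1 is P(elephant)
    prob_elephant = model.predict_proba([features])[0][1]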

View all 21 project logs


Discussions

Thomas wrote 04/25/2017 at 19:17:

Hi Neil, I think this here might be of interest: https://hackaday.io/project/561-summerize-total-motion-in-a-video


Neil K. Sheridan wrote 04/25/2017 at 19:38:

Hi, Thanks! That does look interesting! Will go thru it!


Thomas wrote 04/25/2017 at 19:56:

When I think of elephants, the first thing that comes into my mind is how they move. The idea of using optical flow for creating a "movement spectrogram" is intriguing. The first couple of lines in the Wikipedia article on optical flow point to interesting approaches:
https://en.wikipedia.org/wiki/Optical_flow


Neil K. Sheridan wrote 03/26/2017 at 20:21:

Yes! I'm going to post it later this week! I'm just taking out the bits that aren't relevant so it is easy to follow!


jessica18 wrote 03/26/2017 at 17:23:

Can you post the code?

