
Elephant AI

A system to prevent human-elephant conflict by detecting elephants using machine vision, and warning humans and/or repelling elephants.

The conflict that arises between humans and elephants in countries such as India, Sri Lanka, and Kenya claims many hundreds of human and elephant lives per year. These negative interactions arise when humans meet elephants on their trails, when elephants raid fields for food, and when elephants try to cross railways. Machine vision and automated deterrence can mitigate such conflict.

ELEPHANT AI SYSTEM


INTRODUCTION AND GOALS

This is an evolution of my 'Automated Elephant-detection system' that was a semi-finalist in the Hackaday Prize 2016. The current project differs substantially: it makes use of more advanced machine vision techniques, replaces the RF communication and village base stations with 4G/3G/EDGE/GPRS on each elephant-detection device, and adds elephant-deterrence devices so as to eliminate interaction between humans and elephants whenever possible.

* Thanks to  www.nerdycute.com for drawing our logo!

So, let's get to the primary goals of Elephant AI:

  • Eliminate contact between humans and elephants
  • Protect elephants from injury and death
  • Protect humans from injury and death

How will the Elephant AI accomplish these goals?

  • Detect elephants as they move along their regular paths. These paths have been used by elephants for many years (perhaps centuries) and often cut through areas now used by humans. Humans will be warned that elephants are moving on the paths so they can stay away or move with caution.
  • Detect elephants as they leave forested areas to raid human crop fields. At this point, elephant deterrence devices will attempt to automatically scare elephants. This will be using sounds of animals they dislike (e.g. bees and tigers, and human voices in the case of Maasai people in Kenya/Tanzania), and perhaps by firing chili balls into the paths of the elephants from compressed air guns.
  • Detect elephants before they stray onto railway lines. This can be done via a combination of machine vision techniques and more low-tech IR (or laser) break-beam sensors. Train drivers can be alerted to slow-down and stop before hitting the elephants who are crossing.

Just how bad is it for humans and elephants to interact? This video, shot several months ago in India, gives some idea. It is really bad indeed: interaction causes great stress to elephants, and puts both the elephants and humans at risk of injury or death.

That's why Elephant AI wants to take human-elephant interaction out of the equation entirely!

HARDWARE SETUP

We need a daylight camera (IR-filtered) and a night camera (NoIR, i.e. no IR filter, plus an IR illumination array) since elephants need to be detected 24hrs per day! In my original project I completely forgot about this, then decided to multiplex two cameras to one Raspberry Pi. It was actually cheaper and easier to use two Raspberry Pis, each with its own camera. Night-time and daytime classification of elephant images each need their own trained object detector anyway, so I don't think it's such a bad solution (for now).

METHODS FOR IMAGE CLASSIFICATION

This is the main part of the project. In my original automated elephant-detection project I'd envisaged just comparing histograms!! Or, failing that, I'd try feature-matching with FLANN. Both of these proved to be completely rubbish at detecting elephants! I tried Haar cascades too, but these gave lots of false positives and literally took several weeks to train!

Initially with ElephantAI I worked with an object detector using Histogram of Oriented Gradients (HOG) and Linear Support Vector Machines (SVM). That had promising results: only 26% false positives with a dataset consisting of 350 positive elephant images and 2000 negative non-elephant images (see https://hackaday.io/project/20448-elephant-ai/log/57399-4-result-for-object-detector-using-histogram-of-oriented-gradients-hog-and-linear-support-vector-machines-svm), and I expected improved results with larger datasets. And I got them: a result of 16% false negatives with 330 positive elephant images and 3500 negative non-elephant images (see result #5).
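
For anyone wanting to experiment with this approach, here is a minimal sketch of the HOG + linear SVM idea. It is only an illustration of the pipeline shape, not the project's actual detector: a tiny hand-rolled orientation histogram stands in for a real HOG descriptor (a real detector would use cv2.HOGDescriptor or skimage.feature.hog), and the training images are synthetic.

```python
import numpy as np
from sklearn.svm import LinearSVC

def tiny_hog(img, bins=9):
    """Global histogram of gradient orientations (a toy stand-in for HOG)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)                      # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), np.pi)     # unsigned orientation, 0..pi
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)           # normalise the histogram

rng = np.random.default_rng(0)
# 'positives' have strong oriented structure; 'negatives' are pure noise
pos = [np.tile(np.sin(np.linspace(0, 8, 64)), (64, 1))
       + 0.1 * rng.standard_normal((64, 64)) for _ in range(40)]
neg = [rng.standard_normal((64, 64)) for _ in range(40)]
X = np.array([tiny_hog(im) for im in pos + neg])
y = np.array([1] * 40 + [0] * 40)

# train a linear SVM on the descriptors, as in the real HOG+SVM detector
clf = LinearSVC(C=1.0, max_iter=5000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

To turn this into a real detector you would swap in genuine HOG descriptors computed over your positive/negative image sets, and slide a detection window over each frame.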

At present, I am working on differentiating between types of elephants using deep convolutional neural networks for image classification, versus the classical machine-vision techniques I had previously employed. This is important because different types, or classes, of elephants will...


  • 2 × Raspberry Pi 3 Model B [detection device dayPi and nightPi] £32
  • 1 × Raspberry Pi Camera Module v2 (8MP) IR-filtered Standard [detection device] daytime usage dayPi [£29]
  • 1 × Huawei E3531 2G/3G USB dongle £21
  • 1 × Case for elephant detection device For prototype we used: IP65 320 x 250 x 135mm Moulded Grey Sealed Adaptable Box, Weatherproof Enclosure with Lid (£25.00)
  • 1 × Case for elephant deter device For prototype we used: IP65 220mm x 170mm x 80mm Moulded Grey Sealed Adaptable Box, Weatherproof Enclosure Lid (£11.28)

View all 31 components

  • Guide to installing TensorFlow on Raspberry Pi

Neil K. Sheridan • 10 hours ago • 0 comments

[under construction]

Installing TensorFlow on the Raspberry Pi causes a lot of problems, as you'll see if you search for related issues on the web! It certainly caused me a lot of problems! So I'm writing this guide to illustrate the approaches, the things that can go wrong, and how to fix them.

Please check that you have Raspbian "Jessie" as the OS on the Pi first! And that your SD card isn't too small or nearly full.

    I'm going to add screen-recordings of the installations to help!

    USING MAKEFILE TO BUILD TENSORFLOW

    This is from the install guide here: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/makefile

    This is an alternative to using bazel to build TensorFlow. You compile the TensorFlow and Protobuf libraries. Here's what protobuf is if you hadn't heard of it: https://github.com/google/protobuf

1. Clone the TensorFlow repository to the Pi: git clone https://github.com/tensorflow/tensorflow.git

2. Run the download_dependencies.sh script that the TensorFlow team have written. You can see it at https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/makefile/download_dependencies.sh and you'll have it on your Pi at tensorflow/contrib/makefile/download_dependencies.sh. This downloads all the required dependencies. This was one of the ways I got in a mess before: I tried to get them all individually, and got in a muddle.

3. Download the example graph for testing. Here it is: https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip

4. Now install the packages we need for the build:

    sudo apt-get install -y autoconf automake libtool gcc-4.8 g++-4.8

    So the packages are:

    autoconf  https://www.gnu.org/software/autoconf/autoconf.html for auto configuration

    automake https://www.gnu.org/software/automake/ to auto generate makefile.in

    libtool https://www.gnu.org/software/libtool/ support for working with shared libraries

gcc-4.8 the GCC 4.8 compiler https://gcc.gnu.org/onlinedocs/4.8.1/

    ** note that gcc 4.8 is used instead of the gcc 4.9 installed with the Pi OS, because 4.9 is known to encounter an error involving __atomic_compare_exchange.

    g++-4.8 the G++ 4.8 compiler (used instead of 4.9 for the same reason)

    ** If you partially 

    5. Building protobuf:

    cd tensorflow/contrib/makefile/downloads/protobuf/
    ./autogen.sh
    ./configure
    make
    sudo make install
    sudo ldconfig  # refresh shared library cache
    cd ../../../../..
    export HOST_NSYNC_LIB=`tensorflow/contrib/makefile/compile_nsync.sh`
    export TARGET_NSYNC_LIB="$HOST_NSYNC_LIB"

    6. Now compile the TensorFlow library using Makefile:

    make -f tensorflow/contrib/makefile/Makefile HOST_OS=PI TARGET=PI \
     OPTFLAGS="-Os -mfpu=neon-vfpv4 -funsafe-math-optimizations -ftree-vectorize" CXX=g++-4.8

You can see what the Makefile does here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/makefile/Makefile

  • #buildinstructions: light-sensor

Neil K. Sheridan • a day ago • 0 comments

    Let's get started with adding a light-sensor, so we can tell if it is daytime or night-time! We need a light-sensor with a digital output. Or we can build a circuit using a photo-resistor and a transistor instead!

    PARTS REQUIRED

digital light sensor. I used the "kwmobile light sensor with digital output, photodetector, brightness sensor, light sensor for Arduino, Genuino and Raspberry Pi", which was £5.40

jumper wires (you can test with female-to-female, but you'll need female-to-male when we share the light sensor between the dayPi and nightPi)

1. Connect the jumper wires to the light-sensor:

    2. Now we connect the jumper wires to the raspberry pi:

    Connect digital output (DO/yellow) to GPIO PIN 7 [board numbering system]

    Connect GND to GND

    Connect 5v input to a 5v output PIN on Pi

    3. Now that's all done! We can go ahead and test it using the following code:

    import RPi.GPIO as GPIO
    import time

    GPIO.setmode(GPIO.BOARD)
    GPIO.setup(7, GPIO.IN)
    # poll the sensor every 3 seconds; note GPIO.cleanup() is only called
    # on exit - calling it inside the loop would reset the pin setup
    try:
        while True:
            val = GPIO.input(7)
            if val:
                print("IT IS DARK CONDITION")
            else:
                print("IT IS LIGHT CONDITION")
            time.sleep(3)
    except KeyboardInterrupt:
        GPIO.cleanup()

Here it is in action! You can see it has its own LED that turns on/off depending on the light condition.

    4. Ok, we are all set for using the light-sensor now! I'll add the homemade circuit later!

  • #buildinstructions: allowing dayPi and nightPi to share PIR and light-sensor

Neil K. Sheridan • a day ago • 0 comments

    Here we show how to allow the dayPi and nightPi to share the PIR and light-sensor!

    PARTS REQUIRED:

    • half-length breadboard
    • PIR
    • light-sensor with digital output
    • numerous jumper cables!
    • dayPi
    • nightPi
    • * I used the Adafruit cobbler for testing

    Let's get started!

    1. Let's do the light-sensor first. This has a 5v input, a GND, and digital output (giving HIGH or LOW depending on lighting conditions). We need to connect all of these to independent terminal strips on the breadboard. So for example:

    5v goes to terminal strip 1. So we connect a jumper wire from this terminal strip to the 5v output on nightPi

    GND goes to terminal strip 2. So we connect two other jumper wires to this terminal strip. One will go to GND on nightPi and one will go to GND on dayPi

    Digital output goes to terminal strip 3. So we connect two other jumper wires to this terminal strip. One will go to GPIO 11 on nightPi, and one will go to GPIO 11 on dayPi. [note this is BOARD numbering for the GPIOs]

    Let's see what this looks like with a photo! This includes the wiring for the PIR, which is the same kind of thing but to GPIO 7 on each Pi for the digital output!

2. Now let's do it for the PIR. In the photo you can see the light-sensor is wired to the terminal strips on the right of the divider, and the PIR is wired to terminal strips on the left of the divider (the middle groove of the breadboard).

    5v goes to terminal strip 1. So we connect a jumper wire from this terminal strip to the 5v output on dayPi

    GND goes to terminal strip 2. So we connect two other jumper wires to this terminal strip. One will go to GND on nightPi and one will go to GND on dayPi

    Digital output goes to terminal strip 3. So we connect two other jumper wires to this terminal strip. One will go to GPIO 7 on nightPi, and one will go to GPIO 7 on dayPi. [note this is BOARD numbering for the GPIOs]

    If you zoom into the photo, you should be able to follow the paths of the wires!

    Here's a close-up of the breadboard. You can see how 5v input to the light-sensor and PIR is sent to the first terminal strip, and then it meets the 5v output from the Pi there. Thus we supply power to the light-sensor and PIR. And GND from the light-sensor and PIR goes to the next terminal strip, and from there is sent to GND on both dayPi and nightPi. And digital outputs from light-sensor and PIR go to the next terminal strip down, and they meet wires which take them to the GPIO PINs on dayPi and nightPi.

    And in the below photos you can see the wires connecting with the dayPi and nightPi:

    3. Great! We are all ready to share the PIR and light-sensor now!

  • Testing of ElephantAI

Neil K. Sheridan • 2 days ago • 0 comments

So, at this stage we have built, tested, and set up all the computational components of the system, including their associated circuits, and we have added our final code to each of them!

    Remember we have three primary computational components:

    -- Elephant Detection Device

    which is comprised of:

    dayPi (Raspberry Pi)

    nightPi (Raspberry Pi)

    -- Elephant Deter Device

    which is comprised of:

    PiZero

    AmpZero

    Now let's get started on testing how the system interacts!

For testing, I suggest having a monitor and keyboard/mouse for each of the three primary computational components. And of course, we want all of our associated circuits set up! You don't need the solar recharging circuit; you can just test the IR illumination devices by connecting a 12v battery to the optically isolated switching circuit.

    Let's see how we get on!

  • #buildinstructions for hardware: an optically isolated circuit for IR illuminator

Neil K. Sheridan • 5 days ago • 0 comments

    Here we are going to make a circuit to isolate the 12v IR illumination devices from the 5v raspberry pi! We need to switch them on using GPIO out from the pi, but we don't want to expose our raspberry pi to any risk of it getting 12 volts!

    Now, this circuit will be quite tricky if you are not used to working with its two main electronic components: a transistor and an optocoupler/optoisolator!

    Let's first investigate what these are:

    WHAT IS A TRANSISTOR AND HOW DOES IT WORK?

    Now, in our circuit we want to turn on the IR illumination device. We could do this with a push-button switch! But computers can't push switches! Well, we could make a robotic hand to push the switch I suppose, but that would be a bit much! So we use transistors in order to enable computers to 'push' switches!

See, it does look a bit switch-like, doesn't it? There are different types of transistors, but we are using a bipolar junction transistor (BJT). There are two different structures for these bipolar junction transistors: NPN and PNP. The structure refers to the types of semiconducting material the transistor uses. You can read more about these here: https://en.wikipedia.org/wiki/Bipolar_junction_transistor#NPN

    Anyway, all we need to be concerned about is that we can switch the transistor on or off by altering the voltage applied to the base. If we apply a voltage to the base, then current will flow between the collector and the emitter - so switching our circuit on! If we don't apply a voltage to the base, then no significant current will flow from collector to emitter - and our circuit will be switched off!

    So, as you can see below, there are four pins on the (BJT) transistor I'm using: 

    1. BASE

    2. COLLECTOR

    3. EMITTER

    4. COLLECTOR (the top one)

    * the datasheet should explain which pin is number 1!

    Now we'll build a test circuit using batteries. Here's a simple test circuit we can setup using a breadboard, one resistor, an LED, a 9v battery, a smaller battery, and our BJT (NPN):  

    So, you should have a good idea of how a BJT functions now! Below you can see my circuit in action. Note there are two batteries: one for the LED circuit, and one to send voltage to the base pin on the transistor. In this video, there's also a resistor between the battery (which is 6v rather than 0.7v) and the transistor base pin:
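
As a worked example of sizing that base resistor, here is the usual back-of-envelope calculation. The supply voltage matches the 6v battery mentioned above; the base-emitter drop and target base current are typical assumptions, not measurements from my circuit:

```python
# Series base resistor: R = (V_supply - V_BE) / I_base
v_supply = 6.0   # battery driving the base, as in the video above
v_be = 0.7       # typical silicon BJT base-emitter drop (assumption)
i_base = 0.001   # 1 mA target base current (assumption)

r_base = (v_supply - v_be) / i_base
print(round(r_base), "ohms")  # so a standard 4.7k or 5.6k part is close
```

The exact value isn't critical here; the resistor just limits base current so the battery doesn't overdrive the junction.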

    WHAT IS AN OPTOISOLATOR AND HOW DOES IT WORK?

An optocoupler, or optoisolator, is a component that allows two circuits to communicate with each other using light instead of electrical signals. So the circuits are completely isolated electrically, communicating only via light. In the same way, our synapses electrically isolate our neurons from each other, communicating only with chemicals. The optocoupler uses semiconducting materials and LEDs to perform this function.

    As you can see in the above schematic - it is basically an LED and a transistor. The LED on the left side is acting on the base of the transistor (on the right side).

    So in an optoisolator we have pin arrangement of:

    1. Anode for the LED (+ve)

    2. Cathode for the LED (-ve)

    3. Emitter for transistor

    4. Collector for transistor 

    Well, that's if it is 4 pin. Many are 6 pin, so we might have:

    1. Anode for the LED (+ve)

    2. Cathode for the LED (-ve)

    4. Emitter for transistor

5. Collector for transistor

    (on the 4N25 family, pin 3 is not connected and pin 6 is the transistor base)

    I have a 6 pin. The 4N25X. Datasheet: http://www.farnell.com/datasheets/90919.pdf

    If you look at the datasheet you can also find out which side is for the LED, and so which is pin 1! In my case it had a notch on the LED side (see image below). It's very important to check your datasheet - it's not a good idea to just muddle through by connecting things and seeing if it works!

    Ok, so we got the idea of an optoisolator!  In our scenario, we use it because we don't want the 12v IR illumination device in electrical contact with the raspberry pi (5v)! Of course, you can isolate even DC and AC circuits with these too!
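
For the LED side of the optocoupler, a similar back-of-envelope calculation picks the series resistor when driving it from a 3.3v GPIO pin. The forward-voltage and current figures below are typical assumptions; check the 4N25X datasheet for the real limits:

```python
# Series resistor for the optocoupler's internal LED:
# R = (V_gpio - V_forward) / I_led
v_gpio = 3.3   # Raspberry Pi GPIO high level
v_led = 1.2    # typical forward drop of the internal IR LED (assumption)
i_led = 0.01   # 10 mA drive current (assumption; stay within GPIO limits)

r = (v_gpio - v_led) / i_led
print(round(r), "ohms")  # a standard 220 ohm resistor is the nearest value up
```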

    Right, so let's set up two circuits and optically...


  • Elephant Detection Devices: software overview

Neil K. Sheridan • 6 days ago • 0 comments

    Let's do the example for dayPi

    * main while loop is for the safe shutdown button, but I'm not sure it is worth the risk of using this.

    So our while loops are going to be:

    1. while light_sensor == True

The dayPi will operate whilst it is daylight according to the light sensor. So we use our check_light_sensor function to get back HIGH or LOW from the light sensor. If this while loop breaks, we run our client code to become a client and send a message to the nightPi that it is time to start. Then we switch to server and wait for messages back. The messages either concern the light_sensor, or are detection messages which we must pass to our serial modem for sending out as SMS. Note that we continue to act as server until the light_sensor is True.

    2. while PIR == HIGH

    If the PIR sensor is HIGH, i.e. something hot is in range of the PIR, we need to perform our image capture, and then send the image to the elephant detector software

    - take image

    - send image to detector

     if image == elephant:

     - are we doing a deter? If yes, then we start the following while loop:

                   - run our client code to use Bluetooth to send message to deter device. Once we get echo back from deter device that message is sent, we run our server code and wait for deter device to message us when deter performed

                    -  if that failed we try again for n attempts 

                    - if we get deter performed message back we break out of while loop

     - take data from detector and convert it to string containing top 5 elephant classes

     - compose our message containing all variables required (e.g. device location, time, deter done, top 5 elephant classes, etc.)

    - save image of suspected elephant 

     - we now run our serial modem code to send SMS containing our message to phone numbers on list

    - wait 60 seconds

    if image == not elephant:

    - wait 60 seconds
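
    The control flow above can be sketched as a Python skeleton, with the hardware and network calls stubbed out as plain functions. Every name here is illustrative, not the project's final code:

```python
# Skeleton of the dayPi main loop described above. Hardware/network calls are
# injected as plain functions so the structure can be followed and tested.
def day_loop(light_is_day, pir_high, take_image, classify, do_deter,
             send_sms, wait=lambda s: None):
    alerts = 0
    while light_is_day():                 # 1. run while the light sensor says DAY
        if pir_high():                    # 2. something warm is in PIR range
            image = take_image()
            top5 = classify(image)        # top-5 classes from the detector
            if any("elephant" in label for label in top5):
                deter_done = do_deter()   # message deter device (retries inside)
                send_sms({"top5": top5, "deter_done": deter_done})
                alerts += 1
        wait(60)                          # wait 60 seconds either way
    return alerts

# One simulated run: daylight for two cycles, PIR fires once, elephant seen.
light = iter([True, True, False])
pir = iter([True, False])
sent = []
n = day_loop(lambda: next(light), lambda: next(pir),
             lambda: "img", lambda img: ["indian elephant", "tusker"],
             lambda: 1, sent.append)
print(n, sent)
```

The real loop would also handle the client/server hand-off to the nightPi when daylight ends, as described in the text.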

    Let's do the example for nightPi

    * main while loop is for the safe shutdown button, but I'm not sure it is worth the risk of using this.

    So our while loops are going to be:

    1. while light_sensor == False

    The nightPi will operate whilst it is not daylight according to the light sensor. So we use our def of check_light_sensor to get back HIGH or LOW from the light sensor. If this while loop breaks, we run our client code to become a client and send a message to the dayPi that it is time to start. Then we switch to server and wait for a message back.

    2. while PIR == HIGH

    If the PIR sensor is HIGH, i.e. something hot is in range of the PIR, we need to switch on our IR illumination device, perform our image capture, and then send the image to the elephant detector software

    - switch on IR illumination device

    - take image

    - send image to detector

     if image == elephant:

     - are we doing a deter? If yes, then we start the following while loop:

                   - run our client code to use Bluetooth to send message to deter device. Once we get echo back from deter device that message is sent, we run our server code and wait for deter device to message us when deter performed

                    -  if that failed we try again for n attempts 

                    - if we get deter performed message back we break out of while loop

     - take data from detector and convert it to string containing top 5 elephant classes

     - compose our message containing all variables required (e.g. device location, time, deter done, top 5 elephant classes, etc.)

    - save image of suspected elephant 

     - we now have to switch to client code, so we can send this message to the dayPi. Note that the dayPi has the serial modem, we don't. And the dayPi is now acting as a server. So we switch to client code and send it the detection message via...
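
    The switch-to-client hand-off described above, with the nightPi sending the detection message to the dayPi which is acting as server, can be illustrated with an ordinary TCP socket pair. This sketch runs both ends on localhost; the port and message format are assumptions for illustration:

```python
import socket
import threading
import time

received = []

def daypi_server(port):
    # dayPi acts as server: receive the detection message and echo an ack
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    received.append(conn.recv(1024).decode("UTF-8"))
    conn.send(b"ack")        # ack so the client knows we got the data
    conn.close()
    srv.close()

def nightpi_client(port, message):
    # nightPi switches to client and sends the detection message
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", port))
    cli.send(message.encode("UTF-8"))
    ack = cli.recv(1024)
    cli.close()
    return ack

t = threading.Thread(target=daypi_server, args=(5005,))
t.start()
time.sleep(0.2)              # give the server a moment to start listening
ack = nightpi_client(5005, "DETECTION deter_done=1")
t.join()
print(received[0], ack)
```

On the real devices the same pattern runs over the Ethernet link between the two Pis, and the dayPi then forwards the message out via its serial modem.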


  • Field testing of ElephantAI: with elephants

Neil K. Sheridan • 6 days ago • 0 comments

In this test, we tested the ElephantAI system with elephants. This used TensorFlow & Keras (off-the-shelf) and TensorFlow using Inception trained on the ImageNet 2012 dataset (off-the-shelf). More importantly, it included the new model we trained, adding our new classes of elephant via transfer learning on an off-the-shelf model (InceptionV3) using TensorFlow.

    [under construction]

  • Field testing of ElephantAI: with horses

Neil K. Sheridan • 6 days ago • 0 comments

In this test, we used the ElephantAI system with the target animal changed to horses. This used TensorFlow & Keras (off-the-shelf) and TensorFlow using Inception trained on the ImageNet 2012 dataset (off-the-shelf).

    So in this test, horses stand in for elephants! 

    What we are testing:

    1. Do detection devices detect target animal from images acquired when PIR triggered?

    2. Do detection devices switch between dayPi and nightPi depending on lighting?

    3. Does the IR illumination component of nightPi illuminate target animal at sufficient range?

4. Do the detection devices alert users on the SMS list when the target animal is detected? Is this ok during night and day? Do the dayPi and nightPi communicate via Ethernet?

    5. Do nightPi and dayPi communicate via Bluetooth with the deter device? Does the deter device play scare_sounds when the target animal is detected? Well it plays a beep so as not to scare the horses!

  • Testing of IR illumination devices and NoIR camera

Neil K. Sheridan • 6 days ago • 0 comments

    Here we look at some of the different IR illumination devices you can use. And test them with the NoIR camera to see what we get.

    [should we add the optically isolated circuit for switching IR illumination devices here too?]

  • #buildinstructions for hardware and software: nightPi

Neil K. Sheridan • 6 days ago • 0 comments

These are the build instructions for the nightPi. This is the component of the elephant detection device that will photograph elephants during the night-time. It has IR illumination, which is powered from the 12v battery but switched from the Pi. This component does not have mobile connectivity itself; it communicates over Ethernet with the dayPi, which does have mobile connectivity.

    [remove Ethernet for light condition, sort out functions , add the new function for classify image using tensorflow]

Code overview (rough, do not run it)

    ## nightpi code v1.5 
    ## Here are our functions
    ######## Light sensor function
    def CheckLightCondition():
        GPIO.setmode(GPIO.BOARD)
        GPIO.setup(11, GPIO.IN)
        light_sensor = GPIO.input(11)
        if (light_sensor == True):
            light_condition = "NIGHT"
        else:
            light_condition = "DAY"
        GPIO.cleanup()
        return light_condition
         
    ##############################################
    ######## Check PIR function
    def CheckPIR():
        # dependencies are RPi.GPIO and time
        # returns whats_here with "something_here" or "nothing_here"
        time.sleep(1)
        #don't rush the PIR!
        GPIO.setmode(GPIO.BOARD)
        # set numbering system for GPIO PINs are BOARD
        GPIO.setup(7, GPIO.IN)
        # set up number 7 PIN for input from the PIR
        # need to adjust if you connected PIR to another GPIO PIN
        try:
            val = GPIO.input(7)
            if (val == True):
                whats_here = "something_here"
                #PIR returned HIGH to GPIO PIN, so something here!
            if (val == False):
                whats_here = "nothing_here"
                #PIR returned LOW to GPIO PIN, so nothing here!
                #GPIO.cleanup()
            ERROR2 = "no error"
        except:
            ERROR2 = "PIR error"
            #something went wrong, return error code 2 with info for debugging
            GPIO.cleanup()
        return whats_here
       
    ######################################################
        
    ######################################################
    ###### 3. Is this an elephant?
    #We send our images obtained by CaptureImage() to the elephant detector
    #software, we get back our top 5 results. So the images we are going to
    #send will be '/home/pi/suspects/suspected_elephant1.jpg to suspected_elephant10.jpg
    #we can move the confirmed elephant photos to a new directory and delete those
    #in the suspects directory afterwards
    #def IsThisElephant():
        
        #with the os method to run the label_image.py code from TensorFlow team
        #(note: os.system only returns an exit status; to capture the printed
        #labels as a string you'd use subprocess.check_output instead)
        #top5_things = os.system(python label_image.py --graph=retrained_graph.pb
         # --labels=retrained_labels.txt
          #--image=/home/pi/suspects/suspected_elephant1.jpg)
    #now what is in the string top5_things?
    #we need to search through the string for our elephant types and the
    #probabilites
    # global elephant_type
    # so the elephant type is a global variable so we can access it outside
    # of this function
    # returns is_it_elephant with "YES" or "NO"
    # archive the confirmed elephants
    # delete the suspected elephant photos
    #######################################################################
    ###### 4. Deter elephants!
    ### This is going to run if IsThisElephant returned YES and we have got yes_audio
    ### hard-coded as yes. Remember this is PYTHON 3.x.x CODE!
    def DeterElephants():
        #we setup as client, to message the deter device telling it to perform deter
        message = "yes_audio"
        serverMACAddress = '43:43:A1:12:1F:AC'
        port = 9
        s = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM, socket.BTPROTO_RFCOMM)
        s.connect((serverMACAddress, port))
        s.send(bytes(message, 'UTF-8'))
        s.close() 
        # now we sent the message, so we set up as server and wait for message back
        hostMACAddress = ''  # empty string binds to our own Bluetooth adapter
        backlog = 1
        size = 1024
        data = b''
        s = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
        s.bind((hostMACAddress, port))
        s.listen(backlog)
        try:
            client, clientInfo = s.accept()
            while 1:
                data = client.recv(size)
                if data:
                    print(data)
                    client.send(data)
                    # echo back to client so it knows we got the data
                    break
            client.close()
            s.close()
        except:
            print("Closing socket")
            s.close()
        # so we got a message back. If it was "done" we know the deter was done
        # and we can set deter_done as 1 (recv gives bytes, so decode first)
        if data.decode('UTF-8') == "done":
            deter_done = 1
        else:
            deter_done = 0
        return deter_done
    ...
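
    Since IsThisElephant() is still pseudocode above, here is a hedged sketch of the parsing step it describes: turning label_image-style output text into a top-5 list and a YES/NO answer. The sample text imitates label_image's "label (score=...)" lines; the real output format may differ slightly, and the scores shown are made up:

```python
import re

# stand-in for the text captured from label_image.py (scores are invented)
sample = """african elephant (score=0.81234)
tusker (score=0.10233)
indian elephant (score=0.05844)
warthog (score=0.01022)
hippopotamus (score=0.00311)"""

def parse_top5(text):
    """Parse 'label (score=0.123)' lines into (label, probability) pairs."""
    results = []
    for line in text.splitlines():
        m = re.match(r"(.+?)\s+\(score=([0-9.]+)\)", line.strip())
        if m:
            results.append((m.group(1), float(m.group(2))))
    return results[:5]

top5 = parse_top5(sample)
is_it_elephant = "YES" if "elephant" in top5[0][0] else "NO"
print(top5[0], is_it_elephant)
```

The real function would feed this result into the deter/SMS logic, archive the confirmed elephant photos, and delete the suspects, as the comments above describe.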

View all 53 project logs

View all 8 instructions


Discussions

Thomas wrote 04/25/2017 at 19:17:

Hi Neil, I think this here might be of interest: https://hackaday.io/project/561-summerize-total-motion-in-a-video


Neil K. Sheridan wrote 04/25/2017 at 19:38:

Hi, Thanks! That does look interesting! Will go thru it!


Thomas wrote 04/25/2017 at 19:56:

When I think of elephants, the first thing that comes into my mind is how they move. The idea of using optical flow for creating a "movement spectrogram" is intriguing. The first couple of lines in the Wikipedia article on optical flow point to interesting approaches:
https://en.wikipedia.org/wiki/Optical_flow


Neil K. Sheridan wrote 03/26/2017 at 20:21:

yes! I'm going to post it later this week! I'm just taking out the bits that aren't relevant so it is easy to follow! 


jessica18 wrote 03/26/2017 at 17:23:

Can you post the code?

