
Elephant AI

a system to prevent human-elephant conflict by detecting elephants using machine vision, and warning humans and/or repelling elephants

The conflict that arises between humans and elephants in countries such as India, Sri Lanka, and Kenya, claims many hundreds of human and elephant lives per year. These negative interactions arise when humans meet elephants on their trails, when elephants raid fields for food, and when elephants try to cross railways. Machine vision and automated deterrence can mitigate such conflict.

ELEPHANT AI SYSTEM


INTRODUCTION AND GOALS

This is an evolution of my 'Automated Elephant-detection system', which was a semi-finalist in the Hackaday Prize 2016. The current project differs substantially: it makes use of more advanced machine vision techniques, eliminates the RF communication and village base stations in favour of 4G/3G/EDGE/GPRS on each elephant-detection device, and includes elephant-deterrence devices to completely eliminate interaction between humans and elephants whenever possible.

* Thanks to  www.nerdycute.com for drawing our logo!

So, let's get to the primary goals of Elephant AI:

  • Eliminate contact between humans and elephants
  • Protect elephants from injury and death
  • Protect humans from injury and death

How will the Elephant AI accomplish these goals?

  • Detect elephants as they move along their regular paths. These paths have been used by elephants for many years (perhaps centuries) and often cut through areas now used by humans. Humans will be warned that elephants are moving on the paths so they can stay away or move with caution.
  • Detect elephants as they leave forested areas to raid human crop fields. At this point, elephant deterrence devices will attempt to automatically scare elephants. This will be using sounds of animals they dislike (e.g. bees and tigers, and human voices in the case of Maasai people in Kenya/Tanzania), and perhaps by firing chili balls into the paths of the elephants from compressed air guns.
  • Detect elephants before they stray onto railway lines. This can be done via a combination of machine vision techniques and more low-tech IR (or laser) break-beam sensors. Train drivers can be alerted to slow-down and stop before hitting the elephants who are crossing.

Just how bad is it for humans and elephants to interact? This video, shot several months ago in India, gives some idea. It is really bad indeed. It causes great stress to elephants, and puts both the elephants and humans at risk of injury or death.

That's why Elephant AI wants to take human-elephant interaction out of the equation entirely!

HARDWARE SETUP

We need a daylight camera (IR-filtered) and a night camera (no IR filter + IR illumination array) since elephants need to be detected 24hrs per day! In my original project I completely forgot about this, then decided to multiplex cameras to one Raspberry Pi. It was actually cheaper and easier to use two Raspberry Pis, each with its own camera. Night-time and daytime classification of elephant images each need their own trained object detector anyway, so I don't think it's such a bad solution (for now).

METHODS FOR IMAGE CLASSIFICATION

This is the main part of the project. In my original automated elephant detection project I'd envisaged just comparing histograms!! Or, failing that, I'd try feature-matching with FLANN. Both of these proved completely rubbish at detecting elephants! I tried Haar cascades too, but these had lots of false positives and took literally several weeks to train!

Initially with ElephantAI I worked with an object detector using Histogram of Oriented Gradients (HOG) and Linear Support Vector Machines (SVM). That had promising results, giving only 26% false positives with a dataset of 350 positive elephant images and 2000 negative non-elephant images (see https://hackaday.io/project/20448-elephant-ai/log/57399-4-result-for-object-detector-using-histogram-of-oriented-gradients-hog-and-linear-support-vector-machines-svm), and I would expect improved results with larger datasets. And it did improve: I got a result of 16% false negatives with 330 positive elephant images and 3500 negative non-elephant images (see result #5).
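For readers new to this approach: a HOG+SVM detector scans the image with a fixed-size window, computes HOG features for each patch, and asks the SVM to score it. Below is a minimal sketch of just the sliding-window stage in NumPy; the HOG features and trained SVM are stubbed out with a mean-brightness score (score_window is a placeholder, not the real model):

```python
import numpy as np

def sliding_windows(image, win=(64, 64), step=32):
    """Yield (x, y, patch) for each window position over a 2-D image array."""
    h, w = image.shape
    for y in range(0, h - win[1] + 1, step):
        for x in range(0, w - win[0] + 1, step):
            yield x, y, image[y:y + win[1], x:x + win[0]]

def score_window(patch):
    # Placeholder for hog(patch) -> svm.decision_function(); here we just
    # use mean brightness so the sketch runs without a trained model.
    return patch.mean()

def detect(image, threshold=0.5):
    """Return (x, y) window origins whose score exceeds the threshold."""
    return [(x, y) for x, y, p in sliding_windows(image)
            if score_window(p) > threshold]

# tiny demo: a dark image with one bright 64x64 region in the top-right
img = np.zeros((128, 128))
img[0:64, 64:128] = 1.0
print(detect(img))  # → [(64, 0)]
```

In the real detector, score_window would be the HOG features of the patch fed to the SVM's decision function, and overlapping detections would be merged with non-maximum suppression.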

At present, I am working on differentiating between types of elephants using deep convolutional neural networks for image classification, versus the classical machine-vision techniques I had previously employed. This is important because different types, or classes, of elephants will...

Read more »

  • 2 × Raspberry Pi 3 Model B [detection device]
  • 1 × Raspberry Pi Camera Module v2 (8MP) Standard [detection device] daytime usage [£29]
  • 1 × Raspberry Pi Zero W [£9.60]
  • 1 × JustBoom Amp Zero pHAT for the Raspberry Pi Zero [£24.00]
  • 1 × Sealed Lead-Acid Battery 12V - 7Ah+ [£10-£30]

View all 22 components

  • #buildinstructions for hardware: an optically isolated circuit for IR illuminator

    Neil K. Sheridan, 3 days ago, 0 comments

    Here we are going to make a circuit to isolate the 12v IR illumination devices from the 5v raspberry pi! We need to switch them on using GPIO out from the pi, but we don't want to expose our raspberry pi to any risk of it getting 12 volts!

    Now, this circuit will be quite tricky if you are not used to working with its two main electronic components: a transistor and an optocoupler/optoisolator!

    Let's first investigate what these are:

    WHAT IS A TRANSISTOR AND HOW DOES IT WORK?

    Now, in our circuit we want to turn on the IR illumination device. We could do this with a push-button switch! But computers can't push switches! Well, we could make a robotic hand to push the switch I suppose, but that would be a bit much! So we use transistors in order to enable computers to 'push' switches!

    See, it does look a bit switch-like, doesn't it? There are different types of transistors, but we are using a bipolar junction transistor (BJT). There are two different structures for bipolar junction transistors: NPN and PNP. The structure refers to the arrangement of semiconducting materials the transistor uses. You can read more about these here: https://en.wikipedia.org/wiki/Bipolar_junction_transistor#NPN

    Anyway, all we need to be concerned about is that we can switch the transistor on or off by altering the voltage applied to the base. If we apply a voltage to the base, then current will flow between the collector and the emitter - so switching our circuit on! If we don't apply a voltage to the base, then no significant current will flow from collector to emitter - and our circuit will be switched off!

    So, as you can see below, there are four pins on the (BJT) transistor I'm using: 

    1. BASE

    2. COLLECTOR

    3. EMITTER

    4. COLLECTOR (the top one)

    * the datasheet should explain which pin is number 1!

    Now we'll build a test circuit using batteries. Here's a simple test circuit we can setup using a breadboard, one resistor, an LED, a 9v battery, a smaller battery, and our BJT (NPN):  

    So, you should have a good idea of how a BJT functions now! Below you can see my circuit in action. Note there are two batteries: one for the LED circuit, and one to send voltage to the base pin on the transistor. In this video, there's also a resistor between the battery (which is 6v rather than 0.7v) and the transistor base pin:

    WHAT IS AN OPTOISOLATOR AND HOW DOES IT WORK?

    An optocoupler, or optoisolator, is a component that allows two circuits to communicate with each other using light instead of electrical signals. So the circuits are completely isolated electrically, communicating only via light, in the same way as our synapses electrically isolate our neurons from each other, communicating only with chemicals. The optocoupler uses semiconducting materials and LEDs to perform this function.

    As you can see in the above schematic - it is basically an LED and a transistor. The LED on the left side is acting on the base of the transistor (on the right side).

    So in an optoisolator we have pin arrangement of:

    1. Anode for the LED (+ve)

    2. Cathode for the LED (-ve)

    3. Emitter for transistor

    4. Collector for transistor 

    Well, that's if it is 4 pin. Many are 6 pin, so we might have:

    1. Anode for the LED (+ve)

    2. Cathode for the LED (-ve)

    4. Emitter for transistor

    5. Collector for transistor 

    I have a 6 pin. The 4N25X. Datasheet: http://www.farnell.com/datasheets/90919.pdf

    If you look at the datasheet you can also find out which side is for the LED, and so which is pin 1! In my case it had a notch on the LED side (see image below). It's very important to check your datasheet - it's not a good idea to just muddle through by connecting things and seeing if it works!

    Ok, so we got the idea of an optoisolator!  In our scenario, we use it because we don't want the 12v IR illumination device in electrical contact with the raspberry pi (5v)! Of course, you can isolate even DC and AC circuits with these too!
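As an aside on sizing the resistor between the GPIO pin and the optocoupler's input LED: it's just Ohm's law across the resistor. A quick sketch, using assumed typical values (a 3.3v GPIO pin, ~1.2v LED forward drop, 10mA drive; check the 4N25X datasheet for the real figures):

```python
def led_resistor(v_supply, v_forward, i_led):
    """Current-limiting resistor (ohms) for the optocoupler's input LED:
    the resistor drops (v_supply - v_forward) at the chosen LED current."""
    return (v_supply - v_forward) / i_led

# assumed values: 3.3v GPIO, 1.2v forward drop, 10mA LED current
print(round(led_resistor(3.3, 1.2, 0.010)))  # → 210
```

So the next standard value up, a 220R resistor, would be a reasonable choice here.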

    Right, so let's set up two circuits and optically...

    Read more »

  • Elephant Detection Devices: software overview

    Neil K. Sheridan, 4 days ago, 0 comments

    Let's do the example for dayPi

    * main while loop is for the safe shutdown button, but I'm not sure it is worth the risk of using this.

    So our while loops are going to be:

    1. while light_sensor == True

    The dayPi will operate whilst it is daylight according to the light sensor. So we use our def of check_light_sensor to get back HIGH or LOW from the light sensor. If this while loop breaks, we run our client code to become a client and send a message to the nightPi that it is time to start. Then we switch to server and wait for messages back. The messages are either regarding the light_sensor or a detection message which we must pass to our serial modem for sending out as SMS. Note that we continue to act as server until the light_sensor is True.

    2. while PIR == HIGH

    If the PIR sensor is HIGH, i.e. something hot is in range of the PIR, we need to perform our image capture, and then send the image to the elephant detector software

    - take image

    - send image to detector

     if image == elephant:

     - are we doing a deter? If yes, then we start the following while loop:

                   - run our client code to use Bluetooth to send message to deter device. Once we get echo back from deter device that message is sent, we run our server code and wait for deter device to message us when deter performed

                    -  if that failed we try again for n attempts 

                    - if we get deter performed message back we break out of while loop

     - take data from detector and convert it to string containing top 5 elephant classes

     - compose our message containing all variables required (e.g. device location, time, deter done, top 5 elephant classes, etc.)

    - save image of suspected elephant 

     - we now run our serial modem code to send SMS containing our message to phone numbers on list

    - wait 60 seconds

    if image == not elephant:

    - wait 60 seconds
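The dayPi flow above can be sketched as a loop with the hardware-facing steps injected as functions. This is a rough illustration only; names like take_image and do_deter are placeholders, not the project's actual code:

```python
def day_loop(light_is_day, pir_is_high, take_image, is_elephant,
             do_deter, send_sms, wait=lambda s: None, max_cycles=100):
    """Run the dayPi cycle until night falls (or max_cycles, as a safety cap)."""
    events = []
    for _ in range(max_cycles):
        if not light_is_day():
            # light sensor says night: hand over to the nightPi
            events.append("handover_to_nightPi")
            break
        if pir_is_high():
            image = take_image()
            if is_elephant(image):
                do_deter()                     # Bluetooth message to deter device
                send_sms("elephant detected")  # out via the serial modem
                events.append("detected")
            wait(60)  # wait 60 seconds either way before re-checking the PIR
    return events

# quick demo with fake sensors: one PIR trigger, then night falls
_light = iter([True, True, False])
_pir = iter([True, False])
print(day_loop(lambda: next(_light), lambda: next(_pir),
               lambda: "img", lambda img: True,
               lambda: None, lambda sms: None))
```

The nightPi loop is the mirror image: it also switches on the IR illuminator before capture, and relays its detection message to the dayPi rather than sending the SMS itself.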

    Let's do the example for nightPi

    * main while loop is for the safe shutdown button, but I'm not sure it is worth the risk of using this.

    So our while loops are going to be:

    1. while light_sensor == False

    The nightPi will operate whilst it is not daylight according to the light sensor. So we use our def of check_light_sensor to get back HIGH or LOW from the light sensor. If this while loop breaks, we run our client code to become a client and send a message to the dayPi that it is time to start. Then we switch to server and wait for a message back.

    2. while PIR == HIGH

    If the PIR sensor is HIGH, i.e. something hot is in range of the PIR, we need to switch on our IR illumination device, perform our image capture, and then send the image to the elephant detector software

    - switch on IR illumination device

    - take image

    - send image to detector

     if image == elephant:

     - are we doing a deter? If yes, then we start the following while loop:

                   - run our client code to use Bluetooth to send message to deter device. Once we get echo back from deter device that message is sent, we run our server code and wait for deter device to message us when deter performed

                    -  if that failed we try again for n attempts 

                    - if we get deter performed message back we break out of while loop

     - take data from detector and convert it to string containing top 5 elephant classes

     - compose our message containing all variables required (e.g. device location, time, deter done, top 5 elephant classes, etc.)

    - save image of suspected elephant 

     - we now have to switch to client code, so we can send this message to the dayPi. Note that the dayPi has the serial modem, we don't. And the dayPi is now acting as a server. So we switch to client code and send it the detection message via...

    Read more »

  • Field testing of ElephantAI: with elephants

    Neil K. Sheridan, 4 days ago, 0 comments

    In this test, we tested the ElephantAI system with elephants. This was using TensorFlow & Keras (off-the-shelf) and TensorFlow using Inception trained on the ImageNet 2012 dataset (off-the-shelf). More importantly, it included the new model we trained by adding our new classes of elephant via transfer learning on an off-the-shelf model (InceptionV3) using TensorFlow.

    [under construction]

  • Field testing of ElephantAI: with horses

    Neil K. Sheridan, 4 days ago, 0 comments

    In this test, we used the ElephantAI system with the target animal changed to horses. This was using TensorFlow & Keras (off-the-shelf) and TensorFlow using Inception trained on the ImageNet 2012 dataset (off-the-shelf).

    So in this test, horses stand in for elephants! 

    What we are testing:

    1. Do detection devices detect target animal from images acquired when PIR triggered?

    2. Do detection devices switch between dayPi and nightPi depending on lighting?

    3. Does the IR illumination component of nightPi illuminate target animal at sufficient range?

    4. Do the detection devices alert users on the SMS list when the target animal is detected? Is this ok during night and day? Are the dayPi and nightPi communicating via Ethernet?

    5. Do nightPi and dayPi communicate via Bluetooth with the deter device? Does the deter device play scare_sounds when the target animal is detected? Well it plays a beep so as not to scare the horses!

  • Testing of IR illumination devices and NoIR camera

    Neil K. Sheridan, 4 days ago, 0 comments

    Here we look at some of the different IR illumination devices you can use. And test them with the NoIR camera to see what we get.

    [should we add the optically isolated circuit for switching IR illumination devices here too?]

  • #buildinstructions for hardware and software: nightPi

    Neil K. Sheridan, 4 days ago, 0 comments

    These are the build instructions for the nightPi. This is the component of the elephant detection device that will photograph elephants during the night-time. It has IR illumination, which is powered from the battery at 12v but switched from the Pi. This component does not have mobile connectivity itself; it communicates via Ethernet with the dayPi, which does have mobile connectivity.

    Code overview (rough; do not run it as-is)

    ## nightpi code v1.0 
    ## Here are our functions
    ######## 1. Light sensor function
    def CheckLightCondition():
        light_sensor = 2
        light_condition = "UNKNOWN"
        if light_sensor == 1:
            light_condition = "DAY"
        elif light_sensor == 2:
            light_condition = "NIGHT"
        return light_condition
    #test this with the following code
    #light_sensor = 1
    #what_is_light = CheckLightCondition()
    #print(what_is_light)
    ##############################################
    ######## 2. Check PIR function
    def CheckPIR():
        # dependencies are RPi.GPIO and time
        # returns whats_here with "something_here" or "nothing_here"
        whats_here = "nothing_here"
        # default, so we always return something even if the PIR errors
        time.sleep(5)
        #don't rush the PIR!
        GPIO.setmode(GPIO.BOARD)
        # set numbering system for GPIO PINs to BOARD
        GPIO.setup(7, GPIO.IN)
        # set up PIN 7 for input from the PIR
        # need to adjust if you connected PIR to another GPIO PIN
        try:
            val = GPIO.input(7)
            if (val == True):
                whats_here = "something_here"
                #PIR returned HIGH to GPIO PIN, so something here!
            if (val == False):
                whats_here = "nothing_here"
                #PIR returned LOW to GPIO PIN, so nothing here!
                #GPIO.cleanup()
            ERROR2 = "no error"
        except:
            ERROR2 = "PIR error"
            #something went wrong, return error code 2 with info for debugging
            GPIO.cleanup()
        return whats_here
       
    # test this with the following code:
    #import RPi.GPIO as GPIO
    #import time
    #hows_the_PIR = CheckPIR()
    #print(hows_the_PIR)
    ######################################################
        
    ######################################################
    ###### 3. Is this an elephant?
    #We send our images obtained by CaptureImage() to the elephant detector
    #software, we get back our top 5 results. So the images we are going to
    #send will be '/home/pi/suspects/suspected_elephant1.jpg to suspected_elephant10.jpg
    #we can move the confirmed elephant photos to a new directory and delete those
    #in the suspects directory afterwards
    #def IsThisElephant():
        
        #with the os method to run the label_image.py code from TensorFlow team
        #top5_things = os.system(python label_image.py --graph=retrained_graph.pb
         # --labels=retrained_labels.txt
          #--image=/home/pi/suspects/suspected_elephant1.jpg)
    #now what is in the string top5_things?
    #we need to search through the string for our elephant types and the
    #probabilites
    # global elephant_type
    # so the elephant type is a global variable so we can access it outside
    # of this function
    # returns is_it_elephant with "YES" or "NO"
    # archive the confirmed elephants
    # delete the suspected elephant photos
    #######################################################################
    ###### 4. Deter elephants!
    ### This is going to run if IsThisElephant returned YES and we have got yes_audio
    ### hard-coded as yes
    def DeterElephants():
        #we setup as client, to message the deter device telling it to perform deter
        message = "yes_audio"
        serverMACAddress = '00:1f:e1:dd:08:3d'
        port = 9
        s = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
        try:
            s.connect((serverMACAddress, port))
            s.send(message)
            #send the message once; no need to loop here
        except:
            ERROR_5 = "error sending message to deter device using bluetooth"
            #something went wrong so write error message for debug
        s.close()
        # now we sent the message, so we set up as server and wait for message back
        hostMACAddress = ''
        # empty string = listen on our own Bluetooth adapter
        backlog = 1
        size = 1024
        s = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
        s.bind((hostMACAddress, port))
        s.listen(backlog)
        try:
            client, clientInfo = s.accept()
            while 1:
                data = client.recv(size)
                if data:
                    print(data)
                client.send(data) 
                # echo back to client so it knows we got the data
        except:	
            print("Closing socket")
            client.close()
            s.close()
        # so we...
    Read more »

  • #buildinstructions for hardware and software: dayPi

    Neil K. Sheridan, 4 days ago, 0 comments

    These are the build instructions for the dayPi. This is the component of the elephant detection device that will photograph elephants during the daytime, and will provide mobile connectivity!

  • Future directions and improvements to ElephantAI

    Neil K. Sheridan, 4 days ago, 0 comments

    Here are some of the ideas I've envisaged for improvements to, and future directions for, the ElephantAI system!

    LOCAL PEOPLE TAKE PART IN SUPERVISED LEARNING OF THE ELEPHANT DETECTOR MODEL

    In a more advanced PPP (via 3G/4G) connectivity scenario I have envisaged, the input of local people could be leveraged to improve the accuracy of our elephant detector by enabling a supervised training system. Here's what it would look like:

    1. Elephant detector tweets/DMs image of suspected elephant

    2. Local people tweet back with hashtags #yeselephant #noelephant #type_is_lone #type_is_calf #type_is_herd #type_is_male #type_is_female according to what they identify in the image

    3. A database of images containing elephants confirmed by users is created by the elephant detection device

    4. Images are stored along with the class labels that were provided via the hashtags (e.g. #type_is_calf, #type_is_herd etc.)

    5. On a weekly or monthly basis, the elephant detection devices will upload their databases of images to the cloud (e.g. to an Amazon EC2 virtual machine)

    6. Transfer learning will be performed on the virtual machine using our existing dataset with these additional images. Thus improving the accuracy of our elephant detector model!

    7. The new model will be sent to the elephant detection device from the cloud-based virtual machine, and be used for future detection.

    This process will be repeated on a weekly or monthly basis, thus providing on-going improvements in accuracy of the elephant detector!
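Steps 2-4 amount to parsing hashtags out of the replies and turning them into class labels for the stored image. A hypothetical sketch (the tag names follow the scheme above; label_from_reply is an illustration, not existing code):

```python
# map of the suggested hashtags to class labels
LABEL_TAGS = {"#type_is_lone": "lone", "#type_is_calf": "calf",
              "#type_is_herd": "herd", "#type_is_male": "male",
              "#type_is_female": "female"}

def label_from_reply(text):
    """Return (confirmed, labels) parsed from one tweet reply:
    confirmed is True only if #yeselephant appears and #noelephant does not."""
    words = text.lower().split()
    confirmed = "#yeselephant" in words
    if "#noelephant" in words:
        confirmed = False
    labels = [LABEL_TAGS[w] for w in words if w in LABEL_TAGS]
    return confirmed, labels

print(label_from_reply("Saw it! #yeselephant #type_is_herd"))  # → (True, ['herd'])
```

Images with confirmed = True would then go into the weekly/monthly transfer-learning upload along with their labels.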

    Note that, even if we don't have good 3G/4G connectivity from the elephant detection devices to perform this function, we could connect them to an upload station. The upload station would most likely be located in the village, where either wired or 3G network coverage is available. The elephant detection devices can be linked to the upload station via RF comms, either directly or via repeater stations. A similar concept to the one I'd envisaged in my first https://hackaday.io/project/10391-automated-elephant-detection-system !

  • #software mobile connectivity: Huawei E3531

    Neil K. Sheridan, 5 days ago, 0 comments

    USING THE HUAWEI E3531 USB DONGLE FOR ELEPHANT DETECTION DEVICE MOBILE CONNECTIVITY 

    Earlier we looked at setting up the Huawei E303 USB dongle for mobile connectivity. Here we'll look at setting up the E3531 model in two modes. These dongles are really low-cost compared to HATs! This one was only £21!

    1.

    So first of all let's get started by setting this as  HiLink CDC-Ether mode for PPP  via eth2 over 2G/3G to connect to the internet. Then we can change it to serial modem if we want later on.

    The E3531 will then have the following ID in this case: 12d1:14dc , with 12d1 being the vendor ID, and 14dc being the HiLink CDC-Ethernet model/mode ID.

    Let's connect the E3531 to the USB port on the pi! Now run the 'lsusb' command to list usb devices attached to pi. Hopefully you'll get the following:

    As you can see we got ID 12d1:14dc so that's great. It's in HiLink-CDC-Ethernet mode! That's supposed to be the default state for the E3531 so you don't need to worry about it being stuck in mass storage mode like the E303.

    Now we can use the Huawei HiLink to set everything up to connect to our carrier. Go to the web-browser, and access http://192.168.8.1 . Then you'll get the home html file giving your status:

    Then you can go along to settings and enter your carrier information. Once you get connected, you are all ready to access the internet using PPP via eth1! That was all quite easy!

    2.

    Next, let's switch it to a serial modem so we can send AT commands to get our SMS messages out in lower-connectivity scenarios. We'll be wanting to access it as /dev/ttyUSBx

    We need to switch our E3531 mode from 12d1:14dc  to modem mode i.e. 12d1:1001 with 3x virtual serial ports. Let's get started by downloading and installing usb-modeswitch:

    sudo apt-get install usb-modeswitch

    Now let's make a file containing our switch mode instructions. The instructions are:

    TargetVendor=0x12d1 
    TargetProduct=0x1f01 

    MessageContent="55534243123456780000000000000011062000000100000000000000000000"

    The message content depends on the firmware of your E3531, so honestly, the above might not work and you'll have to experiment/research to find a message that does :-(

    Right, so let's make the file for usb-modeswitch to use:

    sudo nano /etc/usb_modeswitch.d/12d1:1f01
    and add those instructions to the file! Then save it. Then reboot the pi.

     Now after reboot, we can issue the following command: 'lsusb' and we should see the new ID as 12d1:1001 so the E3531 has switched to serial modem mode!

    If we issue the command ls /dev/tty* we'll see our virtual serial ports: ttyUSB0, ttyUSB1 and ttyUSB2! Great!

    3.

    Now we can go ahead and connect to the virtual serial ports and issue some AT commands! I used cu "call up another system" (https://linux.die.net/man/1/cu) to do this.

    sudo apt-get install cu

    So after you've installed it, execute it with following arguments to connect to ttyUSB0:

    cu -l /dev/ttyUSB0

     Now I went ahead and sent some AT commands. Sadly they don't echo, so you can't see them!

    The first was 'AT' and I got back OK. Then I issued 'AT+CMGF=1' which tells the modem to act in SMS mode, and got back CMS ERROR 302, which is operation not allowed. Then I tried to send an SMS with 'AT+CMGS="+443283870634" <CR>' which returned an error code of CMS ERROR 302 again! These errors are likely due to the SIM card requiring a PIN or PIN2. If the SIM is blocked, the error would be due to SIM required PUK or PUK2 (personal unblocking codes).

    You can send a PIN using AT+CPIN="0000" <CR>. Note that carriers will block the SIM after 3x incorrect PIN attempts, and you will need to use a PUK or PUK2 to unblock it. All the carriers have default PIN codes.

    * It turned out that the error was due to me not entering AT+CMGF=1 correctly, not a PIN problem after all!

    You can find a full list of error codes here: http://www.smssolutions.net/tutorials/gsm/gsmerrorcodes/
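Putting the above together, sending an SMS in text mode is a fixed little AT dialogue: AT, then AT+CMGF=1, then AT+CMGS="number", then the message body terminated with Ctrl-Z (0x1A). Here's a sketch of the command framing, with the actual serial write shown commented out (pyserial and /dev/ttyUSB0 are assumptions to verify against your own setup):

```python
def sms_commands(number, text):
    """Build the AT command sequence for sending one SMS in text mode."""
    return [b"AT\r",                                  # check the modem responds
            b"AT+CMGF=1\r",                           # switch to SMS text mode
            b'AT+CMGS="' + number.encode() + b'"\r',  # recipient number
            text.encode() + b"\x1a"]                  # body + Ctrl-Z terminator

# to actually send (assumed setup, untested here):
# import serial
# with serial.Serial("/dev/ttyUSB0", 9600, timeout=5) as modem:
#     for cmd in sms_commands("+441234567890", "elephant detected"):
#         modem.write(cmd)
for cmd in sms_commands("+441234567890", "elephant detected"):
    print(cmd)
```

In a real script you'd also read the modem's response after each command and check for OK or a CMS ERROR before sending the next one.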

    4.

    Ok, let's get started with doing this...

    Read more »

  • #software : switching between raspberry pi depending on day/night

    Neil K. Sheridan, 10/09/2017 at 20:11, 0 comments

    What's the problem?

    We use two raspberry pi computers! One for daytime elephant detection, and one for night-time elephant detection. One having NoIR camera + IR illumination, and one having IR-filtered camera. In addition, only one has mobile connectivity. So they both need to communicate! The daytime detection raspberry pi must tell the night-time detection pi when it is night! And the night-time detection raspberry pi must tell the daytime when it is day - based on what their respective light sensors inform them!



    So, we looked at some approaches to do this here: https://hackaday.io/project/20448-elephant-ai/log/67566-switching-between-raspberry-pi-depending-on-lighting-daynight

    The problem is that we can't multiplex raspberry pi IR and NoIR cameras to the CSI port. Well, we can using http://www.ivmech.com/magaza/en/development-modules-c-4/ivport-v2-raspberry-pi-camera-module-v2-multiplexer-p-107 I guess, but it's kinda expensive at $77 + shipping + taxes. And we could perhaps use a Raspberry Pi camera on the CSI + a webcam etc. via USB (e.g.https://www.raspberrypi.org/documentation/usage/webcams/ ) .

    You might think we could communicate between the two raspberry pi computers (we'll refer to them as dayPi and nightPi from now on) via serial. But we can't do that, because one of them is using serial to communicate with the cellular network (2G/3G/GPRS) modem!



    Anyway, so first here I'll go through software ideas for connecting two raspberry pi's together using Ethernet cable. Then I'll show what I did.

    1. Send and receive UDP with DGRAM as proof of concept (code concept) 

    send.py

    import socket
    UDP_IP = "IP_OF_RECEIVER"
    UDP_PORT = 5005
    # we need an unassigned port
    MESSAGE = "It's night-time!"
    print("UDP target IP:", UDP_IP)
    print("UDP target port:", UDP_PORT)
    print("message:", MESSAGE)
    sock = socket.socket(socket.AF_INET,     # Internet
                         socket.SOCK_DGRAM)  # UDP
    sock.sendto(MESSAGE.encode("utf-8"), (UDP_IP, UDP_PORT))

     receive.py

    import socket
    UDP_IP = "IP_OF_RECEIVER"
    UDP_PORT = 5005
    sock = socket.socket(socket.AF_INET,     # Internet
                         socket.SOCK_DGRAM)  # UDP
    sock.bind((UDP_IP, UDP_PORT))
    while True:
        data, addr = sock.recvfrom(1024) # buffer size is 1024 bytes
        #buffer size to prevent overflow
        print("received message:", data.decode("utf-8"))

    This is UDP: socket.SOCK_DGRAM is a datagram socket. Order and reliability of messages are not guaranteed for this type of socket. Alternatively we can use socket.SOCK_STREAM for TCP, which is sequenced (https://en.wikipedia.org/wiki/Stream_socket)

    Here we go with an example code idea for the server with TCP and SOCK_STREAM:

    import socket
    host = 'host IP'
    port = 5580
    storedValue = "this is a message"
    def setupServer():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        print("Socket made")
        try:
            s.bind((host, port))
        except socket.error as msg:
            print(msg)
        print("Bound to socket")
        return s
    def setupConnection():
        s.listen(1) # one connection only at a time
        conn, address = s.accept()
        print("connection to: " + address[0] + ":" + str(address[1]))
        return conn
    def GET():
        reply = storedValue
        return reply
    def dataTransfer(conn):
        while True:
            #receive data
            data = conn.recv(1024)
            data = data.decode('utf-8')
            #split the string: the first word is a command
            dataMessage = data.split(' ', 1)
            command = dataMessage[0]
            if command == 'GET':
                reply = GET()
            elif command == 'REPEAT':
                reply = REPEAT(dataMessage)
            elif command == 'EXIT':
                print("no client")
                break
            elif command == 'KILL':
                print("server shut down now")
                s.close()
                #close socket
                break
            else:
                reply = 'Unknown command given'
            # send reply to client
            conn.sendall(str.encode(reply))
            print("Data sent to client")
        conn.close()
    def REPEAT(dataMessage):
        reply = dataMessage[1]
        # so the second part of the split.. [0] was the command
        # this is just proof of concept
        return reply
    s = setupServer()

    while True:
        try:
            conn = setupConnection()
            dataTransfer(conn)
        except:
            break

     Here we go for the client:

    import socket...
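The log is cut off here, but a minimal client matching the server's command protocol above (GET/REPEAT/EXIT/KILL, with a space-separated payload) might look like the following. This is a hedged sketch, not the log's actual code; the host and port are whatever you bound the server to:

```python
import socket

def make_command(word, payload=""):
    """Frame a message for the server: command word, space, optional payload."""
    return (word + " " + payload).strip()

def send_command(host, port, word, payload=""):
    """Connect, send one framed command, return the server's decoded reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((host, port))
        s.sendall(make_command(word, payload).encode("utf-8"))
        return s.recv(1024).decode("utf-8")

# e.g. send_command("192.168.0.2", 5580, "REPEAT", "It's night-time!")
```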
    Read more »

View all 49 project logs

View all 8 instructions


Discussions

Thomas wrote 04/25/2017 at 19:17:

Hi Neil, I think this here might be of interest: https://hackaday.io/project/561-summerize-total-motion-in-a-video

Neil K. Sheridan wrote 04/25/2017 at 19:38:

Hi, Thanks! That does look interesting! Will go thru it!

Thomas wrote 04/25/2017 at 19:56:

When I think of elephants, the first thing that comes into my mind is how they move. The idea of using optical flow for creating a "movement spectrogram" is intriguing. The first couple of lines in the Wikipedia article on optical flow point to interesting approaches:
https://en.wikipedia.org/wiki/Optical_flow

Neil K. Sheridan wrote 03/26/2017 at 20:21:

yes! I'm going to post it later this week! I'm just taking out the bits that aren't relevant so it is easy to follow!

jessica18 wrote 03/26/2017 at 17:23:

can you post the code
