
Control Mechanism of an Autonomous Quadcopter

Obstacle detection system using OpenCV on a Raspberry Pi 2 Model B

The aim of this project is to build an autonomous robot that will take part in the robotics competition at the University of Manchester. I am a second-year BEng (Hons) Mechatronic Engineering student at the University of Manchester. As an active member of the Robotics Society over the past three years, I have undertaken various extracurricular projects, so I have solid electronics and programming knowledge. Beyond the theory taught in lectures, I am enthusiastic about undertaking robotics projects to better understand how that theory applies to real-life robotics applications and to improve myself in an area that will be closely related to my career.

Introduction:

The robotics competition is organised by the Robotics Society at the University of Manchester, and the main requirement is to build an autonomous robot, undertaking every stage from design through manufacturing and programming. The robot must perform its tasks with a high degree of autonomy, travelling from point A to point B while negotiating challenging, randomly placed obstacles. This project integrates two important aspects of modern electronics, namely computer engineering and control systems engineering. The competition will be judged by a panel that will decide the winner against a number of criteria.

There will be two randomly placed balls, one red and one blue, and a robot that successfully locates them earns extra marks from the judges. Our aim is therefore to engineer a quadcopter that will autonomously scan the course looking for a red or blue ball. Once it detects either ball, it will hover over it, descend, touch it, and then scan the rest of the course for the other ball.

I am responsible for developing the image-processing system that detects the red and blue balls and outputs their locations as coordinates. Francesco Fumagalli is programming the quadcopter itself. You can follow his blog here.

P.S. The majority of the components used in this project are supplied by the UoM Robotics Society.

The course of the competition:



  • 1 × Raspberry Pi 2 Model B
  • 1 × Micro SD card (minimum 16GB)
  • 1 × Wireless USB adapter
  • 1 × HDMI cable and Ethernet cable
  • 1 × 5V wall power supply with USB Micro-B connector, supplying a minimum of 2A


  • Log 10: Control Methods

    Canberk Suat Gurel, 07/01/2017 at 12:45

    The following steps are executed by this robotic system to achieve object detection and following. First, the object is extracted using an image processing method. Second, errors such as the heading angle error and the distance error between the detected object and the robot are calculated. Third, controllers are designed to minimize these errors.

    A. Image Processing Method

    In this method, a color-based object detection algorithm is developed for the Kinect camera sensor. The AForge.NET C# framework is used because it provides a useful set of image processing filters and tools for developing image processing algorithms [10]. The method is executed as follows.

    1) With the help of the Kinect camera, both color (RGB) and depth information are collected. The first step of the algorithm is to detect the object of the specified color and obtain its position and dimensions; the object is then located in the image.

    2) The simplest object detection is achieved by performing color filtering on the Kinect RGB images. The color filtering process keeps pixels inside (or outside) a specified RGB color range and fills the rest with a specified color (black is used here). In this process, only the object of the color of interest is kept and everything else is removed.

    3) The next step is to find the coordinates of the colored object of interest. This is done using the 'Blob counter' tool, which counts and extracts stand-alone objects in images using a connected components algorithm [10]. The connected components algorithm treats all pixels with values less than or equal to a 'background threshold' as background, while pixels with higher values are treated as object pixels. Since the 'Blob counter' tool works on grayscale images, grayscaling is applied to the images before using it.

    4) The last step is to locate the detected object in the image. Once the object is located, we obtain the coordinates (X, Y) of the centre of the object.
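    The filtering and blob-location steps above can be sketched in Python with NumPy (the language the rest of this project uses) rather than AForge.NET. This is an illustration only: the function name is made up, and the simple mask centroid stands in for the connected-components 'Blob counter' step.

    ```python
    import numpy as np

    def find_colored_object(rgb, lower, upper):
        """Return the (X, Y) centre of the pixels whose RGB values fall
        inside [lower, upper], or None if no pixel matches.

        rgb is an H x W x 3 uint8 array. The mask centroid stands in for
        the connected-components 'Blob counter' step described above.
        """
        lower = np.asarray(lower)
        upper = np.asarray(upper)
        # Step 2: color filtering -- keep pixels inside the RGB range,
        # everything else is treated as background (black)
        mask = np.all((rgb >= lower) & (rgb <= upper), axis=2)
        if not mask.any():
            return None
        # Steps 3-4: locate the object and return its centre coordinates
        ys, xs = np.nonzero(mask)
        return int(xs.mean()), int(ys.mean())

    # Tiny synthetic frame: a 2x2 "red ball" on a black background
    frame = np.zeros((10, 10, 3), dtype=np.uint8)
    frame[4:6, 6:8] = (200, 20, 20)
    print(find_colored_object(frame, (150, 0, 0), (255, 80, 80)))  # (6, 4)
    ```

    A real implementation would keep only the largest blob, so that isolated noise pixels do not drag the centroid away from the ball.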

    Note that the desired range of distances for an object is chosen to be 1 m to 1.2 m, and the reference distance is chosen as 1.1 m, as shown in Figure 1. The distance between an object and the robot is obtained from the Kinect depth map. If the object is 1.2 m away or further, the robot moves forward; if it is closer than 1 m, the robot moves backward; and if the object is within this range, the robot stops. Figure 2 shows an image frame with a detected object, with its center coordinates (X, Y), within the desired area.
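    The forward/backward/stop decision described above can be sketched as a small Python function. The thresholds come from the text; the function name is an illustration, not the actual controller code.

    ```python
    def drive_command(distance_m, near=1.0, far=1.2):
        """Map a Kinect depth reading (in metres) to a motion command
        using the desired distance band described above (1 m to 1.2 m)."""
        if distance_m >= far:
            return "forward"   # object 1.2 m or further: approach it
        if distance_m < near:
            return "backward"  # object closer than 1 m: back off
        return "stop"          # object inside the desired band

    print(drive_command(2.5))  # forward
    print(drive_command(0.6))  # backward
    print(drive_command(1.1))  # stop
    ```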

    Figure 1 Desired range of area for an object

    Figure 2 Image showing a detected object within desired area

    B. Error Measurement

    This section explains the definition and measurement methods for heading error and distance error as follows:

    Figure 3 Image showing detected object with its center coordinates (X1,Y1) outside a desired area

    1) Heading angle error:
    Consider an object is detected in the right corner of an image frame with its center coordinates (X1, Y1) outside a desired object area as shown in Figure 3. To make the robot (quadcopter) turn towards detected object, it should change its current angular position to a desired angular position to achieve object following. Therefore, a heading angle error e is defined as,

    where φD is the desired angular position of robot and φc is the current angular position of robot as shown in Figure 4.

    Figure 4 Heading angle error definitions

    Figure 5 shows an extended view of the heading angle error e from the Kinect camera, with the detected object in an image frame having its center coordinates at (xm, ym) pixels.

    According to the imaging principle explained in [4] and [11], the heading angle error e between the center of the detected object and the center of the RGB image is given by

    e = arctan(na / f)

    where a is the pixel size of the color image with the detected object, f is the focal length of the Kinect camera, and n is the pixel difference between the center of the detected object and the center of the RGB image frame.
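    The relation between the pixel offset n, the pixel size a, and the focal length f can be checked numerically in Python. This follows the standard pinhole camera model; the pixel size and focal length below are illustrative placeholders, not Kinect calibration data.

    ```python
    import math

    def heading_error(n_pixels, pixel_size_m, focal_length_m):
        """Heading angle error e = arctan(n * a / f), where n is the pixel
        offset between the object centre and the image centre, a is the
        pixel size and f the focal length (pinhole camera model)."""
        return math.atan(n_pixels * pixel_size_m / focal_length_m)

    # Illustrative numbers only, NOT Kinect calibration data:
    # an 80-pixel offset, 10 um pixels, 6 mm focal length
    e = heading_error(80, 10e-6, 6e-3)
    print(round(math.degrees(e), 1))  # 7.6
    ```

    An object exactly at the image centre gives n = 0 and therefore e = 0, which is the condition the heading controller drives towards.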


    In this project, only the heading...


  • Log 9: Kinect Sensor

    Canberk Suat Gurel, 07/01/2017 at 12:09

    Kinect Sensor

    The Microsoft Kinect camera sensor is an RGB-D camera that was primarily built as an input device for the Xbox gaming console [8]. Thanks to its ability to produce decent-quality images and depth information, this low-cost device has become popular in scientific work, especially in the fields of computer vision and robotics.

    Kinect's Software Development Kit (SDK) for Windows offers API interfaces that help users create and develop their own applications [9].

    Figure 1: Kinect Xbox camera sensor

    Figure 1 shows the Kinect camera sensor, which consists of an IR (infrared) projector, an IR camera, an RGB (color) camera, a four-microphone array, a tilting system, and an image processing microchip known as PrimeSense's PS1080-A2. The depth camera consists of an infrared laser projector combined with a monochrome CMOS sensor, which captures video data in 3D under any ambient light conditions. The sensing range of the depth sensor is adjustable through its two modes of operation, default mode and near mode. The Kinect for Xbox works only in default mode, and its range varies from 80 centimeters to 4 meters. The RGB camera operates at 30 Hz and offers images with 8 bits per channel. Using the tilting system, the camera can be tilted up to 27° either up or down.

    Reference:

    A. V. Gulalkari, G. Hoang, H. K. Kim, S. B. Kim, P. S. Pratama and B. H. Jun, "Object Following Control of Six-legged Robot Using Kinect Camera," ICACCI, South Korea, 2014.

  • Log 8: Testing OpenCV Programs

    Canberk Suat Gurel, 02/14/2016 at 12:55

    Part 1:

    Download OpenCVTest_1.py from the files.

    This program opens the file named "cam.jpg" in the same directory and displays the original image together with the Canny edges of that image.

    Just as we did in the previous log, create a file called OpenCVTest1.py using the following command:

    nano OpenCVTest1.py
    Then copy/paste the contents of OpenCVTest_1.py into the editor, press CTRL+O to save and CTRL+X to exit.

    (You should now be back at the command line; if you're unsure about this stage, see Log 6.)

    Then execute the code using the following command:

    python OpenCVTest1.py
    P.S. The Pi camera comes with a protective cover on its lens; make sure you've removed it, otherwise you'll probably get a completely black picture instead of the Canny edges of the original image.

    Part 2:

    Download OpenCVTest_2.1.py from the files.

    This program opens a Picam stream, attempts to switch to a 320x240 resolution, and shows the original image of each frame together with the Canny edges of each frame.

    Now repeat the same process as in Part 1: create a file called OpenCVTest2.py using the following command:

    nano OpenCVTest2.py
    Then copy/paste the contents of OpenCVTest_2.1.py into the editor, press CTRL+O to save and CTRL+X to exit.

    (You should now be back at the command line; if you're unsure about this stage, see Log 6.)

    Then execute the code using the following command:

    python OpenCVTest2.py
    Part 3:

    Download OpenCVTest_3.1.py from the files.

    This program tracks a red ball and outputs its location in terms of coordinates.

    Now repeat the same process as in Part 1: create a file called OpenCVTest3.py using the following command:

    nano OpenCVTest3.py

    Then copy/paste the contents of OpenCVTest_3.1.py into the editor, press CTRL+O to save and CTRL+X to exit.

    (You should now be back at the command line; if you're unsure about this stage, see Log 6.)

    Then execute the code using the following command:

    python OpenCVTest3.py
    Having done this, we've almost concluded the project; in the next log I'll be working on the flight control of the quadcopter.

  • Log 7: Get a LED Blinking

    Canberk Suat Gurel, 02/13/2016 at 13:51

    At this stage I'm going to explain how to create a file containing a simple program (to get an LED blinking) and how to execute it. This will be particularly helpful at the next stage, where we'll have a go with OpenCV.

    First of all, breadboard the circuit shown in blink_led.png (you can find the circuit diagram in the files).

    (I assume that you have a certain level of experience with breadboarding circuits, so I'm not going to go into too much detail on that.)

    If you are wondering how to calculate the resistor value in this circuit, see "resistor for LED calculation.pdf".

    See "RaspberryPi2_J8_pinout.png" for a complete RPi 2 connector J8 pinout

    Alternatively see "RasPiB-GPIO_lightbox.png"

    Continuing at the RPi command line: (Execute the following commands one by one)

    nano blink_led.py       # open the file blink_led.py with the nano editor
    Some resources use the "touch" command to create a file and then the "nano" command to edit it. However, nano will create the file if it does not already exist, so there is no need for the "touch" command.

    Now copy/paste in blink_led.py (located in the files), then press Ctrl+O to save and Ctrl+X to exit nano.

    sudo python blink_led.py
    Run the program with this command; note that sudo ("super user do"), i.e. root access, is necessary to perform hardware I/O on the RPi.

    You should now see the LED on your board blinking. Press Ctrl+C to exit this program.

    (For future reference, Ctrl+C exits most programs when run from a Linux command line.)

    This step is crucial for those of you who want the program to run when Raspbian boots (i.e. for a headless embedded application). Proceed as follows . . .

    sudo nano /etc/rc.local               # open rc.local in the nano editor
    In rc.local just before "exit 0" add the following:
    sudo python /home/pi/blink_led.py &         # add this to rc.local, just before "exit 0"
    Do NOT forget the "&", which starts the program as a separate process, or the RPi will run your program indefinitely and will not continue to boot!

    Forgetting the "&" could put the RPi in an unrecoverable state, necessitating re-formatting the SD card (if that happens, take a look at Log 1).

    /etc/rc.local is run by the RPi as root during boot-up, so you don't really need to include "sudo" in the command even when accessing GPIO pins.

    sudo shutdown -r now       # reboot, the LED should start blinking during RPi boot-up
    To return to the regular boot-up, simply open rc.local again and remove the "sudo python /home/pi/blink_led.py &" line
    sudo nano /etc/rc.local          # remove the "sudo python /home/pi/blink_led.py &"

  • Log 6: Installing OpenCV on the RPI 2

    Canberk Suat Gurel, 02/11/2016 at 21:20

    Before we get started, I'd like to warn you that this stage takes about 3.5 hours in total on an RPi 2.

    Also, during this process you cannot use PuTTY, because it will lose the connection with a "connection timed out" error. Therefore you need to connect the RPi to a separate monitor via the HDMI cable (just as you did in Log 1).

    Once that's done, power up the RPi and log in with the default username / password, which is pi / raspberry (unless you've changed the login details with the "sudo raspi-config" command).

    Then execute the following commands one by one:

    sudo apt-get update
    
    sudo apt-get upgrade
    
    sudo apt-get install python-numpy python-scipy python-matplotlib
    
    sudo apt-get install build-essential cmake pkg-config
    
    sudo apt-get install default-jdk ant
    
    sudo apt-get install libgtkglext1-dev
    
    sudo apt-get install v4l-utils
    
    sudo apt-get install libjpeg8 \
    libjpeg8-dev \
    libjpeg8-dbg \
    libjpeg-progs \
    libavcodec-dev \
    libavformat-dev \
    libgstreamer0.10-0-dbg \
    libgstreamer0.10-0 \
    libgstreamer0.10-dev \
    libxine2-dev \
    libunicap2 \
    libunicap2-dev \
    swig \
    libv4l-0 \
    libv4l-dev \
    python-numpy \
    libpython2.7 \
    python-dev \
    python2.7-dev \
    libgtk2.0-dev \
    libjasper-dev \
    libpng12-dev \
    libswscale-dev
    
    wget http://sourceforge.net/projects/opencvlibrary/files/opencv-unix/3.0.0/opencv-3.0.0.zip
    
    unzip opencv-3.0.0.zip
    
    cd opencv-3.0.0
    
    mkdir build
    
    cd build
    
    cmake -D CMAKE_BUILD_TYPE=RELEASE \
    -D INSTALL_C_EXAMPLES=ON \
    -D INSTALL_PYTHON_EXAMPLES=ON \
    -D BUILD_EXAMPLES=ON \
    -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D WITH_V4L=ON ..
    

    The next command takes about 3 hours on the RPi 2. I used a mini 5V DC fan to facilitate air circulation and heat dissipation; otherwise the board may overheat, because this process pushes the RPi to the limits of what it is capable of.

    sudo make
    sudo make install
    sudo nano /etc/ld.so.conf.d/opencv.conf
    
    # opencv.conf will be blank, add the following line:
    /usr/local/lib       # enter this in opencv.conf, NOT at the command line
    (leave a blank line at the end of opencv.conf) then save and exit nano

    # back to the command line:

    sudo ldconfig
    sudo nano /etc/bash.bashrc
    # add the following lines at the bottom of bash.bashrc
    PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig
    # enter these at the bottom of bash.bashrc, NOT at the command line
    export PKG_CONFIG_PATH
    # enter these at the bottom of bash.bashrc, NOT at the command line
    (leave a blank line at the end of bash.bashrc)

    # save bash.bashrc changes, then back at the command line, reboot

    sudo shutdown -r now
    # after rebooting, verify our OpenCV install:
    python 
    # enter interactive Python prompt session
    >>> import cv2
    >>> cv2.__version__

    # should say your OpenCV version, i.e. '3.0.0', press Ctrl+D to exit the Python prompt session

  • Log 5: Streaming live video with the Picam

    Canberk Suat Gurel, 02/11/2016 at 12:58

    Make sure you have enabled the camera!

    (If you are unsure about this take a look at the log 2)

    Execute:

    sudo apt-get install vlc
    command to install VLC on the Raspberry Pi.

    Then, to install VLC on your Windows PC, follow this link:
    https://ninite.com/ then tick VLC and click on install.

    Then go back to PuTTY and execute:

    raspivid -o - -t 0 -hf -w 800 -h 400 -fps 24 |cvlc -vvv stream:///dev/stdin --sout '#standard{access=http, mux=ts,dst=:8160}' :demux=h264
    Now open VLC (on your Windows PC), go to Media, and open a network stream:

    Type in the IP address followed by :8160, e.g. http://111.111.0.11:8160

    Remember that you obtained the IP address with

    hostname -I
    (If you are unsure about this take a look at the log 2)

  • Log 4: Verify that Picamera Works

    Canberk Suat Gurel, 02/11/2016 at 12:54

    Make sure you have enabled the camera by

    sudo raspi-config
    (If you don't know how to do this take a look at Log 2.)
    raspistill -o cam.jpg    # take a picture with the raspberry picam
    raspivid -o video.h264 -t 10000    #record a video (for 10s) with picam
    Use the
    ls -l
    command to verify that "cam.jpg" and "video.h264" are there. Then use the
    pcmanfm &
    command to open the file manager of the Raspberry Pi.


    There should be the picture you've just taken (named cam.jpg); double-click it to open the picture.

    The video you've recorded (named video.h264) is located in the same directory.

  • Log 3: Installing and setting up PuTTY and Xming

    Canberk Suat Gurel, 02/11/2016 at 12:50

    To install PuTTY, follow this link:
    (http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html, choose "putty.exe")
    (then create a PuTTY shortcut on your desktop)

    To install Xming and Xming-fonts, follow this link:
    (http://sourceforge.net/projects/xming)
    (create an Xming shortcut on your desktop)

    Reboot after installing PuTTY, Xming, and Xming-fonts, using this command:

    sudo shutdown -r now
    Start PuTTY:

    Set the following settings:

    -your RPi IP address
    -Terminal -> Bell -> None (bell disabled)
    -Connection -> Seconds between keepalives -> set to "30"
    -Connection -> expand SSH -> X11 -> check "Enable X11 forwarding"
    Then save these settings by entering a preferred name in the "Saved Sessions" box, for example "my_default", and choosing Save.

    To begin a PuTTY session, load your preferred settings and click "Open"

    Start Xming before, or just after, beginning a PuTTY session if you would like Raspbian windows rendered on your Windows desktop computer.
    To verify that Xming is running, look for the Xming icon in the lower right corner of your Windows screen.

    To verify that PuTTY and Xming are working, start PuTTY and try the following commands:

    pwd    # present working directory, should say "/home/pi" as this is the default location for the user "pi"
    ls -l    #lists the files in the current directory
    pcmanfm &    #this is the graphical Raspbian file browser and should open as a separate window
    epiphany-browser &    #this is the graphical default Raspbian internet browser, which can also be used to browse files, FTP, etc.

    To paste into a PuTTY window, simply right-click anywhere in the PuTTY window.
    To copy from a PuTTY window, simply highlight what you would like to copy.

    (It is not necessary to press Ctrl+C.)

  • Log 2: First Time Boot-up

    Canberk Suat Gurel, 02/11/2016 at 12:41

    Insert the flashed SD card into the RPi, then connect:
    -USB keyboard
    -USB mouse
    -PiCamera
    -USB Wireless adapter
    -HDMI monitor cable
    -Power (at last)

    The newest version of Raspbian (Raspbian Jessie) boots directly into the graphical desktop. Once boot-up is complete, bring up a command line and type

    sudo raspi-config
    and set the following options:
    1 Expand Filesystem - set OS to fill SD card
    3 Boot Options - set to "B1 Console"

    Choose "Finish"; when asked "Would you like to reboot?", choose "Yes". If you need to reboot from the command line, type "sudo shutdown -r now". For future reference, if you need to shut down without rebooting, type "sudo shutdown -h now".

    log in with the default username / password, which is pi / raspberry

    startx    # start the graphical desktop
    Choose the wireless icon at the top right, enter the wireless router password, and verify that networking works.
    hostname -I 
    This command will display the IP address; write it down (you'll need it while setting up PuTTY in the next step), then type
    sudo shutdown -r now
    # reboot, you don't need to log in on the RPi after rebooting

  • Log 1: Making the Raspbian SD Card

    Canberk Suat Gurel, 02/11/2016 at 12:36

    Download the latest .zip version of Raspbian from www.raspberrypi.org, then unzip the file.

    To download and install Win32DiskImager, follow this link: here

    If your PC does not have an SD card slot, you can purchase a separate USB SD card reader.

    Insert and format your SD card (right-click on the SD card icon and click Format).

    Then open Win32DiskImager and flash Raspbian to the SD card.

    (The image file is the unzipped Raspbian file and the device is the SD card that you've inserted; then click "Write".)

    When flashing is complete, before removing the SD card, make sure to right-click on the SD card drive letter and choose "Eject"; then remove the SD card.



Discussions

matlixco wrote 02/11/2016 at 02:59

Hi Canberk Gurel.

I have experience building drones and I find this project very interesting. I think if you use OpenCV on a Raspberry Pi 2 you can process images at about 6 fps, which may be enough to control the drone at low speeds. You have my support for your project.
But my English is very basic ....


Canberk Suat Gurel wrote 02/11/2016 at 21:04

Hi there, thanks very much for your interest. I will certainly let you know if I need some help with this project. I still need a little more time to give out more information about the competition that this project will take part in. Please check my page over the following days. Best wishes.

