
Obstacle Detection and Avoidance

A project log for Expandable Ruggedized Robotic Platform

Robotic platform designed to operate in the harsh conditions experienced in outdoor environments. Modular, with easy-to-replace components.

williamg42 • 10/31/2015 at 16:46

Examples

[Images: an original camera frame, and the processed result (white is impassable terrain)]

Obstacle Detection:

Based on and taken from:
Ulrich, I., and Nourbakhsh, I. 2000. Appearance-Based Obstacle Detection with Monocular Color Vision. In Proceedings of the AAAI National Conference on Artificial Intelligence.

Theory:

The basic algorithm uses histograms and a "safe" area to detect impassable terrain in the environment. The area directly in front of the robot is assumed to be clear of obstacles and is used as a reference: a histogram of this safe region is generated. (The more advanced algorithm also stores histograms from past travel and uses those as known safe areas.) The image is then scanned pixel by pixel, and each pixel's value is compared to its bin in the safe-region histogram. If the number of pixels in that bin is below some threshold, the pixel is classified as belonging to an obstacle. The HSV color space is used instead of RGB because it is more resistant to changes in lighting.
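As a rough sketch of the basic algorithm, the Python/OpenCV snippet below (not the project's code; names like segment_obstacles, safe_rows, and threshold are made up for illustration) builds a hue-saturation histogram from a strip at the bottom of the frame and flags pixels whose bins are rare in it:

```python
import cv2
import numpy as np

def segment_obstacles(bgr, safe_rows=40, bins=32, threshold=5):
    """Return a binary mask where 1 marks pixels classified as obstacles."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s = hsv[:, :, 0], hsv[:, :, 1]    # hue/saturation resist lighting changes

    # "Safe" reference region: the strip at the bottom of the image,
    # assumed to be obstacle-free ground directly in front of the robot.
    safe = hsv[-safe_rows:, :, :]
    hist = cv2.calcHist([safe], [0, 1], None, [bins, bins],
                        [0, 180, 0, 256])

    # Look up every pixel's (H, S) bin count in the safe-region histogram.
    h_idx = (h.astype(np.int32) * bins) // 180
    s_idx = (s.astype(np.int32) * bins) // 256
    counts = hist[h_idx, s_idx]

    # Pixels whose bin is rare in the safe region are labeled obstacles.
    return (counts < threshold).astype(np.uint8)
```

Note that only H and S are binned; dropping the V channel is what gives the method most of its lighting invariance.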

Implementation:


Implementation so far can be found here: https://github.com/williamg42/AGV-IV/tree/master/Autonomous/Autonomous_Obstacle_Avoidance

The current code is only the segmentation algorithm, which applies the basic algorithm described above. A more advanced algorithm is currently under development.
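One possible shape for that advanced variant (purely hypothetical here, since that code is unpublished): keep a small ring buffer of recent safe-region histograms and treat a pixel as safe if its bin is common in any of them.

```python
from collections import deque
import numpy as np

class SafeHistogramMemory:
    """Keeps the N most recent safe-region histograms; a pixel's bin is
    considered safe if it is common in ANY remembered histogram."""
    def __init__(self, capacity=10, threshold=5):
        self.hists = deque(maxlen=capacity)
        self.threshold = threshold

    def update(self, hist):
        """Store the histogram of the latest safe region."""
        self.hists.append(hist)

    def classify(self, h_idx, s_idx):
        # Start with everything marked as an obstacle, then clear any
        # pixel whose bin passes the threshold in some stored histogram.
        obstacle = np.ones(h_idx.shape, dtype=np.uint8)
        for hist in self.hists:
            obstacle &= (hist[h_idx, s_idx] < self.threshold).astype(np.uint8)
        return obstacle
```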

Obstacle Avoidance:

Based on and taken from:
Horswill, I. 1994. Visual Collision Avoidance by Segmentation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 902-909.


Let I(t) be the image at time t, and let D(·) be the obstacle detection algorithm described above. We define the bottom projection ("b-projection") of a binary image J to be a vector b(J), indexed by x (horizontal) coordinate, whose xth element is the height of the lowest marked pixel in the xth column of J:

b_x(J) = min{ y : J(x, y) = 1 }
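A small NumPy sketch of this projection (illustrative only; note that y here is measured upward from the image bottom, matching the formula, rather than NumPy's top-down row order):

```python
import numpy as np

def bottom_projection(J):
    """b(J): for each column x, the height (rows above the image bottom)
    of the lowest marked pixel; columns with no marks map to full height."""
    rows, cols = J.shape
    flipped = J[::-1, :]                 # row 0 is now the image bottom
    marked = flipped.any(axis=0)
    first = flipped.argmax(axis=0)       # first marked row from the bottom
    return np.where(marked, first, rows) # no obstacle -> full image height
```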

It will be shown below that, under the right conditions, b(D(I)) is a radial depth map: a mapping from direction to the amount of freespace in that direction. Given the radial depth map b(D(I(t))), we define the left, center, and right freespaces to be the distances to the closest objects on the left, in the center, and on the right:

l(t) = min{ b_x(D(I(t))) : x < x_c - w }
c(t) = min{ b_x(D(I(t))) : |x - x_c| <= w }
r(t) = min{ b_x(D(I(t))) : x > x_c + w }

where x_c is the x coordinate of the center column of the image, and w is a width parameter.
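Continuing the sketch, the three freespaces are just minimums over slices of b (assuming w is smaller than half the image width):

```python
def freespaces(b, w):
    """Left, center, right freespaces from the radial depth map b."""
    xc = len(b) // 2                     # center column
    l = b[:xc - w].min()                 # x < xc - w
    c = b[xc - w:xc + w + 1].min()       # |x - xc| <= w
    r = b[xc + w + 1:].min()             # x > xc + w
    return l, c, r
```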

Now assume the robot has a true holonomic drive (it does not, but the angular and translational velocities can be converted to left and right wheel velocities fairly easily; this lets the robot avoid hitting things, though it will not maintain its previous direction vector, which is fine for now). The control law is then:

dTheta/dt = c_theta * (l(t) - r(t))
v(t) = c_v * (c(t) - d_min)

where dTheta/dt is the angular velocity, v(t) is the translational velocity, d_min is the closest the robot should ever come to an obstacle, and c_theta and c_v are user-defined gains. There will also be a velocity cap on the equations to limit the robot's maximum speed, which is omitted here for simplicity.
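A sketch of the resulting controller, plus the holonomic-to-differential-drive conversion mentioned above; the gains, cap, and wheel separation are placeholder values, not tuned numbers from the project:

```python
def control(l, c, r, d_min=20, c_theta=0.01, c_v=0.05, v_max=1.0):
    """Steer toward the freer side; slow down as the center freespace
    shrinks toward d_min."""
    omega = c_theta * (l - r)            # turn toward the more open side
    v = min(c_v * (c - d_min), v_max)    # cap forward speed
    v = max(v, 0.0)                      # stop rather than reverse when too close
    return v, omega

def unicycle_to_diff(v, omega, track=0.5):
    """Convert (v, omega) to left/right wheel speeds for a differential
    drive with wheel separation `track` (meters); positive omega turns left."""
    v_left = v - omega * track / 2.0
    v_right = v + omega * track / 2.0
    return v_left, v_right
```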
