
Robot navigation

A project log for BunnyBot

BunnyBot is a ROS based robot platform that can perform useful tasks using its built in gripper and vision system.

Jack Qiao • 07/04/2016 at 06:41 • 0 Comments

I'm still waiting on the arm actuator to arrive; meanwhile, here's a video of the robot doing SLAM.

Here's a short explainer:

Phase 1, mapping: The robot has a 2D LIDAR that spins to scan its surroundings. I manually give the robot navigation goals, and it drives to each commanded position while avoiding obstacles. As it does this, a map is built up from the collected sensor data. I struggle a bit because there are people walking around.
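BunnyBot uses the standard ROS mapping stack for this, but the core idea of building an occupancy grid from LIDAR scans is simple enough to sketch. This is a toy version, not the actual code: each beam lowers the log-odds of the cells it passes through (free space) and raises the log-odds of the cell it hits (obstacle). All names and constants here are illustrative.

```python
import math

def bresenham(x0, y0, x1, y1):
    """Integer grid cells on the line from (x0, y0) to (x1, y1)."""
    cells = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    x, y = x0, y0
    while True:
        cells.append((x, y))
        if (x, y) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return cells

class OccupancyGrid:
    def __init__(self, size, resolution):
        self.res = resolution                  # metres per cell
        self.logodds = [[0.0] * size for _ in range(size)]
        self.L_FREE, self.L_OCC = -0.4, 0.85   # log-odds increments (illustrative)

    def integrate_scan(self, pose, ranges, angles, max_range):
        """Fuse one LIDAR scan taken at pose = (x, y, heading)."""
        px, py, th = pose
        cx, cy = int(px / self.res), int(py / self.res)
        for r, a in zip(ranges, angles):
            r = min(r, max_range)
            hx = int((px + r * math.cos(th + a)) / self.res)
            hy = int((py + r * math.sin(th + a)) / self.res)
            ray = bresenham(cx, cy, hx, hy)
            for (gx, gy) in ray[:-1]:          # cells the beam passed through
                self.logodds[gy][gx] += self.L_FREE
            if r < max_range:                  # beam actually hit something
                self.logodds[hy][hx] += self.L_OCC
```

A real SLAM system also has to estimate the robot's pose while it maps, which is the hard part this sketch skips by taking `pose` as given.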

Phase 2, navigation: Once we have an adequate map, we save it and use it for path planning. The advantage is that the sensor data is now used only for localization, not mapping, so the robot can drive much faster than during the mapping phase. That speed limit is mostly down to our LIDAR, which is fairly slow and noisy.
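Localization against a saved map (in ROS this is typically done by a particle filter such as AMCL) boils down to asking: which candidate pose best explains the current scan? A heavily simplified sketch of that scoring step, with a Gaussian beam model and a hypothetical `predict_scan` function that ray-casts the saved map:

```python
def pose_likelihood(measured, predicted, sigma=0.2):
    """Gaussian beam model: log-likelihood of the measured ranges
    given the ranges predicted from a candidate pose."""
    log_l = 0.0
    for m, p in zip(measured, predicted):
        log_l += -((m - p) ** 2) / (2 * sigma ** 2)
    return log_l

def best_pose(candidates, measured, predict_scan):
    """predict_scan(pose) ray-casts the saved map from that pose;
    return the candidate whose prediction best matches the scan."""
    return max(candidates,
               key=lambda pose: pose_likelihood(measured, predict_scan(pose)))
```

A real particle filter keeps a whole cloud of weighted candidates and resamples them as the robot moves, rather than picking a single winner per scan.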

About the visualization (RViz in ROS)

- small rainbow-coloured dots are from the LIDAR

- shifty white dots are point cloud data from the RealSense camera (used to detect obstacles outside the LIDAR plane)

- black dots are marked obstacles

- fuzzy dark blobs are the local costmap. The robot tries to avoid going into dark blobs when planning its path

- big green arrow is the target position that I give to the robot

- green line is the global plan (how the robot plans to get from its current location to the target)

- blue line is the local plan (what the robot actually does given the global plan, visible obstacles and acceleration limits of the robot)
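The fuzzy dark blobs come from costmap inflation: each marked obstacle raises the cost of the cells around it, so the planner naturally keeps its distance instead of grazing walls. A minimal sketch of the idea; the decay function and cost values are illustrative, not the actual ROS navigation parameters:

```python
import math

def inflate(obstacles, width, height, inflation_radius=3, lethal=254):
    """Build a costmap: lethal cost on obstacle cells, exponentially
    decaying cost within inflation_radius of them."""
    cost = [[0] * width for _ in range(height)]
    for (ox, oy) in obstacles:
        for y in range(max(0, oy - inflation_radius),
                       min(height, oy + inflation_radius + 1)):
            for x in range(max(0, ox - inflation_radius),
                           min(width, ox + inflation_radius + 1)):
                d = math.hypot(x - ox, y - oy)
                if d <= inflation_radius:
                    c = int(lethal * math.exp(-d))
                    cost[y][x] = max(cost[y][x], c)  # keep the highest cost
    return cost
```

The global planner then searches for a path with low accumulated cost, and the local planner scores short candidate trajectories against the same costmap plus the robot's acceleration limits, which is why the blue line can deviate from the green one.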
