Autonomous SLAM with a Roomba

Autonomous robot performing SLAM using a Roomba, Raspberry Pi, RPLidar and a laptop for the UI and heavy CPU work.

This project is the natural successor of . The robot is able to explore its surroundings and build a map of them. The robotic platform is a Roomba 630 with an RPLidar and a Raspberry Pi. The Raspberry Pi relays telemetry, including raw lidar data, to a laptop, and takes motion commands and issues them to the Roomba. Additionally, the Raspberry Pi is directly responsible for some basic crash avoidance. All code is written in Java. This project was undertaken as a learning experience and does not use any robotics libraries such as ROS.

Some of the key parts of the software build are:

  • Route planning.
  • Particle filter.
  • Obstacle avoidance.
  • Map building, loop closure and error correction (SLAM).
  • Sparse high-speed occupancy grid map.
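One of those components, the sparse occupancy grid, can be sketched in a few lines. The sketch below is illustrative rather than the project's actual implementation: it stores log-odds values in a hash map keyed by packed cell coordinates, so memory grows with the number of observed cells rather than with map area. The class name and the HIT/MISS constants are assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Sparse occupancy grid sketch: cells live in a HashMap keyed by packed
// (x, y) coordinates, so memory scales with observed cells, not map area.
class SparseOccupancyGrid {
    private final Map<Long, Double> logOdds = new HashMap<>();
    private static final double HIT = 0.85;   // log-odds increment for an "occupied" reading
    private static final double MISS = -0.4;  // log-odds decrement for a "free" reading

    private static long key(int x, int y) {
        return (((long) x) << 32) | (y & 0xffffffffL);
    }

    public void observe(int x, int y, boolean occupied) {
        logOdds.merge(key(x, y), occupied ? HIT : MISS, Double::sum);
    }

    // Occupancy probability; unobserved cells default to 0.5 (unknown).
    public double probability(int x, int y) {
        Double l = logOdds.get(key(x, y));
        return l == null ? 0.5 : 1.0 - 1.0 / (1.0 + Math.exp(l));
    }

    public int observedCells() {
        return logOdds.size();
    }
}
```

Because unobserved cells are simply absent from the map, queries over huge unexplored areas cost nothing, which is what makes this kind of grid fast for large environments.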

This video explains much of what this robot does.

Standard Tessellated Geometry - 51.06 kB - 04/10/2018 at 22:50

Standard Tessellated Geometry - 527.62 kB - 04/10/2018 at 22:49

Standard Tessellated Geometry - 38.95 kB - 04/10/2018 at 22:49

Standard Tessellated Geometry - 148.42 kB - 04/10/2018 at 22:49

Standard Tessellated Geometry - 74.89 kB - 04/10/2018 at 22:49
  • 1 × RPLidar
  • 1 × Raspberry pi
  • 1 × Roomba 630
  • 1 × Usb battery 10Ah
  • 1 × Usb WiFi

View all 6 components

  • Robot fail...

    rlsutton1 • 06/22/2019 at 10:25 • 0 comments

    This is what happens when both Obstacle detection and Navigation systems fail at the same time.

  • June 15th 2019 Issues

    rlsutton1 • 06/15/2019 at 04:01 • 0 comments

    My agenda this week, we'll see how much I get done!

    Issue: Poor adherence to planned path.
    Intended solution: Calculate the curvature of the bezier slightly in front of the robot's current position.

    Issue: Poor adherence to planned path.
    Intended solution: Convert the route gradient to a fixed path.

    Issue: Poor detection of objects via the depth camera.
    Intended solution: Come up with, or change to, a new algorithm for locating edges in the point cloud.

    Issue: Erratic turning.
    Intended solution: Stabilize the turn radius by making adjustments to the radius over a half-second period at 100 ms intervals.

    Issue: Slow reaction time to new obstacles.
    Intended solution: Run the route planner on demand when a change is detected in the environment, rather than at a set frequency.

    Issue: Inconsistent perception of the position and size of objects detected via the point cloud. This is in part because of the broken Intel Realsense integration with OpenNI2 and my inability to compensate for it using the information provided by the OpenNI2 API.
    Intended solution: Rather than tracking via an occupancy grid, move to tracking objects and noting their size and location. Subsequent observations would update the size and location of an object with a suitably near match of location and size, thus improving the accuracy of the perceived size and location over time.

    Issue: The gallery picture needs updating to a more current build of the robot.
    Intended solution: Self-explanatory really.
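    One of the fixes above, stabilizing the turn radius by adjusting it at 100 ms intervals over half a second, could be sketched as follows. This is a hypothetical illustration of the idea, not the project's code; the class name, the linear step rule and the five-tick settle window are all assumptions.

```java
// Turn-radius stabilisation sketch: rather than jumping straight to a newly
// planned radius, step toward it on each 100 ms control tick so the command
// settles smoothly over roughly half a second (five ticks).
class TurnRadiusSmoother {
    private static final int STEPS = 5; // 5 ticks x 100 ms = 0.5 s settle window
    private double current;
    private double target;

    TurnRadiusSmoother(double initialRadius) {
        this.current = initialRadius;
        this.target = initialRadius;
    }

    void setTarget(double radius) {
        this.target = radius;
    }

    // Called once per 100 ms tick: move a fraction of the remaining distance,
    // so abrupt planner output becomes a gradual radius change.
    double tick() {
        current += (target - current) / STEPS;
        return current;
    }
}
```

    A sudden new target from the planner then shows up at the drive wheels as a ramp rather than a step, which is the intended cure for the erratic turning.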

    Any tips, suggestions or questions gratefully received.

  • Rebuild video with udoo x86 and D435 realsense

    rlsutton1 • 01/20/2019 at 22:01 • 0 comments

    Here is the rebuild video, with a short demo at the end. I'm still working on some software issues processing the point cloud data from the depth camera.

  • Robot upgrades…

    rlsutton1 • 01/04/2019 at 10:54 • 0 comments

    My robot is getting a serious upgrade.

    Why? I decided that I couldn't effectively avoid obstacles that the LIDAR couldn't see using the Astra depth camera, because its minimum point-cloud range is 60 cm.

    This sent me looking for a depth camera with a shorter minimum range, and I found the Intel Realsense D435. The D435 has a minimum depth range of 10 cm at lower resolutions and 20 cm at higher resolutions.

    This led me to the next problem: the Realsense camera requires USB3, and the SDK doesn't have ARM support either, making the Raspberry Pi a problem. So I searched for a replacement board that supports USB3 and is x86… and the winner is the UDOO X86.

    The UDOO requires 12 V at up to 3 A, so I bought a 12 V battery power bank…

    I soon realised that the power bank could operate at 12, 16 or 19 volts, and as it only has a single button to turn it on, turn it off and select the voltage, it would be possible to accidentally select a higher voltage when turning it off. This was an unacceptable risk, as the UDOO requires 12 V ±5%, which led to the next search: a 12 V buck/boost regulator.

    The next challenge was a shortage of USB ports: I need 4, but the UDOO only has 3. This one was easy to solve, though.

    Given that all my code is written in Java, the move from Raspberry Pi to UDOO X86 is effortless for my code. But the Realsense D435 depth camera doesn't have much in the way of Java support, and I had already sunk some effort into integrating with OpenNI2 for the Astra camera. The Realsense libs do support OpenNI2, which sounded like an easy fix, but as it turns out it requires manually building OpenNI2 and the Realsense SDK. I cheated a little and used the binaries for the OpenNI2 build, which caused me some other problems, but after about 3 or 4 hours and a firmware update for the Realsense, it was working.

    The next problem is how to house all this new hardware.

    I'm building a new platform for the hardware to sit on top of the Roomba. As the Realsense is much smaller, I'll be able to condense it into just 2 layers. I started with a couple of MDF cake bases from the local bargain shop. I added a slot to the lower one to allow access to the Roomba's buttons. I've also used some cupboard handles as stand-offs to hold the platform above the Roomba with enough clearance for the Roomba's serial DIN plug; I didn't want to bring the DIN plug through the platform, to leave more clear space for all the hardware to sit on the bottom platform.

    To be continued…

  • Bezier curves for smooth paths and obstacle avoidance

    rlsutton1 • 12/16/2018 at 10:54 • 0 comments

    The navigation system was previously creating paths with abrupt turns in them; in this video you can see that I've used Bezier curves to smooth out the paths.
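    The smoothing idea can be illustrated with the standard cubic Bezier formula: four control points define a curve whose heading changes continuously, so the path has no abrupt turns. This is a generic sketch, not the project's planner code; the class and method names are assumptions.

```java
// Cubic Bezier: B(t) = (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3.
// Evaluating x and y separately gives a smooth 2-D path from P0 to P3, with
// P1 and P2 shaping the curve (and hence the robot's heading changes).
class Bezier {
    // One coordinate of a cubic Bezier at parameter t in [0, 1].
    static double cubic(double p0, double p1, double p2, double p3, double t) {
        double u = 1.0 - t;
        return u * u * u * p0
             + 3 * u * u * t * p1
             + 3 * u * t * t * p2
             + t * t * t * p3;
    }
}
```

    Sampling t from 0 to 1 and calling this once for x and once for y yields the waypoints of one smooth path segment.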

    Although it doesn't quite get it right first time, there is an additional short-term route planner (the red line) designed to plan around local obstacles that are not present in the map. This planner re-plans the next few meters every second.

    There is one further improvement here: the robot no longer navigates in local maps, and as a result doesn't have to stop constantly to switch maps.

    The robot still drives quite conservatively (slow) when near obstacles, as can be seen.

  • Navigator refinement and Particle filter fix

    rlsutton1 • 09/30/2018 at 06:34 • 0 comments

    In this video you can see that the route planner (left side) plans a route staying well away from walls, corners and other obstacles. 

    The regular pauses in the robot's travel happen when it swaps out the particle filter map (right side) for the next local map in the path.

    Towards the end of the video you can see the particles (right side, red dots) of the particle filter spread out when the robot is travelling quickly (50 cm/second) down the hallway. This is the maximum documented speed of the Roomba.

    The map was built on the fly (SLAM) although I didn't include that in this video.

    In the past few weeks I've identified a bug in the particle filter: the re-sample was done before the update, leading to continual jitter in the output of the particle filter.
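    The fix boils down to ordering: weights must be recomputed from the latest observation before resampling, otherwise the filter resamples on stale weights. A minimal 1-D sketch of one corrected iteration, assuming a simple Gaussian-style likelihood and illustrative names (not the project's actual classes):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// One corrected particle-filter iteration on a 1-D state: re-weight against
// the NEW observation first, then resample proportionally to those weights.
// Resampling before the update (the bug described above) draws from stale
// weights and makes the output jitter.
class FilterStep {
    static class Particle {
        double x, weight;
        Particle(double x, double weight) { this.x = x; this.weight = weight; }
    }

    static List<Particle> step(List<Particle> particles, double observation, Random rng) {
        // 1. Update: weight each particle by its agreement with the observation.
        double total = 0;
        for (Particle p : particles) {
            double d = p.x - observation;
            p.weight = Math.exp(-d * d); // Gaussian-style likelihood
            total += p.weight;
        }
        // 2. Resample AFTER the update, proportional to the fresh weights.
        List<Particle> next = new ArrayList<>(particles.size());
        for (int i = 0; i < particles.size(); i++) {
            double r = rng.nextDouble() * total;
            double acc = 0;
            for (Particle p : particles) {
                acc += p.weight;
                if (acc >= r) { next.add(new Particle(p.x, 1.0)); break; }
            }
        }
        return next;
    }
}
```

    With the update first, a particle far from the observation is almost never re-drawn, so the output settles instead of jittering.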

    I also spent some time improving the route planner. It now correctly calculates a cost based on how close a path passes to obstacles, resulting in a path which does a good job of staying away from obstacles, walls and corners.
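    The cost idea can be sketched as a penalty that grows as a cell's distance to the nearest obstacle shrinks. The quadratic falloff and all constants below are illustrative assumptions, not the project's actual tuning:

```java
// Obstacle-proximity cost sketch: traversal cost rises quadratically as a
// cell's distance to the nearest obstacle drops below a safe margin, so a
// planner minimising path cost naturally keeps away from walls and corners.
class ObstacleCost {
    static final double SAFE_DISTANCE = 1.0; // metres; beyond this, no penalty
    static final double BASE_COST = 1.0;     // cost of crossing open space
    static final double MAX_PENALTY = 10.0;  // extra cost flush against an obstacle

    // distance: metres to the nearest obstacle (e.g. from a distance transform).
    static double cellCost(double distance) {
        if (distance >= SAFE_DISTANCE) return BASE_COST;
        double closeness = 1.0 - distance / SAFE_DISTANCE;
        return BASE_COST + MAX_PENALTY * closeness * closeness;
    }
}
```

    Summing this cost along candidate paths makes a hallway centreline cheaper than a route that hugs a wall, which is the behaviour described above.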

    I still intend to add another layer on top of the route planner to take into account the latest Lidar scan and navigate around transient objects.

  • Orbbec Astra depth camera

    rlsutton1 • 07/01/2018 at 22:39 • 0 comments

    I have now written code to acquire data from the Orbbec Astra depth camera.

    My initial experiments were with Orbbec's SDK, and after about 6 hours I was getting usable data via the Java API. I posted those details to the Orbbec forums.

    Alas, when I moved my code to the Raspberry Pi, I discovered there is currently no ARM/Linux support in the Orbbec SDK.

    Next I moved to OpenNI2, and after about 4 more hours I had usable depth data on the Raspberry Pi. I also posted that code to the Orbbec forums.

    A nice thing the Astra does by default is remove the ground plane, which greatly simplifies data processing.

    I took the depth data and flattened it by removing the height data, giving effectively a top-down view similar to LIDAR.
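    That flattening step can be sketched as follows: drop each point's height component, then keep the nearest remaining point per horizontal angle bin, which yields a LIDAR-like top-down range scan. The bin count, frame convention and names are assumptions, not the project's code:

```java
import java.util.Arrays;

// Flatten a 3-D point cloud to a top-down, LIDAR-like range scan: ignore the
// height axis, then keep the nearest point in each horizontal angle bin.
class DepthFlattener {
    static final int BINS = 180; // angular resolution across a 180-degree field of view

    // points: rows of {x, y, z} in metres, camera frame (x right, y up, z forward).
    // Returns the nearest range per bin; Double.MAX_VALUE where nothing was seen.
    static double[] flatten(double[][] points) {
        double[] ranges = new double[BINS];
        Arrays.fill(ranges, Double.MAX_VALUE);
        for (double[] p : points) {
            double x = p[0], z = p[2];       // p[1] (height) is deliberately dropped
            double range = Math.hypot(x, z);
            double angle = Math.atan2(x, z); // 0 = straight ahead
            int bin = (int) ((angle + Math.PI / 2) / Math.PI * (BINS - 1));
            if (bin < 0 || bin >= BINS) continue; // behind the camera: skip
            if (range < ranges[bin]) ranges[bin] = range; // keep the nearest return
        }
        return ranges;
    }
}
```

    The resulting array is shaped like a single lidar sweep, so the same obstacle-avoidance code can consume either sensor.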

    The Astra cannot give depth data any closer than 60 cm, which will really stop me from using it the way I intended. Fortunately I acquired the used Astra cheaply, so I have ordered an Intel Realsense that can give depth data as close as 10 cm.

  • Mapping Improvements

    rlsutton1 • 05/15/2018 at 11:30 • 0 comments

    This week I managed to find a bug in the code which initializes the particle filter when moving between maps.

    This has enabled extended runs without the robot becoming lost in the simulator, and my general experience is that the real robot actually performs better due to the greater detail available in the real world.

    Progress has also been made on placing constraints between maps, although it appears that the positional accuracy when moving between maps is worse than when creating them. As a result I'm considering only adding additional constraints between very distant maps, such as for loop closure.

View all 15 project logs


logan wrote 09/15/2019 at 23:34 point

Hi Robert,

Thanks for sharing this amazing project!

I had a Roomba 360, an RPlidar, and a Realsense D435 sensor all sitting at home or being used for little things, and wanted to experiment with the pi-bot code you uploaded to Github. It initially worked nicely in simulation mode, but I didn't manage to get it running on the real robot, so I'd be grateful if you could provide some guidance on how to get it working.

I'm running the project in IntelliJ on a Windows 10 x64 laptop, and while everything compiles well and runs in simulation mode using the Kitchen map, I'm facing the following issues:

1. No image is shown when I try to run options 5, 6, or 9 that use video or depth camera support. Hazelcast connects to the cluster and stays there, while all video windows are blank (no image output from the Realsense sensor). I have installed the Realsense SDK for Win10 and can get a video image with the SDK's app successfully.

2. In all options (0-9), Hazelcast connects to the cluster using my IP address, then just waits and nothing happens. I'm not quite sure why you are using Hazelcast; I guess the latest code you provide on Github is the Raspberry Pi version, where Hazelcast helps connect to a remote PC for high-level processing and data exchange? I don't know.

3. There is a save() method used to save captured maps, but the load() method that is supposed to load a saved map is blank. Could you please provide/upload the content of the load() function, or an example of how to load any saved *.bmp map?

4. Do you have a more complete or newer version of the code that hasn't been uploaded yet? Is the current Github code the one that works with R-Pi/ORBEC and not the one with UDOO/Realsense?

5. I get the following error when trying to connect to the Roomba. I have changed the Linux port /dev/ttyUSB0 to my Windows one (COM3) everywhere, but maybe there are more Linux-to-Windows tweaks I need to make to connect successfully.
Exception in thread "EventThread COM3" java.lang.NullPointerException
    at com.maschel.roomba.RoombaJSSCSerial.serialEvent(
    at jssc.SerialPort$

6. Error connecting to the RPlidar, similar to the previous issue with the Roomba connection.
Exception in thread "main" ev3dev.sensors.slamtec.RPLidarA1ServiceException: This device is not valid: COM4
    at ev3dev.sensors.slamtec.RPLidarA1Driver.init(
    at ev3dev.sensors.slamtec.RPLidarA1.init(

I've been working on it for a while now and I desperately want to get it working on my Roomba with the RPlidar and D435! Like I said, if there is a more up-to-date version of the code that resolves some of the above issues and answers the questions, I would highly appreciate it if you could upload it.

Many thanks, Logan

