Sensor Fusion

A project log for OMNi

A modular, semi-autonomous, omnidirectional robot for telepresent applications

Will Donaldson • 09/26/2021 at 21:33

For a robot to successfully navigate through an environment it must know its position relative to both its surroundings and its previous state(s). Generally speaking, robots require multiple sensor inputs to localize themselves. LiDAR is a powerful technology for mapping the surrounding environment; however, a severe limitation is that the scan updates slowly, at roughly 1 - 10 Hz. For a robot moving at walking pace, this is not fast enough to respond precisely and accurately to a dynamic environment. For example, if the LiDAR scan updates only once per second and the robot moves at 1 m/s, a scan could report that the robot is 0.75 m from a wall; before the next scan arrives, the robot will have already collided with the wall while still believing, based on the stale scan, that it is 0.75 m away.

The RPLidar A1 has a scan update frequency of 5.5 - 10 Hz
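
To make the timing problem concrete, here is a quick back-of-the-envelope calculation (the speed and update rates are illustrative assumptions, not measured values from OMNi):

```python
# How far the robot travels between consecutive LiDAR scans,
# i.e. how "stale" the last scan can be before it is replaced.
def distance_between_scans(speed_m_s: float, scan_rate_hz: float) -> float:
    return speed_m_s / scan_rate_hz

# A robot moving at 1 m/s with a 1 Hz scan covers a full metre on outdated
# information -- more than the 0.75 m gap to the wall in the example above.
for rate_hz in (1.0, 5.5, 10.0):
    print(f"{rate_hz:>4.1f} Hz scan -> {distance_between_scans(1.0, rate_hz):.2f} m travelled per scan")
```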

To overcome this limitation, our robot needs to fuse readings from several different sensors to accurately predict its location at any time. To do this we have installed motor encoders, which count wheel revolutions and thereby measure the distance each wheel has traveled, as well as an IMU (inertial measurement unit). Both of these devices can be sampled on the order of 100 Hz.
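
As a rough sketch of the encoder side of this, wheel travel follows directly from the tick count and the wheel circumference (the encoder resolution and wheel radius below are placeholder values, not OMNi's actual hardware parameters):

```python
import math

TICKS_PER_REV = 1440     # encoder counts per wheel revolution (assumed)
WHEEL_RADIUS_M = 0.03    # wheel radius in metres (assumed)

def wheel_distance(ticks: int) -> float:
    """Distance rolled by one wheel, assuming the no-slip condition holds."""
    revolutions = ticks / TICKS_PER_REV
    return revolutions * 2.0 * math.pi * WHEEL_RADIUS_M

# Sampled at ~100 Hz, successive differences in the tick count give wheel speed.
print(wheel_distance(3600))  # 2.5 revolutions -> ~0.47 m
```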

By combining the high-frequency measurements from the encoders and IMU with the low-frequency LiDAR scans, we can continuously estimate an accurate location, effectively interpolating the robot's position between successive LiDAR scans.
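
Conceptually the fusion loop looks something like the sketch below: dead-reckon with the fast sensors every cycle and blend in a LiDAR-derived pose whenever a new scan arrives. The blending factor, rates, and simulated inputs here are simplifying assumptions; the actual fusion on OMNi is handled by the Kalman filter discussed later in this post.

```python
def propagate(pose, dx, dy, dtheta):
    """Dead-reckon one high-rate (~100 Hz) step from encoder/IMU deltas."""
    x, y, th = pose
    return (x + dx, y + dy, th + dtheta)

def correct(pose, lidar_pose, alpha=0.8):
    """Blend the low-rate LiDAR pose with the dead-reckoned pose.
    A real system weights this by covariance (e.g. a Kalman filter)."""
    return tuple(alpha * l + (1 - alpha) * p for l, p in zip(lidar_pose, pose))

pose = (0.0, 0.0, 0.0)            # x [m], y [m], heading [rad]
for step in range(100):           # one second of 100 Hz odometry updates
    pose = propagate(pose, dx=0.01, dy=0.0, dtheta=0.0)   # fake 1 m/s straight line
    if step % 20 == 19:           # a LiDAR pose arrives at ~5 Hz
        pose = correct(pose, lidar_pose=(0.01 * (step + 1), 0.0, 0.0))
print(pose)
```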

The next blog post will discuss how we use the LiDAR sensor to build a map of the environment around the robot. The remainder of this post focuses on how we process the measurements from the encoders and IMU.

While the previous paragraphs praise the strengths of encoders and IMUs, we omitted their significant flaws. For a motor encoder to yield an accurate location, we must assume a no-slip condition: the wheel never slips on the floor. If the wheel does slip, we get an inaccurate location estimate, since the encoder reports that the robot drove forward even though the wheel was merely spinning on the spot. For most vehicles (cars, skateboards, the Curiosity Mars rover) the no-slip condition is a fairly accurate assumption; however, given that we are using three omnidirectional wheels, the situation is more nuanced. The entire premise of an omni wheel is that its rollers spin freely about the axis perpendicular to the wheel's own rotation. Nonetheless, on a surface with a high enough coefficient of friction, we can use the equations of motion (below) to calculate the location of the robot based on the RPM (revolutions per minute) of the wheels.

Equation of motion for a three-omni-wheel robot, where r is the wheel radius and R is the wheelbase radius
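
The original figure is not reproduced here; one standard way of writing the inverse kinematics of a three-wheel omni drive (assuming wheel axes mounted at 0°, 120°, and 240° around the chassis, which may differ from OMNi's exact convention) is:

```latex
\begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix}
=
\begin{bmatrix}
-\sin\theta_1 & \cos\theta_1 & R \\
-\sin\theta_2 & \cos\theta_2 & R \\
-\sin\theta_3 & \cos\theta_3 & R
\end{bmatrix}
\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\varphi} \end{bmatrix},
\qquad
\omega_i = \frac{v_i}{r}
```

Here v_i is the rim speed of wheel i, ω_i its angular velocity, (ẋ, ẏ, φ̇) the chassis velocity in the body frame, θ_i ∈ {0°, 120°, 240°}, r the wheel radius, and R the wheelbase radius. Inverting the matrix gives the forward kinematics used for odometry, i.e. chassis velocity from the measured wheel RPMs.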

Our robot initially relied only on the motor encoders; however, when it came to navigating accurately, we began to notice erroneous behavior. Upon closer inspection, we noticed the wheels would slip on the polished concrete floor.

<insert video of wheels slipping>

To mitigate this, we first created acceleration profiles, limiting how quickly the motors could accelerate and thereby significantly reducing wheel slip.
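
A minimal sketch of what such an acceleration profile can look like is a simple slew-rate limiter on the commanded wheel speed (the limit and loop rate below are assumptions, not OMNi's tuned numbers):

```python
def limit_acceleration(current_rpm: float, target_rpm: float,
                       max_rpm_per_s: float, dt: float) -> float:
    """Move toward the target speed without exceeding the allowed acceleration."""
    max_step = max_rpm_per_s * dt
    step = target_rpm - current_rpm
    if step > max_step:
        step = max_step
    elif step < -max_step:
        step = -max_step
    return current_rpm + step

# Example: ramping a wheel from rest to 120 RPM in a 100 Hz loop with a 200 RPM/s limit
rpm = 0.0
for _ in range(100):              # one second of control loop iterations
    rpm = limit_acceleration(rpm, target_rpm=120.0, max_rpm_per_s=200.0, dt=0.01)
print(rpm)                        # reaches ~120 RPM after a 0.6 s ramp
```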

To improve the results further we added a BNO055 IMU so we could combine the readings into a single odometry estimate using an EKF (extended Kalman filter).

<TO DO: outline how the EKF works and associated matrices, discuss covariances>
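
As a rough illustration of the structure, a minimal EKF predict/update cycle might look like the following sketch. The state vector, motion model, and noise values are simplifying assumptions made for the example (prediction from encoder odometry, correction from the IMU yaw), not the matrices actually used on OMNi:

```python
import numpy as np

# State: [x, y, theta] -- planar pose of the robot.
Q = np.diag([0.001, 0.001, 0.0005])   # process noise (odometry uncertainty, assumed)
R_yaw = np.array([[0.0002]])          # measurement noise (IMU heading, assumed)

def ekf_predict(x, P, v, omega, dt):
    """Propagate the state with a nonlinear unicycle-style motion model."""
    theta = x[2, 0]
    x_pred = x + np.array([[v * np.cos(theta) * dt],
                           [v * np.sin(theta) * dt],
                           [omega * dt]])
    F = np.array([[1.0, 0.0, -v * np.sin(theta) * dt],    # Jacobian of the motion model
                  [0.0, 1.0,  v * np.cos(theta) * dt],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_update_yaw(x, P, yaw_measured):
    """Correct the heading estimate using an IMU yaw reading."""
    H = np.array([[0.0, 0.0, 1.0]])                 # measurement model: we observe theta
    y = np.array([[yaw_measured]]) - H @ x          # innovation
    S = H @ P @ H.T + R_yaw                         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P

# One fused step: predict from odometry, then correct with the IMU.
x = np.zeros((3, 1))
P = np.eye(3) * 0.01
x, P = ekf_predict(x, P, v=0.5, omega=0.1, dt=0.01)
x, P = ekf_update_yaw(x, P, yaw_measured=0.0011)
print(x.ravel())
```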
