I wanted to give Roz the ability to navigate to a waypoint using dead reckoning. This is a small part of being able to navigate from one place to another using a non-raster-based map. Most robots these days use super-accurate scanning LIDARs and 3D cameras to build up reasonably accurate raster-based maps of the locations the robot navigates through. I think this is a bad idea, because it simply isn't scalable. We certainly don't navigate that way as humans, and before everyone had GPS maps in their car we didn't navigate that way driving, either.
The way we navigate is to follow paths from one visual landmark to another. We only have a rough idea of where we are at any given point in time, certainly not down to the closest centimeter or even, sometimes, the closest meter. When you're trying to go from one room in a house to another, it doesn't matter whether you're 4 meters from the door or 3 - you keep walking until you reach the door. When you go through the door, you follow another path that leads to the next landmark you are looking for, and so on. It doesn't matter exactly where the doorway is, because once you can see it you head towards it until you reach it. You can navigate through a house like that, or from one location to another thousands of kilometers away. It is a very scalable navigation and mapping technique.
For now, Roz will only be doing visual landmark recognition using fiducial tags on doorways and other interesting landmarks. Eventually, I would like Roz (or a future version) to do good enough real-world landmark recognition with a camera to do away with fiducial tags entirely.
Dead reckoning from one location to another, given a heading and a rough distance, is a good place to start. Roz does odometry right now by estimating distance travelled from the gait speed, combined with the compass heading from the IMU in his head. It's very crude, but for what I'm trying to accomplish I don't need super high accuracy.
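As a rough illustration, the odometry update might look something like the following sketch. This is not Roz's actual code - the Odometer class and the mm_travelled estimate are stand-ins for whatever the gait code really provides.

import math

class Odometer:
    # Crude dead-reckoning odometer: integrate an estimated distance
    # per step cycle along the current compass heading from the IMU.
    def __init__(self):
        self.x = 0.0  # mm, in the odometry frame
        self.y = 0.0  # mm

    def update(self, mm_travelled, heading_degrees):
        # both inputs carry noticeable error, and that error accumulates,
        # which is why we only need to get "close enough" to a waypoint
        heading = math.radians(heading_degrees)
        self.x += mm_travelled * math.cos(heading)
        self.y += mm_travelled * math.sin(heading)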
We start at the location (0, 0), and are given a compass heading and the number of mm we should travel. We calculate the (X, Y) end location, and then set up a PID loop, with constants that seem to give reasonably good results.
# remember where we started, and project the segment distance along
# the compass heading to get the (x, y) end location
self.start_location = self.robot.odometer.position
radian_heading = math.radians(self.segment.heading)
self.end_location = XYPoint(self.start_location.x, self.start_location.y)
self.end_location.x += self.segment.distance * math.cos(radian_heading)
self.end_location.y += self.segment.distance * math.sin(radian_heading)
# we've reached the goal when we're less than 150mm from the end location
self.reached_goal_distance = 150
end = XYPoint(int(self.end_location.x), int(self.end_location.y))
log('End location: {}'.format(end))
# the PID controller works on the heading error, so the setpoint is
# a delta heading of zero
self.delta_heading = 0
self.kp = 0.007
self.ki = 0
self.kd = 0.0005
log('Kp: {} Ki: {} Kd: {}'.format(self.kp, self.ki, self.kd))
self.pid = PID(self.kp, self.ki, self.kd, setpoint=self.delta_heading, output_limits=(-0.5, 0.5))
Once everything is set up, we run in a loop, grabbing the current heading and calculating the heading offset, which we force to be between -180 and 180 degrees. We feed that delta heading into the PID loop and get back an output value in radians per step cycle, which goes to the robot's inverse kinematics system to make it turn at that rate while walking forwards.
We also do obstacle detection and avoidance, but I've removed that code from this example for clarity.
heading = self.robot.imu_sensor.yaw
heading_to_goal = self.robot.odometer.position.heading_to(self.end_location)
self.delta_heading = heading_to_goal - heading
# wrap delta_heading into the range -180 to 180 degrees
if self.delta_heading < -180:
    self.delta_heading += 360
elif self.delta_heading > 180:
    self.delta_heading -= 360
new_r_speed = self.pid(self.delta_heading)
log('|R_Speed,{},{},{}'.format(new_r_speed, heading, self.delta_heading))
self.robot.set_rotation_speed(new_r_speed)
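For reference, the heading_to and distance_to helpers used here could be implemented roughly like this. The real XYPoint internals aren't shown in this log, so treat it as a sketch that simply matches the cos-for-x, sin-for-y convention used by the endpoint calculation above.

import math

class XYPoint:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def heading_to(self, other):
        # heading in degrees from this point to 'other', consistent
        # with projecting distance as (d*cos(h), d*sin(h))
        return math.degrees(math.atan2(other.y - self.y, other.x - self.x))

    def distance_to(self, other):
        # straight-line distance in mm
        return math.hypot(other.x - self.x, other.y - self.y)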
All of this is implemented in a finite state machine: the first code segment above runs during setup of the dead reckoning state machine, and the second lives in the walking state handler, where it is called iteratively until something happens to cause a state change:
distance_to_goal = self.robot.odometer.position.distance_to(self.end_location)
if distance_to_goal <= self.reached_goal_distance:
    log('Reached goal, finishing')
    self.state_machine.transition_to(self.finish_state)
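To make the structure concrete, here is a bare-bones sketch of the kind of state machine this runs in. It's a simplified stand-in, not Roz's actual implementation: each state gets a one-time setup hook (the first code segment) and a handler that is called once per step cycle (the later segments).

class StateMachine:
    def __init__(self, initial_state):
        self.state = None
        self.transition_to(initial_state)

    def transition_to(self, new_state):
        # switch states and run the new state's one-time setup
        self.state = new_state
        self.state.setup()

    def run_once(self):
        # called from the robot's main loop, once per step cycle;
        # the walking handler steers via the PID and checks the goal
        self.state.handle()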
The idea is that once we reach the end location, we can start looking for a visual landmark and servo the robot towards it. We then get the next segment in the path towards our goal and start the process over again.
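In pseudocode terms, the eventual navigation loop might look something like this. It is purely illustrative - the segment fields and helper names are invented for the sketch, and the landmark servoing doesn't exist yet.

def navigate(robot, path):
    # path is a list of segments, each with a heading, a rough
    # distance, and the landmark expected at the far end
    for segment in path:
        dead_reckon_to(robot, segment)              # the state machine above
        servo_to_landmark(robot, segment.landmark)  # planned, not built yet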
Here's a quick video showing the dead reckoning navigator working: