As powerful as autonomous SLAM navigation algorithms are, sometimes a human operator is required. For example, visitors to a museum or other event may wish to guide their own tour and interact with the exhibits, but be unable to attend in person due to age, sickness, or other accessibility constraints.
OMNi offers two methods of user control: a handheld controller or a keyboard. We currently use an Xbox One controller that connects to the robot via Bluetooth, allowing operation from within about 10 meters.
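As an illustration, a common way to wire an Xbox controller into ROS2 is the teleop_twist_joy package; this is a sketch of that approach, not necessarily the exact configuration OMNi uses, and the ROS2 distribution name is an assumption.

```shell
# Assumption: teleop_twist_joy is installed for your ROS2 distro, e.g.
#   sudo apt install ros-humble-teleop-twist-joy
# Launch the joystick teleop node with its built-in Xbox button mapping;
# it publishes geometry_msgs/Twist messages on /cmd_vel.
ros2 launch teleop_twist_joy teleop-launch.py joy_config:='xbox'
```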
Alternatively, the robot can be driven from a remote computer over an SSH connection through a ZeroTier network. The advantage of ZeroTier is that it provides a secure connection from any internet-connected network worldwide. The one caveat is that, currently, the remote machine must have ROS2 installed along with the OMNi workspace. As this project continues to develop, we will publish detailed instructions on how to set all of this up.
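The basic ZeroTier workflow looks something like the following sketch; the network ID, username, and IP address are placeholders you would replace with your own, and the exact steps on OMNi may differ.

```shell
# On both the robot and the remote machine, join the same ZeroTier network
# (the network ID comes from your ZeroTier account; shown here as a placeholder):
sudo zerotier-cli join <your-network-id>

# Confirm the network status is OK and note the ZeroTier-assigned IP address:
sudo zerotier-cli listnetworks

# From the remote machine, SSH into the robot over its ZeroTier address:
ssh <user>@<robot-zerotier-ip>
```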
Once connected to the robot from a remote computer, we use the ROS2 package teleop_twist_keyboard (BSD License 2.0). As described in that package's README, the robot can be controlled using nine keys (uiojklm,.) in either holonomic (omnidirectional drive) or non-holonomic (traditional drive, like a car) mode.
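To make the nine-key scheme concrete, here is a minimal sketch of how such a key-to-velocity mapping works in non-holonomic mode. The multiplier values mirror teleop_twist_keyboard's defaults, but treat the exact bindings and the helper function as illustrative rather than the package's actual implementation.

```python
# Simplified model of a teleop key map (non-holonomic mode).
# Each key maps to (x, y, z, yaw) multipliers that scale the linear
# and angular speed settings; 'k' (or any unbound key) means stop.
MOVE_BINDINGS = {
    "i": (1, 0, 0, 0),    # forward
    "u": (1, 0, 0, 1),    # forward while turning left
    "o": (1, 0, 0, -1),   # forward while turning right
    "j": (0, 0, 0, 1),    # rotate left in place
    "l": (0, 0, 0, -1),   # rotate right in place
    ",": (-1, 0, 0, 0),   # backward
    "m": (-1, 0, 0, -1),  # backward while turning left
    ".": (-1, 0, 0, 1),   # backward while turning right
}

def key_to_twist(key, speed=0.5, turn=1.0):
    """Return (linear_x, linear_y, linear_z, angular_z) for a pressed key.

    Unbound keys (including 'k') produce a full stop.
    """
    x, y, z, th = MOVE_BINDINGS.get(key, (0, 0, 0, 0))
    return (x * speed, y * speed, z * speed, th * turn)
```

For example, `key_to_twist("i")` yields a forward linear velocity with no rotation, while `key_to_twist("k")` yields all zeros.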
However, driving the robot from a remote location means that you cannot see the robot or its environment. As such, we installed a Raspberry Pi High Quality Camera on the top platform so that the operator has a first-person view from the robot.
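The source does not specify how the video reaches the operator, but one simple approach on a Raspberry Pi is to stream H.264 over TCP with the stock camera tools; the port number and flags below are illustrative.

```shell
# On the robot (Raspberry Pi OS with rpicam-apps; on older releases the
# binary is named libcamera-vid): serve an H.264 stream over TCP forever.
rpicam-vid -t 0 --inline --listen -o tcp://0.0.0.0:8888

# On the remote computer: view the stream with ffplay (part of FFmpeg),
# disabling input buffering to keep latency low.
ffplay -fflags nobuffer tcp://<robot-ip>:8888
```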
Below is a demonstration of what the first-person view looks like from a remote computer.