06/06/2018 at 16:47 •
06/06/2018 at 16:40 •
In this video, I demonstrate the encoder calibration procedure using the open-source VESC Tool software.
06/04/2018 at 06:34 •
I've got another update on the project. This time I've made an instructional video showing how to superglue the magnet onto the end of the motor - a task necessary for operation of the encoder. The encoders on Rover (and other robots of mine, like Skittles) give the motor controller precise control over motor torque, velocity, and position. This advanced control is critical to making safe, useful robots that can perform a wide variety of helpful tasks.
Without further ado, check out the encoder magnet installation tutorial below:
Thanks for watching and if you want to talk about Rover or help with the project, please visit http://reboot.love and create an account today!
06/04/2018 at 02:12 •
Hey Hackaday Hackers!
I've collected the robots I've been building over the last 7 months and made a short video. It's a good short overview of what I'm doing, and shows Rover V1 and Rover V2 together next to me for scale.
So what's next for Rover V2? Over the next month or so, I will bring up the electronics and software on Rover V2 that I had originally on Rover V1. Rover V2 uses all the same motors, electronics, and software as V1 so it should be painless to get it operating under remote control. Take a look at V1 driving here:
After I've brought up remote control on V2, I have a few tasks to work on next:
- Bring up reinforcement learning in the simulator, so the virtual robot can follow trails.
- Add a suitable camera system to Rover V2 and begin work to bring autonomy to the physical system.
- Improve documentation for Rover's software control system, mechanical design, sensor setup, and electronics.
- Expand outreach on Rover's home site, http://reboot.love, to try to find collaborators who want to help make Rover better.
Long term my goals are to build an open source farming system, and I see Rover V2 as a software research platform to develop some of the key algorithms a farming bot would need to navigate safely around a farm.
If you like what you see, please share this project and consider creating an account on http://reboot.love to help with the Rover project!
06/04/2018 at 00:24 •
Hey Rover fans!
I've now uploaded the files for Rover Sim to GitHub. Please check them out, download them, and let me know if anything is missing! I'm still learning how to use Unreal Engine, so if I left off any files, I want to know!
06/02/2018 at 20:06 •
Hey Rover fans!
I've started building a simulator for Rover using Unreal Engine. See the video of the simulator and my rationale and goals with the simulator below:
I'd like to build a kind of next-generation navigation stack for Rover that uses only cameras for localization, path following, and obstacle avoidance. I have experience bringing up ROS-based navigation systems using LIDAR, but those systems are expensive, and traditional LIDAR mapping and localization algorithms are typically designed for flat indoor environments like offices. While working on a robotics project for my job two years ago, I was asked to survey all possible sensing modalities for a well-funded commercial robot. I spent time looking at 3D LIDAR like the Velodyne Puck, time-of-flight cameras like the IFM O3D303 and Kinect V2, structured-light cameras like the first-generation Kinect and the old Asus Xtion Pro, stereo pair cameras like the ZED Stereo Camera and the PlayStation 4 camera (which at $50 is a steal for Linux-based robotics if you don't mind wiring a USB3 connector onto the cable!), and more.
I found that getting full 360-degree surround sensor coverage for the robot would be terribly expensive, the compute required to process all the data would be prohibitive for any semblance of a low-cost system, the power budget would be terrible, and success in sunlight was still uncertain. Meanwhile, stereo cameras looked almost doable, but the quality of data one could glean with state-of-the-art algorithms was so poor it seemed hopeless. It would cost only $200 to surround the system with cameras, compared to $10k for other sensors, but we couldn't make enough sense of the data to meet our operational needs. I surveyed the algorithms by looking at deployed systems, open source libraries, and the latest research, and it seemed there could be hope in the future for camera-based systems. Indeed, most animals on Earth do well with just a pair of optical sensors and a movable head.
More recently, deep neural networks have revolutionized the way computers understand images and the world around them. We no longer need to manually tune algorithms to detect features in an image based on a human understanding of the data. We are learning to train algorithms to find the necessary details on their own. This is an approach that is both far more accurate and more computationally efficient than past approaches. A low power computer chip is all that is needed to do person following on modern drones - a task that would have taken a desktop grade CPU just a few years ago.
And so, I've envisioned the Rover system as a sort of camera-based research platform for robotics. Rover is made for unstructured off-road environments, not flat, well-behaved offices. I've come up with a six-camera surround system I think has promise for a vehicle like this - four fisheye cameras in the corners and one regular-view camera each at the front and rear. This would allow Rover to do some stereo reconstruction of scenes while also giving it a monocular view all around the robot, with higher-resolution images front and rear, just like the fovea in mammalian eyes.
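To make the layout concrete, the six-camera arrangement can be written down as a simple rig description. Everything below (names, angles, fields of view) is an illustrative placeholder, not Rover's actual measured geometry:

```python
# Illustrative rig description for a six-camera surround layout: four
# fisheye cameras at the corners plus a narrower front and rear camera.
# All angles and FOVs are made-up placeholders, not Rover's real specs.
CAMERA_RIG = [
    {"name": "front_left",  "type": "fisheye",  "yaw_deg": 45,   "fov_deg": 180},
    {"name": "front_right", "type": "fisheye",  "yaw_deg": -45,  "fov_deg": 180},
    {"name": "rear_left",   "type": "fisheye",  "yaw_deg": 135,  "fov_deg": 180},
    {"name": "rear_right",  "type": "fisheye",  "yaw_deg": -135, "fov_deg": 180},
    {"name": "front",       "type": "standard", "yaw_deg": 0,    "fov_deg": 90},
    {"name": "rear",        "type": "standard", "yaw_deg": 180,  "fov_deg": 90},
]

# The corner fisheyes give all-around coverage; overlapping pairs can
# be used for coarse stereo reconstruction.
fisheyes = [c for c in CAMERA_RIG if c["type"] == "fisheye"]
```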
The hardware would be a Jetson TX2 computer with six cameras feeding into its CSI camera bus. This is off-the-shelf hardware, and I think the TX2 will be enough for some pretty solid navigation work. See one such camera system below:
Rover Sim is a virtual environment designed to allow the development and training of the appropriate machine learning algorithms. It will be totally open source as soon as I get a little time to publish it on GitHub. Once the basic sim is complete (I need to move the cameras to match Rover's planned camera placement), I will work on bringing up the World Models algorithm in sim: https://worldmodels.github.io/
I will start by just following the black road in the Sim, a straightforward enough task by my estimation. From there I will spruce up the trails a bit, and retrain the World Model network to follow simulated trails. Eventually I will try to make the trails photorealistic like in the following video, and attempt to bring up the algorithm on the real physical Rover:
To get this "transfer learning" to work, I may need to incorporate real-world data into the training, or fuzz the colors in the simulator so the algorithm is insensitive to color variance. Once Rover is a consummate trail follower, I can explore the idea of navigating a network of multiple trails. That would be very similar to the task solved in this DeepMind research: https://deepmind.com/blog/learning-to-navigate-cities-without-a-map/
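The color-fuzzing idea (often called domain randomization) can be sketched in a few lines of NumPy. The function name and jitter ranges below are illustrative choices of mine, not part of any existing pipeline:

```python
import numpy as np

def fuzz_colors(image, rng, tint_jitter=0.1, brightness_jitter=0.3):
    """Randomly perturb per-channel gain and overall brightness of an
    RGB image (H x W x 3 floats in [0, 1]) so a policy trained in
    simulation can't latch onto the simulator's exact colors."""
    # Independent per-channel gains shift the tint of the whole frame.
    channel_gain = 1.0 + rng.uniform(-tint_jitter, tint_jitter, size=3)
    # A single scalar gain shifts overall brightness.
    brightness = 1.0 + rng.uniform(-brightness_jitter, brightness_jitter)
    fuzzed = image * channel_gain * brightness
    return np.clip(fuzzed, 0.0, 1.0)

rng = np.random.default_rng(0)
frame = rng.uniform(0.0, 1.0, size=(64, 64, 3))  # stand-in for a sim render
augmented = fuzz_colors(frame, rng)
```

Applied with a fresh random draw every training frame, this forces the policy to rely on shapes and layout rather than exact color values.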
I can also explore person following, so the robot can stay on the trail while following an operator, and I can look at using optical flow techniques to recover depth data, ultimately piping that depth data into the World Models controller net in the hope of adding obstacle avoidance to the system. The research shown below calculates scene depth as just part of its optical flow routine (optical flow would also provide high-quality odometry, an important feature for a robot):
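The core geometric relationship behind flow-based depth is simple to sketch: for a camera translating sideways by a known baseline between two frames, the horizontal optical flow of a static point acts like a stereo disparity, so depth follows Z = f * b / d under the pinhole model. The numbers below are illustrative, and a real system would also have to estimate the camera's own motion:

```python
import numpy as np

def depth_from_flow(flow_px, focal_px, baseline_m):
    """Depth in meters from horizontal optical flow (pixels), assuming
    a pure sideways camera translation of baseline_m between frames and
    a pinhole camera with focal length focal_px in pixels: Z = f*b/d."""
    flow_px = np.asarray(flow_px, dtype=float)
    return focal_px * baseline_m / flow_px

# Illustrative numbers: 700 px focal length, 5 cm of sideways motion.
# Larger flow means the point is closer to the camera.
flows = np.array([35.0, 7.0, 3.5])            # pixels of horizontal flow
depths = depth_from_flow(flows, 700.0, 0.05)  # -> [1.0, 5.0, 10.0] meters
```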
I've recently brought up an install of "Gym-UnrealCV", a software component that connects Unreal Engine games to OpenAI Gym, a reinforcement learning toolkit: https://github.com/zfw1226/gym-unrealcv
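What Gym-UnrealCV buys you is the standard Gym reset/step interface, so any agent code is decoupled from the Unreal side. Here is a toy stand-in environment with that same interface shape (the class, dynamics, and reward are all invented for illustration - this is not the Gym-UnrealCV API itself):

```python
import numpy as np

class ToyTrailEnv:
    """Stand-in environment with the classic Gym reset/step interface,
    showing the same loop shape an agent would run against Gym-UnrealCV.
    The 'trail following' here is a 1-D toy: state is lateral offset
    from the trail center, and actions steer left or right."""

    def reset(self):
        self.offset = 3.0  # start off-center
        return np.array([self.offset])

    def step(self, action):
        # action: -1 steers left, 0 goes straight, +1 steers right
        self.offset += float(action)
        reward = -abs(self.offset)       # closer to the center is better
        done = abs(self.offset) < 0.5    # centered on the trail
        return np.array([self.offset]), reward, done, {}

env = ToyTrailEnv()
obs = env.reset()
total_reward = 0.0
for _ in range(10):
    # Trivial hand-written policy: steer toward the trail center.
    action = -np.sign(obs[0])
    obs, reward, done, info = env.step(action)
    total_reward += reward
    if done:
        break
```

A learned policy would replace the hand-written `action = -np.sign(obs[0])` line, with the rest of the loop unchanged whether the environment is this toy or the real Unreal simulator.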
I'll post another update with the game source files when I can!
I can't build Rover alone. To really make this project what I hope it can be, we need all kinds of help with software, design, and even art and marketing. Even though Rover is a CC0 open source project, I love working with creatives to tell a more engaging story! To facilitate discussion on this and other open robotics projects, I've created the website http://reboot.love where anyone can join in and help, or just ask questions and seek guidance. Sign up for Reboot and join the discussion today!
05/19/2018 at 07:08 •
Visit the Robot tent in Zone 5 and look for the yellow banner. There you can see Rover V1 and Rover V2 in person, as well as nab some of my propaganda writing in a new book I've printed. If you don't have the fortune of being in the area, catch my propaganda in this PDF here: http://tlalexander.com/static/zine.pdf
Oh, you can also meet me! My brain is full of crazy ideas about robots - come ask me about them!