The Love Elemental will be a glowing, fuzzy, serpentine robot that seems compellingly lifelike.
There are several cool things about the Love Elemental:
- It will have "glow fur", where RGB leds are placed underneath white fake fur, so that it can express its emotional state through visuals along its body.
- It will have proximity-touch sensing with dozens of individual sensors along its body, to interact with people in its environment.
- It will have a depth sensing camera so that it can do sophisticated navigation and mapping.
- It's meant to be a beautiful piece of interactive sculpture, not only a robot.
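As a taste of how glow fur could express emotional state, here's a hypothetical sketch (the function name and the mapping are my illustration, not the actual firmware) that turns a simple valence/arousal emotion model into an RGB color for the LEDs:

```python
import colorsys

def mood_to_rgb(valence, arousal):
    """Map a 2D emotional state to an RGB color for the glow fur.

    valence in [-1, 1]: negative = cool hues (blue), positive = warm (red).
    arousal in [0, 1]: brightness of the glow.
    This mapping is a made-up example, not the robot's real one.
    """
    # Sweep hue from blue (0.66) at valence -1 down to red (0.0) at valence +1
    hue = 0.66 * (1.0 - (valence + 1.0) / 2.0)
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, arousal)
    return int(r * 255), int(g * 255), int(b * 255)

# Calm and content: a dim warm glow
print(mood_to_rgb(0.8, 0.3))
```

Each body segment could then push its color down the LED strip at animation rate.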
I've spent the past several months designing and building the PCBs and mechanics of the Love Elemental. This log includes a few teasers of what's going into the build as I put it together over the next couple of months. The boards have nearly all the functionality I hoped for: automatic LiPo charging, 10A+ discharge from the integrated 3S LiPos, DMA-based LED strip control for animation, connections to the servo motors, and a fan driver (in case the board needs extra cooling help). The capacitive touch sensors work, but don't achieve the range/proximity I wanted when combined with the actively shielded cables I'm using. The battery protection & charge management IC circuit doesn't work, so that'll need to be remade at some point. All but one circuit working on the first spin, for my first PCB in over 10 years? I'll take it!
The body has also undergone many iterations & improvements, to make it easier to assemble and iterate on, and to give it room to mount everything needed. Most excitingly, the Love Elemental now has ribs attached to the inner skeleton. These ribs use 3D printed flexures to constrain their movement, so that although it'll only have 7 segments, it will have 28 individually, passively articulated ribs, which I think will give it a very nice aesthetic. The design of the ribs, flexures, and mates will have to come in another post, as there are a lot of tricks I used to ensure they're easy to print, assemble, and replace, while also being strong.
Lastly, I imported all the mechanical designs into SolidWorks, so that I can iterate faster than my custom CAD environment allows. As it turns out, even I like the real CAD tools :)
I've managed to bring up mapping & visual odometry in my ROS simulation of the Love Elemental. From the get-go, I knew I wanted to use a Kinect/RealSense sensor for perception, because having a color + depth map seems like a rich space for doing interesting things, and the cost has fallen so much over the years.
Before getting into the details, here's a screenshot of Rviz (only visualizing the robot's internal model state, top right), Gazebo (visualizing the actual simulation, bottom right), and Rtabmap (visualizing the map-making & SLAM model, left):
For navigation, I've long known I'd use ROS Navigation, which is a mature and open navigation platform. To set up navigation for a robot, you need several things:
- odometry (where are you relative to where you were)
- pose estimation (how are the joints oriented & which way are you pointing)
- mapping (what's around you)
- a movement controller (how do you go forward or turn)
- path planning (how do you get from here to there)
Some of these are easier and some are harder. I'll now go into more detail about what I tried, what I learned, and how I got this working.
For instance, for wheeled robots, odometry is easy, since you can measure distance traveled with wheel encoders. For a serpent robot, this is a much less studied problem. My initial gait controller R&D (see the prior log) had a way to do dead reckoning, but I've determined that wheel/ground slippage happens constantly, so I can't rely on dead reckoning or encoder odometry. So, I tried several packages to compute odometry.
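To see why constant slippage kills dead reckoning, here's a toy sketch (with a made-up, constant slip ratio; real slip varies unpredictably) of how the error compounds when you integrate commanded motion into a pose:

```python
import math

def dead_reckon(commands, slip=0.0, dt=0.1):
    """Integrate (v, omega) velocity commands into a 2D pose (x, y, theta).

    `slip` is the fraction of commanded forward motion lost to wheel/ground
    slippage each step -- an assumed constant here, though in practice slip
    for a slithering body varies wildly and unpredictably.
    """
    x, y, theta = 0.0, 0.0, 0.0
    for v, omega in commands:
        v_actual = v * (1.0 - slip)
        x += v_actual * math.cos(theta) * dt
        y += v_actual * math.sin(theta) * dt
        theta += omega * dt
    return x, y, theta

# Drive "straight" for 10 s at 0.5 m/s:
ideal = dead_reckon([(0.5, 0.0)] * 100)        # odometry believes ~5.0 m
real  = dead_reckon([(0.5, 0.0)] * 100, 0.2)   # 20% slip: actually ~4.0 m
print(ideal[0] - real[0])                      # ~1.0 m of error, and growing
```

Since the error grows without bound and slip isn't even constant, an external reference (vision) is needed instead.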
First, I tried hector_slam, which uses a lidar scan to estimate the map & location simultaneously. As it turned out, hector_slam's scan alignment requires a very wide field of view to be successful, and since I was converting the depth image to a laser scan (via the eponymous depthimage_to_laserscan node), I could only get a scan with a narrow FOV. So, hector_slam would fail to map & align successfully.
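For intuition on the FOV problem: a pinhole camera's horizontal field of view follows directly from its image width and focal length, and it's nowhere near the sweep of a spinning lidar. A quick check with rough Kinect-class intrinsics (numbers assumed for illustration):

```python
import math

def horizontal_fov_deg(width_px, fx):
    """Horizontal field of view implied by a pinhole camera's
    image width and focal length, both in pixels."""
    return math.degrees(2.0 * math.atan2(width_px / 2.0, fx))

# Rough Kinect-class numbers: 640 px wide image, fx ~ 525 px
print(round(horizontal_fov_deg(640, 525)))  # ~63 degrees
```

A ~60-degree scan gives hector_slam's matcher far less structure to lock onto than the 240+ degrees a typical scanning lidar provides.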
Next, I found ORB-SLAM2, which takes a stream of monocular, stereo, or RGB-D images and uses feature tracking to estimate odometry (and it builds a map too). After I built ORB-SLAM2 and got it running on the Love Elemental simulation, it would lose tracking within a few hundred milliseconds, and then it would segfault. On to the next one!
Also, as an aside, if visual odometry really doesn't work out with rtabmap, you can always slap a T265 visual odometry module onto the robot and get something significantly more accurate than the OSS tech. The downside is, of course, that this isn't good for the aesthetics I'm going for.
Odometry gives you one pose estimate, but typically you'll combine sensor information from other sources (like an IMU) to get a better pose estimate than any individual sensor provides alone. I found this example of pose estimation with rtabmap on a RealSense, where they use an unscented Kalman filter to fuse visual odometry with the IMU. This is still in development for the simulation, since I'm finding that the simulated IMU gets bounced around a lot.
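The example uses a full unscented Kalman filter, but the fusion intuition fits in a scalar Kalman update: weight each source by its variance, so the noisier estimate pulls the fused value less. A minimal sketch with made-up numbers (not the real filter or its tuning):

```python
def fuse(est, var, meas, meas_var):
    """One scalar Kalman update: fuse the current estimate (est, var)
    with a new measurement (meas, meas_var). Lower variance = more trust."""
    k = var / (var + meas_var)            # Kalman gain
    return est + k * (meas - est), (1.0 - k) * var

# Visual odometry says 0.30 m/s (fairly trusted, var 0.04);
# IMU integration says 0.20 m/s (noisier, var 0.12):
vel, var = fuse(0.30, 0.04, 0.20, 0.12)
print(vel, var)  # fused velocity pulled only slightly toward the IMU
```

A bouncing simulated IMU shows up as a large `meas_var`, so the filter (correctly) mostly ignores it, which is why tuning those covariances matters so much.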
Movement Controller [in progress]
I still need to implement a movement controller, since the propagation manager isn't a good way to do movement. There are two parts to implement: body alignment (i.e. which way is "forward" when you're slithering, since your head doesn't usually point forward), and movement generation (i.e. timing the movement of the joints).
For body alignment, this paper has a simple approach that decomposes the snake's body using PCA to find the principal components of its shape.
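A minimal sketch of that idea (my own illustration, not the paper's code): run PCA over the joint positions and take the first principal axis as the body's forward direction, independent of where the head happens to point.

```python
import numpy as np

def body_frame(points):
    """Fit a body-aligned frame to the snake via PCA: the first
    principal component of the joint positions is the forward axis."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # SVD of the centered point cloud; rows of vt are the principal axes
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    return centroid, vt[0]  # (center of body, forward direction)

# A slightly wiggly body laid out along the x axis:
body = [(i, 0.2 * (-1) ** i) for i in range(8)]
center, forward = body_frame(body)
print(center, forward)  # forward is dominated by the x axis
```

Note the sign of the principal axis is arbitrary, so a real controller has to disambiguate "forward" vs "backward", e.g. by pointing it toward the head half of the body.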
Due to the interesting structure of a universal spine joint with two actuated abdominal muscles, doing the trig for the inverse kinematics isn't as simple as it would be for legged robots. Instead, I ran a simulation that swept all the different servo positions, measured the servo positions & resulting spine angles, and trained a LightGBM model on that data to create a kinematics controller.
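To illustrate the sweep-then-learn idea without the real simulation data, here's a sketch with a made-up toy forward model and a nearest-neighbor lookup standing in for the LightGBM regressor (the actual mapping was measured in simulation, not written down like this):

```python
import numpy as np

def toy_forward(servo_a, servo_b):
    """Made-up forward model: two 'abdominal' servo positions ->
    (pitch, yaw) of the spine segment. Stands in for measured data."""
    return (servo_a + servo_b) / 2.0, (servo_a - servo_b) / 2.0

# Sweep the servo space, as the simulation sweep did:
grid = np.linspace(-1.0, 1.0, 41)
samples = np.array([(a, b, *toy_forward(a, b)) for a in grid for b in grid])

def inverse_kinematics(pitch, yaw):
    """Inverse lookup: the sampled servo pair whose measured spine angles
    are closest to the request. (The real controller trains a LightGBM
    regressor on the same kind of data; this is a simpler stand-in.)"""
    d = (samples[:, 2] - pitch) ** 2 + (samples[:, 3] - yaw) ** 2
    a, b = samples[np.argmin(d), :2]
    return a, b

print(inverse_kinematics(0.5, 0.0))  # both servos land near 0.5
```

The advantage of a learned model over a raw lookup table is smooth interpolation between sampled poses, which matters when you're streaming joint targets at gait rate.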
This video shows the simulated robot doing a simple serpentine gait. While there's still a lot to do, I'm excited that some of the low-level components of the project are starting to seem real.
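For reference, a basic serpentine gait is just a phase-shifted sine wave traveling down the body's yaw joints (a serpenoid-style curve). A sketch with illustrative parameters, not the Love Elemental's actual gait controller:

```python
import math

def serpentine_joint_angles(n_joints, t, amp=0.6, spatial=math.pi / 3, freq=1.0):
    """Yaw angle for each joint at time t: every joint follows the same
    sine wave, phase-shifted along the body so a wave propagates from
    head to tail. amp/spatial/freq values here are illustrative only."""
    return [amp * math.sin(2.0 * math.pi * freq * t + i * spatial)
            for i in range(n_joints)]

# Seven segments sampled at t = 0: phases step by 60 degrees down the body
print([round(a, 2) for a in serpentine_joint_angles(7, 0.0)])
```

Tuning the amplitude and spatial phase trades off stride length against how much of the body stays in ground contact.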
This shows the environment I created, which uses Python to combine OpenSCAD, STEP file importing, and joint definitions, and exports a physics- and motor-control-ready simulation model for Gazebo/ROS.
It was particularly exciting because this approach includes full inertial models for the entire robot, estimated through calculations on the components' STEP files.
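The core of that inertia bookkeeping is the parallel axis theorem: shift each component's inertia tensor from its own center of mass to the combined one, then sum. A sketch using simple box approximations in place of real STEP-derived tensors (all masses and dimensions here are made up):

```python
import numpy as np

def box_inertia(mass, sx, sy, sz):
    """Inertia tensor of a solid box about its own center of mass."""
    return (mass / 12.0) * np.diag([sy**2 + sz**2,
                                    sx**2 + sz**2,
                                    sx**2 + sy**2])

def combine(parts):
    """Combine (mass, com, inertia) triples into a total mass, center of
    mass, and inertia tensor using the parallel axis theorem."""
    total_m = sum(m for m, _, _ in parts)
    com = sum(m * np.asarray(c, float) for m, c, _ in parts) / total_m
    I = np.zeros((3, 3))
    for m, c, Ic in parts:
        d = np.asarray(c, float) - com
        # shift this part's inertia from its own COM to the combined COM
        I += Ic + m * (np.dot(d, d) * np.eye(3) - np.outer(d, d))
    return total_m, com, I

# Two identical 0.1 kg segment boxes stacked along x (illustrative numbers):
seg = box_inertia(0.1, 0.08, 0.05, 0.05)
m, com, I = combine([(0.1, (0.0, 0.0, 0.0), seg),
                     (0.1, (0.1, 0.0, 0.0), seg)])
print(m, com)  # 0.2 kg, centered at x = 0.05
```

Doing this per segment in local frames, then composing, is exactly what lets the exporter emit valid `<inertial>` blocks for Gazebo without hand-computed numbers.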
This shows a side profile of the robot at this point, highlighting all joints with their orientation axes in red, and the connection points used to combine components (so that all calculations can be done in local frames of reference). It shows an initial design for the body segments, spine, servo motors, and wheels of the elemental. The servo models come from the vendor's STEP files.