
Video proof!

A project log for Choose your own adventure bot

Ultra low cost 3D printed Open Educational Resource Walking Robotic Platform

shane.snipe 09/05/2021 at 12:28

Well, since I have something that now moves, it is all just software, right? Twiddle a few bits and Cya is going to dance. There are so many things wrong with that statement. First, hardware is never done; it is just done enough to get by, and I do not think I am even there yet. As for the bit twiddling, I need to back up a bit and think about how I want to control Cya.

But first, a little glimpse of what I have been doing. I had mentioned a trestle, which I imagined as something to hold the robot up as it tries to walk. Well, two chopsticks, six yearbooks and one milk crate later, I had something workable.

If you look closely, you will see the encoder positions are read and displayed on the screen at the endpoints. This is one of the main reasons I wanted a screen. It is so much easier than relying on the serial terminal to understand the encoder limits.

So how do I go from having a hardware platform that can move its joints and knows where they are to dancing? I guess the first step is to think about how to control the positions. Traditional robotics uses inverse kinematics: you know a position in space and then calculate the joint angles needed to get there using trig. I think this is a non-starter for any non-roboticist, so let's find a shortcut!

I am going to propose that movement is just stringing together a bunch of discrete positions, and if the gap between the positions is small enough, you may not care how the robot moves between them. So if the robot has a list of target positions and it moves from position to position, some reasonably compelling motion can be created.

Now, how can I make this easier? What if each joint has positions 1-100, with 1 being the smallest angle and 100 being the largest? With my low-resolution encoders and joints with backlash, this is probably an appropriate resolution.
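As a rough sketch of that mapping (the encoder limits here are made-up placeholders; the real ones come from reading the endpoints on the trestle), the Arduino map() call does most of the work:

```cpp
const long ENC_MIN = 120;    // placeholder: count at a joint's smallest angle
const long ENC_MAX = 3890;   // placeholder: count at a joint's largest angle

// Convert a raw encoder count to the 1-100 joint position scale.
int encoderToPosition(long count) {
  count = constrain(count, ENC_MIN, ENC_MAX);    // clamp to the usable range
  return map(count, ENC_MIN, ENC_MAX, 1, 100);   // 1 = smallest, 100 = largest
}
```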

So what are the steps to get there?

1) Map the encoder output to 1-100 (as sketched above) and display the joint position on the screen for each joint.

2) Manually move Cya's joints to the desired positions that make up the choreographed movement and record the sequence of positions.

3) Code the function that moves each joint to its desired position.

4) Call that function to work through the array of points to make one cycle (see the sketch after this list).
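Here is what steps 3 and 4 could look like. This is only a sketch: readEncoder(), setMotor(), the joint count, and the two-pose cycle are all hypothetical stand-ins, and the motion is simple bang-bang drive with a deadband so the low-resolution encoders do not hunt.

```cpp
long readEncoder(int joint);          // hypothetical: raw encoder count for one joint
void setMotor(int joint, int dir);    // hypothetical: drive direction -1, 0, +1
int  encoderToPosition(long count);   // the 1-100 mapping from the sketch above

const int NUM_JOINTS = 4;   // hypothetical joint count
const int DEADBAND   = 2;   // acceptable error on the 1-100 scale

// One choreographed pose: a 1-100 target for every joint.
struct Pose { int target[NUM_JOINTS]; };

// Hypothetical two-pose cycle, recorded by hand on the trestle.
Pose cycle[] = {
  { { 20, 80, 20, 80 } },
  { { 80, 20, 80, 20 } },
};

// Step 3: nudge one joint toward its target; true when within the deadband.
bool moveJointToward(int joint, int target) {
  int pos = encoderToPosition(readEncoder(joint));
  int err = target - pos;
  if (abs(err) <= DEADBAND) {
    setMotor(joint, 0);                 // close enough: stop the joint
    return true;
  }
  setMotor(joint, err > 0 ? 1 : -1);    // bang-bang: direction only
  return false;
}

// Step 4: work through the array of poses to make one cycle.
void runCycle() {
  for (Pose &p : cycle) {
    bool done = false;
    while (!done) {
      done = true;
      for (int j = 0; j < NUM_JOINTS; j++) {
        done = moveJointToward(j, p.target[j]) && done;
      }
    }
  }
}
```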

A stretch goal would be:

Take the points from one robot and transfer them to another via ESP-NOW.
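A hedged sketch of what that could look like with the standard ESP32 Arduino esp_now API (the peer MAC is a placeholder, and cycle is the pose array from the sketch above):

```cpp
#include <WiFi.h>
#include <esp_now.h>

// Placeholder MAC for the receiving robot; replace with its real station MAC.
uint8_t PEER_MAC[] = { 0x24, 0x6F, 0x28, 0x00, 0x00, 0x00 };

void setup() {
  WiFi.mode(WIFI_STA);                 // ESP-NOW rides on the station interface
  esp_now_init();

  esp_now_peer_info_t peer = {};
  memcpy(peer.peer_addr, PEER_MAC, 6);
  peer.channel = 0;                    // stay on the current Wi-Fi channel
  peer.encrypt = false;
  esp_now_add_peer(&peer);

  // One ESP-NOW frame carries up to 250 bytes, so a short pose table fits.
  esp_now_send(PEER_MAC, (uint8_t *)cycle, sizeof(cycle));
}

void loop() {}
```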

This seems simple and straightforward, but it is also different from what I have seen other projects do. First of all, unless you have super-high-end servos, you do not have positional feedback available. In other words, you cannot move the robot into a pose by hand and record the position; you have to go back to inverse kinematics. Another way to get at it is to use reinforcement learning to reach a set of points. I was working with a team to do this, but unless you can ignore the inertia of the limbs, the math is too intense for the model to work. This is why you see a bunch of stick-legged robots. So on my team, the computation guys kept pushing the mechanical team to pull the weight and actuators out of the legs. However, I think the farther the weight is from the ground, the harder it is to make the robot balance, so these are conflicting goals. Sacrificing physics so you can calculate something makes no sense, so I went the opposite way on Cya and put the batteries as close to the ground as possible. This should make the robot more stable. We will see how it goes!

