6/15/2014 - Delta mechanism simulation and accuracy determination

A project log for FirePick Delta, the Open Source MicroFactory

An affordable electronics manufacturing system for hobbyists, students, & small businesses. Inspired by RepRap. Powered by OpenPnP/FirePick.

Neil Jansen 06/14/2014 at 14:42 - 10 Comments

In our previous post, we explained what a delta mechanism is, and why we decided to use it for this project.

This post will hopefully shed some light on how we came up with the arm lengths, ratios, and other parameters that determine how accurate and fast the machine can be.  There are quite a few examples of delta robots out there, and even a few pages with the necessary math to calculate position.  But there's not really a lot of "why" out there, or much reasoning behind why particular geometries and ratios were chosen.  I'm not a math guru, but I will attempt to explain why I designed it the way I did.

First, I must say that the tutorial linked below from the Trossen Robotics forum is amazing; it saved me a lot of work.  The illustrations are book-worthy, the code worked without issue, and it was easy to follow, even for a non-math-major like me.

http://forums.trossenrobotics.com/tutorials/introduction-129/delta-robot-kinematics-3276/

Here's one of their wonderful illustrations, showing the three arms, the end effector and the base in green.

Cliffs Notes:

To move the end effector to a given position, we must use inverse kinematics to determine the angles of the three top arms, given the delta mechanism's constraints.  We pass a valid position in the work area to the delta_calcInverse() function, and it returns, via reference parameters, the three angles of the top arms.

However, the delta_calcInverse() function doesn't take into account any motor hysteresis, or the quantization that comes from stepper motors only having a finite number of steps.  To account for that, we can quantize the angles it returns and then run the forward kinematics to determine how close we got to the desired position.

[X,Y,Z desired position] -> Inverse Kinematics -> Quantize -> Forward Kinematics -> [X,Y,Z result position] 

Error distance = sqrt( (x1-x2)^2 + (y1-y2)^2 + (z1-z2)^2 )
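
To make that concrete, here's a minimal C-style sketch of the check (the real number crunching was done in LabVIEW, described below).  It assumes <math.h> and the delta_calcInverse() / delta_calcForward() functions from the Trossen code reproduced further down.  The quantize() helper and the degreesPerStep parameter are my own illustrative names; degreesPerStep is whatever angular resolution the motor, microstepping, and any reduction give you.

 // Sketch: snap each arm angle to the nearest position the motor can actually
 // hold, then run forward kinematics to see where the end effector really lands.
 float quantize(float angleDeg, float degreesPerStep) {
     return roundf(angleDeg / degreesPerStep) * degreesPerStep;
 }
 
 // Returns the 3D error distance for one target point, or -1 if unreachable.
 float quantizationError(float x, float y, float z, float degreesPerStep) {
     float t1, t2, t3;
     if (delta_calcInverse(x, y, z, t1, t2, t3) != 0) return -1.0;
 
     t1 = quantize(t1, degreesPerStep);
     t2 = quantize(t2, degreesPerStep);
     t3 = quantize(t3, degreesPerStep);
 
     float xq, yq, zq;
     if (delta_calcForward(t1, t2, t3, xq, yq, zq) != 0) return -1.0;
 
     return sqrt((x-xq)*(x-xq) + (y-yq)*(y-yq) + (z-zq)*(z-zq));
 }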

That takes care of the error distances that occur from stepper motor quantization, we'll explain how we mitigate the other errors in a bit.  But first, we'll conclude the kinematics calculations.

We must set a few variables before we can run the delta_calcInverse() function: 'e', 'f', 're', and 'rf'.  Here 'f' is the side length of the fixed base triangle, 'e' is the side length of the end-effector triangle, 'rf' is the length of the upper (motor-driven) arms, and 're' is the length of the lower parallelogram arms.  NOTE: If you enter these in millimeters, then your result will be in millimeters.  Angles will be returned in degrees.

The values in the code below are the defaults from the Trossen example.  We'll determine ours in a bit.

Here's the full code from the Trossen Robotics article:

 #include <math.h>  // for sqrt(), sin(), cos(), atan()
 
 // robot geometry
 // (look at pics above for explanation)
 const float e = 115.0;     // end effector
 const float f = 457.3;     // base
 const float re = 232.0;
 const float rf = 112.0;
 
 // trigonometric constants
 const float sqrt3 = sqrt(3.0);
 const float pi = 3.141592653;    // PI
 const float sin120 = sqrt3/2.0;   
 const float cos120 = -0.5;        
 const float tan60 = sqrt3;
 const float sin30 = 0.5;
 const float tan30 = 1/sqrt3;
 
 // forward kinematics: (theta1, theta2, theta3) -> (x0, y0, z0)
 // returned status: 0=OK, -1=non-existing position
 int delta_calcForward(float theta1, float theta2, float theta3, float &x0, float &y0, float &z0) {
     float t = (f-e)*tan30/2;
     float dtr = pi/(float)180.0;
 
     theta1 *= dtr;
     theta2 *= dtr;
     theta3 *= dtr;
 
     float y1 = -(t + rf*cos(theta1));
     float z1 = -rf*sin(theta1);
 
     float y2 = (t + rf*cos(theta2))*sin30;
     float x2 = y2*tan60;
     float z2 = -rf*sin(theta2);
 
     float y3 = (t + rf*cos(theta3))*sin30;
     float x3 = -y3*tan60;
     float z3 = -rf*sin(theta3);
 
     float dnm = (y2-y1)*x3-(y3-y1)*x2;
 
     float w1 = y1*y1 + z1*z1;
     float w2 = x2*x2 + y2*y2 + z2*z2;
     float w3 = x3*x3 + y3*y3 + z3*z3;
     
     // x = (a1*z + b1)/dnm
     float a1 = (z2-z1)*(y3-y1)-(z3-z1)*(y2-y1);
     float b1 = -((w2-w1)*(y3-y1)-(w3-w1)*(y2-y1))/2.0;
 
     // y = (a2*z + b2)/dnm;
     float a2 = -(z2-z1)*x3+(z3-z1)*x2;
     float b2 = ((w2-w1)*x3 - (w3-w1)*x2)/2.0;
 
     // a*z^2 + b*z + c = 0
     float a = a1*a1 + a2*a2 + dnm*dnm;
     float b = 2*(a1*b1 + a2*(b2-y1*dnm) - z1*dnm*dnm);
     float c = (b2-y1*dnm)*(b2-y1*dnm) + b1*b1 + dnm*dnm*(z1*z1 - re*re);
  
     // discriminant
     float d = b*b - (float)4.0*a*c;
     if (d < 0) return -1; // non-existing point
 
     z0 = -(float)0.5*(b+sqrt(d))/a;
     x0 = (a1*z0 + b1)/dnm;
     y0 = (a2*z0 + b2)/dnm;
     return 0;
 }
 
 // inverse kinematics
 // helper function, calculates angle theta1 (for the YZ plane)
 int delta_calcAngleYZ(float x0, float y0, float z0, float &theta) {
     float y1 = -0.5 * 0.57735 * f; // f/2 * tg 30
     y0 -= 0.5 * 0.57735    * e;    // shift center to edge
     // z = a + b*y
     float a = (x0*x0 + y0*y0 + z0*z0 +rf*rf - re*re - y1*y1)/(2*z0);
     float b = (y1-y0)/z0;
     // discriminant
     float d = -(a+b*y1)*(a+b*y1)+rf*(b*b*rf+rf); 
     if (d < 0) return -1; // non-existing point
     float yj = (y1 - a*b - sqrt(d))/(b*b + 1); // choosing outer point
     float zj = a + b*yj;
     theta = 180.0*atan(-zj/(y1 - yj))/pi + ((yj>y1)?180.0:0.0);
     return 0;
 }
 
 // inverse kinematics: (x0, y0, z0) -> (theta1, theta2, theta3)
 // returned status: 0=OK, -1=non-existing position
 int delta_calcInverse(float x0, float y0, float z0, float &theta1, float &theta2, float &theta3) {
     theta1 = theta2 = theta3 = 0;
     int status = delta_calcAngleYZ(x0, y0, z0, theta1);
     if (status == 0) status = delta_calcAngleYZ(x0*cos120 + y0*sin120, y0*cos120-x0*sin120, z0, theta2);  // rotate coords to +120 deg
     if (status == 0) status = delta_calcAngleYZ(x0*cos120 - y0*sin120, y0*cos120+x0*sin120, z0, theta3);  // rotate coords to -120 deg
     return status;
 }

So although I can program in C/C++, I needed some powerful 3D graphing and needed to get it done quickly.  I didn't have time to learn MATLAB, Octave, NumPy, or any of the other tools that were probably more appropriate, nor did I want to deal with cross-platform GUI toolkits like Qt or wxWidgets.  Instead, I did all of the calculations in LabVIEW, because that's what I use daily at work, and I swear to you guys, it's not nearly as bad as some of the smug programmers say it is.  LabVIEW has gotten a bad rap, and I'm actually quite fond of it.  In the last five or so years, they've made it into a world-class rapid application development system that's specialized for test and lab use.

Here's my LabVIEW implementation of the above code:

Now, running that code will tell us the error for ONE point that we specify.  If we run it for all points in the entire work area, via multiple nested FOR loops, and only display valid points, this is what we get:

The work area of a delta is sort of bowl-shaped.  If you watch some deltas move around, you'll begin to see why it's like that.  The problem is, a lot of the places that it can get to don't really do us any good.  It's basically wasted space.

NOTE: The colors just represent depth, red being the upper-most layers, and blue being the lower-most layers.
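For reference, here's that sweep in C-style sketch form (the actual version is a LabVIEW block diagram).  It reuses the quantizationError() helper from the earlier sketch; the grid bounds and 5mm spacing are placeholder assumptions, not the actual simulation settings.

 // Sketch: step through a grid of candidate X/Y/Z targets, skip unreachable
 // points, and track the worst quantization error seen anywhere in the volume.
 // Returns -1 if no reachable points were found at all.
 float maxErrorOverWorkArea(float degreesPerStep) {
     float worst = -1.0;
     for (float z = -400.0; z <= 0.0; z += 5.0) {
         for (float y = -300.0; y <= 300.0; y += 5.0) {
             for (float x = -300.0; x <= 300.0; x += 5.0) {
                 float err = quantizationError(x, y, z, degreesPerStep);
                 if (err < 0) continue;          // outside the reachable "bowl"
                 if (err > worst) worst = err;   // track the worst-case error
             }
         }
     }
     return worst;
 }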

We're after the bottom part of the bowl.  That's where our machine will be spending all of its time.  Let's chop off the sides of the bowl at +/- 150mm:

That's starting to look more square.  Now let's index down and find the first layer from the top that is completely made up of valid (reachable) points.  We're basically finding the lowest part of the top concave surface of the bowl.
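
Here's a rough sketch of that layer search (again, the real version is LabVIEW; the 1mm layer step, 5mm grid spacing, and Z range are placeholder assumptions, while the +/-150mm bound is the clipping limit from above):

 // Sketch: working down from the top, find the first Z layer where every point
 // of the +/-150mm clipped square is reachable. That layer becomes the ceiling
 // of the usable work volume.
 float findUsableTopLayer() {
     for (float z = 0.0; z >= -400.0; z -= 1.0) {
         int allValid = 1;
         for (float y = -150.0; y <= 150.0 && allValid; y += 5.0) {
             for (float x = -150.0; x <= 150.0 && allValid; x += 5.0) {
                 float t1, t2, t3;
                 if (delta_calcInverse(x, y, z, t1, t2, t3) != 0) allValid = 0;
             }
         }
         if (allValid) return z;   // first fully-reachable layer from the top
     }
     return NAN;                   // no fully-reachable layer in this range
 }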

So this is pretty much the usable area of our delta.  It's enough to completely cover a 214mm x 214mm MK2B RepRap heated bed plate, with room to spare on all four sides for SMT feeders.

Now, if we pick a random layer above, and change the Z axis to show us the actual quantization error, then here's what we get:

NOTE: For the picture(s) below, Z indicates 3D positioning error (XYZ), not just the Z error, for each point on the XY bed. The Z axis is obviously not to scale compared to X and Y.

Pretty awful, no?  4.8mm of quantization error.  This is because we're using stepper motors with only 200 full steps per revolution, which means we can only position our 'rf' arms with 1.8 degree precision. 

That kind of error is unacceptable.  But modern stepper drivers have a microstepping mode, which can position the shaft between the full steps by varying the ratio of current in the two windings.

Let's run the simulation again, this time with 16x microstepping enabled:

That's a lot better, but we're still not in 0201 territory.  At this point, it is impossible to attach the 'rf' arms directly to the stepper motor shafts and get our needed accuracy of 0.05mm.  However, we can use a gear/pulley reduction system to get even finer positioning resolution. 

So my solution is to use a GT2 timing belt and reduction pulley, which gives us an almost 10x improvement in positioning resolution, at the cost of a 10x loss in speed.  In the picture below, you can see the reduction pulley system.  Rather than using a fixed-length continuous belt on a toothed pulley, we 3D print a smooth pulley with a built-in tensioning system and friction-fit loop holders for each end of the belt.  It works phenomenally well, and I'm quite proud of it :)

So using the reduction system above, let's re-run the simulation:

Awesome!

We can meet our 0201 accuracy requirement by using 1.8 degree stepper motors with 16x microstepping and a ~10:1 reduction timing pulley system.  
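
As a back-of-the-envelope check (assuming an exact 10:1 ratio for simplicity; the real pulley ratio is only approximately 10:1), the effective angular resolution at the arm works out like this.  The helper name is mine, just for illustration:

 // Hypothetical helper: effective arm resolution in degrees per microstep.
 // motorStepDeg = the motor's full-step angle (1.8 for a 200-step motor),
 // microsteps   = the driver's microstepping setting,
 // reduction    = the belt/pulley reduction ratio.
 float degreesPerMicrostep(float motorStepDeg, int microsteps, float reduction) {
     return motorStepDeg / ((float)microsteps * reduction);
 }
 
 // degreesPerMicrostep(1.8, 1, 1.0)   = 1.8     deg  (full steps, direct drive)
 // degreesPerMicrostep(1.8, 16, 1.0)  = 0.1125  deg  (16x microstepping)
 // degreesPerMicrostep(1.8, 16, 10.0) = 0.01125 deg  (16x microstepping + ~10:1 belt)

That last figure is the sort of degreesPerStep value that gets fed into the error sweep sketched earlier.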

Just how important is the gear/pulley reduction, in the grand scheme of things?

Here's an XY chart that I calculated by graphing the number of teeth on the bottom reduction pulley (X axis) against the resulting maximum quantization error (Y axis).  It was created by running the same simulation in a large nested series of FOR loops, checking the accuracy at every X/Y/Z position over the bed for every bottom-pulley tooth count between 16 and 300.  You can see why I went with 150 teeth (a ~10:1 ratio) on the bottom pulley: it's well down toward the point of diminishing returns, which looks to be around 250 teeth.
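
The chart itself was generated in LabVIEW, but the outer loop looks roughly like this in C-style sketch form.  The 16-tooth motor pulley, the fixed 16x microstepping, and the reuse of the helpers sketched above are my assumptions for illustration:

 // Sketch: for each candidate bottom-pulley tooth count, derive the reduction
 // ratio and effective step angle, then record the worst-case error over the
 // whole work volume. Plotting teeth vs. worst error gives the chart above.
 #include <stdio.h>
 
 void sweepPulleyTeeth() {
     const int motorPulleyTeeth = 16;                 // assumed motor pulley size
     for (int teeth = 16; teeth <= 300; teeth++) {
         float reduction = (float)teeth / (float)motorPulleyTeeth;
         float stepDeg   = degreesPerMicrostep(1.8, 16, reduction);
         float worst     = maxErrorOverWorkArea(stepDeg);  // -1 = nothing reachable
         printf("%d teeth: worst-case error %.4f mm\n", teeth, worst);
     }
 }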

That takes care of quantization error and most sources of hysteresis/backlash.

But what about X/Y position error, caused by mechanical misalignment and parts tolerances?  We will be using computer vision to align the end effector at every point on the bed, using a printed template and a bash shell script :-)  And eventually a web browser frontend for it.  If anyone cares to see the gory details of it, I'll post them here.

But what about Z position error, caused by mechanical misalignment and parts tolerances?  We will be using an auto Z-leveling probe that uses a regular momentary switch, which gets pressed down onto the bed at a grid of points.  Afterwards, a bunch of math and other magic happens, and a plane fitted through those measurements gets used as the new Z reference, in a way that accounts for the bed being completely crooked.  We'll be borrowing this code from the Marlin RepRap firmware, in true open-source spirit :-)  Here's the YouTube video, which is pretty amazing to watch.
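
Marlin has its own implementation; purely to illustrate the "math and magic", here's a minimal sketch of fitting a plane z = a*x + b*y + c to the probed grid points by least squares.  The function name and the Cramer's-rule approach are mine, not Marlin's:

 // Sketch only (not Marlin's code): least-squares fit of a plane z = a*x + b*y + c
 // to n probed (x, y, z) points, solved with Cramer's rule on the normal equations.
 // The compensated Z reference for any bed position (x, y) is then a*x + b*y + c.
 int fitBedPlane(const float *x, const float *y, const float *z, int n,
                 float &a, float &b, float &c) {
     float sx=0, sy=0, sz=0, sxx=0, syy=0, sxy=0, sxz=0, syz=0;
     for (int i = 0; i < n; i++) {
         sx += x[i];  sy += y[i];  sz += z[i];
         sxx += x[i]*x[i];  syy += y[i]*y[i];  sxy += x[i]*y[i];
         sxz += x[i]*z[i];  syz += y[i]*z[i];
     }
     // Normal equations:  | sxx sxy sx |   | a |   | sxz |
     //                    | sxy syy sy | * | b | = | syz |
     //                    | sx  sy  n  |   | c |   | sz  |
     float det = sxx*(syy*n - sy*sy) - sxy*(sxy*n - sy*sx) + sx*(sxy*sy - syy*sx);
     if (fabs(det) < 1e-9) return -1;   // degenerate probe pattern
     a = (sxz*(syy*n - sy*sy) - sxy*(syz*n - sz*sy) + sx*(syz*sy - syy*sz)) / det;
     b = (sxx*(syz*n - sz*sy) - sxz*(sxy*n - sy*sx) + sx*(sxy*sz - syz*sx)) / det;
     c = (sxx*(syy*sz - sy*syz) - sxy*(sxy*sz - sx*syz) + sxz*(sxy*sy - syy*sx)) / det;
     return 0;
 }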

So that's how we plan to make an accurate machine.  But how did we select the geometry?  I created a script that randomizes all of the arm lengths and other geometry parameters.  I then run each candidate to determine whether or not it meets the bed X/Y/Z dimensions, and whether it meets the accuracy I need to place 0201 parts.  Many of these fail.  I start looking for ratios that work and feed those back into the process as seeds, to home in on the successful geometries.  It's sort of a quasi-evolutionary algorithm that's guided along by hand, because I was lazy and in a hurry when I wrote it.  Maybe I should call it an Intelligent Design algorithm instead of an evolutionary algorithm?  Hmm, lol.  Let's just call it monte-carlo with some meddling.  Or a robot pedigree breeding program.
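
Here's the idea as a heavily simplified C-style sketch.  It assumes the 'const' is dropped from the e/f/re/rf globals above so they can be reassigned per iteration; the parameter ranges and the 0.05mm pass threshold are illustrative guesses, and the bed-coverage and collision checks are left out:

 // Sketch of "monte-carlo with some meddling": throw random geometries at the
 // simulator and keep the ones whose worst-case error meets the spec.
 #include <stdlib.h>
 #include <stdio.h>
 
 float randBetween(float lo, float hi) {
     return lo + (hi - lo) * ((float)rand() / (float)RAND_MAX);
 }
 
 void searchGeometries(int iterations, float degreesPerStep) {
     for (int i = 0; i < iterations; i++) {
         e  = randBetween( 20.0, 150.0);   // end effector triangle side
         f  = randBetween(100.0, 500.0);   // base triangle side
         re = randBetween(100.0, 400.0);   // lower (parallelogram) arm length
         rf = randBetween( 50.0, 250.0);   // upper (motor-driven) arm length
         float worst = maxErrorOverWorkArea(degreesPerStep);
         if (worst >= 0.0 && worst <= 0.05)
             printf("pass: e=%.1f f=%.1f re=%.1f rf=%.1f (worst %.3f mm)\n",
                    e, f, re, rf, worst);
     }
 }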

Here are some of the attributes that I tried to get:

You can see the histogram below, which plots the "score" of the various iterations.  The range of geometries goes in on the left, and successful geometries come out on the right.  This is an outdated screenshot; I'll see if I have the actual one that produced the results below.

Results: 

Number of monte-carlo iterations: 1000

78 out of 1000 iterations successfully met the accuracy requirement at all points in the work area.

Out of those successful geometries, I picked the one with the shortest 're' distance.

Delta parameters:

This will give a work radius suitable for a RepRap Mk2B sized heated bed plate and a Z height of 80mm (100mm or more if printing a tall, skinny part).

Analysis:

In conclusion, the code that I wrote was better than nothing, but it could certainly be improved.  Collision detection could eliminate more bad geometries, and an algorithm for evolving successful geometries would be better.

Discussions

navale_kanishk wrote 03/19/2017 at 05:30 point

Hey! Can I get the LabVIEW Project files especially DeltaBot - Iterator of DeltaBot? Neil, awesome work by the way! 

Royer88 wrote 08/02/2016 at 00:12 point

It would be great if you could upload the project's .vi files; it's very interesting. I'm just starting to work with LabVIEW.

TTN wrote 07/15/2014 at 20:25 point
Could you be so kind as to share a link or the source code of the program for the deltabot monte carlo optimisation?

Fire wrote 07/09/2014 at 08:03 point
Hi, just wondering... Did you get a chance to check the accuracy you've calculated? My advice on stepper motor microstepping: the microsteps give you smoother motion and better resolution, but as far as accuracy goes, 16x microstepping doesn't mean you can accurately position your motor at 16 positions between full steps. The open-loop error is strongly related to motor load and friction, and also to motor speed. The best way to increase motor accuracy is closed-loop control with an encoder, but then you'll have to change the project title to at least the $600 pick and place ;)

Neil Jansen wrote 07/09/2014 at 22:02 point
Great advice. We're currently writing an automated accuracy-checking program that uses the camera and motion control to check 81 points (a 9x9 grid) on the bed. We've also got a Z probe with mostly-working software. We'll be publishing the results from these tests in the coming weeks, when we feel confident that we've got good results.

Kearney Lackas wrote 06/21/2014 at 07:45 point
Nice work so far! Another option for a low-cost backlash free gear reduction in this application could be a capstan drive (example, scmero.ulb.ac.be/Research/Projects/7/SAM_ULB_ASL_actuation.jpg). These are good designs for high precision, low friction, low backlash. In the past, we have used heavy-duty fishing line and 3D printed pulleys to make some surprisingly strong and very efficient robotic arms. Because there is little friction, they are also very backdrivable. Depending on the number of turns and the torque required, you may be able to use the motor shaft itself as the small capstan which would maximize your transmission ratio per space and it may save several dollars if you did this for all three. They are a pain to wrap though. Just a thought to consider. Keep it up!

lboucher26 wrote 06/19/2014 at 00:02 point
Awesome update. I think you might just have sold me on the kickstarter. In the spirit of Open Source, is there any chance you will share your Labview code?

Neil Jansen wrote 06/22/2014 at 02:58 point
Sure, I'll try to get it posted to our github repository in the near future.

guile2912 wrote 06/16/2014 at 13:56 point
Thanks a lot for this very interesting article, thanks for taking the time to post it.

Minimum Effective Dose wrote 06/16/2014 at 02:41 point
Fascinating! This sort of heavy theory design work is outside my area of expertise, so I learned a lot. Thanks for posting it and making it accessible.
