In our previous post, we explained what a delta mechanism is, and why we decided to use it for this project.

This post will hopefully shed some light on how we came up with the arm lengths, ratios, and other parameters that determine how accurate and fast the machine can be. There are quite a few examples of delta robots out there, and even a few pages with the math needed to calculate position. But there's not really a lot of "why" out there, or reasoning behind how the geometries and ratios were chosen. I'm not a math guru, but I will attempt to explain why I chose my design as I did.

First, I must say that the tutorial linked below on the Trossen Robotics forum is amazing, it saved me a lot of work. The illustrations are book-worthy, the code worked without issue, and it was easy to follow, even for a non-math-major like me.

http://forums.trossenrobotics.com/tutorials/introduction-129/delta-robot-kinematics-3276/

Here's one of their wonderful illustrations, showing the three arms, the end effector and the base in green.

Cliffs Notes:

To move the end effector to a given position, we must use **inverse kinematics** to determine the positions of the three top arms from the delta mechanism's constraints. We give a valid position in the work area to the delta_calcInverse() function, and it returns the three top-arm angles via its reference arguments.

However, the delta_calcInverse() function doesn't take into account any motor hysteresis, or the quantization that comes from stepper motors having only a finite number of steps. To account for that, we can quantize the angles it returns, then run a **forward kinematics** calculation to determine how close we got to the desired position.

[X,Y,Z desired position] -> Inverse Kinematics -> Quantize -> Forward Kinematics -> [X,Y,Z result position]

Error distance = sqrt( (x1-x2)^2 + (y1-y2)^2 + (z1-z2)^2 )

That takes care of the error distances that occur from stepper motor quantization, we'll explain how we mitigate the other errors in a bit. But first, we'll conclude the kinematics calculations.

We must set a few variables before we can run the delta_calcInverse() function. They are 'e', 'f', 're', and 'rf'. NOTE: If you enter these in millimeters, then your result will be in millimeters. Angles are returned in degrees.

- 'e' is the end effector triangle side length
- 'f' is the base triangle side length
- 're' is the shin, or bottom (parallelogram) arm
- 'rf' is the thigh, or top arm, which is similar to a servo horn

The default values appear in the code below. We'll determine ours in a bit.

Here's the full code from the Trossen Robotics article:

```cpp
// robot geometry
// (look at pics above for explanation)
const float e  = 115.0;   // end effector
const float f  = 457.3;   // base
const float re = 232.0;
const float rf = 112.0;

// trigonometric constants
const float sqrt3  = sqrt(3.0);
const float pi     = 3.141592653;  // PI
const float sin120 = sqrt3/2.0;
const float cos120 = -0.5;
const float tan60  = sqrt3;
const float sin30  = 0.5;
const float tan30  = 1/sqrt3;

// forward kinematics: (theta1, theta2, theta3) -> (x0, y0, z0)
// returned status: 0=OK, -1=non-existing position
int delta_calcForward(float theta1, float theta2, float theta3, float &x0, float &y0, float &z0) {
    float t = (f-e)*tan30/2;
    float dtr = pi/(float)180.0;

    theta1 *= dtr;
    theta2 *= dtr;
    theta3 *= dtr;

    float y1 = -(t + rf*cos(theta1));
    float z1 = -rf*sin(theta1);

    float y2 = (t + rf*cos(theta2))*sin30;
    float x2 = y2*tan60;
    float z2 = -rf*sin(theta2);

    float y3 = (t + rf*cos(theta3))*sin30;
    float x3 = -y3*tan60;
    float z3 = -rf*sin(theta3);

    float dnm = (y2-y1)*x3-(y3-y1)*x2;

    float w1 = y1*y1 + z1*z1;
    float w2 = x2*x2 + y2*y2 + z2*z2;
    float w3 = x3*x3 + y3*y3 + z3*z3;

    // x = (a1*z + b1)/dnm
    float a1 = (z2-z1)*(y3-y1)-(z3-z1)*(y2-y1);
    float b1 = -((w2-w1)*(y3-y1)-(w3-w1)*(y2-y1))/2.0;

    // y = (a2*z + b2)/dnm
    float a2 = -(z2-z1)*x3+(z3-z1)*x2;
    float b2 = ((w2-w1)*x3 - (w3-w1)*x2)/2.0;

    // a*z^2 + b*z + c = 0
    float a = a1*a1 + a2*a2 + dnm*dnm;
    float b = 2*(a1*b1 + a2*(b2-y1*dnm) - z1*dnm*dnm);
    float c = (b2-y1*dnm)*(b2-y1*dnm) + b1*b1 + dnm*dnm*(z1*z1 - re*re);

    // discriminant
    float d = b*b - (float)4.0*a*c;
    if (d < 0) return -1; // non-existing point

    z0 = -(float)0.5*(b+sqrt(d))/a;
    x0 = (a1*z0 + b1)/dnm;
    y0 = (a2*z0 + b2)/dnm;
    return 0;
}

// inverse kinematics
// helper function, calculates angle theta1 (for the YZ-plane)
int delta_calcAngleYZ(float x0, float y0, float z0, float &theta) {
    float y1 = -0.5 * 0.57735 * f; // f/2 * tan(30)
    y0 -= 0.5 * 0.57735 * e;       // shift center to edge
    // z = a + b*y
    float a = (x0*x0 + y0*y0 + z0*z0 + rf*rf - re*re - y1*y1)/(2*z0);
    float b = (y1-y0)/z0;
    // discriminant
    float d = -(a+b*y1)*(a+b*y1)+rf*(b*b*rf+rf);
    if (d < 0) return -1; // non-existing point
    float yj = (y1 - a*b - sqrt(d))/(b*b + 1); // choosing outer point
    float zj = a + b*yj;
    theta = 180.0*atan(-zj/(y1 - yj))/pi + ((yj>y1)?180.0:0.0);
    return 0;
}

// inverse kinematics: (x0, y0, z0) -> (theta1, theta2, theta3)
// returned status: 0=OK, -1=non-existing position
int delta_calcInverse(float x0, float y0, float z0, float &theta1, float &theta2, float &theta3) {
    theta1 = theta2 = theta3 = 0;
    int status = delta_calcAngleYZ(x0, y0, z0, theta1);
    if (status == 0) status = delta_calcAngleYZ(x0*cos120 + y0*sin120, y0*cos120 - x0*sin120, z0, theta2); // rotate coords to +120 deg
    if (status == 0) status = delta_calcAngleYZ(x0*cos120 - y0*sin120, y0*cos120 + x0*sin120, z0, theta3); // rotate coords to -120 deg
    return status;
}
```

So although I can program in C/C++, I needed some powerful 3D graphing and needed to get it done quickly. I didn't have time to learn MATLAB, Octave, NumPy, or any of the other tools that were probably more appropriate. Nor did I want to go through the trouble of cross-platform toolkits like Qt or WxWidgets. Instead, I did all of the calculations in LabVIEW, because that's what I use daily at work, and I swear to you guys, it's not nearly as bad as some of the smug programmers say it is. LabVIEW has gotten a bad rap, and I'm actually quite fond of it. In the last five or so years, they've made it into a world-class rapid application development system specialized for test and lab use.

Here's my LabVIEW implementation of the above code:

Now, running that code will tell us the error for ONE point that we specify. If we run it for all points in the entire work area, via multiple nested FOR loops, and only display valid points, this is what we get:

The work area of a delta is sort of bowl-shaped. If you watch some deltas move around, you'll begin to see why it's like that. The problem is, a lot of the places that it can get to don't really do us any good. It's basically wasted space.

NOTE: The colors just represent depth, red being the upper-most layers, and blue being the lower-most layers.

We're after the bottom part of the bowl; that's where our machine will be spending all of its time. Let's chop off the sides of the bowl at +/- 150mm:

That's starting to look more square. Now let's index down and find the first layer from the top that is completely made up of valid (reachable) points. We're basically finding the lowest part of the top concave surface of the bowl.

So this is pretty much the usable area of our delta. It's enough to completely cover a 214mm x 214mm MK2B RepRap heated bed plate, with room to spare on all four sides for SMT feeders.

Now, if we pick a random layer above, and change the Z axis to show us the actual quantization error, then here's what we get:

NOTE: For the picture(s) below, Z indicates 3D positioning error (XYZ), not just the Z error, for each point on the XY bed. The Z axis is obviously not to scale compared to X and Y.

Pretty awful, no? 4.8mm of quantization error. This is because we're using stepper motors with only 200 full steps per revolution, which means we can only place our 'rf' arms with 1.8-degree precision.

That kind of error is unacceptable. But modern stepper drivers have a microstepping mode, which can actually position the shaft **in between** the full steps by proportioning the current between the two windings.

Let's run the simulation again, this time with **16x** microstepping enabled:

That's a lot better. But we're still not in 0201 territory. With the 'rf' arms attached directly to the stepper motor shafts, it is impossible to get our needed accuracy of 0.05mm. However, we can use a gear/pulley reduction system to get even greater positioning resolution.

- Gears: Bad idea, because they introduce backlash, i.e. hysteresis. Not good for precise positioning.
- Pulleys: Good idea, because they are effectively zero-backlash, and the RepRap movement has made GT2 pulleys and belts extremely cheap.

So my solution is to use a GT2 timing belt and reduction pulley, which gives us an almost 10x improvement in positioning at the cost of a 10x loss in speed. In the picture below, you can see the reduction pulley system. Rather than using a fixed-size continuous belt and a toothed pulley, we actually 3D print a smooth pulley with a built-in tensioning system and loop-friction-fit holders for each end of the belt. It works phenomenally well, and I'm quite proud of it :)

So using the reduction system above, let's re-run the simulation:

Awesome!

We can meet our 0201 accuracy requirement by using 1.8-degree stepper motors with 16x microstepping and a ~10:1 reduction timing pulley system.

Just how important is the gear/pulley reduction, in the grand scheme of things?

Here's an XY chart of the number of teeth on the bottom reduction pulley (X axis) versus the resulting max quantization error (Y axis). It was created by running the same simulation in a large nested series of FOR loops: the accuracy at every point on the bed, at every X/Y/Z position, for every tooth count between 16 and 300. You can see why I went with 150 teeth (a ~10:1 ratio) on the bottom pulley: it's well down the curve, and by ~250 teeth you'd be fully into diminishing returns.

That takes care of quantization error and most sources of hysteresis/backlash.

But what about X/Y position error, caused by mechanical misalignment and parts tolerances? We will be using computer vision to align the end effector at every point on the bed, using a printed template and a bash shell script :-) And eventually a web browser frontend for it. If anyone cares to see the gory details of it, I'll post them here.

But what about Z position error, caused by mechanical misalignment and parts tolerances? We will be using an auto-Z leveling probe that uses a regular momentary switch, which gets pressed down onto the bed at a grid of points. Afterwards, some math and other magic happens, and a fitted plane representing the bed surface becomes the new Z reference, in a way that accounts for the table being completely crooked. We'll be borrowing this code from the Marlin RepRap firmware, in true open-source spirit :-) Here's the youtube video, which is pretty amazing to watch.

So that's how we plan to make an accurate machine. But how did we select the geometry? I created a script that randomizes all of the variable lengths and other geometry parameters. I can then run each of them to determine whether or not it meets the bed X/Y/Z dimensions, and whether it meets the accuracy I need to place 0201 parts. Many of these fail. I start looking for ratios that work, and feed those back into the process as seeds to hone in on the successful geometries. This is a sort of quasi-evolutionary algorithm, guided along by hand because I was lazy and in a hurry when I wrote it. Maybe I could call it an Intelligent Design algorithm instead of an evolutionary one? Hmm, lol. Let's just call it Monte Carlo with some meddling. Or a robot pedigree breeding program.

Here are some of the attributes that I tried to get:

- Assuming a 300mm W x 300mm D x 500mm H machine size:
- Work area of at least 150mm radius from center of table
- Work Z height of at least 80 mm, possibly 100
- Required accuracy of 0.05mm *NOTE: sufficient to place 0201 components accurately
- Shortest 're' distance that meets the above work area requirements
- The longest 'rf' distance that physically fits within the boundaries of the machine
- An 'e' distance with a radius less than the distance between the 214mm x 214mm heated bed corners and the 20mm x 20mm extruded vertical rails.
- An 'f' distance that gives enough space to mount three NEMA 17 motors at 120 degree angles to each other.
- Make the large pulley (mounted to the 'rf' arm) as big as needed to get the required accuracy, but no bigger. Make the small pulley (mounted to the stepper motor) as small as possible.

You can see the histogram below, which plots the "score" of the various iterations. The range of geometries goes in on the left, and successful geometries come out on the right. This is an outdated screenshot; I'll see if I can find the one that produced the results below.

Results:

Number of monte-carlo iterations: 1000

78 out of 1000 iterations successfully met the accuracy requirement at all points in the work area.

Out of those successful geometries, I picked the one with the shortest 're' distance.

Delta parameters:

- End effector (e) = 115 mm
- Base (f) = 190.526 mm
- Length of parallelogram joint (re) = 270 mm (cut from 12", standard length carbon fiber rods)
- Length of upper joint (rf) min = 90mm center-to-center
- Stepper motor steps per revolution = 200
- Stepper motor microstepping ratio = 16x
- Timing belt # of teeth (motor) = 16 (designed around standard GT2 16T pulleys)
- Timing belt # of teeth (arm) = 150 *Note: a bit of a misnomer, since we actually use a smooth pulley here, but expressing it as a tooth count makes the ratio easier to work with.
- Max theta = 80 degrees
- Min theta = -80 degrees

This will give a work radius suitable for a RepRap Mk2B sized heated bed plate and a Z height of 80mm (100mm or more if printing a tall, skinny part).

Analysis:

- Longer 're' (parallelogram joint / shin) values seem to be preferred.
- All passing iterations had an 'rf'/'re' ratio between 0.25 to 0.45.
- My program doesn't take into account collision detection, so always make sure it's not outputting a physically impossible geometry.

In conclusion, the code I wrote was better than nothing, but it could certainly be improved: collision detection could eliminate more bad geometries, and an algorithm for evolving successful geometries would be better still.

## Discussions


Hey! Can I get the LabVIEW Project files especially DeltaBot - Iterator of DeltaBot? Neil, awesome work by the way!


Could you upload the project's .vi files? This is very interesting; I'm just starting to work with LabVIEW.
