
PyUltimateRobot

Easily and powerfully control your robot

PyUltimateRobot is a complete development suite for controlling visually oriented robots with multiple segments, arms, arm segments, and arm attachments, including grips and cameras. It allows graphical programming of movement and vision procedures, and runs on Linux and Windows.

Developed in Python to leverage state-of-the-art extensibility and easily integrated plugins


Able to use low-cost robots with the accuracy and dexterity of much more expensive robots, achieved in part by:

a) The ability to drive movement with inexpensive DC gearhead motors, including the inherent ability to specify movement by applied force rather than the strict space/time control that steppers impose.

b) Arms and linkages that don't need super-tight tolerances, because accuracy is achieved through tightly integrated visual feedback that is easy to program and low in processing complexity.

c) The ability to program easily with a GUI and a visual interface that sees things from the robot's camera perspective.

Flexible use of coordinate spaces makes programming arms with six or more degrees of freedom easy. Got a robot arm with 13 linkages that can reach around corners? Easy. A simple arm with only two degrees of freedom, one polar and one Cartesian? Any arm geometry works.
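As a toy illustration of the two-degree-of-freedom case above (one polar joint plus one Cartesian extension), forward kinematics reduces to a single rotation and a reach along it. Names and conventions here are illustrative only, not PyUltimateRobot's API:

```python
import math

def tip_position(theta_deg, extension, base=(0.0, 0.0)):
    """Forward kinematics for a turret angle plus a linear extension.

    The rotary (polar) joint sets the direction; the prismatic
    (Cartesian) joint sets the reach along that direction.
    Illustrative sketch only.
    """
    theta = math.radians(theta_deg)
    return (base[0] + extension * math.cos(theta),
            base[1] + extension * math.sin(theta))

# Pointing straight "up" with a 2-unit extension puts the tip at (0, 2).
x, y = tip_position(90.0, 2.0)
```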

  • 1 × Python 2.7.6 (language engine)
  • 1 × OpenCV (machine vision)
  • 1 × NumPy (required by OpenCV)
  • 1 × SciPy (required by OpenCV)
  • 1 × PySerial 2.7 (communication with servo controllers)

View all 8 components

  • Automatic PID optimization and composite motor control

    Garrett Herschleb, 02/12/2018 at 22:06

    I've been busy improving the software.

    Now I've got a module that performs automatic PID optimization. PID controllers are a feedback control mechanism that makes a motor do exactly what you want by varying the power fed into it. The optimizer works by moving a motor forward and in reverse within limits you specify, intelligently trying different PID controller values to find the best possible settings. What are the best settings? You decide, by telling the algorithm what "good" means along different dimensions: commanded-speed compliance, staying within acceleration limits, minimizing jerk, and overshoot of the goal speed.
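    For readers new to PID control, here is a minimal sketch of the controller idea itself, driving a toy first-order motor model toward a goal speed. This is an illustration from scratch, not the project's controller code:

```python
class PID:
    """Textbook PID controller: output = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def simulate(pid, setpoint=1.0, steps=500, dt=0.01):
    """Toy motor: speed lags the applied power with a first-order response."""
    speed = 0.0
    for _ in range(steps):
        power = pid.update(setpoint, speed)
        speed += (power - speed) * dt
    return speed

final = simulate(PID(kp=5.0, ki=2.0, kd=0.1, dt=0.01))
```

    The auto-tuner's job is then to search for the kp/ki/kd values that make traces like this one score best.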

    I've also added capability to have multiple motors work in tandem as a single motor, and keep their positions tightly synchronized, even if there's uneven load.

  • Enhanced Visual Guidance Routines

    Garrett Herschleb, 04/08/2017 at 22:38

    The keyline algorithms have been enhanced to:

    1. Compare a number of points together to make sure the algorithm didn't just "see" a glitch in the image.

    2. Add qualifiers that test points in ways beyond the primary specification.

    Now the visual guidance routines can provide more robustness than ever.

  • Updated object location and visual guidance working well

    Garrett Herschleb, 02/05/2017 at 18:30

    I have added and enhanced shape-finding/fitting algorithms, including one Cython module for the inner search loops. Also added is a tool to create the shape-fitting templates.
    With this new capability, I'm able to reliably locate the tip of a drill bit and verify its position on a work piece down to about a quarter of a millimeter.
    So now one can use a $1,000 robot to achieve $100,000 precision!

  • CNC Control

    Garrett Herschleb, 12/21/2016 at 11:54

    I've added CNC machine control, with vision guided feedback in case the stepper motors do not respond as programmed. Tested on my CNC machine.

    I've also added a Python module where movement can be programmed in Python to generate a robot program, complete with automated visual validation and remedial action as necessary. High level functions allow things like drill(), rectangle(), circle(), fillrectangle(), line(), etc.
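    The program-it-in-Python idea can be sketched as a builder that records each high-level call as a step in a generated robot program. The function names drill(), rectangle(), and circle() come from the log above, but the signatures and the ProgramBuilder class are invented for illustration:

```python
class ProgramBuilder:
    """Hypothetical sketch: high-level Python calls recorded as program
    steps. Signatures are invented; see the real module for the
    project's actual API."""

    def __init__(self):
        self.steps = []  # the generated robot program

    def drill(self, x, y, depth):
        self.steps.append(("drill", x, y, depth))

    def rectangle(self, x0, y0, x1, y1):
        self.steps.append(("rectangle", x0, y0, x1, y1))

    def circle(self, cx, cy, r):
        self.steps.append(("circle", cx, cy, r))

# Describe a job as ordinary Python; the builder emits the program.
prog = ProgramBuilder()
prog.drill(10, 10, 5)
prog.rectangle(0, 0, 50, 30)
prog.circle(25, 15, 8)
```

    In the real system each recorded step could carry its visual-validation and remedial-action hooks alongside the motion parameters.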

  • 3D Stereo image capture for the aforementioned 3D Vision

    Garrett Herschleb, 11/02/2016 at 13:44

    I've updated the software with 3 different mechanisms through which one may capture 2 stereoscopic images for 3D evaluation:

    1. Single camera taking 2 side-by-side images by moving in between captures. (See "snap_alt_image" in ProcessStep.py and in the procedure generator)

    2. Two independent cameras that have been temporarily moved side by side. (See "alt_image_camera" in ProcessStep.py and in the procedure generator)

    3. Two cameras permanently affixed side by side, behaving as a single camera that returns two images instead of one when asked for a snapshot.

    So now it's easy to use 3D vision in evaluating movement positioning, validating pre-conditions, and verifying the results of a step!
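    Once a stereo pair is in hand, depth falls out of the disparity between the two views. Here is a minimal sketch of that idea, using sum-of-absolute-differences block matching on a single synthetic scanline; this is my own illustration, not the project's stereo code, which works on full 2-D images:

```python
import random

def disparity_at(left, right, x, window=7, max_disp=16):
    """Estimate disparity at column x of a 1-D scanline pair by
    sum-of-absolute-differences block matching (minimal sketch)."""
    patch = left[x:x + window]
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp):
        if x - d < 0:
            break
        cand = right[x - d:x - d + window]
        cost = sum(abs(a - b) for a, b in zip(patch, cand))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# Synthetic pair: the right view is the left scanline shifted by 5 px,
# so right[i] = left[i + 5] and the true disparity is 5.
random.seed(0)
left = [random.random() for _ in range(100)]
true_d = 5
right = left[true_d:] + [0.0] * true_d

est = disparity_at(left, right, 30)
```

    Larger disparity means the feature is closer to the cameras, which is what lets the routines filter hits by expected field depth.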

  • 3D Vision!!

    Garrett Herschleb, 10/29/2016 at 14:41

    I've added the capability to analyze stereo images and identify different objects according to their depth of field. It can also find key points on an object face, filtering out false hits that lie in the background or outside the expected field depth.

    The work so far is only within the visual acquisition and FaceProcessing modules. There are not yet provisions to obtain stereo images automatically.

    That will be my next project -- using dual cameras side-by-side or taking multiple snapshots from the same camera from different perspectives.

  • New Feature: MoveToLimit

    Garrett Herschleb, 04/20/2016 at 19:35

    Added a binary option for movement: MoveToLimit

    This feature will tell the controller to stop the motion and proceed to the next step as soon as arm movement becomes physically stopped / blocked.

    This combined with the ability to specify a maximum force in all degrees of freedom enables the robot to do things "by feel," like tightening a screw or pushing something "in."

    This feature is a companion to the binary option "TimeLimited", which behaves similarly when combined with max-force specifications. The difference is that with TimeLimited the robot can "press" something for a set time, whereas MoveToLimit stops the movement immediately after reaching a stopping point.
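    The stop-on-stall behavior can be sketched as a loop that commands small increments and treats "position stopped changing" as the physical limit. The helper names below are hypothetical, and the real controller also enforces the max-force caps described above:

```python
def move_to_limit(step, read_position, max_steps=1000, eps=1e-6):
    """Advance until motion stalls, i.e. position stops changing.
    Hypothetical sketch of the MoveToLimit idea, not the real code."""
    last = read_position()
    for _ in range(max_steps):
        step()
        pos = read_position()
        if abs(pos - last) < eps:
            return pos  # physically blocked: stop, proceed to next step
        last = pos
    return last

# Toy actuator that hits a hard stop at position 10.0.
state = {"pos": 0.0}
def step(): state["pos"] = min(state["pos"] + 0.5, 10.0)
def read(): return state["pos"]

final = move_to_limit(step, read)
```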

  • Updated Wiki

    Garrett Herschleb, 04/16/2016 at 16:29

    The Wiki on SourceForge has been updated, expanded and upgraded. The documentation is starting to look real now!

  • Enhanced object recognition

    Garrett Herschleb, 09/06/2015 at 11:20

    I've updated and upgraded the color bar recognition algorithm to be more robust and to read codes across a much greater range of lighting and viewing-angle conditions.

  • Added a toolkit program to make the robot relocate

    Garrett Herschleb, 08/26/2015 at 15:45

    Added the Move.py program, which causes the robot to follow a route to a new location. This includes an algorithm the robot can use to look around for objects that act as landmarks to determine its location, a technique also known as "pilotage".

    That way there doesn't have to be a sophisticated location infrastructure. The robots can be guided the same way we move ourselves, relative to known environmental objects.
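    The landmark-fixing geometry can be sketched like this: given absolute bearings to two landmarks of known position, the robot's position falls out of intersecting the two bearing rays. This is my own illustration of the principle, not the Move.py implementation:

```python
import math

def localize(l1, bearing1, l2, bearing2):
    """Fix the robot's position from bearings to two known landmarks.

    bearingN is the absolute (map-frame) angle from the robot to
    landmark N. The robot position P satisfies lN = P + rN * u(bearingN),
    so we solve the 2x2 linear system for the ranges r1, r2 via
    Cramer's rule and back out P. Illustrative sketch only.
    """
    u1 = (math.cos(bearing1), math.sin(bearing1))
    u2 = (math.cos(bearing2), math.sin(bearing2))
    dx, dy = l1[0] - l2[0], l1[1] - l2[1]
    # Solve r1*u1 - r2*u2 = l1 - l2 for r1.
    det = -u1[0] * u2[1] + u2[0] * u1[1]
    r1 = (-dx * u2[1] + u2[0] * dy) / det
    return (l1[0] - r1 * u1[0], l1[1] - r1 * u1[1])

# Robot at (5, 5): landmark (0, 0) bears 225 deg, landmark (10, 0) bears -45 deg.
pos = localize((0.0, 0.0), math.radians(225), (10.0, 0.0), math.radians(-45))
```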

View all 15 project logs


Discussions

Garrett Herschleb wrote 05/05/2016 at 13:30

Updated the general parameter optimizer. The parameter optimizer is an engine that will generate a series of guesses for the best parameter values based on scoring feedback.

This helps robots by optimizing PID values that control inexpensive DC gear head motors to behave predictably like more expensive (and generally much weaker) servos.

One uses the optimizer by configuring the class with the nature of the parameters to work on, then calling the member function OptimizeIteration(last_score). You hand it the score of the last iteration (more positive is better, None if you're just starting), and the function hands back a dictionary of parameter values to try next. You then run your machine (whatever it is), evaluate how well it worked with those parameters, assign a score, and iterate until OptimizeIteration returns None, indicating that it thinks it has found the best possible values.
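To make the protocol concrete, here is a from-scratch toy optimizer that honors the same OptimizeIteration(last_score) contract, hill-climbing one parameter at a time. Everything here besides the OptimizeIteration name is invented for illustration; this is not the project's IsoParameterOptimizer:

```python
class ToyIsoOptimizer:
    """Minimal one-parameter-at-a-time hill climber illustrating the
    OptimizeIteration(last_score) protocol. Sketch only."""

    def __init__(self, params, step):
        self.params = dict(params)  # best values found so far
        self.step = step
        self.names = list(params)
        self.best_score = None
        self.idx = 0                # parameter currently being nudged
        self.direction = 1
        self.moved = False          # did any parameter improve this pass?
        self.trial = None

    def OptimizeIteration(self, last_score):
        if last_score is None:           # just starting: score the initial values
            self.trial = dict(self.params)
            return self.trial
        if self.best_score is None or last_score > self.best_score:
            self.moved = self.moved or self.trial != self.params
            self.best_score = last_score
            self.params = self.trial     # accept the trial, keep pushing this way
        elif self.direction == 1:
            self.direction = -1          # improvement stalled: try the other way
        else:
            self.direction, self.idx = 1, self.idx + 1   # move to next parameter
            if self.idx == len(self.names):
                if not self.moved:
                    return None          # a full pass with no movement: done
                self.idx, self.moved = 0, False
        name = self.names[self.idx]
        self.trial = dict(self.params)
        self.trial[name] = self.params[name] + self.direction * self.step
        return self.trial

def score(p):  # toy objective with its peak at Kp=4, Ki=2
    return -((p["Kp"] - 4.0) ** 2 + (p["Ki"] - 2.0) ** 2)

# The caller's loop: hand back scores until the optimizer returns None.
opt = ToyIsoOptimizer({"Kp": 0.0, "Ki": 0.0}, step=1.0)
trial = opt.OptimizeIteration(None)
while trial is not None:
    trial = opt.OptimizeIteration(score(trial))
```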

The update to the optimizer allows a higher-level controller to manage the types of searches that take place, by defining a different class based on the type of searching one wants to do.

The first class is IsoParameterOptimizer, which, as its name implies, changes only one parameter at a time until a local maximum is found for that parameter alone, then finds a new local maximum for the next parameter, and so on. It keeps searching for each parameter's local maximum sequentially until no parameter moves, at which point it declares the result good enough and returns None to indicate a wide local maximum has been found.

A second class is SearchRadiusOptimizer, which tests a number of values all around a central point in as many dimensions as there are parameters. Be careful with this one: the number of iterations can climb very fast with three or more parameters.

For scoring PID performance, a scoring engine (PIDScoring) is defined that deducts points for variance from the set value, excessive acceleration, excessive jerk (3rd derivative), and overshoot. The score penalty for each situation is user-defined.
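A scorer in that spirit might look like the following; the weights, signature, and function name are hypothetical, not PIDScoring's actual interface:

```python
def score_trace(trace, setpoint, dt,
                w_err=1.0, w_acc=0.1, w_jerk=0.01, w_over=5.0):
    """Deduct user-weighted penalties from a recorded speed trace.
    Hypothetical sketch in the spirit of PIDScoring."""
    score = 0.0
    prev_v = prev_a = None
    for v in trace:
        score -= w_err * abs(setpoint - v)       # variance from the set value
        if v > setpoint:
            score -= w_over * (v - setpoint)     # overshoot penalty
        if prev_v is not None:
            a = (v - prev_v) / dt
            score -= w_acc * abs(a)              # excessive acceleration
            if prev_a is not None:
                score -= w_jerk * abs(a - prev_a) / dt   # excessive jerk
            prev_a = a
        prev_v = v
    return score

perfect = score_trace([1.0] * 5, setpoint=1.0, dt=0.01)   # no penalties
ragged = score_trace([0.5, 1.2, 1.0, 1.0, 1.0], setpoint=1.0, dt=0.01)
```

A higher (less negative) score means a better-behaved trace, which is exactly the feedback the parameter optimizer consumes.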


Craig Hissett wrote 02/24/2015 at 12:08

Hey - love the idea of a Python-based robot controller.

Is this limited to just Robotic arm-based robots?

I am stuffing 3 servos and 2 motors into a Wall-E toy, controlled by an Arduino. I hope to upgrade the project to include an RPi; I would love to add a camera to it and use some software like yours to control it!


Garrett Herschleb wrote 02/24/2015 at 14:31

Currently the software is centered on arm and leg control. However, ground wheel movement is one of my planned additions, if that's what you're thinking about. Fortunately the architecture lends itself to easy add-ons like that. Let me know if you want to contribute and I can help.


Craig Hissett wrote 02/24/2015 at 21:45

Thanks for the reply - this sounds fantastic!

I would love to contribute; however, my Python skills are somewhat basic. If the code is well documented, I'm sure I could get to grips with it.


Garrett Herschleb wrote 08/16/2015 at 12:44

I've now added functionality for wheel control, including an algorithm to follow a line or general directions to get to another location.

Also included is the ability to use a differential drive as azimuth control for a robot segment, making zero-radius turns to change azimuth angle.


Craig Hissett wrote 08/16/2015 at 14:08

Amazing work! I can't wait to have a look through it :-)


jlbrian7 wrote 09/15/2014 at 02:29
I tried to use OpenCV/Python for a recognition program using the cascade method (that may be the only way; I'm not really sure), but there is just not enough documentation out there. The program was successful in that it would always recognize the "pattern", but I was getting too many false positives for it to be useful. I am not sure whether there was not enough training data or my programming was at fault (almost certainly a strong combination of the two), but if you have information on that it would be a huge plus.


Garrett Herschleb wrote 09/15/2014 at 03:48
One thing I've observed about the recognition utilities in OpenCV is that the algorithms try to match up a great number of points to recognize an object with no context. I'm not sure anything works that way, including our own wetware. We're always basing recognition on what we expect to see, which is why illusions work so well on us.

So with this system I decided to take the approach of looking for something you expect to see anyway. My own algorithms look for a small number of human-specified key features, with the assumption that you'll find what you're looking for close by.

For recognizing objects unambiguously in any context, I prefer the use of labels, which is what we use for our own wetware anyway. In this case the labels take the form of color-based bar codes which, unlike the bar code recognition in zlib, can be read reliably in any orientation.

Take a look at my visual acquisition module for more details.

Sorry I couldn't help you much on openCV.

