A real-time creative coding environment for robot arms. For painting, performance, and other kinds of art.

Crash is a flexible robot control system for the creative programming of robot arms. Its aim is to facilitate rapid prototyping of new types of human-robot interaction, particularly in service of novel digital fabrication techniques. Development is motivated by the needs of my machine-mediated painting experiments.

After intensive training, human craftspeople can produce artifacts of unmatched quality while adapting to variations in materials, design constraints, and available tools. Modern manufacturing technologies perform well when the goal is to make many copies of an object, but break down when every object must be designed and fabricated to fit unique constraints.

For example, in architecture no two buildings are exactly the same down to the last detail, so each design must be adapted to the space where it will be realized and to the requirements of the particular project. Even when final plans are drawn, they are often adapted on the fly during construction because some critical constraint was missed and only became clear once physical work was underway.

Sometimes the environment for the design is not known in advance, as in the case of building on other planets or in disaster areas. Other times a design calls for materials with varying properties, like wood, and the design must be adapted to make use of the material at hand. Right now our automation can only be used under carefully controlled circumstances, and work must fall back to humans in every other case.

Ideally machines would relieve architects, engineers, and designers of some of the burden of adapting to the dynamic world our creations must survive in. The only way to achieve that is to build higher-order tools which can learn and improvise on their own.

  • Motion Trajectory Smoothing with Ruckig

Owen Trueblood, 07/18/2022 at 01:38

    In the last post I showed how to set up CollisionIK for my Kuka KR 16 robot arm in ROS so that it can be used to solve the inverse kinematics (IK) problem in real-time while handling singularities, collisions, and other issues. I only showed it working in simulation because there were still some issues to overcome before it would be safe to use with the real robot. In this post I show my solutions to those issues and the resulting system being used to control the real 600 lb robot arm based on device and hand tracking data obtained using WebXR on a phone and Quest 2, respectively.

The motion from CollisionIK looked fairly reasonable in simulation, so why couldn't it just be plugged straight into the robot? The main reason is safety: nothing was in place to prevent dangerously fast or unexpected motion. In the worst case that motion could hurt someone, or it could damage the robot. In the best case it would cause the RSI interface on the robot controller to throw an error and trigger an emergency stop. So at the very least we need to constrain the motion sent to the robot just so it will keep doing what we tell it.

    CollisionIK contains code to minimize the velocity and acceleration of the robot but nothing to limit those parameters to safe ranges. Due to the nature of the solver - it relaxes constraints to find solutions in situations where other solvers would give up and stop - it’s not really appropriate to implement safety constraints as goals in the CollisionIK objective function.
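One straightforward safeguard is to clamp the solver's output before it ever reaches the robot. Here is a minimal sketch of that idea (my own illustration, not code from CollisionIK): scale each commanded joint-space step so that no joint exceeds its velocity limit.

```python
import numpy as np

def limit_joint_step(q_prev, q_cmd, v_max, dt):
    """Scale a commanded joint-space step so no joint exceeds its
    velocity limit, slowing all joints together to preserve the path."""
    dq = np.asarray(q_cmd, dtype=float) - np.asarray(q_prev, dtype=float)
    # Ratio of each joint's requested speed to its limit; > 1 means too fast.
    ratios = np.abs(dq) / (np.asarray(v_max, dtype=float) * dt)
    worst = ratios.max()
    if worst > 1.0:
        dq /= worst  # a uniform slowdown keeps the motion direction intact
    return np.asarray(q_prev, dtype=float) + dq
```

The same pattern extends to acceleration by also tracking the previous step's velocity, but as the next section explains, clamping alone is not enough.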

    A Quick Note on Jerk

    Another reason we can’t just proceed with the velocity & acceleration minimization included in CollisionIK is that we also need to limit jerk, which is the rate of change in acceleration. For small robots it’s sometimes ok to ignore jerk, but my KR 16 is big enough that ignoring jerk could lead to damage to the internal mechanisms of the robot.

You can get an intuitive sense of why by thinking about what acceleration means in the context of the robot arm's drive mechanism. By Newton's 2nd law, force equals mass times acceleration. The mass here is essentially fixed, so whatever reasoning we do about acceleration also applies to force. The relevant kind of force in a robot arm is torque, because it is force applied to rotate an object (the arm's joints). And because jerk is the rate of change of acceleration, it is proportional to the rate of change of torque.

So unbounded jerk means step changes in torque. Physically, a step in torque is like a whack to the input of the mechanism. A light mechanism has little inertia and will respond to the whack by accelerating all together, like swinging a baseball bat into a baseball. A heavy mechanism with a lot of inertia will respond to the whack by shedding energy into breaking stuff, like swinging a baseball bat into a brick wall. You might chip the bricks or break your wrists, but you will not move the wall in time with the impact. If you keep pushing with all that force the wall will eventually yield, but only after a lot of damage has been done at the interface where the collision happened. This is how I think about what happens to the gears inside the robot when jerk is unbounded.
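The numbers make the point concretely. If acceleration steps instantaneously between control cycles, the implied jerk is enormous. A quick back-of-the-envelope check using the 12 ms RSI period (the specific acceleration value here is made up for illustration):

```python
import numpy as np

dt = 0.012  # RSI control period, in seconds
# An acceleration profile that steps from 0 to 2 rad/s^2 between two cycles.
acc = np.concatenate([np.zeros(10), np.full(10, 2.0)])
# Finite-difference jerk: the step shows up as a single huge spike.
jerk = np.diff(acc) / dt
print(jerk.max())  # 2.0 / 0.012, i.e. about 167 rad/s^3 at the discontinuity
```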

    Jerk-limited Motion with Ruckig

The goal is to limit the velocity, acceleration, and jerk of the motion of the individual joints of the robot arm. At the same time, all of the individual joint trajectories need to be synchronized in time so that the tool at the end of the robot follows its target trajectory as closely as possible. And these calculations need to be very fast so we can run them in real time, at minimum a few hundred times a second. Lastly, we need to be confident that we will always get a solution in a bounded amount of time. If even a single solution is late then the 12 ms RSI communication contract will be broken and the robot will stop.

    Ruckig is a motion trajectory generation library that is a near ideal answer to these...
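To give a flavor of the core idea, a jerk-limited approach to a target velocity for a single joint can be written as bang-bang control over jerk. This is my simplified illustration, not Ruckig's actual algorithm; Ruckig does the much harder job of hitting position targets across many joints at once while synchronizing their durations.

```python
def jerk_limited_step(v, a, v_target, a_max, j_max, dt):
    """One control cycle of a jerk-limited approach to a target velocity.
    Chooses bang-bang jerk, then integrates acceleration and velocity."""
    dv = v_target - v
    # Velocity that will still accrue if we ramp the current accel to zero.
    stop_dv = a * abs(a) / (2.0 * j_max)
    # If there is room left after braking, push harder; otherwise unwind.
    j = j_max if dv - stop_dv > 0 else -j_max
    a = max(-a_max, min(a_max, a + j * dt))
    v += a * dt
    return v, a
```

Every step changes acceleration by at most `j_max * dt` and never exceeds `a_max`, so the commands streamed to the robot stay within safe bounds by construction.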


  • Real-time Motion Planning with CollisionIK

Owen Trueblood, 06/19/2022 at 00:03


    One major barrier to a fluid creative workflow with robots is the often stilted sense-think-act loop. Every time through the loop the robot first gathers information about the environment, crunches the numbers to generate a plan, and then executes that plan. Without special care the individual steps can take significant time, which results in a robot that can’t fluidly adapt to changes in the environment or high level control inputs.

For my system the think step has been the bottleneck. Previously I demonstrated real-time communication with my Kuka KR 16 robot arm using a driver from ROS Industrial and motion planning with the MoveIt motion planning framework. Even though the underlying control data being sent to the robot was on a 12 ms time step, the motion planning step in MoveIt often took a few seconds. And there was no straightforward way to append new plans found while moving, so the robot would come to a stop after executing each one. Metaphorically, the robot was closing its eyes, covering its ears, and running through a room. Only when it finished running did it open its eyes to see whether it had made it to the other side or slammed into an obstacle, and uncover its ears to be told what it needed to do next.

    If we had a system that kept its eyes open and ears uncovered while taking action we could change the kind of problems we can solve with it and the ways that we implement the solutions. We could:

    • Have a human step in at a moment’s notice to move the robot out of a tight situation or give it hints about what to do. For example, we could use a VR controller to move the tool to the canvas. Or do computer mediated painting where input from a haptic controller is filtered, scaled, or otherwise modified before directly moving the tool on the end of the robot in real-time.
    • Adjust motion in response to fast, unpredictable, and chaotic changes in the environment. Obstacles moving around, the work surface changing, or small details like paint drips.
    • Use iterative algorithms to explore and manipulate the environment. For example, using a camera to visually servo toward a canvas. Often it's much easier to come up with a solution to a perception problem by thinking about taking small steps down a cost gradient.

To do these things we need real-time motion planning to go with our real-time trajectory execution. There are many kinds of planning problems in robotics, but the one that I am mainly interested in here is about taking a path for the tool on the end of the robot through Cartesian space (XYZ and rotation) and finding a corresponding path through joint space that has the tool follow the target path as closely as possible while avoiding problems like collisions and singularities. MoveIt provides several planners that can solve this particular problem in a global, offline way. That is, it will find a solution that is close to the “best” possible given the constraints, but it will calculate it all at once beforehand. What we want instead is a hybrid: local-first, global when feasible, and very online (real-time).
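A much simpler scheme that captures the local, step-down-the-gradient flavor is damped least squares. This is a generic textbook method, not CollisionIK's solver, but it shows the shape of the computation: each cycle, turn a small Cartesian step into a small joint-space step.

```python
import numpy as np

def dls_step(jacobian, dx, damping=0.05):
    """One damped-least-squares IK step: joint motion that tracks the
    Cartesian step dx while staying well-behaved near singularities."""
    J = np.asarray(jacobian, dtype=float)
    # (J J^T + lambda^2 I) is always invertible, unlike a bare pseudoinverse,
    # so the step stays bounded even as the arm approaches a singularity.
    JJt = J @ J.T + damping**2 * np.eye(J.shape[0])
    return J.T @ np.linalg.solve(JJt, np.asarray(dx, dtype=float))
```

The damping term trades a little tracking accuracy for bounded joint velocities, which is exactly the local-versus-exact compromise a real-time planner has to make.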

    Survey of Solutions


Back in 2019 I was looking at BioIK as a potential solution. It is robust because it continuously maintains populations of candidate solutions for what to do in the next moment and quickly scores them on how well they meet the goals and avoid potential issues. This makes it both flexible and fast. Because there's a whole population of candidates at each step, you can tune its performance or characteristics by controlling the population. For example, if you want more speed you can shrink the population.
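The population mechanic is easy to picture with a toy example. This is my illustration only; BioIK's actual evolutionary algorithm is far more sophisticated:

```python
import random

def one_generation(population, score, keep=0.25, sigma=0.1):
    """Evolve one generation: keep the best candidates, refill the rest
    with mutated copies of them. Lower score is better."""
    ranked = sorted(population, key=score)
    elite = ranked[:max(1, int(len(ranked) * keep))]
    refill = []
    while len(elite) + len(refill) < len(population):
        parent = random.choice(elite)
        refill.append([g + random.gauss(0.0, sigma) for g in parent])
    return elite + refill
```

Shrinking the population makes each generation cheaper at the cost of exploring fewer candidates, which is the speed-versus-quality knob mentioned above.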

    It seemed pretty great on paper, but in practice I’ve run into difficulty getting it to produce motion with the qualities I desire. It provides a myriad of different solvers that have different properties. The default one is global and not ideal for adapting to on-the-fly...


  • Order (Robots) and Chaos (Paint)

Owen Trueblood, 06/18/2022 at 23:42

    It’s been more than 3 years since I’ve updated this project. A lot of that time was spent on other work, but this project has consistently stayed on my mind. Now I’m coming back to it with a fat stack of notebooks full of ideas and a clarified sense of why this is worth pursuing.

    What is the relevance of robots to painting? Here, combining the two is about creating an excuse to use a visceral, emotional medium to find and express ideas that capture something of the truths encoded in the boundary between ordered systems and chaotic ones.

    I’m obsessed with the apparent paradox that great complexity can “grow” out of near uniformity evolving according to relatively simple rules. I wonder what limits of understanding may exist for agents, like us, embedded in the systems they are observing. I worry that I don’t have the right tools to identify, understand, or communicate the insights that might come from pondering these apparent paradoxes. I’m not going to make any scientific discoveries or write down any mathematical proofs about them. But I can at least make progress on them in a way that is personally meaningful to me.

    That’s where this work comes in. I aim to use painting as a petri dish of chaos that happens to grow artifacts that can be meaningful to me and potentially carry some meaning to others. I’m building my own kinds of instruments for probing it that are biased by my interests in complexity theory and beyond. Robotics is just my chosen means for putting the instances of order and chaos that I’m excited by in close contact, and a platform for building tools to guide, capture, and communicate the activity that results.

  • Real-time Control with ROS

Owen Trueblood, 03/31/2019 at 04:13

Real-time feedback is an important ingredient in tool embodiment. That is, when a tool reacts fluidly to human intention and the result of its action is available immediately and continuously, it can feel like an extension of the body of the user. Like riding a bike, writing with a pen, or making music on a piano. After learning, little attention needs to be paid to moving the right way, and so the conscious mind can occupy the abstract space of the task the tool provides leverage over.

    Industrial robots are not usually built with real-time feedback in mind, because most often they repeat the same basic motions over and over again in a static environment. And real-time control can be extremely dangerous, because a bug in the software regulating the motion can lead to unexpected motion of the robot. The only control usually exposed to a human user is the ability to start or stop the activity of the machine, or change its speed. The first significant technical hurdle of this project lies in figuring out how to overcome the base limits of the controller to enable real-time control over every joint in the robot.

    Remote Control with KUKAVARPROXY

The KRL programming language environment on the KUKA Robot Controller v2 (KRC2) used in this project provides primitives for controlling and triggering action based on basic industrial I/O, but it's a pain to set up such I/O and the control frequency is very poor (roughly a few Hz). It's meant mainly for switching on equipment like welders or reading inputs like proximity switches, not low-latency communication.

    Ideally the robot could be controlled over Ethernet from a regular PC. A common tool for doing that with older KUKA industrial robots is a third-party tool called KUKAVARPROXY. This tool runs under Windows 95 on the KRC and sets up a server that allows global variables to be read and written over a network connection via a simple protocol. A script written in KUKA Robot Language (KRL) running on the robot can read and set global variables to communicate with external client PCs.
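The wire protocol is simple enough to frame by hand. The sketch below follows the framing commonly documented for KUKAVARPROXY/openshowvar clients (big-endian message ID, payload length, a function byte, then a length-prefixed variable name); treat the exact layout as an assumption and verify it against your installation:

```python
import struct

def pack_read_request(msg_id, var_name):
    """Frame a KUKAVARPROXY 'read variable' request (assumed layout)."""
    name = var_name.encode("ascii")
    # Payload: function byte (0 = read), 2-byte name length, then the name.
    body = struct.pack(">BH", 0, len(name)) + name
    # Header: 2-byte message ID, 2-byte payload length.
    return struct.pack(">HH", msg_id, len(body)) + body
```

A client would send this over TCP (port 7512 in common setups) and parse a similarly framed response containing the variable's value as a string.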

    There is some support for this mode of control in commercial software, such as the excellent RoboDK. By running a specified KRL script on the robot RoboDK is able to connect and read the state of the robot's joints and execute basic motion commands. RoboDK has a Python API that allows mediated control of the robot. In the video above a Processing sketch is used to generate simple motion commands that are sent to a Python server (code here) which calls the RoboDK API to communicate with KUKAVARPROXY on the robot and consequently the running KRL script. For setting up jobs in RoboDK this sort of functionality is sufficient, but it does not provide an avenue to the real-time control that I'm after. There is a critical flaw with this mode of communication which has to do with how the robot does motion planning during KRL execution.

To be able to plan motion properly, KRL scripts are executed from two places at once. There is the normal "program counter" tied to the currently running command, but there is also lookahead functionality, which executes commands into the future to fill the motion planner's buffer. Executing motion that depends on variables which can change at any time throws a wrench into this lookahead mechanism. Disabling lookahead means the robot comes to a stop after every single motion command, making it impossible to build up fluid motion from the motion primitives available in KRL, like lines and arcs.

    I spent a while investigating KUKAVARPROXY and KRL before ruling them out. One of the products of that work was a rudimentary transpiler from Clojure to KRL. You can find an example of that tool used to implement a KUKAVARPROXY remote control script on GitHub here.

    Real-time Control with RSI

    If KRL is fundamentally limited when it comes to real-time control because of the lookahead feature, what are other options? What's...


