This project was created on 04/22/2014.
Remember that scene from I, Robot where super-bot Sonny sketches his dream for Will Smith and Bridget Moynahan? Well, screw the future... Let's do that kind of thing TODAY!
Although Roboartist can't dream on his own, he's pretty good at drawing whatever you throw at him. Show him that picture from your last trip, and watch him grab a pen and swing into action.
Roboartist is a 4-stage robotic arm that can sketch the outline of any image with a pen or pencil on an A3 sheet using Edgestract, our custom-made edge detection algorithm. The project relies on this core engine to extract the edges from the uploaded image. An Arduino Mega controls the servos using information sent from MATLAB (fret not, a more open implementation is on the way) via the USB/Bluetooth port.
The basic layout of the hardware is as above. Images are acquired through a webcam or a camera; we've also allowed scanning of existing JPEGs. Although an RGB LED strip and an LCD screen weren't strictly necessary, we threw them in just for fun. What really does improve the design is the white LED backlight constructed from LED strips: the light diffuses through the paper, lending a nice aura to Roboartist's performances.
Here's how the software is structured. The basic idea is to let MATLAB do all the heavy lifting and let the Arduino focus on wielding the pencil. The program asks the user to tune a few parameters to weed out the noise and obtain a good edge output. Once finished, the program communicates with the Arduino (via Bluetooth, 'cos too many wires are not cool!).
Here is a quick peek at the image processing stages involved:
Each slice is from a consecutive series of DIP stages. We used the Canny edge detection algorithm initially, but we've since built and switched to Edgestract, an algorithm better optimised for drawing. We've been running it over various types of images and logging the results.
We'll tell you more in the coming updates.
- Atmega 2560: 8-bit microcontroller
- Laptop / Computer: we've got an eye on the Pi ;)
- AX-12A servo motors: Dynamixel, with associated brackets, nuts and connecting cables
- 20x4 LCD module based on HD44780: displays the angles and status in real time
- Purely for backlight control
- HC-06 Bluetooth communication module: minimize wires, save the planet
- Buzzer: so that it... well, buzzes
- RGB LED Strip (metre): panache, people... panache
- White LED Strip (metre): affix paper during performances
It's been a while since we've posted. Something came up, and so we had to shoot another video of Roboartist drawing a portrait. We tried to show the whole process from start to finish. Here, check it out:
Also, we've ported the MATLAB code base to Python and promptly forgot to toot our horn. We'll do that some other time. For now, the interface is much cleaner than it was before: kind of a retro feel mashed together with smooth transitions.
I kind of wish we had better lighting in the room. There are a few improvements we'd like to make too; if you have any suggestions, leave a comment below.
This might be one of those things we probably did on a lazy afternoon. Or evening. I don't remember. Coming up with ideas when drowsy... half asleep, half awake. When we came to our senses, we realised we had a bunch of code that did the job well but didn't exactly measure up to the International Coding Standards to Not Drive Developers Wild. But it worked. And we let it reside. Today, we introduce you to the part of the code that makes the actual drawings. If you haven't read up on how we managed to position our motors at the right places on the drawing sheet, you should probably read that first.
Anyway, what's the easiest and laziest way to draw on paper? Tell us in the comments if you come up with something lazier, but here it is: we sent the angle values of each AX-12A servo for each pixel to the Arduino at rapid rates. Seriously. That's it. This resulted in the stylus moving in the transformed direction of the pixel currently being traced. Here's how we sent the signals to the Arduino.
For controlling the first 3 servos we need 10 bits each (0-1023, since Dynamixel AX-12A motors provide 300 degrees of rotation over 1024 steps), and the 4th servo only needs 1 bit to represent pen up/down. Hence a total of 31 bits (nearly 4 bytes) must be sent to represent each pixel. But since the Arduino's serial link carries only 8-bit data, we break down and rearrange the bits as follows:

The first 3 bytes are formed from the lower 8 bits of the three servo angle values. The 4th byte is formed from the upper 2 bits of the 3 servo angles, a delay control bit and the bit representing servo 4's angle, as shown above. These 4 bytes together represent a single point of the image to be drawn on the paper. The bytes are then sent to the Arduino in clusters of 32 bytes.
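Since the description above only pins down which fields go where (lower 8 bits in the first three bytes; upper 2 bits, delay bit and pen bit in the fourth), here's a minimal Python sketch of that packing. The exact bit positions inside the fourth byte are our assumption; the real sender may order them differently.

```python
def pack_point(a1, a2, a3, pen_down, delay_bit=0):
    """Pack three 10-bit AX-12A goal angles (0-1023), a delay control bit
    and a pen up/down bit into the 4-byte frame described above.
    Bit layout of the 4th byte is an assumed ordering."""
    for a in (a1, a2, a3):
        assert 0 <= a <= 1023, "angles are 10-bit values"
    b0 = a1 & 0xFF  # lower 8 bits of servo 1's angle
    b1 = a2 & 0xFF  # lower 8 bits of servo 2's angle
    b2 = a3 & 0xFF  # lower 8 bits of servo 3's angle
    # 4th byte: upper 2 bits of each angle, then delay bit, then pen bit
    b3 = ((a1 >> 8) << 6) | ((a2 >> 8) << 4) | ((a3 >> 8) << 2) \
         | ((delay_bit & 1) << 1) | (pen_down & 1)
    return bytes([b0, b1, b2, b3])
```

Four bytes per pixel means each 32-byte cluster carries eight packed points.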
Arduino boards support the standard baud rates: 4800, 9600, 19200, 38400, 57600, 115200. The Arduino Mega has a 64-byte serial receive buffer for incoming bytes. MATLAB initially sends 64 bytes worth of data to the Mega. In the cycles that follow, after the Mega has read 32 bytes of data, it sends a signalling byte to MATLAB requesting the next 32 bytes. While those bytes are in transit, the Mega can read the remaining 32 bytes from its buffer, so it never stalls waiting on the serial line. On the other side we just rearrange the bits back and fire the values away to the motors. The signalling byte we chose (for no apparent reason) is 50 (0b00110010).
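Here's a rough sketch of that host-side loop in Python: prime the Mega's 64-byte buffer once, then send a 32-byte chunk each time the signalling byte arrives. `FakeSerial` is a stand-in we made up so the sketch runs without hardware; with the real robot you'd swap in a pyserial port.

```python
HANDSHAKE_BYTE = 50  # 0b00110010, the signalling byte from the post

class FakeSerial:
    """Stand-in for a serial port, so the loop below runs anywhere."""
    def __init__(self):
        self.sent = bytearray()
    def write(self, data):
        self.sent += data
    def read(self, n=1):
        return bytes([HANDSHAKE_BYTE])  # pretend the Mega always asks for more

def send_points(port, payload):
    """Prime the Mega's 64-byte buffer, then send 32-byte chunks,
    each only after the Mega's handshake byte arrives."""
    port.write(payload[:64])
    pos = 64
    while pos < len(payload):
        if port.read(1)[0] == HANDSHAKE_BYTE:
            port.write(payload[pos:pos + 32])
            pos += 32
```

This double-buffering is why the Mega never idles: one half of its buffer is being drained to the motors while the other half is being refilled.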
Yup. That was hacky enough for one day. We probably spent the rest of that afternoon ringing doorbells of the neighbours and hiding in the bushes.
Wow! There has been quite a lot of buzz about Roboartist this last week, and we even made it onto the pages of Hackaday.com, Engadget and Popular Mechanics. We're delighted and thankful for all the attention we're receiving. Let's just clear up one little thing that seems to be floating around: we're not using Canny edge detection. We were, for a while. However, things got messy pretty quickly. Read on to find out what went wrong and how we beat it. It was a classic case of necessity spawning a solution.
The Canny filter decides whether each pixel is an edge from the gradient around that pixel alone. It does not consider the length of the structure formed by a group of adjacent pixels, so structures only a few pixels long show up as edges. That's bad news for Roboartist: he spends a lot of time poking at the drawing sheet, messing up the good renderings and annoyingly dragging out the drawing (yup, happened).

We are clearly better off with an algorithm that evaluates the length of each structure and, together with the sum of the gradients at each of its pixels, decides whether the structure as a whole is classified as an edge. And that's exactly what we built. Edgestract.
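The selection rule boils down to something like this sketch (the thresholds here are made-up illustrative values, not the ones Edgestract actually uses):

```python
def is_edge(path, gradient, min_length=8, min_total_grad=100.0):
    """Classify a traced structure as an edge using both its length and
    the sum of gradient magnitudes along it. `path` is the list of (x, y)
    pixels covered; `gradient` is a 2D array of per-pixel gradients.
    Threshold defaults are illustrative only."""
    if len(path) < min_length:
        return False  # tiny specks never make the cut
    total = sum(gradient[y][x] for (x, y) in path)
    return total >= min_total_grad
```

A three-pixel blob gets rejected outright, while a long, moderately strong contour passes: exactly the behaviour a pen-wielding robot wants.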
OK, so how do we find out the length of each structure? We correct all forks and branches in the individual structures until only perfectly open or perfectly closed structures remain.

In the above stage, all branches and nodes are removed and only clean open and closed structures remain. We've also marked all the endpoints of the open structures, as shown. We're now ready to perform the structure tracing process!
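On a thinned binary image, marking endpoints comes down to a neighbour count: an "on" pixel with exactly one 8-connected neighbour terminates an open structure. A small sketch, representing the image as a set of on-pixel coordinates (our simplification, not the project's actual data structure):

```python
def endpoints(img):
    """Return the endpoints of thinned structures: on-pixels with exactly
    one 8-connected neighbour. `img` is a set of (x, y) on-pixels."""
    ends = set()
    for (x, y) in img:
        nbrs = sum((x + dx, y + dy) in img
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                   if (dx, dy) != (0, 0))
        if nbrs == 1:
            ends.add((x, y))
    return ends
```

Closed loops yield no endpoints at all, which is how the two kinds of structure get told apart before tracing.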
First the open structures are evaluated: we start from one end of an open structure, move to the adjacent pixel one step at a time, and increment a length counter for each pixel traversed. When we reach the other endpoint, we have the total length of that structure. We then jump over to the endpoint of the closest surrounding open structure. To prevent the same structures from being traced endlessly, we delete each pixel's information as we trace along it. Ultimately we get the length of every open structure, and all of them have been deleted. The path we traced and the length of each open structure are stored.

We then repeat this process for the closed structures, starting from any point on a loop (since loops don't have endpoints) and, after covering a complete loop, jumping to the nearest point of another closed structure. All the length and path information is combined with the earlier data. We can now select edges from the individual structures: the tracing process gives us each structure's length, and its path tells us every pixel it covers, from which the sum of gradients over the structure is obtained.
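The open-structure trace can be sketched like so: walk from an endpoint, deleting pixels as you go, until no neighbour remains. (Again, the set-of-pixels representation is our simplification; the real code would also record gradients along the way.)

```python
def trace_open(img, start):
    """Trace one open structure from an endpoint `start`, deleting pixels
    as we go so nothing is traced twice. `img` is a set of (x, y)
    on-pixels, mutated in place. Returns the traced path; its length is
    the structure's length."""
    path = [start]
    img.discard(start)
    cur = start
    while True:
        # find the (at most one) remaining 8-connected neighbour
        nxt = next(((cur[0] + dx, cur[1] + dy)
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0)
                    and (cur[0] + dx, cur[1] + dy) in img), None)
        if nxt is None:
            return path  # reached the other endpoint
        path.append(nxt)
        img.discard(nxt)
        cur = nxt
```

Because each visited pixel is removed from `img`, running this over every endpoint consumes the whole image, leaving only the closed loops behind for the second pass.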
Check out the following image.
We've superimposed the edge results onto the main image. You'll find that all the tiny structures get rejected because their lengths are too small. This could easily backfire, but by carefully controlling a few parameters you can keep the noise in check, so the image comes out neater and the drawing takes less time. Edgestract is optimised to churn out 'drawable' images. Through all the tests we've put it through, it gave us significantly fewer headaches.
Edgestract: saving the world before drawing time :)