Roboartist

"Can a robot turn a... canvas into a beautiful masterpiece?" - Will Smith (I, Robot)


This project was created on 04/22/2014 and last updated 5 months ago.

Description
Remember that scene from I, Robot where super-bot Sonny sketches his dream for Will Smith and Bridget Moynahan? Well, screw the future... Let's do that kinda thing TODAY!

Although Roboartist can't dream on his own, he's pretty good at drawing whatever you throw at him. Show him that picture you took on that trip you went on the other day, and watch him grab a pen and swing into action.
Details

Roboartist is a 4-stage robotic arm that can sketch the outline of any image with a pen or pencil on an A3 sheet, using Edgestract, our custom-made edge detection algorithm. This core engine extracts the edges from the uploaded image. An Arduino Mega controls the servos using information sent from MATLAB (fret not, a more open implementation is on the way) via the USB/Bluetooth port.


THE HARDWARE

Hardware Layout

The basic layout of the hardware is shown above. Image acquisition is achieved through a webcam or a camera; we've also allowed scanning of existing JPEGs. Although an RGB LED strip and an LCD screen weren't strictly necessary, we threw them in just for fun. What really does improve the product design is the white LED backlight constructed from LED strips. The light diffuses through the paper, lending a nice aura to Roboartist's performances.

THE SOFTWARE

Software structure

Here's how the software is structured. The basic idea is to let MATLAB do all the heavy lifting and let the Arduino focus on wielding the pencil. The program asks the user to tune a few parameters to weed out the noise and obtain a good edge output. Once finished, the program communicates with the Arduino (via Bluetooth, 'cos too many wires are not cool!).
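To make the division of labour concrete, here's a minimal sketch of the MATLAB side. Edgestract isn't public yet, so MATLAB's built-in Canny detector stands in for it here, and the file name, port name, thresholds and the packedBytes variable are placeholders of ours:

    img  = imread('photo.jpg');              % acquired photo or scanned JPEG
    gray = rgb2gray(img);
    bw   = edge(gray, 'canny', [0.1 0.3]);   % user tunes thresholds to cut noise

    s = serial('COM5', 'BaudRate', 115200);  % the HC-06 appears as a serial port
    fopen(s);
    fwrite(s, packedBytes, 'uint8');         % angle data, packed as in the logs below
    fclose(s);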

STAGES

Here is a quick peek at the image processing stages involved:

Each slice is from a consecutive stage of the image processing pipeline. We used the Canny edge detection algorithm initially, but we've since built and switched to Edgestract, an algorithm better optimised for drawing. We have been running it over various types of images and logging the results.

We'll tell you more in the coming updates.

Components
  • 1 × Arduino Mega ATmega2560, 8-bit microcontroller
  • 1 × Laptop / Computer We've got an eye on the Pi ;)
  • 4 × AX-12A Servo Motors Dynamixel, with associated brackets, nuts and connecting cables
  • 1 × 20x4 LCD module based on HD44780 Displays angles and status in real time.
  • 1 × 12V Relay Purely for backlight control
  • 1 × HC-06 Bluetooth communication module Minimize wires. Save the planet.
  • 1 × Buzzer So that it... well, buzzes
  • 2 × RGB LED Strip (metre) Panache, people... panache
  • 5 × White LED Strip (metre) Backlighting
  • 8 × Neodymium magnets Affix paper during performances


Project logs
  • Serial Communication on a Hacky Afternoon

    5 months ago • 0 comments

    This might be one of those things we probably did on a lazy afternoon. Or evening. I don't remember. Coming up with ideas when drowsy... half asleep, half awake. When we came to our senses, we realised we had a bunch of code that did the job well but didn't exactly measure up to the International Coding Standards to Not Drive Developers Wild. But it worked. And we let it reside. Today, we introduce you to the part of the code that makes the actual drawings. If you haven't read up on how we managed to position our motors at the right places on the drawing sheet, you should probably read that first.

    Anyway, what's the easiest and laziest way to draw on paper then? Tell us in the comments if you come up with something lazier, but here's ours: we sent the angle values of each AX-12A servo for each pixel to the Arduino at rapid rates. Seriously. That's it. This resulted in the stylus moving in the transformed direction of the pixel currently being traced. Here's how we sent the signals to the Arduino.

    Controlling each of the first 3 servos needs 10 bits (0-1023, since Dynamixel AX-12A motors provide 300 degrees of rotation over 1023 steps), and the 4th servo only needs 1 bit to represent up/down. Hence a total of 31 bits (nearly 4 bytes) must be sent to represent each pixel. But since the Arduino supports only 8-bit serial data, we break down and rearrange the bits as follows:

    The first 3 bytes are formed from the lower 8 bits of the three servo angle values. The 4th byte is formed from the upper 2 bits of each of the 3 servo angles, a delay control bit and the bit representing servo 4's angle, as shown above. These 4 bytes together represent a single point of the image to be drawn on the paper. They are then sent to the Arduino in clusters of 32 bytes.
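    Here's one way that packing could look in MATLAB. Note this is a sketch of ours: the log fixes which bits go into which byte, but the ordering of the fields inside byte 4 is our assumption.

        function bytes = packPoint(a1, a2, a3, penDown, delayBit)
            % a1..a3: 10-bit servo angles (0-1023); penDown, delayBit: 0 or 1
            b1 = bitand(a1, 255);              % low 8 bits of servo 1
            b2 = bitand(a2, 255);              % low 8 bits of servo 2
            b3 = bitand(a3, 255);              % low 8 bits of servo 3
            hi1 = bitshift(a1, -8);            % top 2 bits of each angle
            hi2 = bitshift(a2, -8);
            hi3 = bitshift(a3, -8);
            b4 = bitor(bitshift(hi1, 6), bitor(bitshift(hi2, 4), ...
                 bitor(bitshift(hi3, 2), bitor(bitshift(delayBit, 1), penDown))));
            bytes = uint8([b1 b2 b3 b4]);
        end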

    Arduino microcontrollers support the standard baud rates: 4800, 9600, 19200, 38400, 57600 and 115200. The Arduino Mega has a 64-byte serial buffer for incoming bytes. MATLAB initially sends 64 bytes' worth of data to the Mega. In subsequent cycles, after the Mega reads 32 bytes of data, it sends a signalling byte to MATLAB requesting the next 32 bytes. During this time the Mega can read the remaining 32 bytes, so no time is lost waiting. We just needed to rearrange the bits on the other side and fire them away to the motors. The signalling byte we chose (for no apparent reason) is 50 (0b00110010).
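    The MATLAB-side flow control could then look like the sketch below. Variable names are ours: 's' is the open serial port and 'data' the packed byte stream.

        fwrite(s, data(1:64), 'uint8');         % prime the Mega's 64-byte buffer
        idx = 65;
        while idx <= numel(data)
            while fread(s, 1, 'uint8') ~= 50    % block until the Mega asks for more
            end
            last = min(idx + 31, numel(data));
            fwrite(s, data(idx:last), 'uint8'); % ship the next 32-byte cluster
            idx = last + 1;
        end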

    Yup. That was hacky enough for one day. We probably spent the rest of that afternoon ringing doorbells of the neighbours and hiding in the bushes.

  • Edgestract - An 'uncanny' edge detection algorithm

    5 months ago • 0 comments

    Wow! There has been quite a lot of buzz about the Roboartist this last week, and we even made the pages of Hackaday.com, Engadget and Popular Mechanics. We're delighted and thankful for all the attention. Let's just clear up one little thing that seems to be floating around: we're not using Canny edge detection. We were for a while, but things got messy pretty quickly. Read on to find out what went wrong and how we beat it. It was a classic case of necessity spawning a solution.

    The Canny filter emphasises the individual gradient around each pixel when deciding whether that pixel should be an edge or not. It does not consider the length of the structure formed by a group of adjacent pixels, so structures only a few pixels long show up as edges. This is not good news for Roboartist, because it means he'll spend a lot of time poking at the drawing sheet, messing up good renderings and annoyingly eating up time (yup, happened).

    We are clearly better off with an algorithm that evaluates the length of each structure and, together with the sum of the gradients at each of its pixels, decides whether the structure as a whole is classified as an edge or not. And that's exactly what we built: Edgestract.
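    In code, the acceptance test boils down to something like this little MATLAB sketch; the decision rule and threshold names are our guess at the idea, not the shipping Edgestract code:

        % Keep a structure only if it is long enough AND strong enough on average.
        function keep = isEdge(len, gradSum, minLen, minMeanGrad)
            keep = (len >= minLen) && (gradSum / len >= minMeanGrad);
        end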

    OK, so how do we find the length of each structure? We first clean up all forks and branches in the individual structures until only perfectly open or perfectly closed structures remain.


    In the above stage, all branches and nodes are removed and only the selected open and closed structures remain. We've also marked the endpoints of all open structures as shown. We're now ready to perform the structure tracing process!

    First, open structures are evaluated: we start from one end of an open structure, move to the next adjacent pixel one by one, and increment a length variable by 1 for each pixel traversed. When we reach the other endpoint we have the total length of that structure. We then search for and jump to the endpoint of the closest surrounding open structure. To prevent the same structures from being traced indefinitely, we delete each pixel's information as we trace along it.

    Ultimately we get the length of each open structure, and all of them are deleted. The path we traced and the length of each open structure are stored. We then repeat this process for closed structures, starting from any point on a loop (since loops don't have endpoints) and, after covering a complete loop, jumping to the nearest point of another closed structure. All the length and path information is combined with the earlier data. We can now select edges from the individual structures, knowing each structure's length from the tracing process and combining it with the path it took. The path gives all the pixels covered by the structure, so the sum of the gradients of all its pixels can be obtained.
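    For the curious, tracing a single open structure might look like the following MATLAB sketch. This is our illustration, assuming an 8-connected, one-pixel-wide edge map 'bw' padded with a false border so the search window never leaves the image:

        function [len, path] = traceOpen(bw, r, c)
            % (r, c) is one endpoint of the structure
            len = 0; path = [];
            while bw(r, c)
                bw(r, c) = false;                  % consume the pixel as we go
                len = len + 1;
                path(end+1, :) = [r c];            % remember the route taken
                % look for the next live pixel in the 3x3 neighbourhood
                [nr, nc] = find(bw(r-1:r+1, c-1:c+1), 1);
                if isempty(nr), break; end         % reached the far endpoint
                r = r + nr - 2;
                c = c + nc - 2;
            end
        end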

    Check out the following image.

    We've superimposed the edge results onto the main image. You'll find that all the tiny structures get rejected because their lengths are too small. This could easily backfire, but by carefully controlling a few parameters you can keep the noise down. The image comes out neater and the drawing time drops. Edgestract is optimised to churn out 'drawable' images, and across the tests we've put it through, it gave us significantly fewer headaches.


    Edgestract: saving the world before drawing time :)

  • Motor Angle Calculation Breakdown

    5 months ago • 1 comment

    We thought you might be interested in the mechanics and math involved in controlling a four-stage arm. It's quite simple, really. We hope this will help a few new hackers with their future builds. Here we go...

    The aim of this algorithm is to determine the angles the servos should take for the robotic arm holding the pen to be positioned at (X3,Y3). We perform the calculations in the Cartesian coordinate system, taking the axis of servo S1 as the origin. The following little formula, which you've probably learnt (and forgotten), will come in handy. It's referred to as the Law of Cosines.
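    For a triangle with sides a, b and c, where the angle C lies opposite side c:

     c^2 = a^2 + b^2 - 2.a.b.Cos( C )

    Rearranged for the angle, which is the form we use below:

     C = Arccos ( ( a^2 + b^2 - c^2 ) / ( 2.a.b ) )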

    We start by assuming we know (X1,Y1). 

    Servo S4's angle does not need to be calculated, as it is only used for lifting and placing the pen on the paper. We can therefore ignore it in this derivation.

    Since L2, L3 and now R2 are known, the Law of Cosines gives us the angle to be moved by servo 3 (O3).

     O3 = Arccos ( ( L2^2 + L3^2 - R2^2 ) / (2.L2.L3) )

    Similarly, we find O2a and O2b as marked in the figure. Adding O2a and O2b gives the angle O2 to be moved by servo 2.

    O2a = Arccos ( ( L2^2 + R2^2 - L3^2) / (2 . L2 . R2) )

     O2b = Arccos ( ( L1^2 + R2^2 - R3^2) / (2 . L1 . R2) )

    So we can sum up those angles to find out angle O2.

    O2 = O2a + O2b
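    In MATLAB, those elbow angles are a direct transcription (angles come out in radians and would still need rescaling to the AX-12A's 0-1023 steps):

        O3  = acos((L2^2 + L3^2 - R2^2) / (2*L2*L3));   % servo 3
        O2a = acos((L2^2 + R2^2 - L3^2) / (2*L2*R2));
        O2b = acos((L1^2 + R2^2 - R3^2) / (2*L1*R2));
        O2  = O2a + O2b;                                % servo 2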

    Great! But we still don't know the value of O1. This one's a little tricky; take a look at the following figure. We've divided the drawing canvas into three regions.

    • If the point to be drawn is farther than D1 from the origin (as shown), then O1 = Arctan ( y/x ).
    • If the point to be drawn is nearer than D2 from the origin, then O1 = Arctan ( y/x ) + pi/2.
    • If the point lies between the two, nearer than D1 but farther than D2, then O1 = Arctan ( y/x ) + ( pi/2 ) . ( D1 - R3 ) / ( D1 - D2 ), where R3 is the point's distance from the origin. This blends smoothly between the first two cases.

    Now that we've deduced O1, we can derive the point ( X1, Y1 ) using

    X1 = L1 . Cos( O1 )    and    Y1 = L1 . Sin( O1 ).
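    Put together, the O1 logic reads like the sketch below. We've used MATLAB's atan2 in place of Arctan( y/x ) so the quadrant comes out right; R3 is the target point's distance from the origin:

        R3 = hypot(x, y);
        if R3 > D1                        % outer region
            O1 = atan2(y, x);
        elseif R3 < D2                    % inner region
            O1 = atan2(y, x) + pi/2;
        else                              % blend smoothly between the two
            O1 = atan2(y, x) + (pi/2) * (D1 - R3) / (D1 - D2);
        end
        X1 = L1 * cos(O1);
        Y1 = L1 * sin(O1);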

    There! That wasn't so hard, was it? Next time we'll try to give you some insight into the algorithm that processes the images.

View all 8 project logs

Discussions

iamnotachoice wrote 22 days ago

Hey there! Is there any chance your robot could draw my project for use as an "artist's rendition" in the Hackaday Prize? It's the Hoverlay: https://hackaday.io/project/205-Hoverlay-II


bonpas3 wrote 3 months ago

That's a great job. Congrats!


skyberrys wrote 5 months ago

Hey, this thing is really impressive. I would like to try building one.


niazangels wrote 5 months ago

Thanks, skyberry! We'd love to see you build one. We hope to release everything you need to build your own, and if that doesn't help, ping us and we'd love to pitch in.


Eric Evenchick wrote 5 months ago

If you're looking for something more open than MATLAB to do edge detection, OpenCV has Canny detection built in. Not sure if it's sufficient for your applications, but the Python bindings make it pretty easy to get going.


niazangels wrote 5 months ago

Thanks for the heads-up, Eric, but we've tweaked the Canny beyond recognition at this point. While we do blur and scan in 6 directions, we've added stages such as thinning and single-pixel elimination to create a reasonably drawable image. We'll try to post an entry explaining the process.


Eric Evenchick wrote 5 months ago

Alright, looking forward to hearing more about the processing then.


niazangels wrote 5 months ago

Take a look :)
http://hackaday.io/project/895/log/2263
