line following "robot"

A project log for "hatching" vectorizer

my crazy ideas of what a raster-image to vector-image converter (with a pen plotter in mind) for artistic content might look like

rawe 03/20/2016 at 21:06 • 5 Comments

With the median filter/edge detection from the previous log, the following two images can be generated from the Hackaday logo...


median edge (= cleanup + edge detection) / median (= cleanup of JPG artifacts)

For the edge picture, it is possible to implement a simple line following "robot" that operates on the following principle:

- keep track of every step the "line following robot" makes
- find the brightest point in the image and move/drop/spawn there
- mark that point as "already visited" by painting it black
  (draw a small black filled circle)
- sweep from -180° to +180° (relative to the current heading)
  at a distance d from the current position, check where the
  brightest spot is, head there (one step of distance d),
  mark that point as "already visited", and repeat
- if there is no bright point near the current location,
  globally find the brightest point and start all over,
  keeping the "already visited" marks
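The steps above can be sketched in Python. This is a minimal sketch, not the project's actual code: the grayscale image is assumed to be a 2-D NumPy array, and the step distance `d`, the 5° sweep resolution, and the brightness threshold are made-up parameters.

```python
import math
import numpy as np

def brightest_point(img):
    """Return (x, y) of the globally brightest pixel of a 2-D grayscale array."""
    y, x = np.unravel_index(np.argmax(img), img.shape)
    return int(x), int(y)

def mark_visited(img, x, y, radius=2):
    """Mark a spot as "already visited" by painting a small black filled circle."""
    h, w = img.shape
    for yy in range(max(0, y - radius), min(h, y + radius + 1)):
        for xx in range(max(0, x - radius), min(w, x + radius + 1)):
            if (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2:
                img[yy, xx] = 0

def step(img, x, y, heading, d=3, threshold=32):
    """Sweep -180..+180 degrees around the current heading at distance d and
    return (x, y, heading) of the brightest spot found, or None on a dead end
    (the caller then respawns at the global maximum)."""
    best, best_angle = threshold, None
    for a in range(-180, 181, 5):
        ang = heading + math.radians(a)
        xx = int(round(x + d * math.cos(ang)))
        yy = int(round(y + d * math.sin(ang)))
        if 0 <= xx < img.shape[1] and 0 <= yy < img.shape[0] and img[yy, xx] > best:
            best, best_angle = int(img[yy, xx]), ang
    if best_angle is None:
        return None
    return (int(round(x + d * math.cos(best_angle))),
            int(round(y + d * math.sin(best_angle))),
            best_angle)
```

The main loop would then alternate `step()` and `mark_visited()` until `step()` reports a dead end, at which point `brightest_point()` gives the respawn location.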
With the state drawn at every step the "line following robot" takes, it is possible to render a series of images and combine them into a video to see the bot in action.

Left: the paths the robot took, Right: the input/playground/temporary image:


Letting the same algorithm run on the filled image, and modifying the "line detection" to sweep from left to right and use the FIRST occurrence of a bright point, it will crawl along the edges.
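The modified sweep might look like this (a hypothetical helper, assuming a plain list-of-lists grayscale image; the fixed sweep order, step distance and threshold are assumptions):

```python
import math

def step_edge(img, x, y, d=3, threshold=32):
    """Edge-hugging variant for filled shapes: instead of picking the
    brightest point in the sweep, sweep from a fixed start angle and
    take the FIRST point that is bright enough, so the bot crawls
    along the boundary of the filled area."""
    for a in range(-180, 181, 5):
        ang = math.radians(a)
        xx = int(round(x + d * math.cos(ang)))
        yy = int(round(y + d * math.sin(ang)))
        if 0 <= xx < len(img[0]) and 0 <= yy < len(img):
            if img[yy][xx] >= threshold:
                return xx, yy  # first hit wins
    return None  # dead end: respawn at the global maximum
```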

Note that as soon as the bot has crawled along an edge and marked the area as "already visited" by drawing black filled circles there, the edge is slightly distorted, and in my current implementation these errors add up with every new pass the bot takes.

By varying the "already visited" pattern, the logic that respawns the bot once it runs into a dead end, and the logic it uses to decide where to crawl, various patterns can be created. Not perfect, but a nice starting point.

Marking the already-visited points by altering a copy of the input image limits the algorithm to the image resolution, but allows a simple algorithm that does not get slower over time (collecting all the "already visited" points in a list instead would mean one more point to check every time the bot plans a move).

HPGL is a vector image format once used by Hewlett-Packard pen plotters and test equipment such as network analyzers (e.g. the HP 8753). Only three commands are needed for the basics:

SP1 = select pen number 1
PU123,456 = lift pen and go to absolute coordinate x|y=123|456
PD789,123 = put pen on paper and go to x|y=789|123

Of course, the HPGL format supports much more (text, splines, dotted lines and other features). For a full-blown viewer try the CERN HPGL Viewer ( http://service-hpglview.web.cern.ch/service-hpglview/ ).

As the points the bot visited were stored in a list, it is easy to generate HPGL commands from them and feed them to a pen plotter. It is also possible to simulate a pen plotter by just interpreting the commands and setting pixels in a bitmap:
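A minimal sketch of both directions, assuming each path is a list of (x, y) integer tuples. SP/PU/PD are the real HPGL commands described above; the helper names and the naive line rasterization in the simulated plotter are this sketch's own, not the project's code:

```python
def paths_to_hpgl(paths, pen=1):
    """Convert a list of paths (each a list of (x, y) points) to HPGL:
    PU lifts the pen and moves, PD draws to each following point."""
    cmds = ["SP%d" % pen]
    for path in paths:
        (x0, y0), rest = path[0], path[1:]
        cmds.append("PU%d,%d" % (x0, y0))
        for x, y in rest:
            cmds.append("PD%d,%d" % (x, y))
    return ";".join(cmds) + ";"

def plot_hpgl(hpgl, width, height):
    """Minimal simulated pen plotter: interpret SP/PU/PD and set pixels
    along each pen-down move (naive line rasterization)."""
    img = [[0] * width for _ in range(height)]
    x = y = 0
    for cmd in hpgl.split(";"):
        cmd = cmd.strip()
        if cmd.startswith("PU"):
            x, y = map(int, cmd[2:].split(","))
        elif cmd.startswith("PD"):
            nx, ny = map(int, cmd[2:].split(","))
            steps = max(abs(nx - x), abs(ny - y), 1)
            for i in range(steps + 1):
                px = round(x + (nx - x) * i / steps)
                py = round(y + (ny - y) * i / steps)
                if 0 <= px < width and 0 <= py < height:
                    img[py][px] = 1
            x, y = nx, ny
        # SPn and anything else: ignored in this sketch
    return img
```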

Right now, filling is just a quick hack and needs improvement. There are many more points in there than necessary: paths can be combined (if end-start, start-start or end-end are near each other) and certain intermediate points can be omitted. More on this in a followup log...
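The two cleanups mentioned here could be sketched as follows (hypothetical helper names; the tolerance values are arbitrary assumptions):

```python
import math

def simplify(path, tol=0.5):
    """Drop intermediate points that are (nearly) collinear with their
    neighbours; a pen plotter draws the same line either way."""
    if len(path) < 3:
        return list(path)
    out = [path[0]]
    for i in range(1, len(path) - 1):
        a, b, c = out[-1], path[i], path[i + 1]
        # twice the triangle area a-b-c; zero means b lies on the line a-c
        area2 = abs((c[0] - a[0]) * (b[1] - a[1]) - (b[0] - a[0]) * (c[1] - a[1]))
        dist = math.hypot(c[0] - a[0], c[1] - a[1])
        if dist == 0 or area2 / dist > tol:
            out.append(b)
    out.append(path[-1])
    return out

def merge_paths(paths, tol=1.5):
    """Greedily join paths whose endpoints lie within tol of each other
    (reversing one when needed) to reduce pen-up travel."""
    paths = [list(p) for p in paths]
    merged = []
    while paths:
        cur = paths.pop(0)
        changed = True
        while changed:
            changed = False
            for i, other in enumerate(paths):
                for flip in (False, True):
                    cand = other[::-1] if flip else other
                    if math.hypot(cur[-1][0] - cand[0][0],
                                  cur[-1][1] - cand[0][1]) <= tol:
                        cur += cand
                        paths.pop(i)
                        changed = True
                        break
                if changed:
                    break
        merged.append(cur)
    return merged
```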

EDIT/Update: pen plotted:

Discussions

Eric Hertz wrote 03/22/2016 at 02:35 point

Alright, I dig it. The video-feed is stalling on the filled-video (flakey-net), but I think I get the gist. This is excellent. Definitely a wise-idea to keep track of the current-direction, such that it follows the outline. I dig that. And I think I see what you're saying about different patterns, as well. Would be possible, relatively easily, to e.g. change it to a raster-filled pattern by instead of sweeping from the current direction, always sweep from 0deg and stop at 90 relative to the y-axis (and if nothing's found, start again globally). Or a zig-zag pattern by 0-90 relative-Y and if nothing's found sweep again from -... yeah, can't quite visualize it, but I get the idea of different fill-patterns by changing sweeps and tracking the current direction, cool. And, more importantly, keeping track of what's already been done. Personally, I dig the spiral-fill pattern, thanks for the heads-up on "straight-skeleton" never saw that before.

Also, great idea using the original image (or a copy) as both the image-buffer *and* the "completed"-buffer. Vastly-simplifies things over trying to keep track of literally every position that's been done and repeatedly comparing to the original image.

HP-GL has been my friend :) there's a program called 'hp2xx' that I've found useful (I extracted/learned the vector-font-system from there, it's pretty interesting, actually).

My original interest in your project was to solve the age-old-dilemma (at least for me) of how to vectorize an image for a pen-plotter (nicely-done). But this is giving me some ideas as to even just image-editing. E.G. use the spiral-pattern you've got here, and redraw the original image with gradients... like a bucket-fill that takes the *shape* of the fill-area into account.


rawe wrote 03/22/2016 at 10:51 point

The current code is on github if you want to take a look or play with it ( https://github.com/TimeTravel-0/hatching ). What do you expect from a nicely-done image-vectorizer? What is the output you expect?  Maybe I could add your ideas to my code.


Eric Hertz wrote 03/22/2016 at 15:24 point

Nono, I meant "Nicely-done" to you!

I had it running overnight on some images of my own, the results are interesting, but my images were lacking in sharp-enough-edges.  Made for an interesting '-vectom' file, though!

https://cdn.hackaday.io/images/8836481458659521015.jpg

Props on what you've accomplished, not knowing much about Python, that looks like some hard work in there! And, again, the vids/pics you did are quite inspiring... Also, the Apple rendering worked perfectly, before I got all cocky and replaced all the example images with my own ;)


Eric Hertz wrote 03/20/2016 at 23:27 point

Heh, Gotta come back to this when my 'net's a little less flakey... In the meantime, that last pic is great, it looks all 3D-ified, shaded and stuff. I'm sure that wasn't the intention, but it's a cool effect.


rawe wrote 03/21/2016 at 09:38 point

Hi, thanks for your feedback. The 3D effect comes from the slightly shifted black circles drawn to mark an area as visited; the result more or less resembles a "straight skeleton" (https://en.wikipedia.org/wiki/Straight_skeleton).
