
LadyBug BEEFY: 3D printer motorized microscope

Got printer? Connect to the motors with a Pi, plonk in a cheap scope, and do 2D and 3D scans!

An offshoot of the LadyBug project, a "use every part of the Blu-ray player" effort for scanning microscopy, as written up here: https://www.instructables.com/id/LadyBug-a-Motorized-Microscope-and-3D-Scanner-for-/ . This is about the opportunistic monsterification of a broken Flashforge Finder 3D printer to do the same task with a larger scan area and better mechanical makeup. LOOKING FOR DEVELOPERS, USERS, AND IDEAS FOR WHAT TO SCAN!

A couple of years ago, I broke my first 3D printer while replacing its fan. Up in smoke the mainboard went, never to turn on again --- until now. Life has been breathed into it in the form of a Raspberry Pi 4, some EasyDrivers, and an unholy number of jumper cables, plus hot glue and some other stuff.

Main components:

1: Old Flashforge Finder --- the one with the matte surface finish, external power supply, and LED mounted in the front. The new model has a smooth surface finish and the LED mounted on the extruder assembly, but the same mechanical guts. Note that a smarter way to operate this would be G-code control, which would give quieter, less jerky stepper motion. But hey, I'm reusing trash.

2: Raspberry Pi 4 (1 GB RAM). I have also used a Raspberry Pi 3 with this and an older LadyBug, but the 4 has a much higher framerate with USB cameras, which is nice. I use it like a desktop with a monitor/keyboard/mouse, but there's no reason you couldn't go headless.

3: USB microscope. By happy coincidence, the generic ones in the style of Dino-Lite (or the genuine article) fit neatly into the hole in the original extruder assembly. The cheapies have alright image quality; their main disadvantages are position finickiness and a small field of view, and motorizing them solves both.

4: Stepper drivers. I'm used to EasyDrivers, but standard A4988/DRV8825 drivers should work fine, especially for motors of this size.

5: Misc hardware: an older 20 V laptop power supply, a wireless selfie module for taking pictures with a phone, a breadboard, wires, and a beeper. Also shown in the cover image is some circuit stuff for doing laser scanning with a Blu-ray optical pickup unit.

Scanning is done with my own software running on the Pi, with most post-processing done on a main computer using commercial software as well as some custom utils.
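Driving the motors from the Pi boils down to toggling a STEP/DIR pin pair on each driver. Here's a minimal sketch of one axis, assuming hypothetical BCM pins 20/21 (my actual wiring, timings, and microstepping settings differ):

    import time
    import RPi.GPIO as GPIO

    STEP_PIN = 20  # illustrative BCM pin numbers, not this build's wiring
    DIR_PIN = 21

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(STEP_PIN, GPIO.OUT)
    GPIO.setup(DIR_PIN, GPIO.OUT)

    def move(steps, forward=True, delay=0.0005):
        # Each pulse on STEP advances the motor one (micro)step;
        # the delay between pulses sets the speed.
        GPIO.output(DIR_PIN, GPIO.HIGH if forward else GPIO.LOW)
        for _ in range(steps):
            GPIO.output(STEP_PIN, GPIO.HIGH)
            time.sleep(delay)
            GPIO.output(STEP_PIN, GPIO.LOW)
            time.sleep(delay)

    move(200)  # one revolution of a 1.8-degree motor at full step
    GPIO.cleanup()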

Finder parts for USB microscope attachment (4).stl

These are the two tan pieces next to the USB microscope, necessary after removing the original plastic carriage. One presses into the belt to allow the axis to continue to move, the other juts out for pressing on the endstop.

Standard Tesselated Geometry - 81.92 kB - 01/07/2020 at 19:41


Finder backside attachment.stl

Provides a place to stick something to the back of the Finder. Filament used to go in here.

Standard Tesselated Geometry - 30.75 kB - 01/07/2020 at 19:40


  • a rock

    Wayne Wayne • 10 hours ago • 0 comments

    this is a rock

    here is a compressed scanned image of that rock (250 images picked from 1,250 taken at 5 Z heights, because rocks are bumpy)

  • Fabrics (pt 3)

    Wayne Wayne • 02/13/2020 at 22:33 • 0 comments

    I already scanned these things so I might as well share them, right?

    First up we've got a piece of fuzzy blanket:

    Fun fact: this blanket kept me warm in the labs, which we keep at 20 Kelvin, until one day a student used it in a Wearable Technology project. Bastard.

    Here's the high-res version (2k pics over just the center) in the middle of being stitched:

    And here's the stitched version, reduced from a 70-megabyte JPEG to a 4.9-megabyte JPEG to meet Hackaday's 5-megabyte file limit. Honestly, it's not too terribly compressed except when you look really close, which I guess is the whole point of scanning things like this.

    And then we've got some stretchy gold sequin fabric. This one was interesting to scan because it reflects so much light it looks just white; then you zoom in real real close and it's gold again.

    (fun with Linux Cheese's kaleidoscope function)

    This is definitely one of those cases where the moving light source makes things wonky, or possibly opens up the opportunity to do something artistic.

    And the closest-up version, likewise reduced down to 5 megs:

    I like this one because you can see that there's actually a lot of empty space between the sequins where the black fabric is visible. I would like to revisit a material like this sometime to see how you might get accurate (to a human) color rendition so up close.

    If you thought this post was neat, please consider sharing it with someone else you think might be interested, too.

  • Stacking and tilt images

    Wayne Wayne • 02/06/2020 at 00:21 • 0 comments

    Despite theoretically having the ability, I've never done true Z stacking by combining images. My other use of Z-height changes was to pick between focused images, not actually combine them. That works okay when the change in height is across different x/y regions, not when there are changes of height within a single image.

    For the rest of the post, keep in mind that the USB microscope outputs images at 640x480 pixels. Stacking and then stitching is a whole other ballgame that I think is possible, but it's a bit more annoying with my current image-processing pipeline.

    Anyway, what is this thing?

    Here's a not so helpful hint:

    If you guessed "ballpoint pen tip", you'd be right! Except a pointy object facing straight up is the worst kind of thing to look at if you have a narrow depth of field. Clearly, there are parts of that image that are in focus, but it's a thin section. Enter moving the bed up 100 times in 50-micron increments:

    (this sequence is compressed a bit to fit Hackaday's 5 MB limit, but you get the idea)

    One thing of note is the pattern of illumination, thanks to the scope's always-on LED array. This is also a regular artifact of 2D scanning, and needs to be addressed by, for instance, flooding the entire area with a huge amount of diffuse light. But gimme a break, I'm one person with like no budget.
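    A software-side alternative I haven't actually tried here is flat-field correction: shoot a blank white target once under the same always-on LEDs, then divide every tile by it. A rough sketch (filenames are placeholders):

    import cv2
    import numpy as np

    # Reference shot of a blank white target under the scanning LEDs
    flat = cv2.imread("white_reference.png").astype(np.float32)
    frame = cv2.imread("sample_tile.png").astype(np.float32)

    flat = cv2.GaussianBlur(flat, (51, 51), 0)   # smooth away dust and noise
    gain = flat.mean() / np.clip(flat, 1, None)  # per-pixel correction gain
    corrected = np.clip(frame * gain, 0, 255).astype(np.uint8)
    cv2.imwrite("sample_tile_flat.png", corrected)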

    Stacking was done using Picolay with no human input:

    Hey, that's not too terrible! It does look kind of strange because, as my intern's intern put it, the middle looks like a hole and a mountain at the same time. It's hard to tell what you're looking at. I'm not sure if the striations or blurring are caused by the images and illumination or by the stacking process itself (that is, whether they're fixable at the hardware or software level), but this is definitely a stacked image, with everything pretty much in focus.

    That's an auto-generated depth map, but I'm not really sure what it means. 
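    Picolay is its own tool, but the naive version of what a stacker does is short enough to sketch: for every pixel, keep the value from whichever slice is locally sharpest, scored with the same Laplacian trick I use for blur sorting. The winning slice index doubles as a crude depth map like the one above. Filenames are hypothetical:

    import cv2
    import numpy as np

    def focus_stack(paths):
        # Score local sharpness of each slice, then take the per-pixel winner.
        stack = [cv2.imread(p) for p in paths]
        sharpness = []
        for img in stack:
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
            # Blur the sharpness map so the winner varies smoothly
            sharpness.append(cv2.GaussianBlur(lap, (31, 31), 0))
        best = np.argmax(np.stack(sharpness), axis=0)  # sharpest slice per pixel
        out = np.zeros_like(stack[0])
        for i, img in enumerate(stack):
            out[best == i] = img[best == i]
        return out, best  # 'best' is a crude depth map

    stacked, depth = focus_stack([f"z{i:03d}.png" for i in range(100)])
    cv2.imwrite("stacked.png", stacked)

    Real stackers (Picolay included) also align slices and blend seams, which is why their output looks less patchy than this would.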

    Then, after adding a regulator to drop the voltage going into my rotary motor from 24 to 12 volts (it was running super hot, and I'd used hot glue to mount the pen tip), I tried looking at it from a couple of different angles. I used the rotary tool but didn't really program a scan; I just tilted it a couple of times and stacked again each time:

    Now that's interesting! You can see much more detail, like those gouges on the metal.

    And the other side:

    And, finally, a gif showing the stacking process, which I thought looked really cool:

  • A fourth and fifth axis adds no slop

    Wayne Wayne • 01/31/2020 at 00:06 • 0 comments

    In the original LadyBug, I used a teeny tiny stepper motor to add rotational capability for not just 2D scanning, but 3D scanning of a small sample:

    I wanted to do the same thing here. So I made a bracket for a standard Nema Stepper motor:

    (the first version, on the left, had big old triangles that got in the way, and the recessed holes were on the wrong side; both fixed on the one on the right)

    The bracket screws into the red piece in the background, which I made to fit where the removable printer build plate slides in. Much easier to install and remove for switching back and forth between 3D and 2D scanning.

    Here it is installed onto the slideable plate, with some kind of flat scanning surface attached to the spinny part. I guess that's so you could build up a 3D image of flat things like textiles? I'm not really sure. I haven't actually used it yet, because I got caught up in making the machine even more complicated to the point of uselessness. I did that, by...

    ...creating a very precarious and not-printing-optimized piece to attach the output shaft of the fourth stepper motor to a fifth stepper motor. That is, the fifth stepper motor (the pink one) is the one that rotates your sample, and the big NEMA becomes a tilt motor, which rotates the fifth one. See? complicated!

    Here it is in nice PLA: 

    And here it is in disgusting but functional ABS, after the PLA one melted:

    (that's a prosthetic tooth. turns out dentistry is a lucrative market for 3D scanning.)

    Let's cut to the chase: there it is in action.

    (PS, this is my first-ever edited video of any kind basically. I'm not good at everything.)

    So it absolutely does work as intended, which is neat. One problem, which I'm very happy to have solved at its core, is alluded to in the video: if you tilt something, you don't just change its angle; you shift it in the X and Z dimensions as well. I was aware of this for a few days and just tried not to think about it, because it looked like a lot of really hard math. But it turns out it's actually pretty simple.

    It's really basic geometry: you're sweeping out the path of a circle. If you know the radius --- the distance from the axis of rotation to the point of your object --- and you know the angle change --- which you can figure out from the number of steps and your stepper motor --- that's all you need. It all has to be converted a couple of times between steps and real units, which required me to measure things for the first time, but overall I'm quite happy with how well it works. I'll have to calibrate it and maybe make the radius dynamic (it's just set ahead of time right now), but it at least gets you in the right ballpark of where you're supposed to be.
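    For the record, here's that correction in sketch form. The steps-per-mm and steps-per-revolution values are placeholders, not my calibration numbers:

    import math

    STEPS_PER_MM_X = 100.0     # placeholder calibration values
    STEPS_PER_MM_Z = 400.0
    TILT_STEPS_PER_REV = 3200  # tilt motor steps per revolution, with microstepping

    def tilt_compensation(radius_mm, tilt_steps, theta0_deg=0.0):
        # The sample point sweeps a circle of the given radius around the
        # tilt axis; move X/Z opposite to its displacement to re-center it.
        dtheta = 2 * math.pi * tilt_steps / TILT_STEPS_PER_REV
        t0 = math.radians(theta0_deg)
        dx = radius_mm * (math.sin(t0 + dtheta) - math.sin(t0))
        dz = radius_mm * (math.cos(t0 + dtheta) - math.cos(t0))
        return round(-dx * STEPS_PER_MM_X), round(-dz * STEPS_PER_MM_Z)

    print(tilt_compensation(radius_mm=5.0, tilt_steps=400))  # a 45-degree tilt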

    Yeah.

    Okay. There you go.   

  • Fabric (pt 2)

    Wayne Wayne • 01/25/2020 at 21:25 • 0 comments

    I'll bet you're dying to pore over every scrap of that cloth. I can show you, but it would be in between 3,300 and 10,000 pieces.

    The problem is not one of repeatability:

    The scan took 3 hours and happened at three different Z heights. Each 2D raster scan of 3,364 images occurred between the Z-height changes; that is, each change in Z height happened an hour apart, in my very sophisticated setup of binder clips holding it stable. The lights in the room probably went out at one point, but I encourage you to look at the image above and see if you can spot a difference.

    So the image is stable in the X/Y direction, meaning it shouldn't be a challenge to mix and match clear images, or even stack them. I haven't tried stacking in this setup (it's definitely more stable than my last one, where I had trouble), but the blur sorting qualitatively worked without a hitch. Problem:

    The stitching program hates it!!! 

    Grrr! Argh!!! It's got too many files, or I'm just not mighty enough. It totally knows that it's an image --- the preview looks great!

    It's not even that many --- I'm doing a relatively small area. It could get to 10,000 or more, easy. Add that to a very limited command-line API, as well as the weird color effects...

    ...which is why, to belabor my point, I'm not leaving you hanging without a stitched image. That above is some loose burlap which, as you can see, is WHITE. The individual images are all WHITE.

    Microsoft ICE has served me well, but it's desperately time to use or construct a good programmatic solution. I don't care what it's in: OpenCV, Fortran, those little candy ticker-tape things. But someone, heeeelp! Save me! F1!!!
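    If anyone wants a starting point for that rescue: OpenCV ships a high-level stitcher with a SCANS mode meant for flat, moving-camera mosaics like these. I haven't gotten it to chew through thousands of tiles myself, so treat this as a sketch, not a solution:

    import glob
    import cv2

    # SCANS mode uses an affine model, suited to flatbed/microscope mosaics
    images = [cv2.imread(p) for p in sorted(glob.glob("tiles/*.jpg"))]
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
    status, pano = stitcher.stitch(images)
    if status == cv2.Stitcher_OK:
        cv2.imwrite("stitched.jpg", pano)
    else:
        print(f"stitching failed with status {status}")  # e.g. too little overlap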

  • Fabrics! (part 1)

    Wayne Wayne • 01/22/2020 at 23:15 • 0 comments

    I saw someone on Facebook talking about scanning the textures of things like fabrics for overlaying onto 3D models. So I found something nice and cute from our school's wearable-electronics stockpile:

    The key innovation here being the use of binder clips to keep the fabric stretched out and as flat as possible. I've said it before and I'll say it again: autofocus in post-processing is possible, but it's easiest to just make it all in focus to begin with.

    The focus on the scope is adjustable from nearly infinity down to less than a millimeter. The closer you get, the higher the magnification. That does not strictly mean the resolution increases; resolution is limited entirely by your illumination wavelength/method and your lens's numerical aperture. But Dino-Lite is relatively respected, and they prevent the dial from turning past the point where you're getting fake magnification.
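    For a back-of-envelope resolution number there's the Abbe limit, d = wavelength / (2 * NA). I don't know this scope's actual numerical aperture, so the value below is a guess:

    wavelength_nm = 550   # middle of the visible spectrum
    na = 0.1              # hypothetical NA for a cheap USB scope
    d_um = wavelength_nm / (2 * na) / 1000
    print(f"smallest resolvable feature: ~{d_um:.1f} microns")  # ~2.8 microns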

    I did three scans: far away (~5 cm), medium (~1 cm), and close (less than a millimeter). The scan length and the difficulty of keeping things in focus are directly related to the magnification, so far away = super easy, medium = you can probably keep it all in focus, close = you'll get bad patches unless you do multiple Z heights.

    For comparison, the lowest magnification took 50 pics to see the whole fabric, the medium took 500, and the highest would take about 5,000 if you were only doing one Z height. I'm doing three, spaced about half a millimeter apart. There's also the danger that the fabric will drift for whatever reason (there are many) between each change in Z height (the raster pattern happens first), and you'll be unable to do any kind of point-by-point comparison for focus afterwards. But I think even in the worst case, most of the image will be roughly in focus at one Z height.
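    Those counts are just area arithmetic: the number of tiles is the scan area divided by the field of view, inflated by the overlap. A sketch with made-up FOV numbers (halving the FOV roughly quadruples the count):

    import math

    def tile_count(scan_w_mm, scan_h_mm, fov_w_mm, fov_h_mm, overlap=0.5):
        # Each tile only contributes (1 - overlap) of its FOV in new area
        step_x = fov_w_mm * (1 - overlap)
        step_y = fov_h_mm * (1 - overlap)
        return math.ceil(scan_w_mm / step_x) * math.ceil(scan_h_mm / step_y)

    print(tile_count(50, 50, 10, 7.5))    # low magnification: few tiles
    print(tile_count(50, 50, 1, 0.75))    # high magnification: ~100x more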

    Anyway, I'm not going to post the full images now (mostly because the highest res one is still running), but the first two look good. Here's one image of the high res, though, to show you what that fabric looks like up close:

    For comparison, here's a "medium" one, which is about the same resolution as my scan of the front and back of the 50:

  • How to properly level your cookie

    Wayne Wayne • 01/16/2020 at 00:20 • 0 comments

    Things I did today: Went to class. Received four 3D printers in the mail. Poked a circuit board with metal sticks. Scanned a cookie...

    To scan a cookie with autofocus or stacking, you can do one of two things. You can either use a low enough magnification/high enough depth of field that everything is (roughly) in focus,

    (just use a macro camera, it's much less trouble, sheesh)

    ...or you can make your intern's intern grind down the cookie until it's mostly flat.

    This is exactly as stupid as it sounds. Note that this isn't the same cookie as before --- that one fell on the ground. But hey, after a thousand pictures, you might get a cookie with some parts that are in focus! 

    ...or it could look like a horrifying pustule of a cookie that exemplifies all the bad parts of the scanning technology. full image so you can see just how bad it is: https://www.easyzoom.com/imageaccess/25d4f62f073e4887ace5b873225093b7

    Yuck.

  • Scanning Yujie

    Wayne Wayne • 01/14/2020 at 04:29 • 0 comments

    These are scans using LadyBug, the motorized microscope and 3D scanner for small things made out of Blu-ray players!

    Scanning a phone screen is interesting (LadyBug art, above). Click on it to get the full resolution. This is the lower-res mode --- not as extreme, kind of artistic even! We needed an app to prevent screen timeouts, and then just... stuck the phone on there with a bit of tape. Getting it a bit more level would reduce some of the variation throughout the image. Or, alternatively, you could implement autofocus.

    The original is on instructables --- if you notice a hint of green, that's actually a consequence of the stitching process --- the raws are perfect! And so the nonprogrammer cried into the night, for someone who would kindly rescue him...

    And then my good pal Yujie boy came back to the lab for a day, so we scanned his face, too. Held still for 106 minutes. Darndest thing I've ever seen. 

  • MONEY (part 2)

    Wayne Wayne • 01/10/2020 at 00:29 • 0 comments

    Wow, so I've got 3 followers now, nice! Hooray for Reddit!

    I mentioned before that I had a scan of 17,000 images or so. The truth is, that's actually five scans of 3,000-odd x/y images each, just at different Z heights. The purpose is to eliminate the need for autofocus: if you have a usable focal depth of, say, 50 microns, and the variability of the sample or bed is 200 microns, just blindly do 5 scans at Z-height offsets of 0, 50, 100, 150, and 200 microns.

    Then you can compare the focus quality after the fact for each sub-image and pick the best before stitching, as in this more-comment-than-code snippet:

    import cv2

    def variance_of_laplacian(image):
        # compute the Laplacian of the image and then return the focus
        # measure, which is simply the variance of the Laplacian
        return cv2.Laplacian(image, cv2.CV_64F).var()

     (https://github.com/yuji3w/ladybug/blob/master/utils/blur.py)
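    In use, that just means comparing the five shots of each x/y position and keeping the winner before stitching. A sketch using the function above, with a hypothetical tile_{x}_{y}_z{k}.jpg naming scheme (not my actual filenames):

    import shutil
    import cv2

    def best_of_stack(x, y, n_heights=5):
        # Score each Z-height shot of this tile and keep the sharpest one.
        scores = []
        for k in range(n_heights):
            img = cv2.imread(f"tile_{x}_{y}_z{k}.jpg", cv2.IMREAD_GRAYSCALE)
            scores.append((variance_of_laplacian(img), k))
        best_k = max(scores)[1]
        shutil.copy(f"tile_{x}_{y}_z{best_k}.jpg", f"best_{x}_{y}.jpg")

    for x in range(58):        # 58 x 58 = 3,364 tiles, i.e. "3000-odd"
        for y in range(58):
            best_of_stack(x, y)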

    Hooray, brute force! If I were saying that this is an efficient way of going about things, I would be lying through my teeth. But it is a way of faking depth of field if you're not smart enough to do autofocus in real time.

    I got lazy and did NOT bother to do that here for my extra-close-up of his nose. And those eyelashes. Wow. 

    Full image here: https://www.easyzoom.com/imageaccess/08c28e0a1ce84eac8a3fb6193ceb004d

    The color variation is a consequence of the stitching software and not seen in the original raw pieces.

    I tried a couple of things differently for the back half of the 50. I stuck it to a piece of metal with magnets instead of using tape, which worked alright. Interestingly, adding all that weight reduced the magnitude, but increased the duration, of the bed vibration observed when changing the Z height.

    This obscures the corners, too, and adds areas of shininess, which confuses the heck out of the USB microscope's auto-exposure.

    (50% scale).

    One thing I did was use a much higher step size for moving the scope --- 480 and 340 steps in the X and Y dimensions, as opposed to 200x200 before. The stitching program had no trouble with it, and in fact I think it results in less color variance and warping. However, the resolution is also a bit lower, though I'll have to do side-by-side tests to confirm! I'm wondering if Microsoft ICE is doing oversampling and faking those pixels a bit.

    What should I scan next? 

  • MONEY! (pt 1)

    Wayne Wayne • 01/08/2020 at 22:38 • 0 comments

    A bit nervously, I overnight-scanned my largest object to date --- a crisp (okay, not really) 50-dollar bill. The lab is locked, but there's always the chance it'll be opened in the morning for a tour or something. So I was a bit relieved to find it still there.

    Unless you're doing active focus correction or Z positioning, the trick is to make your object as flat as possible and use the minimum acceptable magnification. I used two small strips of double-sided carpet tape to keep it stable during the scan.

    The high-res scan was 17,000 images, and I haven't even attempted to stitch that yet, but...

    The low-res scan worked pretty well! 738 images, approximately 50% overlap, originally 640x480 pixels each, stitched in Microsoft ICE. The mechanical precision is sufficient, I think, that you could just overlay these images on top of each other and get a decent image without using any special software.

    (The above image is 25% scale after you click on it.)
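    To make the "no special software" claim concrete: with repeatable steppers you can paste each tile at its nominal grid position and get a rough mosaic. The grid shape, pixel pitch, and filenames below are illustrative guesses:

    from PIL import Image

    COLS, ROWS = 41, 18           # one way to factor a 738-image scan
    STEP_X, STEP_Y = 320, 240     # pixel pitch of a 640x480 tile at 50% overlap

    mosaic = Image.new("RGB", (STEP_X * (COLS - 1) + 640, STEP_Y * (ROWS - 1) + 480))
    for row in range(ROWS):
        for col in range(COLS):
            tile = Image.open(f"tile_{col}_{row}.jpg")   # hypothetical names
            mosaic.paste(tile, (col * STEP_X, row * STEP_Y))
    mosaic.save("naive_mosaic.jpg")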

    All sorts of fun microtext scattered throughout.

    Full image: https://www.easyzoom.com/imageaccess/87fadcf548ca4d4892ffe8383ad1fe66



Discussions

RandyKC wrote 01/24/2020 at 21:23

You might not want to post scanned images of money. The Treasury Department gets a little anal about that.


Wayne Wayne wrote 01/26/2020 at 01:12

I'm not breaking the law, and the scanning introduces so many distortions into the image that it's not funny. They'd be caught faster than right away!


Wayne Wayne wrote 01/26/2020 at 01:14

For instance, in addition to warping and internal mismatches, there is a marked color gradient visible from corner to corner in larger scans. The illumination variance between shiny spots and dull areas also isn't what you'd expect, since the microscope (and thus the light source) is moving.

