LadyBug BEEFY: 3D printer motorized microscope

Got printer? Connect to the motors with a Pi, plonk in a cheap scope, and do 2D and 3D scans!

An offshoot of the LadyBug project, a "use every part of the Blu-ray player" effort for scanning microscopy, as written up here: https://www.instructables.com/id/LadyBug-a-Motorized-Microscope-and-3D-Scanner-for-/ . This is the opportunistic monsterification of a broken Flashforge Finder 3D printer to do the same task with a larger scan area and better mechanical makeup. LOOKING FOR DEVELOPERS, USERS, AND IDEAS FOR WHAT TO SCAN!

A couple of years ago, I broke my first 3D printer while replacing the fan. Up in smoke the mainboard went, never to turn on again --- until now. Life has been breathed into it in the form of a Raspberry Pi 4, some EasyDrivers, and an unholy amount of jumper cables, hot glue, and some other stuff.

Main components:

1: Old Flashforge Finder --- the one with the matte surface finish, external power supply, and LED mounted in the front. The new model has a smooth surface finish and the LED mounted on the extruder assembly, but the same mechanical guts. Note that a smarter way to operate this would be G-code control, which would give quieter, less jerky stepper motion. But hey, I'm reusing trash.

2: Raspberry Pi 4 (1 GB RAM). I have also used a Raspberry Pi 3 with this and an older LadyBug, but the 4 has a much higher framerate with USB cameras, which is nice. I use it like a desktop with a monitor/keyboard/mouse, but there's no reason you couldn't go headless.

3: USB microscope. By happy coincidence, the generic ones in the style of Dino-Lite (or the genuine article) fit neatly into the hole in the original extruder assembly. The cheapies have alright image quality, with the main disadvantages being position finickiness and a small field of view. This setup solves both.

4: Stepper drivers. I'm used to EasyDrivers, but standard A4988/DRV8825 drivers should work fine, especially for motors of this size.

5: Misc hardware: an older 20 V laptop power supply, a wireless selfie module for taking pictures with a phone, a breadboard, wires, and a beeper. Also shown in the cover image is some circuit stuff for doing laser scanning with a Blu-ray optical pickup unit.

Scanning is done with custom software running on the Pi, with most post-processing done on a main computer using commercial software as well as some custom utilities.
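For the curious, driving an axis through an EasyDriver from the Pi boils down to toggling a direction pin and pulsing a step pin. A minimal sketch (pin numbers and timing are placeholders, not this build's actual wiring):

```python
# Minimal single-axis jog through an EasyDriver-style STEP/DIR interface.
# Pin numbers and delay are illustrative placeholders, not the real wiring.
import time
import RPi.GPIO as GPIO

STEP_PIN, DIR_PIN = 17, 27  # BCM numbering, chosen arbitrarily

GPIO.setmode(GPIO.BCM)
GPIO.setup([STEP_PIN, DIR_PIN], GPIO.OUT)

def move(steps, forward=True, delay_s=0.0005):
    """The driver advances one (micro)step per rising edge on STEP."""
    GPIO.output(DIR_PIN, forward)
    for _ in range(steps):
        GPIO.output(STEP_PIN, True)
        time.sleep(delay_s)
        GPIO.output(STEP_PIN, False)
        time.sleep(delay_s)

move(200)  # one full revolution of a 1.8-degree motor, full-stepping
GPIO.cleanup()
```

The jerky motion mentioned above comes from blasting pulses like these at a constant rate; proper G-code firmware ramps them up and down.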

Finder parts for USB microscope attachment (4).stl

These are the two tan pieces next to the USB microscope, necessary after removing the original plastic carriage. One presses into the belt so the axis can keep moving; the other juts out to press on the endstop.

Standard Tesselated Geometry - 81.92 kB - 01/07/2020 at 19:41


Finder backside attachment.stl

Provides a place to stick something to the back of the Finder. Filament used to go in here.

Standard Tesselated Geometry - 30.75 kB - 01/07/2020 at 19:40


  • Scan of a 3D print, by a 3D printer, using 3D-printed parts

    Wayne · 2 days ago · 0 comments

    (YOU CAN CLICK ON THESE, BY THE WAY!) 

    About a hundred pics with 25 percent overlap. Color rendition is alright, stitching artifacts are minor, and detail is excellent.

    And there's a trick for getting fewer exposure problems! Rather than using the black plate background and getting that auto-exposure issue around corners, I happened to have a printed sheet of the same color filament that I could just slip underneath. It actually makes a pretty big difference:


    Of course I also took a max focus image of a section, though I got it a bit wrong --- I wanted just a gear. 

    Also about a hundred pics with 25 percent overlap.

    And since it's relevant, here's a scan by the original ladybug, of its own stage. 

    It's interesting to compare the squished, oily look of the orange plastic, which was the first layer printed onto a BuildTak sheet, with the gold top layer of PLA, which gets duller and crumplier as you zoom in. Something something heat and expansion and a place to go!

  • a fossil

    Wayne · 4 days ago · 0 comments

    I could probably skip the whole scanning process if I could take better regular pictures than this:

    "But the results are worth it!"

    I'd be wary of using it for quantitative paleontology, but that looks pretty decent! I feel like scanning this way shortcuts a whole lot of stuff about photography and lighting that I never really figured out. You could hand me an expensive DSLR or whatever and I really couldn't get a better image than this, megapixels be damned.

    Fun fact: I used 2 Z heights and just flipped back and forth, picking the good ones by hand.
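    (Picking the good ones could also be automated: score every tile's sharpness and keep the best Z height per position. A rough sketch, with made-up file names:)

```python
# Score tile sharpness via variance of the Laplacian; higher = more in focus.
# The file names are hypothetical stand-ins for one X/Y tile at two Z heights.
import cv2

def sharpness(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

candidates = ["tile_007_z0.jpg", "tile_007_z1.jpg"]
best = max(candidates, key=sharpness)  # keep this one for stitching
```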

  • another rock

    Wayne · 5 days ago · 0 comments

    Emboldened by the success of the last rock, and driven by the potential for geological applications, of which there are many, I went and gathered some rocks from my house. Some of these are just random things picked up, while others I borrowed from the resident ex-geologist. Being flat was the most important criterion.

    I also spent time calibrating my system, instituting crude-but-better speed controls (for instance, not blasting the motor as fast as possible when moving only a few steps, which causes vibrations). I also determined how many pixels of displacement per step there were at different focus/magnification configurations, which lets me calculate percent overlap in the X and Y dimensions for each picture. I've been going with round step numbers up until now, but it's better to use round displacements instead.
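    The overlap math itself is nothing fancy; something like this, with the numbers standing in for whatever calibration says:

```python
# Convert a target overlap fraction into a motor step count per frame advance.
# Frame size and pixels-per-step are placeholder calibration values.
FRAME_W, FRAME_H = 640, 480   # USB scope output resolution
PX_PER_STEP = 1.0             # measured displacement at this focus setting

def steps_between_frames(frame_px, overlap):
    """Steps to advance so consecutive frames share `overlap` of their extent."""
    return round(frame_px * (1.0 - overlap) / PX_PER_STEP)

print(steps_between_frames(FRAME_W, 0.25))  # X advance for 25% overlap: 480
print(steps_between_frames(FRAME_H, 0.25))  # Y advance: 360
```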

    Here's a rock:

    And this is it scanned at "low-medium" resolution, or 1 pixel per step displacement, with just 10 percent overlap between the X and Y dimensions each time (just 9 pictures): 

    See that obvious squarish darker area? That's a downside of having very little overlap each time. An image halfway over the edge (and thus partially viewing the black build plate) will tend to get a higher exposure, so the rock at the edges appears brighter than it actually is. Using more overlap helps the stitcher blend the colors into something more natural:

    This is 50% overlap (25 pictures), and the dark spot is still there but less obvious. The obvious downside of increasing the overlap is the scan and stitch time, but another is that it can cause more feature blurring. That's because the stitcher is not perfect, and every time two overlapping images are combined, some information is jumbled. It's not very obvious in this case, but if local shape veracity is more important than color, you should aim for as little overlap as possible.

    25% overlap seems like a good compromise. Here's that from a closer viewpoint, so that it's 1.5 pixels/step:

    They're clearly the same rock, but with some differences, mostly in color. Overall, this picture appears more brightly colored, possibly because the light source is closer. There is also a bit less contrast between the dark and light veins. And there are areas of dramatic color differences, most notably in corners and bumps, like at the top --- I wonder if it has anything to do with the autoexposure being triggered by the red stripe?

    And then we've got the really exciting one: high resolution at 7.5 pixels/step, with 725 pictures selected from 3 Z heights (compressed from 100 megabytes down to 5, of course). The rock actually moved a bit between Z heights (vibrations), but it all got figured out mostly alright:


    That's 4 hours of my time right there for you.

    And just as a reminder that the above image is composed of ones like this:   

  • a rock

    Wayne · 6 days ago · 0 comments

    this is a rock

    here is a compressed scanned image of that rock (250 images picked from 1250 at 5 Z heights cuz rocks are bumpy)

  • Fabrics (pt 3)

    Wayne · 02/13/2020 at 22:33 · 0 comments

    I already scanned these things so I might as well share them, right?

    First up we've got a piece of fuzzy blanket:

    Fun fact, this blanket kept me warm in the labs which we keep at 20 Kelvin, until one day a student used it in a Wearable Technology project. Bastard. 

    Here's the high res version (2k pics over just the center) trying to be stitched:

    And here's the stitched version, reduced from a 70 megabyte JPEG to a 4.9 megabyte JPEG to meet Hackaday's 5 megabyte file limit. Honestly, it's not too terribly compressed except when you look really close, which I guess is the whole point of scanning things like this.

     And then we've got some stretchy gold sequin fabric. This one was interesting to scan because it reflects so much light it looks just white, then you zoom in real real close and it's gold again.

    (fun with Linux Cheese's kaleidoscope function)

    This is definitely one of those cases where the moving light source makes things wonky, or possibly opens up the opportunity to do something artistic.

    And closest up version, likewise reduced down to 5 megs:

    I like this one because you can see that there's actually a lot of empty space between the sequins where the black fabric is visible. I would like to revisit a material like this sometime to see how you might get accurate (to a human) color rendition so close up.

    If you thought this post was neat, please consider sharing it to someone else you think might be interested, too. 

  • Stacking and tilt images

    Wayne · 02/06/2020 at 00:21 · 0 comments

    Despite theoretically having the ability, I've never done true Z stacking by combining images. My other use of Z height changes was to pick between focused images, not actually combine them. That works okay when the change in height is across different X/Y regions, but not when there are changes of height within a single image.

    For the rest of this post, keep in mind that the USB microscope outputs images at 640×480 pixel resolution. Stacking and then stitching is a whole other ballgame that I think is possible, but it's a bit more annoying with my current image processing pipeline.

    Anyway, what is this thing?

    Here's a not so helpful hint:

    If you guessed "ballpoint pen tip", you'd be right! Except a pointy object facing straight up is the worst kind of thing to look at if you have a narrow depth of field. Clearly, there are parts of that image that are in focus, but it's a thin section. Enter moving the bed up 100 times in 50 micron increments:

    (this sequence is compressed a bit to fit hackaday's 5 MB limit, but you get the idea).
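    The capture loop for a stack like this is about as simple as it sounds; a sketch with the bed motion stubbed out (the real thing is just step pulses, and the steps-per-mm figure is a placeholder):

```python
# Capture a 100-image focus stack, raising the bed 50 microns between frames.
# STEPS_PER_MM is a placeholder; move_bed_up() stands in for the stepper code.
import cv2

STEPS_PER_MM = 400
STEP_50UM = int(0.05 * STEPS_PER_MM)

def move_bed_up(steps):
    ...  # pulse the Z-axis driver, as in the STEP/DIR sketch earlier

cap = cv2.VideoCapture(0)  # the USB scope
for i in range(100):
    ok, frame = cap.read()
    if ok:
        cv2.imwrite(f"stack_{i:03d}.png", frame)
    move_bed_up(STEP_50UM)
cap.release()
```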

    One thing of note is the pattern of illumination, thanks to the scope's always-on LED array. This is also a regular artifact of 2D scanning, and needs to be addressed by, for instance, flooding the entire area with a huge amount of diffuse light. But gimme a break, I'm one person with like no budget.

    Stacking was done using Picolay with no human input:

    Hey, that's not too terrible! It does look kind of strange because, as my intern's intern put it, the middle looks like a hole and a mountain at the same time. It's hard to tell what you're looking at. I'm not sure whether the striations and blurring are caused by the images and illumination or by the stacking process itself (that is, whether they're fixable at the hardware or the software level), but this is definitely a stacked image, with everything pretty much in focus.

    That's an auto-generated depth map, but I'm not really sure what it means. 

    Then, after adding a regulator to drop the voltage going into my rotary motor from 24 to 12 volts (it was running super hot) and hot-gluing the pen tip in place, I tried looking at it from a couple of different angles. I used the rotary motor but didn't really program a scan; I just tilted it a couple of times and then stacked again each time:

    Now that's interesting! You can see much more detail, like those gouges on the metal.

    And the other side:

    And, finally, a gif showing the stacking process, which I thought looked really cool:

  • A fourth and fifth axis add no slop

    Wayne · 01/31/2020 at 00:06 · 0 comments

    In the original LadyBug, I used a teeny tiny stepper motor to add rotational capability for not just 2D scanning, but 3D scanning of a small sample:

    I wanted to do the same thing here. So I made a bracket for a standard NEMA stepper motor:

    (The first version, on the left, had big old triangles that got in the way, and the recessed holes were on the wrong side. Both fixed on the one on the right.)

    The bracket screws into the red piece in the background, which I made to fit where the removable printer build plate would slide in. Much easier to install and remove for switching back and forth between 3D and 2D scanning.

    Here it is installed onto the slideable plate, with some kind of flat scanning surface attached to the spinny part. I guess that's so you could build up a 3D image of flat things like textiles? I'm not really sure. I haven't actually used it yet, because I got caught up in making the machine even more complicated to the point of uselessness. I did that, by...

    ...creating a very precarious and not-printing-optimized piece to attach the output shaft of the fourth stepper motor to a fifth stepper motor. That is, the fifth stepper motor (the pink one) is the one that rotates your sample, and the big NEMA becomes a tilt motor, which rotates the fifth one. See? complicated!

    Here it is in nice PLA: 

    And here it is in disgusting but functional ABS, after the PLA one melted:

    (that's a prosthetic tooth. turns out dentistry is a lucrative market for 3D scanning.)

    Let's cut to the chase: there it is in action.

    (PS, this is my first-ever edited video of any kind basically. I'm not good at everything.)

    So it absolutely does work as intended, which is neat. One problem which I'm very happy to have solved at its core is alluded to in the video. And that is: if you are tilting something, it is not just going to change the angle; it is going to be shifted in the X and Z dimensions as well. I was aware of this for a few days and just tried not to think about it, because it looked like a lot of really hard math. But it turns out it's actually pretty simple.

    It's really basic geometry. You're sweeping out the path of a circle. If you know the radius --- the distance from the axis of rotation to the point on your object --- and you know the angle change, which you can figure out based on the number of steps and your stepper motor --- well, that's all you need. This all has to be converted a couple of times between steps and real units, which required me to measure things for the first time, but overall I'm quite happy with how well it works. I'm going to have to calibrate it and maybe make the radius dynamic (it's just set ahead of time right now), but it at least gets you in the right ballpark of where you're supposed to be.
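    In code form, the correction is a handful of lines. A sketch, with the steps-per-revolution figure as a placeholder (the radius is the measured distance from the tilt axis to the sample):

```python
# Compute how far the sample shifts in X and Z when the tilt motor moves.
# STEPS_PER_REV is a placeholder (1.8-degree motor, 16x microstepping assumed).
import math

STEPS_PER_REV = 200 * 16

def tilt_shift(radius_mm, start_deg, steps):
    """Return (dx, dz) in mm; move the stage by (-dx, -dz) to recenter."""
    a0 = math.radians(start_deg)
    a1 = math.radians(start_deg + steps * 360.0 / STEPS_PER_REV)
    dx = radius_mm * (math.cos(a1) - math.cos(a0))
    dz = radius_mm * (math.sin(a1) - math.sin(a0))
    return dx, dz
```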

    Yeah.

    Okay. There you go.   

  • Fabric (pt 2)

    Wayne · 01/25/2020 at 21:25 · 0 comments

    I'll bet you're dying to pore over every scrap of that cloth. I can show you, but it would be in between 3,300 and 10,000 pieces.

    The problem is not one of repeatability:

    The scan took 3 hours and took place at three different Z heights. Each 2D raster scan of 3,364 images occurred in between the Z height changes; that is, each change in Z height happened an hour apart, with my very sophisticated setup of binder clips holding things stable. The lights in the room probably went out at one point, but I encourage you to look at the image above and see if you can spot a difference.

    So the image is stable in the X/Y direction, meaning it shouldn't be a challenge to mix/match clear images, or even stack them. I haven't tried stacking in this setup (it's definitely more stable than my last one, where I had trouble), but the blur sorting qualitatively worked without a hitch. Problem:

    The stitching program hates it!!! 

    Grrr! Argh!!! It's got too many files, or I'm just not mighty enough. It totally knows that it's an image --- the preview looks great!

    It's not even that many --- I'm doing a relatively small area. It could easily get to 10,000 or more. Add to that a very limited command-line API, as well as the weird color effects...

    ...which is why, to belabor my point, I'm not leaving you hanging without a stitched image. That above is some loose burlap which, as you can see, is WHITE. The individual images are all WHITE.

    Microsoft ICE has served me well, but it's desperately time to find or build a good programmatic solution. I don't care what it's in: OpenCV, Fortran, those little candy ticker tape things. But someone. Heeeelp! 救命! F1!!!
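    (For anyone who wants to take a crack at it: OpenCV does ship a stitcher with a SCANS mode meant for flat, translated tiles. A sketch, not my pipeline --- and with thousands of tiles you'd need to stitch in batches to avoid running out of memory:)

```python
# Stitch scan tiles with OpenCV's scans-mode stitcher.
# The directory layout is a hypothetical example.
import glob
import cv2

images = [cv2.imread(p) for p in sorted(glob.glob("scan_tiles/*.jpg"))]
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)  # affine model suits flat scans
status, mosaic = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite("stitched.jpg", mosaic)
else:
    print("stitching failed, status", status)
```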

  • Fabrics! (part 1)

    Wayne · 01/22/2020 at 23:15 · 0 comments

    I saw someone on Facebook talking about scanning the textures of things like fabrics, for overlaying onto 3D models. So I found something nice and cute from our school's wearable electronics stockpile:

    The key innovation here being the use of binder clips to keep the fabric stretched out and as flat as possible. I've said it before and I'll say it again: autofocus post-processing is possible, but it's easiest to just make it all in focus to begin with.

    The focus on the scope is adjustable from almost infinity down to less than a millimeter. The closer you get, the higher the magnification. This does not strictly mean that the resolution increases; that is limited entirely by your illumination wavelength/method and your lens's numerical aperture. But Dino-Lite is relatively respected, and they prevent the dial from turning past the point where you're getting fake magnification.
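    As a ballpark for where fake magnification starts (back-of-envelope, assuming a numerical aperture of roughly 0.1 for scopes in this class): the resolution limit is about λ / (2 × NA), so with green-ish light at λ ≈ 550 nm that's 550 / (2 × 0.1) ≈ 2.8 microns. Features smaller than that stay blurry no matter how close the lens gets.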

    I did three scans: far away (~5 cm), medium (~1 cm), and close (less than a millimeter). The scan length and the difficulty of keeping things in focus are directly related to the magnification, so far away = super easy, medium = probably can keep it all in focus, close = you'll get bad patches unless you do multiple Z heights.

    For comparison, the lowest magnification took 50 pics to cover the whole fabric, the medium took 500, and the highest would take about 5,000 if you were only doing one Z height. I'm doing three, spaced about half a millimeter apart. There's also the danger that the fabric will drift for whatever reason (there are many) in between each change in Z height (the raster pattern happens first), leaving you unable to do any kind of point-by-point comparison for focus afterwards. But I think even in the worst case, most of the image will be roughly in focus at a single Z height.
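    The pattern behind those numbers is just a snake-path raster, repeated once per Z height. In sketch form, with the motion and camera calls stubbed out as hypothetical stand-ins:

```python
# Serpentine raster scan: snake along X, step in Y, repeat for each Z height.
# move_to() and capture() are hypothetical stand-ins for the real motion/camera code.
def raster_scan(cols, rows, x_step, y_step, z_heights, move_to, capture):
    for zi, z in enumerate(z_heights):
        for r in range(rows):
            xs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
            for c in xs:  # reverse every other row to minimize travel
                move_to(c * x_step, r * y_step, z)
                capture(f"tile_z{zi}_r{r:03d}_c{c:03d}.jpg")
```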

    Anyway, I'm not going to post the full images now (mostly because the highest res one is still running), but the first two look good. Here's one image of the high res, though, to show you what that fabric looks like up close:

    For comparison, here's a "medium" one, which is about the same resolution as my scan of the front and back of the 50:

  • How to properly level your cookie

    Wayne · 01/16/2020 at 00:20 · 0 comments

    Things I did today: Went to class. Received four 3D printers in the mail. Poked a circuit board with metal sticks. Scanned a cookie...

    To scan a cookie with autofocus or stacking you can do one of two things. You can either use a low enough magnification/high enough depth of field that everything is (roughly) in focus, 

    (just use a macro camera, it's much less trouble, sheesh)

    ...or you can make your intern's intern grind down the cookie until it's mostly flat.

    This is exactly as stupid as it sounds. Note that this isn't the same cookie as before --- that one fell on the ground. But hey, after a thousand pictures, you might get a cookie with some parts that are in focus! 

    ...or it could look like a horrifying pustule of a cookie that exemplifies all the bad parts of the scanning technology. Full image, so you can see just how bad it is: https://www.easyzoom.com/imageaccess/25d4f62f073e4887ace5b873225093b7

    Yuck.



Discussions

RandyKC wrote 01/24/2020 at 21:23 point

You might not want to post scanned images of money. The treasury department gets a little anal about that.


Wayne wrote 01/26/2020 at 01:12 point

I'm not breaking the law, and the scanning introduces so many distortions into the image that it's not funny. They'd be caught faster than right away!


Wayne wrote 01/26/2020 at 01:14 point

For instance, in addition to warping and internal mismatches, there is a marked color gradient from corner to corner visible in larger scans. The illumination variance between the shiny spots and the dull areas also isn't what you'd expect, since the microscope (and thus the light source) is moving.

