
LadyBug BEEFY: 3D printer motorized microscope

Got printer? Connect to the motors with a Pi, plonk in a cheap scope, and do 2D and 3D scans!

An offshoot of the ladybug project, a "Use every part of the blu-ray player" effort for scanning microscopy, as written here: https://www.instructables.com/id/LadyBug-a-Motorized-Microscope-and-3D-Scanner-for-/ . This is about the opportunistic monsterification of a broken Flashforge Finder 3D printer, to do the same task with larger scan area and better mechanical makeup. LOOKING FOR DEVELOPERS, USERS, AND IDEAS FOR WHAT TO SCAN!

A couple of years ago, I broke my first 3D printer while replacing its fan. Up in smoke the mainboard went, never to turn on again --- until now. Life has been breathed back into it in the form of a Raspberry Pi 4, some EasyDrivers, some extra motors, and an unholy amount of jumper cables, hot glue, and some other stuff.

Main components:

1: Old Flashforge Finder --- the one with the matte surface finish, external power supply, and LED mounted in the front. The newer model has a smooth surface finish and an LED mounted on the extruder assembly, but the same mechanical guts.

2: Raspberry Pi 4 (1 GB RAM). I have also used a Raspberry Pi 3 with this and an older ladybug, but the 4 has a much higher framerate with USB cameras, which is nice. I use it like a desktop with a monitor/keyboard/mouse, but there's no reason you couldn't go headless.

3: USB microscope: By happy coincidence, the generic ones in the style of Dino-Lite (or the genuine article) fit neatly into the hole in the original extruder assembly.

4: Stepper drivers: I'm used to EasyDrivers, but standard A4988/DRV8825 drivers or whatever should work fine, especially for motors of this size.

5: Extra motors (optional). At the very least, you should make use of the extruder motor to add a fourth axis. But you can go really crazy and go for a fifth or more. 

6: Misc: An older 20 V laptop power supply, a wireless selfie module for taking pictures with a phone, a breadboard, wires, and a beeper.

Scanning is driven by a single Python file on the Pi, with most post-processing done on a main computer using commercial software as well as some custom utilities.
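
To give a flavor of what that file does, here is a minimal raster-scan sketch: one loop that steps an EasyDriver, lets vibrations settle, and grabs a frame at each grid position. The GPIO pin numbers, grid size, settle delay, and step counts below are placeholders, not the real script's values.

```python
import time

import cv2
import RPi.GPIO as GPIO

# Assumed EasyDriver wiring (BCM numbering); change to match your setup.
X_STEP, X_DIR = 17, 27
Y_STEP, Y_DIR = 22, 23

GPIO.setmode(GPIO.BCM)
GPIO.setup([X_STEP, X_DIR, Y_STEP, Y_DIR], GPIO.OUT)

def move(step_pin, dir_pin, steps, forward=True, delay=0.001):
    """Pulse an EasyDriver: one rising edge on STEP per motor step."""
    GPIO.output(dir_pin, GPIO.HIGH if forward else GPIO.LOW)
    for _ in range(steps):
        GPIO.output(step_pin, GPIO.HIGH)
        time.sleep(delay)
        GPIO.output(step_pin, GPIO.LOW)
        time.sleep(delay)

cam = cv2.VideoCapture(0)                    # the USB microscope
ROWS, COLS, STEPS_PER_FRAME = 10, 10, 200    # placeholder grid and step size

for row in range(ROWS):
    for col in range(COLS):
        time.sleep(0.3)                      # let vibrations die down before capturing
        ok, frame = cam.read()
        if ok:
            # Account for the serpentine direction when naming the tile.
            phys_col = col if row % 2 == 0 else COLS - 1 - col
            cv2.imwrite(f"scan_r{row:02d}_c{phys_col:02d}.png", frame)
        if col < COLS - 1:
            move(X_STEP, X_DIR, STEPS_PER_FRAME, forward=(row % 2 == 0))
    if row < ROWS - 1:
        move(Y_STEP, Y_DIR, STEPS_PER_FRAME)

cam.release()
GPIO.cleanup()
```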

Finder parts for USB microscope attachment (4).stl

These are the two tan pieces next to the USB microscope, necessary after removing the original plastic carriage. One presses into the belt to allow the axis to continue to move, the other juts out for pressing on the endstop.

Standard Tesselated Geometry - 81.92 kB - 01/07/2020 at 19:41


Finder backside attachment.stl

Provides a place to stick something to the back of the Finder. Filament used to go in here.

Standard Tesselated Geometry - 30.75 kB - 01/07/2020 at 19:40


  • it's not all perfect though

    Wayne · 7 hours ago · 0 comments

    Not everything works exactly as I'd like it to.

    First, here's a bit of a bizarre bug I encountered while getting ready to scan my phone screen, which I last did using the original ladybug:

    Whoops,  I guess this turns out to be impossible. I wish I had some concrete explanation for why this occurred --- I can get a very fuzzy idea why, but it breaks down if I think about it too hard. Like, could this be used for the world's worst autoleveling feature, by putting a phone at each corner of your build plate?

    Next let's talk about the trade-off of having a larger field of view at the same magnification, with this very helpful drawing:

    If you have a larger field of view at the same depth of focus, it is more likely for parts of the image to be out of focus. Confocal microscopes like DVD players solve this by doing point scanning --- each bit of information, or pixel if you will, is obtained separately, with focus controlled using a servo feedback loop. To take a picture of the whole disk all at once and resolve all the information would require disgustingly perfect widefield optics and a perfectly flat field.

    The same goes for us. It's not just that scanning this way can get around having a limited field of view, it's that having a limited field of view is essential for it to work in the first place. If all you're doing is mixing and matching pictures, then for the whole thing to be in focus at the end of it, all of the little pictures have to be in focus, too --- and the smaller each little picture's field of view is, the more likely that is to happen.

    Except. What happens if you fake it? What if you take your wider field-of-view image at many Z heights and artificially split them up into sections? Then you can mix and match the ones that are in focus, as if you had a smaller field of view to begin with! And it would make sense to extrapolate all the way down to the individual pixel, right?

    Well, congratulations, you've just invented image stacking. Let's try that next.  
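
    As a rough illustration of that idea (not the exact pipeline used here), a tile-wise stack is only a few lines of OpenCV: take already-aligned images of the same field at several Z heights, score each tile by sharpness, and keep the sharpest. The file names and tile size are placeholders.

    ```python
    import cv2
    import numpy as np

    def stack_focus(paths, tile=64):
        """Combine same-scene images taken at different Z heights, tile by tile."""
        imgs = [cv2.imread(p) for p in paths]
        h, w = imgs[0].shape[:2]
        out = np.zeros_like(imgs[0])
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                best, best_score = None, -1.0
                for img in imgs:
                    patch = img[y:y + tile, x:x + tile]
                    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
                    score = cv2.Laplacian(gray, cv2.CV_64F).var()  # sharpness proxy
                    if score > best_score:
                        best, best_score = patch, score
                out[y:y + tile, x:x + tile] = best
        return out

    # cv2.imwrite("stacked.png", stack_focus(["z0.png", "z1.png", "z2.png"]))
    ```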

  • Everything is different now

    Wayne · 03/25/2020 at 01:38 · 1 comment

    Imagine you're me: a broke grad student with dreams of turning his scanning microscopy hobby into something more. You scrimped and saved until you could afford old Blu-ray players and a 12-dollar USB microscope, then sold plasma until you could upgrade to a broken 3D printer. You produced images that could always be described as "surprisingly good for being made of trash".

    So then imagine you were offered the newest version of the only name-brand USB microscope for free. How many microseconds would it take you to respond and go "yesyesyesomgplease"?

    Of course the first thing you do is try to smoosh it into the hole in your current broken printer. You use a file (!) to cut away a few choice bits of material because the microscope has some pokey bits that get in the way. Then it turns out the plastic on the old one was actually helping it squeeze in and stay put, and aluminum just doesn't want to bend like that. So... straps?

    I like stopgaps as much as the next guy, but this was starting to get ridiculous. Between a ton of dumb problems getting the scope working on the Pi and, ahem, the coronavirus, a system I could work with from home (like on a laptop?) started sounding attractive.

    So that's why everything is different now, because my 3D printer is now one that didn't start off with an exploded mainboard. One of these guys basically.  

    [image: Tronxy X1 3D printer]

    Pros: a bigger build plate while being less huge in general, a solid metal frame, and easy mounting points on the build plate. Con: the build plate moves in the X axis, rather than just Z or not at all. This means you can't just plonk objects on without fixing them down.

    Anyway, I have G-code control now and run it from a Windows machine through pyserial! Getting it connected at all turned out to be the easiest part. Most of working with G-code is just string handling, and then it is amazing to be able to, for instance, home everything with a single line. Let the 3D printing nerds handle all the backend, like proper acceleration profiles --- I cannot believe I suffered this long listening to my system go kachunk kathunk kathunk for hours at a time. Combine that with a much faster frame rate from a proper computer and OpenCV, and scans are at least three times faster now. Actually, twelve times, because the camera also has four times the areal field of view without loss of pixel density! It really is fun to watch it go now.
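
    For a sense of how little code that side takes, here is a bare-bones sketch of the pyserial plumbing. The port name, baud rate, and coordinates are placeholders for whatever your printer uses, and the G-codes are just the standard Marlin-style ones (G28 to home, G1 to move, M400 to wait for moves to finish).

    ```python
    import time

    import serial

    ser = serial.Serial("COM3", 115200, timeout=30)  # assumed Windows port name and baud
    time.sleep(2)                                    # most boards reset when the port opens

    def send(line):
        """Send one G-code line and wait for the firmware's 'ok'."""
        ser.write((line + "\n").encode())
        while True:
            reply = ser.readline().decode(errors="ignore").strip()
            if reply.startswith("ok") or not reply:  # 'ok', or give up on timeout
                return

    send("G28")               # home everything with a single line
    send("G1 X10 Y10 F1500")  # move to the start of the scan area
    send("M400")              # block until the move actually finishes
    ```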

    I also implemented a brute-force autofocus, which is basically what I was doing after the scan before, but which you can now just do in real time.
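
    The brute-force version is just "visit a range of Z heights, score each frame, keep the sharpest." Something along these lines, where the Z range, step size, and feed rate are made-up values, send is the G-code helper from the sketch above, and cam is an open cv2.VideoCapture:

    ```python
    import cv2

    def autofocus(send, cam, z_min=2.0, z_max=4.0, step=0.1):
        """Brute-force autofocus: sweep Z, score sharpness, return the best height."""
        best_z, best_score = z_min, -1.0
        n_steps = int(round((z_max - z_min) / step))
        for i in range(n_steps + 1):
            z = z_min + i * step
            send(f"G1 Z{z:.2f} F300")
            send("M400")                                   # wait for the move to finish
            ok, frame = cam.read()
            if not ok:
                continue
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            score = cv2.Laplacian(gray, cv2.CV_64F).var()  # higher = sharper
            if score > best_score:
                best_z, best_score = z, score
        send(f"G1 Z{best_z:.2f} F300")                     # park at the sharpest height
        return best_z
    ```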

    And with all that I'd like to present the new system's first image, of a Canadian Toonie. 

    Here's the EasyZoom link if you'd like to peruse all 400 megapixels: 1096 images of 1.2 megapixels each, with 50% overlap.

    The scope has a polarized light knob which makes a HUGE difference for reflective surfaces like this. But beyond that, I was astonished at how normal the image looked. With just about any scan I've taken, I can find some flaw that gives it away --- a speckled pattern if you squint, mismatches and odd warping, a marked color gradient. This has almost none of that. 

    If you zoom in you'll spot some stuff --- mostly around the coin edge, as a consequence of autofocus. But it turns out that a lot of what I complained about with Microsoft ICE was not really its fault --- it worked so hard, the poor thing! In the end, if you give it better images to begin with, it will reciprocate.

  • Inspection of Osseodensification drillbit

    Wayne · 03/06/2020 at 01:08 · 0 comments

    Yesterday a professor beckoned me over and asked me if I wanted to drill into a pig's bones and give it a dental implant. Who wouldn't? Fun! I got to learn all about proper drilling technique, including that being too slow/timid can lead to literally burning the bone and causing lots of smoke.

    This led to a discussion on the drill bit being used for the actual procedure. It turns out that they were invented by a dentist in my home state of Michigan, who discovered, rather serendipitously, that running a drillbit backwards in a pilot hole widens the bone by compacting it, rather than simply carving the material away. This leads to preservation of healthy bone and a better implant.

    Cylindrical objects like drillbits are the perfect target for 3D imaging, so I figured I'd give it a try. It was only when I was in bed last night, having done one such scan, that I realized: hey! I'm a biomedical engineering student, and this is the first honest-to-goodness BME application I've used my scanner for!

    So today I elected to do a relatively good job and put it all together in a video. Here it is after 10-12 hours! Enjoy? 

  • HEYAYAYAEEYAEYAY

    Wayne · 02/29/2020 at 23:39 · 0 comments

  • 100 followers special: Laser engraved wood kittens

    Wayne · 02/28/2020 at 00:08 · 0 comments

    I made this for my mom a couple of years ago when she was sick. They're a few of her favorite things: kittens most of the time, morphine just while in the hospital.

    So I stuck it on the Finder bed with some brown paper underneath and set it to take 350 pics with about 25 percent overlap (final 50 megapixels).

    There are those obvious scan lines (from insufficient overlap over reflective areas), and I didn't show you the cropped part, but overall this one turned out awesome! I especially like looking at the areas where the grain of the wood is exposed:


    You can clearly see both the rings and the individual raster dots from the laser. I might try doing a high res scan on this kind of feature next time. 

    And now it's next time, so there it is:

    484 pics at 40% overlap, picked from 2 different Z heights (900-something total). Here is where you can clearly see the color artifacts from the stitcher, especially compared to the medium-res image before it. As usual, all the detail is still there; ICE is just weird about it.

    And for the lazy who don't want to click, here's another zoom of the center of that image above:

    Even that image displayed on most screens is a bit zoomable. I really want to go one level deeper, but I think I'd need some new equipment. 

  • Scan of a 3D print by 3D printer using 3D printed parts

    Wayne · 02/25/2020 at 03:23 · 0 comments

    (YOU CAN CLICK ON THESE, BY THE WAY!) 

    About a hundred pics with 25 percent overlap. Color rendition is alright, stitching artifacts are minor, detail is excellent.

    And there's a trick for getting fewer exposure problems! Rather than using the black plate background and getting that autoadjustment issue around corners, I happened to have a printed sheet of the same color filament I could just slip underneath. It actually makes a pretty big difference: 


    Of course I also took a max focus image of a section, though I got it a bit wrong --- I wanted just a gear. 

    Also about a hundred pics with 25 percent overlap.

    And since it's relevant, here's a scan by the original ladybug, of its own stage. 

    It's interesting to compare the squished, oily look of the orange plastic, which was the first layer printed onto BuildTak, and the gold top layer of PLA, which gets duller and crumplier as you zoom in. Something something heat and expansion and a place to go!

  • a fossil

    Wayne · 02/22/2020 at 22:55 · 0 comments

    I could probably skip the whole scanning process if I could take better regular pictures than this:

    "But the results are worth it!"

    I'd be wary of using it for quantitative paleontology, but that looks pretty decent! I feel like scanning this way shortcuts a whole lot of stuff about photography and lighting that I never really figured out. You could hand me an expensive DSLR or whatever and I really couldn't get a better image than this, megapixels be damned.

    Fun fact, I used 2 Z heights and just flipped back and forth picking the good ones by hand. 

  • another rock

    Wayne · 02/21/2020 at 20:04 · 0 comments

    Emboldened by the success of the last rock, and driven by the potential for geological applications, of which there are many, I went and gathered some rocks from my house. Some of these are just random things I picked up, while others I borrowed from the resident ex-geologist. Being flat was the most important thing.

    I also spent time calibrating my system, instituting crude-but-better speed controls (for instance, not blasting the motors as fast as possible and causing vibrations when only moving a few steps). I also determined how many pixels of displacement per step there were at different focus/magnification configurations, which lets me calculate the percent overlap in the X and Y dimensions for each picture. I've been going with round step numbers up until now, but it's better to use round displacements instead.
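
    That calibration boils down to simple arithmetic, so here is a back-of-envelope sketch of it. The numbers in the example call are illustrative, not measured values.

    ```python
    import math

    def scan_plan(frame_px, px_per_step, overlap, scan_px):
        """Steps per move and picture count for one axis, from a pixel-based overlap target."""
        advance_px = frame_px * (1.0 - overlap)              # fresh pixels per move
        steps_per_move = max(1, round(advance_px / px_per_step))
        n_frames = 1 + max(0, math.ceil((scan_px - frame_px) / (steps_per_move * px_per_step)))
        return steps_per_move, n_frames

    # e.g. 1280 px wide frames, 1.5 px/step, 25% overlap, a 10,000 px wide area:
    print(scan_plan(1280, 1.5, 0.25, 10_000))                # -> (640, 11)
    ```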

    Here's a rock:

    And this is it scanned at "low-medium" resolution, or 1 pixel per step displacement, with just 10 percent overlap between the X and Y dimensions each time (just 9 pictures): 

    See that obvious squarish darker area? That's a downside of having very little overlap each time. An image halfway over the edge (and thus partially viewing the black build plate) will tend to have a higher exposure, and the rock at the edges will appear brighter than it actually is. Using more overlap helps the stitcher blend the colors into something more natural:

    This is 50% overlap (25 pictures), and the dark spot is still there but less obvious. The obvious downside of increasing the overlap is the scan and stitch time, but another is that it can result in more feature blurring. That's because the stitcher is not perfect, and every time two overlapping images are combined, some information gets jumbled. It's not very obvious in this case, but if local shape veracity is more important than color, you should aim for as little overlap as possible.

    25% overlap seems like a good compromise. Here's that with a closer viewpoint, so that it's 1.5 pixels/step:

    They're clearly the same rock, but with some differences, mostly in color. Overall, this picture appears more brightly colored, possibly because the light source is closer. There is also a bit less contrast between the dark and light veins. And there are areas of dramatic color differences, most notably in corners and bumps, like at the top --- I wonder if it has anything to do with the autoexposure being triggered by the red stripe?

    And then we've got the really exciting one: high resolution at 7.5 pixels/step, with 725 pictures selected from 3 Z heights (compressed from 100 megabytes down to 5, of course). The rock actually moved a bit between Z heights (vibrations), but it all got figured out mostly alright:


    That's 4 hours of my time right there for you.

    And just as a reminder that the above image is composed of ones like this:   

  • a rock

    Wayne · 02/21/2020 at 00:15 · 0 comments

    this is a rock

    here is a compressed scanned image of that rock (250 images picked from 1250 at 5 Z heights cuz rocks are bumpy)

  • Fabrics (pt 3)

    Wayne · 02/13/2020 at 22:33 · 0 comments

    I already scanned these things so I might as well share them, right?

    First up we've got a piece of fuzzy blanket:

    Fun fact, this blanket kept me warm in the labs which we keep at 20 Kelvin, until one day a student used it in a Wearable Technology project. Bastard. 

    There's the high res version (2k pics over just the center) trying to be stitched:

    And there's the stitched version, reduced from a 70 megabyte jpeg to a 4.9 megabyte jpeg to meet Hackaday's 5 megabyte file limit. Honestly, it's not too terribly compressed except when you look really close, which I guess is the whole point of scanning things like this.

    And then we've got some stretchy gold sequin fabric. This one was interesting to scan because it reflects so much light that it just looks white; then you zoom in real real close and it's gold again.

    (fun with Linux Cheese's kaleidoscope function)

    This is definitely one of those cases where the moving light source makes things wonky, or possibly opens up the opportunity to do something artistic.

    And closest up version, likewise reduced down to 5 megs:

    I like this one because you can see that there's actually a lot of empty space between the sequins where the black fabric is visible. I would like to revisit a material like this sometime to see how you might get accurate (to a human) color rendition so close up.

    If you thought this post was neat, please consider sharing it with someone else you think might be interested, too.


Discussions

hellresistor wrote 03/09/2020 at 20:05

Scan and print an electronic board ;)


Wayne wrote 03/11/2020 at 21:13

Circuit board? I've done that, but haven't shared because the warping looks silly >_<


RandyKC wrote 01/24/2020 at 21:23

You might not want to post scanned images of money. The Treasury Department gets a little anal about that.


Wayne wrote 01/26/2020 at 01:12

I'm not breaking the law, and the scanning introduces so many distortions into the image that it's not funny. They'd be caught faster than right away!


Wayne wrote 01/26/2020 at 01:14

For instance, in addition to warping and internal mismatches, there is a marked color gradient visible from corner to corner in larger scans. The illumination variance between shiny spots and dull ones also isn't what you'd expect, since the microscope (and the light source) is moving.

