Team member Yujie is quarantined, too. This has left him weak and prone to suggestions, like "instead of doing your homework, can you implement this very specific form of focus stacking to prove a point about focus stacking?".
Say you have a lumpy object like this rock:
That certainly is lumpy. Even if you were to mix and match images at different Z heights, you would still have a problem --- there would be spots within each image that were out of focus. That method does alright on a gently sloping subject, but it is clearly not going to fly here. You need 15 images (1 mm apart) to cover the whole field.
I said last time that the ideal thing to do would be to physically point-scan each pixel and combine them in 2D. And again, that's basically what focus stacking is --- you align the images and just pick out the best pixels from each one. But after thinking about fields of view and such for way too long, what I wanted to see was what happens if you just trend towards this, by picking the best focus in chunks larger than a single pixel:
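The chunked matching can be sketched roughly like this: for each tile, score every slice in the Z stack with a sharpness metric and keep the sharpest tile. The Laplacian-variance metric and all the names here are my assumptions, not the actual code from the post:

```python
import numpy as np

def block_focus_stack(stack, block=20):
    """For each (block x block) tile, keep the tile from the sharpest slice.

    stack: array of shape (n_slices, H, W), grayscale images taken at
    different Z heights. Sharpness is scored as the variance of a simple
    4-neighbour Laplacian (an assumed metric; others work too).
    """
    n, H, W = stack.shape
    out = np.zeros((H, W), dtype=stack.dtype)
    for y in range(0, H, block):
        for x in range(0, W, block):
            tiles = stack[:, y:y + block, x:x + block]
            # 4-neighbour Laplacian per tile (np.roll wraps at tile edges,
            # which is fine for a sketch).
            lap = (-4 * tiles
                   + np.roll(tiles, 1, axis=1) + np.roll(tiles, -1, axis=1)
                   + np.roll(tiles, 1, axis=2) + np.roll(tiles, -1, axis=2))
            best = lap.var(axis=(1, 2)).argmax()
            out[y:y + block, x:x + block] = tiles[best]
    return out
```

Shrinking `block` trends towards the per-pixel ideal; growing it towards just picking the single sharpest frame.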
We start out with a single block --- the whole frame, which is just whichever image was best in focus overall --- and end with blocks 20 pixels on a side. The changes are dramatic and choppy at first, but then we get into a rhythm and things just get smoother and more properly contoured and MM! Wow!
There are some flaws with this. One is that the code sometimes loses pixels, which makes everything wibble, or do something even weirder at smaller block sizes, like so:
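My guess (and it is only a guess) at one source of the lost pixels: when the image size is not a multiple of the block size, the ragged edge tiles get dropped. One hedged fix is to reflect-pad the image up to a multiple of the block size and crop back afterwards:

```python
import numpy as np

def pad_to_block(img, block=20):
    """Reflect-pad an image so both spatial dimensions are multiples of
    `block`, so no edge tile gets dropped. Crop back to the original
    shape after stacking. (A sketch of an assumed fix, not the post's code.)
    """
    H, W = img.shape[:2]
    pad_h = (-H) % block  # pixels needed to reach the next multiple
    pad_w = (-W) % block
    pad_spec = ((0, pad_h), (0, pad_w)) + ((0, 0),) * (img.ndim - 2)
    return np.pad(img, pad_spec, mode="reflect")
```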
Another is that the field of view changes a bit toward the edges as the camera moves, even though we are not changing our focal point (something I did not account for, and which could be fixed by image alignment).
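A minimal sketch of that alignment, assuming the field-of-view change is a small per-slice magnification that you have already estimated somehow (stage geometry or registration, not shown): resample each slice about its center to undo the scale, using plain nearest-neighbour indexing.

```python
import numpy as np

def undo_scale(img, scale):
    """Resample `img` to undo a small magnification change about the center.

    `scale` is the assumed magnification of this slice relative to the
    reference slice; scale=1.0 returns the image unchanged. Nearest-neighbour
    resampling keeps this dependency-free; a real pipeline would interpolate.
    """
    H, W = img.shape
    cy, cx = (H - 1) / 2, (W - 1) / 2
    # Map each output coordinate back to its source coordinate.
    ys = np.clip(np.round((np.arange(H) - cy) * scale + cy).astype(int), 0, H - 1)
    xs = np.clip(np.round((np.arange(W) - cx) * scale + cx).astype(int), 0, W - 1)
    return img[np.ix_(ys, xs)]
```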
But fundamentally, it works exactly as it's supposed to. Reinventing the wheel, of course, but I think this insight into what stacking really is will end up being useful. The goal with stacking, by the way, beyond getting better images, is also to use the depth-from-focus information to make depth maps and 3D models.
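The depth-map half falls out almost for free: the index of the sharpest slice per block, times the Z step (1 mm here), is already a coarse depth estimate. A sketch under the same assumed Laplacian-variance metric:

```python
import numpy as np

def depth_map(stack, z_step_mm=1.0, block=20):
    """Coarse depth-from-focus map: for each (block x block) tile, the Z
    position of the sharpest slice. stack has shape (n_slices, H, W);
    the sharpness metric is an assumption, not the post's actual code.
    """
    n, H, W = stack.shape
    depth = np.zeros((H // block, W // block))
    for by in range(H // block):
        for bx in range(W // block):
            tiles = stack[:, by * block:(by + 1) * block,
                             bx * block:(bx + 1) * block]
            lap = (-4 * tiles
                   + np.roll(tiles, 1, axis=1) + np.roll(tiles, -1, axis=1)
                   + np.roll(tiles, 1, axis=2) + np.roll(tiles, -1, axis=2))
            # Best-focus slice index -> physical height.
            depth[by, bx] = lap.var(axis=(1, 2)).argmax() * z_step_mm
    return depth
```

From there a 3D model is just this height field plus the stacked image as a texture.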