Not everything works exactly as I'd like it to.
First, here's a bit of a bizarre bug I encountered while getting ready to scan my phone screen, which I last did using the original ladybug:
Whoops, I guess this turns out to be impossible. I wish I had some concrete explanation for why this occurred --- I can get a very fuzzy idea why, but it breaks down if I think about it too hard. Like, could this be used for the world's worst autoleveling feature, by putting a phone at each corner of your build plate?
Next, let's talk about the trade-off of having a larger field of view at the same magnification, with this very helpful drawing:
If you have a larger field of view at the same depth of focus, it's more likely that parts of the image will be out of focus. Confocal systems --- a DVD player's pickup is essentially one --- solve this by point scanning: each bit of information (each pixel, if you will) is obtained separately, with focus controlled by a servo feedback loop. Taking a picture of the whole disc at once and resolving all the information would require disgustingly perfect widefield optics and a perfectly flat field.
The same goes for us. It's not just that scanning this way gets around a limited field of view --- a limited field of view is essential for it to work in the first place. If all you're doing is mixing and matching pictures, then for the whole thing to be in focus at the end, every little picture has to be in focus too --- and the smaller each little picture's field of view is, the more likely that is to happen.
Except. What happens if you fake it? What if you take your wider field-of-view image at many Z heights and artificially split them up into sections? Then you can mix and match the ones that are in focus, as if you had a smaller field of view to begin with! And it would make sense to extrapolate all the way down to the individual pixel, right?
Well, congratulations, you've just invented image stacking. Let's try that next.
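As a minimal sketch of that tile-wise idea --- not how any particular stacking software does it --- here's one way it could look with NumPy. The variance-of-Laplacian sharpness metric and the fixed square tiles are my illustrative choices; real stacking tools use fancier metrics and blend tile seams instead of hard-pasting them.

```python
import numpy as np

def laplacian_var(tile):
    # Variance of a simple 4-neighbor Laplacian: a common, crude
    # proxy for "how in-focus is this patch" (sharp detail -> high variance).
    lap = (-4 * tile[1:-1, 1:-1]
           + tile[:-2, 1:-1] + tile[2:, 1:-1]
           + tile[1:-1, :-2] + tile[1:-1, 2:])
    return lap.var()

def stack_by_tiles(z_stack, tile=16):
    # z_stack: list of same-size 2D grayscale arrays, one per Z height.
    # For each tile position, keep the tile from whichever Z slice
    # looks sharpest there, and paste it into the output image.
    h, w = z_stack[0].shape
    out = np.zeros((h, w))
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            tiles = [img[y:y + tile, x:x + tile] for img in z_stack]
            best = max(range(len(tiles)),
                       key=lambda i: laplacian_var(tiles[i]))
            out[y:y + tile, x:x + tile] = tiles[best]
    return out
```

Shrinking `tile` all the way down to a single pixel is exactly the "extrapolate to the individual pixel" limit --- though at that point the sharpness metric needs a neighborhood around each pixel to say anything at all.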