Now that the LEDs are blinking, it's time to start working on the recognition software (ongoing experiments on GitHub). The plan is to use OpenCV, and more specifically OpenCV's SimpleBlobDetector, to recognize the LEDs.
Feature detection with SimpleBlobDetector is fairly easy:
Mat cooked = ...;

SimpleBlobDetector::Params params;
params.minDistBetweenBlobs = minDistance;
params.filterByInertia = false;
params.filterByConvexity = false;
params.filterByColor = false;
params.filterByCircularity = false;
params.filterByArea = true;
params.minArea = minArea;
params.maxArea = maxArea;

Ptr<SimpleBlobDetector> detector = SimpleBlobDetector::create(params);
vector<KeyPoint> features;
detector->detect(cooked, features);
The idea was to filter the blue (or red) parts of the image and then run the blob detector to find the blue (or red) spots. But not everything is as simple as it seems: running the blob detector with its default parameters found zero LEDs. Luckily, OpenCV makes it easy to create some sliders in a UI, which let me tweak the parameters until a reasonable match was found, one that looked like the image at the top of this log entry.
The most important parameters appear to be the minimum and maximum area and the minimum distance between blobs, but I suspect these are very much dependent on circumstances like the distance to the LEDs, the brightness settings of both the video and the LEDs, etc.
So at this point, there are some things to consider:
- Would it be possible to automatically find the ideal parameters, for instance by running a generic optimization algorithm? I have some previous experience implementing Nelder-Mead as an optimizer, so it shouldn't be too difficult to try this.
- The input that I'm giving SimpleBlobDetector may not be optimal. The results above were obtained by feeding the blob detector an image consisting of the blue channel of the input image, already pre-thresholded. But determining a threshold is exactly what SimpleBlobDetector is supposed to do by itself.
- Another improvement that I could try is to feed the blob detector the difference between a reference image with the LEDs unlit and the image with the LEDs lit. This may run into movement artifacts, but it's worth a try.
- The LEDs are oversaturated in the image. It seems like a good idea to dim the brightness of the LEDs a little, maybe by some constant value, maybe by dynamically adapting the brightness if feature registration fails.
Because over-engineering is fun, my next step is going to be to try Nelder-Mead on the blob detector parameters...
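To make the Nelder-Mead idea concrete, here is a stripped-down two-parameter version (reflection and shrink steps only, no expansion or contraction). In the real setup the objective would score a detection run, e.g. the difference between the number of LEDs found and the number expected; here it's a placeholder quadratic so the sketch is self-contained:

```cpp
#include <algorithm>
#include <array>
#include <functional>

// Bare-bones 2-parameter Nelder-Mead sketch, the kind of thing I'd point at
// two blob detector parameters (say, minArea and minDistBetweenBlobs).
using Point2 = std::array<double, 2>;

Point2 nelderMead(const std::function<double(const Point2&)>& f,
                  Point2 start, double step = 1.0, int iterations = 200)
{
    // Initial simplex: the start point plus one offset vertex per dimension.
    std::array<Point2, 3> s = {start,
                               Point2{start[0] + step, start[1]},
                               Point2{start[0], start[1] + step}};
    for (int it = 0; it < iterations; ++it) {
        // Order vertices best-to-worst by objective value.
        std::sort(s.begin(), s.end(),
                  [&](const Point2& a, const Point2& b) { return f(a) < f(b); });
        // Centroid of the two best vertices.
        Point2 c = {(s[0][0] + s[1][0]) / 2, (s[0][1] + s[1][1]) / 2};
        // Reflect the worst vertex through the centroid.
        Point2 r = {c[0] + (c[0] - s[2][0]), c[1] + (c[1] - s[2][1])};
        if (f(r) < f(s[2])) {
            s[2] = r;                       // reflection improved things: keep it
        } else {
            // Otherwise shrink the simplex towards the best vertex.
            for (int i = 1; i < 3; ++i) {
                s[i] = {(s[i][0] + s[0][0]) / 2, (s[i][1] + s[0][1]) / 2};
            }
        }
    }
    return s[0];                            // best vertex found
}
```

For the blob detector, each objective evaluation would mean running `detect()` with the candidate parameters and scoring the result, which is cheap enough for a few hundred iterations.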