How does the line detection work

A project log for Serial Sensor

Free app for using your phone's camera as a sensor and building some fun stuff. Additionally, built-in sensors are supported.

SerialSensor • 02/22/2021 at 18:14

So how does the line detection work? If you already have some experience with image processing, the following might not be too interesting for you.

So internally the line sensor generates an output that looks like this:

The red crosses are potential line points, while the green circled ones are assumed to belong to the line we are interested in. As you can see, there is a nice shadow which also creates potential line detections.

But how do you get there?

Using the Sobel operator you can calculate the gradient image, which has a high response wherever there is a color change in the image. For the scene above, the gradient image looks like this:
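
To make this a bit more concrete, here is a minimal sketch of the gradient step in Python using OpenCV. The app's actual implementation is not published here, so the library choice, blur and kernel size are assumptions:

```python
import cv2
import numpy as np

def gradient_image(frame_bgr):
    """Return the horizontal gradient magnitude of a camera frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)              # suppress noise before differentiating
    grad_x = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)   # strong response at vertical edges
    return np.abs(grad_x)
```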

So you can see the line, but the shadow also creates a high response. Potential line points are calculated by assuming that our line lies between these high responses, which explains why the "shadow points" end up between the line edge and the actual shadow edge.
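
A rough sketch of how such candidates could be extracted for a single image row, by pairing neighbouring gradient peaks; the threshold and peak handling are simplified assumptions, not the app's actual code:

```python
import numpy as np

def candidates_in_row(grad_row, threshold):
    """Return column indices of potential line points for one image row."""
    peaks = []
    for x in range(1, len(grad_row) - 1):
        # local maximum above the threshold -> an edge (line border or shadow border)
        if grad_row[x] >= threshold and grad_row[x] >= grad_row[x - 1] and grad_row[x] > grad_row[x + 1]:
            peaks.append(x)
    # every pair of adjacent edges encloses a potential line (or shadow) point
    return [(left + right) // 2 for left, right in zip(peaks, peaks[1:])]
```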

Finally, to extract the line, we walk along the valley. We start at some point at the bottom and check whether we find a good detection in the next row that lies within our valley. The image below illustrates this: the left and right points cannot belong to the currently evaluated point, since there is a "hill" (or maximum) in between.
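
Below is a sketch of that valley walk, reusing the candidates_in_row helper from the previous sketch. The hill check, the threshold and the max_jump limit are illustrative assumptions:

```python
import numpy as np

def same_valley(grad_row, x_current, x_candidate, hill_threshold):
    """True if there is no gradient 'hill' between the two columns."""
    lo, hi = sorted((x_current, x_candidate))
    return grad_row[lo:hi + 1].max() < hill_threshold

def walk_line(grad, seed_x, threshold, max_jump=20):
    """Follow the line upwards from a seed point in the bottom row."""
    height = grad.shape[0]
    line, x = [], seed_x
    for y in range(height - 1, -1, -1):                   # bottom row -> top row
        row = grad[y]
        best = None
        for cand in candidates_in_row(row, threshold):
            if abs(cand - x) <= max_jump and same_valley(row, x, cand, threshold):
                if best is None or abs(cand - x) < abs(best - x):
                    best = cand
        if best is None:
            break                                         # line lost, stop here
        x = best
        line.append((best, y))
    return line
```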

So what's the sensor output now?

You'll get the lateral position of the first and the last detected line point, as well as the distance between these two points in meters (check the sensor details inside the app for more information). The provided distance between these two points can then be used to go faster or slower depending on how much line the camera can currently detect.
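
As a hypothetical usage example (the function and parameter names here are made up; the real output fields are listed in the sensor details inside the app), the reported line length could be mapped to a drive speed like this:

```python
def speed_from_line_length(detected_length_m, max_speed=1.0, full_view_length_m=0.5):
    """Drive faster when a long stretch of line is visible, slower otherwise."""
    ratio = min(detected_length_m / full_view_length_m, 1.0)
    return max_speed * ratio
```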

How do you go from the image to meters?

The line points are transformed into a 3D frame of reference (again, see the sensor details inside the app). This is done using the intrinsic camera parameters and the height of your phone above the ground. The intrinsic camera parameters are known, since you perform a camera calibration before you can use the line sensor.

For this to work, we assume that the line and your phone are on an even surface.
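
A minimal sketch of such a flat-ground projection, assuming a pinhole camera model with intrinsic matrix K, the phone's orientation R and the camera height h; these names are placeholders, not the app's internals:

```python
import numpy as np

def pixel_to_ground(u, v, K, R, camera_height_m):
    """Project pixel (u, v) onto the ground plane z = 0 (camera at z = camera_height_m)."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])    # viewing ray in the camera frame
    ray_world = R @ ray_cam                               # rotate into the world frame
    if ray_world[2] >= 0:
        raise ValueError("ray does not hit the ground")   # points at or above the horizon
    scale = camera_height_m / -ray_world[2]               # stretch the ray until it reaches z = 0
    point = scale * ray_world
    return point[0], point[1]                             # metric x / y on the ground
```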

What's not working so far?

With this approach you cannot detect horizontal lines.

Computation times?

Galaxy S3 (with LineageOS... yeah): ~30 ms on average

LG V30: ~9 ms on average