
Intro to reflective marker tracking

A project log for Less than $100, high FOV, mobile AR glasses

Augmented Reality glasses using modified low-cost Android phones

kvtoet • 02/09/2019 at 16:31 • 0 Comments

A short explanation of the positional tracker that I've made and that these glasses use.

The technique uses reflective markers together with a low-exposure camera setting.

Basic steps are these:

- Calibrate the phone once by shining the LED (which sits next to the camera) on a piece of retro-reflective material held close to the lens. With auto-exposure enabled, the bright reflection drives the camera to a low exposure setting. Then lock the exposure.

Some phones allow the exposure to be controlled directly, without relying on auto-exposure, but this only comes standard on later, higher-end models.

This calibration needs to be redone after every reset, or whenever the camera has been released from the exposure lock.
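As a minimal sketch, assuming the legacy android.hardware.Camera API (typical for the low-cost phones used here; the function name and the fixed delay are illustrative):

```kotlin
import android.hardware.Camera

// One-time exposure calibration (sketch). Shine the LED on retro-reflective
// material held close to the lens before calling this.
fun calibrateAndLockExposure(camera: Camera) {
    var params = camera.parameters

    // Keep the LED lit continuously so auto-exposure sees the bright reflection.
    params.flashMode = Camera.Parameters.FLASH_MODE_TORCH
    camera.parameters = params

    // Give auto-exposure a moment to converge on a low exposure.
    // (A real implementation would wait for a few preview frames instead.)
    Thread.sleep(1500)

    // Freeze the resulting low-exposure setting for all subsequent frames.
    params = camera.parameters
    if (params.isAutoExposureLockSupported) {
        params.autoExposureLock = true
        camera.parameters = params
    }
}
```

On phones with the newer Camera2 API, the exposure can instead be locked directly via CaptureRequest.CONTROL_AE_LOCK, or set manually where the hardware supports it.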

- The camera feed is picked up in an OpenGL context through a camera texture (a GLES11 extension available on all phones). A shader is run over the camera input, comparing the colors in the camera texture with the color of the flash LED (or the marker color); all other colors are filtered out. What remains are a few blobs.
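A minimal sketch of such a filter shader, assuming GLSL ES with the external camera texture extension (the uniform names and the plain RGB distance threshold are illustrative; the real shader may compare colors differently):

```kotlin
// Fragment shader for the color-filter pass (sketch). Writes white where a
// camera pixel is close to the LED/marker color, black everywhere else.
const val FILTER_FRAGMENT_SHADER = """
    #extension GL_OES_EGL_image_external : require
    precision mediump float;

    uniform samplerExternalOES uCameraTex;  // camera preview texture
    uniform vec3 uMarkerColor;              // LED / marker color to keep
    uniform float uThreshold;               // color distance cutoff
    varying vec2 vTexCoord;

    void main() {
        vec3 c = texture2D(uCameraTex, vTexCoord).rgb;
        // 1.0 where the color is within uThreshold of the marker color.
        float keep = 1.0 - step(uThreshold, distance(c, uMarkerColor));
        gl_FragColor = vec4(vec3(keep), 1.0);
    }
"""
```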

- The remaining image is a black-and-white binary image, which another shader bitpacks into a much smaller image. Finally, this small image is downloaded to the CPU with glReadPixels.
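A sketch of the packing pass, under the assumption that each output RGBA pixel packs 32 horizontal source pixels, 8 bits per channel, least-significant bit first (the names and layout are illustrative):

```kotlin
import android.opengl.GLES20
import java.nio.ByteBuffer
import java.nio.ByteOrder

// Bitpacking shader (sketch): rendered into an FBO 1/32 of the source width,
// so the readback below transfers 32x less data.
const val PACK_FRAGMENT_SHADER = """
    precision mediump float;

    uniform sampler2D uBinaryTex;  // black/white result of the filter pass
    uniform float uSrcWidth;       // source texture width in pixels
    varying vec2 vTexCoord;

    // Pack 8 consecutive horizontal source pixels into one byte (as 0..1).
    float packByte(float firstX) {
        float value = 0.0;
        for (int b = 0; b < 8; b++) {
            float x = (firstX + float(b) + 0.5) / uSrcWidth;
            float bit = step(0.5, texture2D(uBinaryTex, vec2(x, vTexCoord.y)).r);
            value += bit * exp2(float(b));
        }
        return value / 255.0;
    }

    void main() {
        // Each output fragment covers 32 source pixels: 8 per channel.
        float srcX = floor(gl_FragCoord.x) * 32.0;
        gl_FragColor = vec4(packByte(srcX), packByte(srcX + 8.0),
                            packByte(srcX + 16.0), packByte(srcX + 24.0));
    }
"""

// Download the packed image to the CPU after rendering the pack pass.
fun readPackedImage(packedWidth: Int, height: Int): ByteBuffer {
    val buffer = ByteBuffer.allocateDirect(packedWidth * height * 4)
        .order(ByteOrder.nativeOrder())
    GLES20.glReadPixels(0, 0, packedWidth, height,
                        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buffer)
    return buffer
}
```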

- Some of these blobs might be static parts of the scenery, such as a lamp or another light source. The tracker remembers all blobs, then briefly toggles the LED and identifies which blobs disappeared and which stayed present. There are additional filters, but this is it in a nutshell.

- The blobs that disappeared are reflections of the LED, and thus have a high likelihood of being retro-reflective markers.
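A sketch of that differencing step, with an illustrative Blob type and matching rule (the real tracker's extra filters are omitted):

```kotlin
import kotlin.math.hypot

// Illustrative blob type: center position and radius in pixels.
data class Blob(val cx: Float, val cy: Float, val radius: Float)

// Two detections count as the same blob if their centers are close enough.
fun isSameBlob(a: Blob, b: Blob, tolerancePx: Float = 4f): Boolean =
    hypot(a.cx - b.cx, a.cy - b.cy) < tolerancePx

// Keep only the blobs that vanish when the LED is off: blobs still visible
// without the LED are light sources in the scene (lamps, windows, screens).
fun reflectiveBlobs(ledOn: List<Blob>, ledOff: List<Blob>): List<Blob> =
    ledOn.filter { on -> ledOff.none { off -> isSameBlob(on, off) } }
```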

- The distance is estimated from the radius of the marker circle, in a nutshell.
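The underlying relation is the usual pinhole model: apparent radius scales inversely with distance. A sketch, where the field of view and the physical marker radius are values you'd measure for your own camera and markers:

```kotlin
import kotlin.math.tan

// Focal length in pixels, derived from the camera's horizontal field of view.
fun focalLengthPx(imageWidthPx: Float, hFovRadians: Float): Float =
    imageWidthPx / (2f * tan(hFovRadians / 2f))

// Pinhole estimate: distance is inversely proportional to the blob's radius.
fun markerDistanceMm(blobRadiusPx: Float, focalPx: Float, markerRadiusMm: Float): Float =
    focalPx * markerRadiusMm / blobRadiusPx
```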

- Lastly, a high-resolution photo is taken so the QR code printed on the marker can be read and the marker's orientation determined. This step isn't fully integrated yet (the QR code reader is implemented, and the POSIT algorithm for the orientation has been written and tested, but it hasn't all been brought together into a working package). Right now, the object orientation is taken from the gyroscope and accelerometer.

Some clear upsides of this type of tracking are:

- Very minimal impact on performance. The biggest performance hit is a single pass over the binary camera image on the CPU, to do blob finding (see the sketch after this list).

- The process doesn't get much heavier with more blobs; it scales well.

- Latency is low as the algorithm finishes quickly.

- Works exceptionally well in low-light conditions, as reflective markers show up brightly while the environment stays dim. Bright environments still need some extra work, but are achievable (this also depends on the brightness of the LED).

- No camera calibration required.

- Works well with low resolution cameras.

- Long-range tracking. Even longer range can be achieved by increasing the marker size. Much better than tracking QR codes alone.

- Solid tracking; it withstands camera shake (motion smear has little impact) and re-finds the marker in <200 ms from a dead drop.

- The use of markers works well for multiplayer AR, something SLAM has difficulty with.
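For the curious, here is an illustrative single-pass blob finder over the bitpacked binary image, reusing the Blob type from the differencing sketch above. Set pixels join the first blob whose slightly grown bounding box they touch, which is adequate for a handful of well-separated markers:

```kotlin
// Accumulates bounding box and centroid sums for one blob.
class BlobAcc(var minX: Int, var maxX: Int, var minY: Int, var maxY: Int,
              var sumX: Long = 0, var sumY: Long = 0, var count: Int = 0)

// Single pass over a bitpacked binary image (1 bit per pixel, 8 pixels per
// byte, least-significant bit first, matching the pack shader sketch).
fun findBlobs(bits: ByteArray, width: Int, height: Int): List<Blob> {
    val bytesPerRow = width / 8
    val accs = mutableListOf<BlobAcc>()
    for (y in 0 until height) {
        for (xByte in 0 until bytesPerRow) {
            val b = bits[y * bytesPerRow + xByte].toInt()
            if (b == 0) continue  // skip 8 empty pixels at once
            for (bit in 0 until 8) {
                if (b and (1 shl bit) == 0) continue
                val x = xByte * 8 + bit
                // Join the first nearby blob, or start a new one.
                val acc = accs.firstOrNull {
                    x >= it.minX - 2 && x <= it.maxX + 2 &&
                    y >= it.minY - 2 && y <= it.maxY + 2
                } ?: BlobAcc(x, x, y, y).also { accs.add(it) }
                acc.minX = minOf(acc.minX, x); acc.maxX = maxOf(acc.maxX, x)
                acc.minY = minOf(acc.minY, y); acc.maxY = maxOf(acc.maxY, y)
                acc.sumX += x; acc.sumY += y; acc.count++
            }
        }
    }
    return accs.map {
        Blob(it.sumX.toFloat() / it.count, it.sumY.toFloat() / it.count,
             (it.maxX - it.minX + it.maxY - it.minY) / 4f)  // mean half-extent
    }
}
```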
