The principle behind it is a pair of laser lines ("fans"), separated by a known angle and rotating at a constant rate, which sweep across a pair of sensors mounted a known distance apart along the sweep direction.
For a laser pair (L1 and L2) sweeping downwards about a horizontal axis, the time between laser L1 hitting the top sensor and the bottom sensor depends on how far the sensor pair is from the centre of rotation: the rotation rate is constant, but the angle subtended by the sensors shrinks as distance increases.
The time between laser fans L1 and L2 hitting the same sensor is always the same, regardless of distance, because the angle between the fans and the rotation rate are effectively constant within the timeframe of a sweep. Taking the ratio of these two times removes the rotational speed from the equation and gives a relative distance between the sensor pair and the lasers. Knowing the exact vertical separation between the sensors, and the exact angle between the laser fans, fixes that distance to a known scale.
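The ratio trick above can be sketched in a few lines. This is a minimal small-angle model, not the final implementation: the 20 mm sensor separation comes from the post, while the 90° fan angle and the function/parameter names are my own illustrative assumptions.

```python
import math

def distance_from_sweep(t_sensors, t_fans, sensor_sep=0.020,
                        fan_angle=math.radians(90)):
    """Estimate range from the two measured sweep intervals.

    t_sensors: time between L1 hitting the top and bottom sensor (s)
    t_fans:    time between fans L1 and L2 hitting the same sensor (s)

    t_fans is distance-independent, so it recovers the rotation rate;
    the ratio t_sensors / t_fans then gives the subtended angle, and
    the known sensor separation fixes the scale (small-angle approx.).
    """
    omega = fan_angle / t_fans      # recovered rotation rate (rad/s)
    theta = omega * t_sensors       # angle subtended by the sensor pair
    return sensor_sep / theta       # d ≈ s / theta
```

Note that the rotation rate never needs to be known in advance; it drops out of the ratio exactly as described above.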
I've re-done some of the calcs I did a while ago (at that time I was thinking 8MHz clocks!) and added a link to the google sheet as well as an image of the chart. At 5m you're looking at a single distance measurement LSB accuracy of 2mm (20mm sensor separation, 32MHz counter, 25Hz/120deg laser sweep). Implementing the logic and counters within a CPLD/FPGA would be more development (for me at least) but could improve the accuracy by up to 10x, in proportion to the clock frequency increase.
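Those LSB numbers can be reproduced directly. The sketch below takes the post's stated parameters (20 mm separation, 32 MHz counter, 120° sweep at 25 Hz) and works out the distance change represented by a single counter tick; the function name is mine.

```python
import math

# Parameters stated in the post.
SENSOR_SEP = 0.020                    # m
COUNTER_HZ = 32e6                     # counter clock (Hz)
SWEEP_RATE = math.radians(120) * 25   # 120 deg swept 25 times/s -> rad/s

def lsb_resolution(distance):
    """Distance represented by one counter tick at a given range.

    From d = s / theta we get |dd/dtheta| = d**2 / s, and one tick
    advances the sweep by SWEEP_RATE / COUNTER_HZ radians, so the
    resolution degrades with the square of the range.
    """
    dtheta_per_tick = SWEEP_RATE / COUNTER_HZ
    return distance ** 2 / SENSOR_SEP * dtheta_per_tick
```

At 5 m this gives roughly 2 mm per count, matching the figure in the chart, and the `d**2` term shows why a 10x clock increase buys a proportional accuracy gain.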
Fuse multiple readings with a 6DoF sensor and the accuracy for the sensor cluster should be very good.
That’s the fundamental principle, and it would work fine if the sensor were just sliding around a table with the lasers at the edge. However, in reality the sensor is going to be attached to a head which has six degrees of freedom, and as soon as the head tilts, or moves above or below the laser plane (as in the diagram below), the perspective angles (and thus the time differential) change. This will throw off the simplistic view of the world above.
This is where we need pattern recognition: a way of recognising the alignment of the sensor head from the order and timing with which the laser line hits the various individual sensors. I’ve chosen a circular band with sensors around the top and bottom for this example because it is the easiest to visualise. I’d likely have chosen this configuration as my proof of principle too, because it’s a lot easier on the maths to fit to a known curve.
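To illustrate why the circular band is easy on the maths: a sweeping laser plane intersects a cylinder in a curve that is sinusoidal in sensor azimuth, so the plane (and hence the head's relative attitude) falls out of a plain linear least-squares fit. This is a geometric sketch under my own assumptions (a 50 mm band radius, idealised hit heights rather than raw timings, and hypothetical names throughout), not the actual processing pipeline.

```python
import numpy as np

def fit_laser_plane(azimuths, heights, radius=0.05):
    """Fit the sweeping laser plane z = a*x + b*y + c to hits on a
    cylindrical sensor band of the given radius.

    On the cylinder x = r*cos(az), y = r*sin(az), the plane's
    intersection height is a*r*cos(az) + b*r*sin(az) + c -- a
    sinusoid in azimuth -- so (a, b, c) come from linear least squares.
    """
    x = radius * np.cos(azimuths)
    y = radius * np.sin(azimuths)
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, heights, rcond=None)
    return coeffs  # (a, b, c): the slopes encode the relative tilt
```

A flat fitted plane (a = b = 0) corresponds to the "in line with the roll" case discussed below; non-zero slopes reveal tilt relative to the laser base.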
As you can see, with the swipe of a single laser across the sensor head, it is not possible to differentiate between a side-on view with no roll and a rolled head lower down on the vertical axis. However, add a 6DoF sensor to the head and the system now knows the down-centric attitude of the head; it also knows that the flat line the laser describes indicates the sensor head is in line with the roll, giving a relative attitude with respect to the laser. Now add a second laser on the opposite wall and the 3D picture becomes clearer still: an attitude which gives a shallow curve on one side will describe a greatly exaggerated curve from the new laser.
I believe the relative angles between the sensors given by the system can be used in a modified perspective-n-point computer vision algorithm to give the distance and orientation to each of the known laser bases. Fuse that with the measured attitude of the sensor cluster from the 6DoF and it should be enough to accurately pinpoint the cluster in 3D space. The more bases, the better the position and attitude estimation.
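The "more bases is better" point can be made concrete with the position half of the problem. Assuming each base's two sweep angles have already been converted into a bearing ray toward the cluster (a full perspective-n-point solve would also recover orientation), a least-squares intersection of those rays pins down the position. This is a simplified sketch with hypothetical names, not the proposed algorithm itself.

```python
import numpy as np

def locate_cluster(bases, directions):
    """Least-squares intersection of bearing rays from known bases.

    bases:      (N, 3) known base positions
    directions: (N, 3) direction vectors from each base toward the
                cluster (derived from that base's sweep angles)

    Minimises the sum of squared perpendicular distances from the
    solution point to each ray: sum_i ||(I - d_i d_i^T)(x - b_i)||^2.
    """
    A = np.zeros((3, 3))
    rhs = np.zeros(3)
    for base, d in zip(np.asarray(bases, float), np.asarray(directions, float)):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector onto plane normal to ray
        A += P
        rhs += P @ base
    return np.linalg.solve(A, rhs)
```

With noisy angle measurements each extra base adds another constraint to the same least-squares system, which is exactly why more bases improve the estimate.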
SCALING IT UP TO WAREHOUSE SIZE
As the only critical time for interference is that limited window while a sweep is crossing the cluster, it should be easy to have multiple bases sweeping without the need to sync them all together. Indeed, you could enforce a minimal-interference policy by giving the lasers slightly different sweep frequencies. Each sweep frequency could also be used to differentiate between...
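Identifying a base by its sweep frequency could be as simple as matching the measured sweep period against each base's assigned frequency. The frequencies, names, and tolerance below are all hypothetical values of my own, just to show the idea.

```python
# Hypothetical per-base sweep frequencies (Hz), deliberately offset so
# the sweeps drift past each other instead of colliding repeatedly.
BASE_FREQS = {"base_a": 25.0, "base_b": 25.7, "base_c": 26.4}

def identify_base(period, tolerance=0.5e-3):
    """Return the base whose sweep period best matches the measured
    interval between successive hits, or None if nothing is close."""
    best, best_err = None, tolerance
    for name, freq in BASE_FREQS.items():
        err = abs(period - 1.0 / freq)
        if err < best_err:
            best, best_err = name, err
    return best
```

In practice the period would be averaged over several sweeps, since a single interval is also what the distance measurement depends on.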