The maths?

A project log for Room Based VR Positioning

My idea for a room based VR positioning system. The goal is for tennis court coverage to play "real" virtual tennis!

Lee Cook • 10/19/2016 at 23:50 • 6 Comments

I’ve just put my thoughts down elsewhere while trying to explain what I see as the maths problem behind this project, and I thought I’d share them here too. It’s quite late at the end of my third 15hr work day, so a little slack is appreciated. That said, if what I’m proposing is a complete load of bo**cks, feel free to point out the errors – with corrections please! ;-)

My system can't produce exact bearings/'pixels' like the lighthouse system does, but it can measure the relative angles between the various points and, in theory, relate those angles to an absolute distance.

It's similar to, or perhaps even the same as, the n-point problem, I think.

There is only one orientation (and, from a perspective point of view, distance) of the target with respect to the laser emitter which, when scanned with the laser line, will give that particular ratio of angles. The problem arises because that orientation could be anywhere on the sweep of the laser – like a toroid around the axis of rotation. However, if you know the attitude of the target with respect to the attitude of the laser sweep's axis, it becomes fixed against that axis – there is only one orientation of the target wrt the laser axis which has that particular target pitch and roll.

The "orientation" reduces down to a relative translation of the target (x,y,z) with a rotation in target yaw/heading - and the whole thing potentially rotated around the vertical axis.

With a second set of results from another characterised laser scanner (the characterisation being done during the room calibration routine), it should be possible to calculate the yaw/heading component of the target and fix the target on the XY plane.

The whole reduction relies on having known attitudes for the emitters and the targets so they can all be related in the same axes.

Anyway, that's my theory. I've talked to one of the mathematicians at work and he seems interested enough in the system to want to give me a hand getting it working. Perhaps when I give him this talk he'll laugh in my face at my naivety and point out I need 30 scanners to make up for the lack of real bearings.

I hope not.


Mike Turvey wrote 10/21/2016 at 20:34 point

You're definitely right that I have "vive lighthouse" on the brain.  I find myself waking up in the middle of the night with new ideas or considerations.  I definitely see now that if you don't have a highly precise, well-known rotational speed, then adding a second laser at a known angular offset would give you essentially the same information.  I.e. it would let you derive the precise angular distance between two points relative to the laser source.  I'm curious-- does it give you anything else, or is the rest of your system the same regardless of whether you're using 1) two lasers with a known angular offset OR 2) one laser with a known angular velocity?


Lee Cook wrote 10/21/2016 at 23:21 point

I would think the Lighthouse system is a lot easier on the maths because the pulse-scan system lets you use epipolar geometry to get an easy general fix as a starting point for the location and orientation.  I think they must have both bases visible at the start before they can fall back to a single base/point (essentially IMU locating, which is only valid for a short period of time before the gyros wander). As the pulse-scan gives angles, you're also able to use classic n-point without alteration.

This is just guessing though.

Apart from the things I mentioned below (the ability to differentiate with variable scan frequencies and the ability to detect fast and perhaps compensate for motion with the differences between the two line scans) if you removed the pulse part from the pulse-scan then the system would be the same.

As I understand it the pulse part is also the major limiting factor of Lighthouse because it only works to 5 or so meters and would need repeaters to expand the volume.

The major downside to my system is that you *must* use an n-point style algorithm for locating, which means multiple sensors on the cluster must be visible to a base; you can't get a bearing to a single sensor on a cluster.


Mike Turvey wrote 10/22/2016 at 07:24 point

I think you're right about the lighthouse being easier from a math standpoint, or at least it's fairly analogous to the perspective-n-point problem that is well researched in computer vision.  You've got a more unusual set of unknowns to solve.

I hadn't previously heard that the pulse was the main limitation on the distance of the lighthouse's reach.  I'm curious to test that out sometime-- might have to do it outside at night to ensure I've got enough distance.  

As far as getting an initial fix using the lighthouse, that's doable using a single base station.  I've heard that they require a minimum of 5 points visible for an initial fix, but then require only one point visible for a continued fix, as long as there is sufficient motion for the IMU to pick up.  I think these claims make it pretty clear that they're using the EPnP algorithm for the initial fix (which technically only requires 4 points for a fix, but does much better with 5) as well as a Kalman filter for updating the position as new data comes in (either from the IMU or a horizontal or vertical scan of a point).  The big argument I've heard for the second lighthouse is to handle occlusion.  But I suspect an equally significant reason is that the precision in the depth dimension away from a single lighthouse is much less precise, and having two lighthouses can mitigate that.
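The Kalman-filter update described here can be sketched in miniature as a scalar toy example (all numbers and noise variances below are made up for illustration; a real tracker would use a full state vector and a motion model):

```python
# Minimal scalar Kalman-style update: refine a running position estimate
# as each new measurement arrives (an IMU step or a single scanned angle).
def kalman_update(estimate, variance, measurement, meas_variance):
    """Fuse one measurement into the running estimate; returns the new
    estimate and its (reduced) variance."""
    gain = variance / (variance + meas_variance)
    new_estimate = estimate + gain * (measurement - estimate)
    new_variance = (1.0 - gain) * variance
    return new_estimate, new_variance

est, var = 0.0, 1.0            # prior, e.g. from the EPnP initial fix
for z in (0.9, 1.1, 1.0):      # successive noisy measurements
    est, var = kalman_update(est, var, z, meas_variance=0.5)
print(round(est, 3), round(var, 3))
```

Each update shrinks the variance, which is why a single visible point per sweep can be enough to *maintain* a fix once the initial pose is known.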

Totally separate question: With your setup, would you ever plan to have stand-alone static-position IR receivers at one or more known locations (or locations precalculated at startup)?  Presumably this could give you a known reference point that might make it much easier to know the angular position relative to a laser source.


Lee Cook wrote 10/22/2016 at 14:45 point

The limitation of the pulse was mentioned in a YouTube video with "Tested": they were in a large tradeshow space, and the Tested guy asked about scaling up to fill that space. Alan Yates said that the lasers were eye-safe to about 20m, but that going beyond 5m would need repeaters for the sync.

Supposedly, every fix you get with n-point reduces the number of degrees of freedom of the cluster by two; four should reduce it to zero but – and this is just what I've read, as I've not implemented it – the more points you have the easier it is?  A single fix would be able to supplement the IMU info so it's not completely dead-reckoning. My system would need one (or perhaps two, because of the az/el axis) additional fixes over the lighthouse for a similar level of DoF reduction.

I don't think static receivers in my system would add anything.  What I would have to have is a calibration routine at the beginning which characterised the position of the bases with respect to each other.  Once the n-point is sorted (LOL, like it's that easy, will def need help I think) then that should be fairly easy – kind of a reverse epipolar set of simultaneous equations.

I'm hoping that depth precision is the better aspect of my system; it's the angular aspect wrt the base that it will struggle with, I think.  However, the lack of pulse circuitry, the lack of incredibly accurate motor control, and the lack of the need to sync the bases together should make the bases a lot cheaper.


Lee Cook wrote 10/21/2016 at 13:05 point

Hi Mike,

I think it's because you've been working on the Lighthouse solution where the rotational speed of the laser is very precisely fixed.

The second laser in my system is there because the rotational speed of the sweep is *not* precisely fixed.  The only assumption is that, for the few tenths of a millisecond that it is sweeping over the sensor cluster, it is constant.  The two lasers, which have a fixed and precisely known angle between them, allow you to determine the rotational speed of the sweep, which you can then use to determine the angle between the individual sensors in the cluster.

As the time for one base to interfere with another is limited to the time it is sweeping over the cluster, having many bases should be easy, with the ability to differentiate between multiple emitters by altering the rotational speed slightly – say by a few tenths of a Hz.  The cluster can look at the times between sweeps to know which base it came from.  It would also mean that free-running, non-synchronised bases shouldn't interfere with each other for more than a sweep or two in hundreds.
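Identifying a base from its inter-sweep interval could look something like this (the base names and the 60 Hz / 60.3 Hz scan rates are illustrative assumptions, stand-ins for the characterised values from the calibration routine):

```python
# Tell free-running bases apart by their slightly different scan rates.
BASE_PERIODS = {
    "base_A": 1 / 60.0,   # seconds per revolution
    "base_B": 1 / 60.3,
}

def identify_base(prev_sweep_time, sweep_time):
    """Match the observed interval between successive sweeps against each
    base's characterised period; return the closest base."""
    observed = sweep_time - prev_sweep_time
    return min(BASE_PERIODS, key=lambda name: abs(observed - BASE_PERIODS[name]))

print(identify_base(0.0, 1 / 60.3))
```

A few tenths of a Hz difference gives intervals tens of microseconds apart, which is comfortably resolvable if the sensor timestamps are good to a microsecond or so.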

I was trying for a large-scale, warehouse capable system.

Back before MEMS, when IMUs were just the realm of the defence industry, I was thinking I might also be able to determine the cluster's rotational speed from having two laser sweeps so close together. In theory, and depending on how good the measurements are, the difference between the angle measurements of the first sweep vs the second should tell you the amount moved.  I will use an average of the two for pose estimation.
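That motion idea reduces to a finite difference, sketched below (the angle values and the 1 ms gap between the two sweeps are made-up numbers):

```python
# Estimate the cluster's apparent rotation between the two closely spaced
# laser sweeps, and average the two measurements for pose estimation.
def cluster_motion(angle_sweep1, angle_sweep2, dt_between_sweeps):
    """Apparent rotation rate (rad/s) from the change in a measured sensor
    angle between the first and second sweep."""
    return (angle_sweep2 - angle_sweep1) / dt_between_sweeps

def averaged_angle(angle_sweep1, angle_sweep2):
    """Average of the two sweeps, used as the input to pose estimation."""
    return 0.5 * (angle_sweep1 + angle_sweep2)

rate = cluster_motion(0.0350, 0.0352, 1.0e-3)   # rad/s
avg = averaged_angle(0.0350, 0.0352)            # rad
print(rate, avg)
```

Whether the difference rises above the measurement noise over such a short baseline is exactly the "depending on how good the measurements are" caveat.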

Sorry, I've had this idea locked away for so long as a "I'll patent that someday" finally talking about it and I can't shut up....


Mike Turvey wrote 10/21/2016 at 04:58 point

Hey Lee,

I've been trying to wrap my head around this for a while, and I haven't quite got it figured out.  Something occurred to me today.  Since the two lasers are a known distance apart, one will follow the other by a known and constant time, regardless of the distance, correct?  Given that, it would seem that you could "emulate" the second laser sweep by just injecting a second pulse into the data line x milliseconds after the first pulse.  And given my understanding, there wouldn't be any difference on the data lines between using the two lasers vs. just using one laser to sweep and injecting such a delayed second pulse.  If that's really the case, then it would seem that the second laser sweep can't really be providing any additional position data, since it can be fully emulated without knowing any position data.  I'm probably missing something here, but I'm not sure what?
