
Trigonometry

A project log for Precision Indoor Positioning

Performing high precision indoor positioning using Vive Lighthouses, Arduino Due, and TS3633-CM1

Mike Turvey • 10/13/2016 at 20:55 • 12 Comments

So far the algorithm for deriving position from the lighthouse angles and known relative positions of the sensors is eluding me. I'm thinking that the solution will come from setting up a set of equations and then solving for the unknowns. And that's what I've been doing so far with pen and paper. Lots of trigonometry here, as I'm deriving all of my equations from trig relationships, setting up triangles wherever I can (they're everywhere!). At least it's a pretty awesome problem. I probably won't have much more time to work on it until the weekend. Just wanted to drop a note saying "I'm still working on it." More to come...
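In the meantime, here's the one relationship everything starts from: the angle to a sensor is just the fraction of a rotor revolution elapsed between the sync flash and the sweep hitting that sensor. A minimal sketch of that conversion (the 60 Hz rotor rate is the commonly cited figure; the tick rate is a placeholder for whatever capture timer is used):

```cpp
#include <cstdint>

// Minimal timing-to-angle conversion sketch.  Assumes a 60 Hz rotor
// (one sweep every ~16.67 ms) and a hypothetical TICKS_PER_SECOND --
// 84 MHz here as a stand-in for the Due's clock.
constexpr double TICKS_PER_SECOND = 84e6;  // capture-timer rate (assumption)
constexpr double ROTOR_HZ         = 60.0;  // lighthouse rotor rate
constexpr double TWO_PI           = 6.283185307179586;

double sweepAngleRadians(uint32_t syncTick, uint32_t sweepTick) {
    double seconds = (sweepTick - syncTick) / TICKS_PER_SECOND;
    return TWO_PI * ROTOR_HZ * seconds;    // rotations since sync -> radians
}
```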

Discussions

Lee Cook wrote 10/18/2016 at 11:24 point

Hi Mike, if it's not too much of a problem, would it be possible for you to dump files with the layout of your board and any raw data you get from your setup, please?

It would really help me get a head start on processing the data I'd likely get from my setup (which is still a long way off arriving from China). I was going to try to work out a coarse attitude/heading from the relative angles between the sensor hits, which, in theory, I should be able to get from your readings too.

Thanks in advance,

Lee


Mike Turvey wrote 10/19/2016 at 05:43 point

Sure thing.  I just uploaded a CSV with raw sensor readings.  Each row is one "reading" -- either an assertion or a deassertion of a single sensor.  I described what each column means in the comments for the uploaded file.  I also dumped the data set into a Google spreadsheet and used some simple formulas to derive the length of each pulse, as an example.
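For illustration, the pulse-length derivation is just pairing each sensor's assertion with its following deassertion. A minimal sketch (the field names here are hypothetical; the real column meanings are in the CSV's comments):

```cpp
#include <cstdint>
#include <map>
#include <vector>

// Hypothetical row layout -- see the CSV's comments for the real columns.
struct Reading {
    int      sensor;   // which photodiode fired
    bool     rising;   // true = assertion, false = deassertion
    uint32_t tick;     // capture-timer timestamp
};

struct Pulse {
    int      sensor;
    uint32_t startTick;
    uint32_t widthTicks;
};

// Pair each sensor's assertion with its following deassertion.
std::vector<Pulse> pairPulses(const std::vector<Reading>& rows) {
    std::map<int, uint32_t> lastRise;  // sensor -> tick of last assertion
    std::vector<Pulse> pulses;
    for (const Reading& r : rows) {
        if (r.rising) {
            lastRise[r.sensor] = r.tick;
        } else if (lastRise.count(r.sensor)) {
            pulses.push_back({r.sensor, lastRise[r.sensor],
                              r.tick - lastRise[r.sensor]});
            lastRise.erase(r.sensor);
        }
    }
    return pulses;
}
```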

The sensors are mounted on a flat board about 25 cm square, at the following positions in a Cartesian coordinate system with the lower left corner of the board as the origin:
0:(18,4,3)
1:(0,0,0)
2:(5,10,3)
3:(0,21,0)
4:(13,18,3)
5:(23,21,0)

If you look closely at the picture of the board, you should be able to see these coordinates handwritten by each sensor.

These measurements were taken with the board approximately perpendicular to the lighthouse, about 3m away.

If you have any questions or want more details or data, just let me know.

--Mike


Lee Cook wrote 10/19/2016 at 08:27 point

Thank you!  I'm going to try (actually I'm going to see if I can't recruit a mathematician from work to do it... ;-) ) to reduce the terms of the problem by using IMU data to remove the pitch/roll aspects of the scanner and sensor.

Am I ok in assuming that the board and the lighthouse box are on the flat?


Mike Turvey wrote 10/19/2016 at 17:24 point

Lee-- happy to help.  I'm not sure exactly what you mean by "on the flat."  The lighthouse is roughly 2m above the ground, aimed down, maybe 15 degrees or so.  The board with the sensors is about 1m off the ground, sitting on my desk.  It's very crudely oriented facing toward the lighthouse.  The only measurement that I attempted to do precisely was the location of the sensors on the board.  They should be precise to within a couple mm.  Does that work for you, or do you need precise distances and orientations?


Lee Cook wrote 10/19/2016 at 23:38 point

Sorry, "on the flat" just means flat and level ground - generally used when commissioning a system to mean it is not tilted with respect to down in any way.

My system can't produce exact bearings/'pixels' like the lighthouse system does, but it can measure the relative angles between the various points and, in theory, relate those angles to an absolute distance.

It's similar to (or perhaps even the same as?) the n-point problem, I think.

There is only one orientation (and, from a perspective point of view, distance) of the target with respect to the laser emitter which, when scanned with the laser line, will give that particular ratio of angles.  The problem arises because that orientation could be anywhere on the sweep of the laser, like a toroid around the axis of rotation.  However, if you know the attitude of the target with respect to the attitude of the laser sweep axis, it becomes fixed against that axis: there is only one orientation of the target with respect to the laser axis which has that particular target pitch and roll.

The "orientation" reduces down to a relative translation of the target (x,y,z) with a rotation in target yaw/heading - and the whole thing potentially rotated around the vertical axis.

With a second set of results from another characterised laser scanner (characterisation done during the room calibration routine), it should be possible to calculate the yaw/heading component of the target and fix the target on the XY plane.

The whole reduction thing relies on having known attitudes for the emitters and the targets so they can all relate in the same axis.

Anyway, that's my theory. I've talked to one of the mathematicians at work and he seems interested enough in the system to want to give me a hand getting it working.  Perhaps when I give him this talk he'll laugh in my face at my naivety and point out I need 30 scanners to make up for the lack of real bearings.

I hope not.


George Wang wrote 10/17/2016 at 03:20 point

Hi Mike. There is some discussion on Reddit about the math behind Lighthouse: https://www.reddit.com/r/Vive/comments/4xllq7/eli5_steamvrs_tracking_algorithm/. Alan Yates, the creator of Lighthouse, is also on Reddit, and he has posted some very helpful comments. Hope this helps.


Mike Turvey wrote 10/17/2016 at 07:39 point

George, thank you for that thread. I hadn't seen it, and it's got some very useful discussion. It led me to a paper about EPnP, which is supposedly a very efficient algorithm for solving the Perspective-n-Point problem. There's even some BSD-licensed reference code for the algorithm. Next up, I'm going to try to feed in some captured data and see if I can get sane pose values to spit back out.
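As a sketch of what feeding the data in could look like, here's the same idea using OpenCV's EPnP solver instead of the reference code. The angle-to-image mapping is an assumption: treat the lighthouse as an ideal pinhole camera, so a sensor seen at angles (az, el) off the optical axis projects to the normalized image point (tan az, tan el).

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <cmath>
#include <vector>

// Solve for the board pose from one lighthouse's sweep angles using
// OpenCV's EPnP solver.  The camera matrix is the identity because the
// "image points" are already normalized (tangents of the measured angles,
// taken relative to the lighthouse's forward axis -- an assumption).
void boardPoseFromAngles(const std::vector<double>& az,  // horizontal angles, radians
                         const std::vector<double>& el,  // vertical angles, radians
                         cv::Mat& rvec, cv::Mat& tvec) {
    // Sensor positions on the board (units look like cm, given the ~25 cm board).
    static const std::vector<cv::Point3f> objectPoints = {
        {18, 4, 3}, {0, 0, 0}, {5, 10, 3}, {0, 21, 0}, {13, 18, 3}, {23, 21, 0}
    };
    std::vector<cv::Point2f> imagePoints;
    for (size_t i = 0; i < az.size(); ++i)
        imagePoints.emplace_back(std::tan(az[i]), std::tan(el[i]));

    cv::Mat K = cv::Mat::eye(3, 3, CV_64F);  // identity intrinsics
    cv::solvePnP(objectPoints, imagePoints, K, cv::noArray(),
                 rvec, tvec, false, cv::SOLVEPNP_EPNP);
    // rvec/tvec: board rotation (Rodrigues) and translation w.r.t. the
    // lighthouse, in the same units as objectPoints.
}
```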


Lee Cook wrote 10/14/2016 at 07:49 point

Welcome :)

Assuming both bases are identical (i.e. they rotate in the same direction):

If you mount one of the bases upside down (to invert its direction of rotation with respect to the other) and you *know* (IMU?) that the target is not upside down, then you can tell which station is which by the order in which the laser scans the sensors: one will be the mirror of the other.


Mike Turvey wrote 10/14/2016 at 16:53 point

That is a creative solution.  I'm sure the lighthouses are rotating in the same direction.  To be honest, I haven't even started looking at the pattern used by multiple lighthouses transmitting in the same room.  It might be that they send sync signals right after each other, and always in the same order (you manually set the lighthouses to one of three channels).  The more I think about it, the more I suspect back-to-back sync pulses must be how it works.  If that's the case, then I wouldn't have to decode any extra data, and it should be fairly easy to distinguish the two.  Maybe it's worthwhile to make some progress on supporting two lighthouses while I'm reading up on all the computer vision math.
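If the back-to-back hypothesis holds, telling the two apart could be as simple as classifying each sync pulse by arrival order within a cycle. A sketch under that assumption (the cycle length and tick rate are placeholders):

```cpp
#include <cstdint>

// Hypothesis from above: both bases flash their sync pulses back-to-back,
// in a fixed order, at the start of every sweep cycle.  If so, the source
// of each sync is determined purely by arrival order within the cycle.
constexpr uint32_t CYCLE_TICKS = 1400000;  // ~16.7 ms at 84 MHz (assumption)

enum class Base { A, B };

// Call once per detected sync pulse, with its capture-timer timestamp.
Base classifySync(uint32_t syncTick) {
    static uint32_t cycleStart = 0;
    static bool     haveStart  = false;
    if (!haveStart || syncTick - cycleStart > CYCLE_TICKS / 2) {
        cycleStart = syncTick;  // first sync of a new cycle -> base A
        haveStart  = true;
        return Base::A;
    }
    return Base::B;             // second sync, shortly after -> base B
}
```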


Lee Cook wrote 10/13/2016 at 23:24 point

Take a look at this:

https://en.wikipedia.org/wiki/Epipolar_geometry

If you've both bases up and running it may be the easiest way to get the initial fix.
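As a sketch of what I mean (OpenCV calls shown for illustration; it assumes the sweep angles are mapped to normalized image points, and it needs at least five correspondences, so you'd collect them over a few board positions):

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Treat both bases as calibrated cameras and recover their relative pose
// from corresponding normalized "image points" (tangents of the measured
// angles) of the same sensors seen from each base.  The translation comes
// back only up to scale.
bool relativeBasePose(const std::vector<cv::Point2f>& pts1,  // from base 1
                      const std::vector<cv::Point2f>& pts2,  // from base 2
                      cv::Mat& R, cv::Mat& t) {
    if (pts1.size() < 5 || pts1.size() != pts2.size()) return false;
    cv::Mat K = cv::Mat::eye(3, 3, CV_64F);  // identity: normalized coords
    cv::Mat E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC);
    if (E.empty()) return false;
    cv::recoverPose(E, pts1, pts2, K, R, t); // t has unit length
    return true;
}
```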


Lee Cook wrote 10/13/2016 at 23:30 point

Lighthouse also calibrates the room at the beginning.  Knowing the angles from each base to the other should give enough information to determine the relative positions of the bases.


Mike Turvey wrote 10/14/2016 at 00:38 point

Thanks, Lee.  I haven't yet started tracking using both lighthouses because I will need to figure out how to robustly determine which lighthouse each pulse is coming from.  I had heard that tracking still works well with only a single lighthouse operational (although I believe the precision of the distance to the object from a single lighthouse is in the ~2cm range instead of submillimeter).

Of course, getting to the point where I can read both is going to be totally necessary.

Even though I've looked at it in the past, for some reason I hadn't thought of this problem in terms of computer vision problems-- it totally makes sense.  The system can be modeled as a super-high-resolution camera; the fact that the "pixels" do the sensing instead of the camera doesn't change the math.  Thank you!  It gives me a whole new set of directions to look into.

It looks like there may even be some good discussions on systems that estimate position given a single camera and fiducials: http://inside.mines.edu/~whoff/publications/2008/steinbisISMAR08.pdf 

Really appreciate the suggestion,

Mike
