
Solving for position using tori point clouds

A project log for Precision Indoor Positioning

Performing high precision indoor positioning using Vive Lighthouses, Arduino Due, and TS3633-CM1

Mike Turvey • 02/08/2017 at 21:46 • 2 Comments

It's been a while since my last log, but progress is being made. The big problem has been how to convert from sensor locations and lighthouse angles into a known position of the tracked object in space. The first part of that problem involves figuring out where the lighthouses are in relation to the tracked object. The libsurvive project is solving much the same problem, so I've been working in the context of that project.

There are a number of ways to solve this problem, and it's not clear at this point which solution will work the best. The approach I'm taking is as follows: for each pair of sensors on the tracked object, the lighthouse angles give the angle that the pair subtends as seen from the lighthouse. Combined with the known distance between those two sensors, that constrains the lighthouse to lie somewhere on the surface of a torus. Each sensor pair defines its own torus, and the lighthouse has to be at the location where all of the tori converge.
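To make the torus construction concrete, here's a minimal sketch (illustrative only, not the actual libsurvive code) of sampling a point cloud on the torus defined by one sensor pair. It assumes the two sensor positions A and B are known in the tracked object's frame and that the subtended angle has already been computed from the sweep angles:

```c
#include <math.h>

typedef struct { double x, y, z; } vec3;

static vec3 v_add(vec3 a, vec3 b) { return (vec3){a.x + b.x, a.y + b.y, a.z + b.z}; }
static vec3 v_sub(vec3 a, vec3 b) { return (vec3){a.x - b.x, a.y - b.y, a.z - b.z}; }
static vec3 v_scale(vec3 a, double s) { return (vec3){a.x * s, a.y * s, a.z * s}; }
static double v_len(vec3 a) { return sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }
static vec3 v_cross(vec3 a, vec3 b)
{
    return (vec3){a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

/* Sample points on the torus of possible lighthouse positions for one
 * sensor pair.  A and B are the sensor locations, alpha is the angle the
 * pair subtends at the lighthouse (radians).  By the inscribed-angle
 * theorem the lighthouse lies on a circular arc through A and B with
 * radius R = d / (2 sin alpha); revolving that arc about the line AB
 * sweeps out the torus.  Writes up to max_pts points to out and returns
 * the number written. */
int sample_torus(vec3 A, vec3 B, double alpha,
                 int arc_steps, int rev_steps, vec3 *out, int max_pts)
{
    vec3 ab = v_sub(B, A);
    double d = v_len(ab);
    vec3 u = v_scale(ab, 1.0 / d);                /* axis of revolution   */

    /* Build the rest of an orthonormal frame around the axis. */
    vec3 tmp = fabs(u.x) < 0.9 ? (vec3){1, 0, 0} : (vec3){0, 1, 0};
    vec3 w = v_cross(u, tmp);
    w = v_scale(w, 1.0 / v_len(w));
    vec3 v = v_cross(w, u);

    vec3 M = v_add(A, v_scale(ab, 0.5));          /* midpoint of A and B  */
    double R = (d / 2.0) / sin(alpha);            /* arc radius           */
    double c = (d / 2.0) / tan(alpha);            /* arc-center offset    */

    int n = 0;
    for (int i = 0; i <= arc_steps; i++) {
        double t = -alpha + (2.0 * alpha * i) / arc_steps;
        double a = R * sin(t);                    /* along the axis       */
        double b = R * cos(t) - c;                /* distance from axis   */
        for (int j = 0; j < rev_steps && n < max_pts; j++) {
            double phi = (2.0 * M_PI * j) / rev_steps;
            vec3 radial = v_add(v_scale(v, b * cos(phi)),
                                v_scale(w, b * sin(phi)));
            out[n++] = v_add(M, v_add(v_scale(u, a), radial));
        }
    }
    return n;
}
```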

Now, that sounds simple enough. But I'm not aware of any simple solution for finding the converging location for multiple tori, particularly when there is noise in the signal so they don't all converge perfectly.

So, here's my solution:

For each torus, generate a point cloud. Then, for each point on one of the tori, compute the squared distance to the nearest point on each of the other tori and sum them. Whichever point has the lowest sum of squared distances will be a good estimate of the location of the lighthouse.
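A brute-force sketch of that search, reusing the vec3 type and helpers from the sampling example above (again illustrative only; the real thing has to be much more careful about memory and speed):

```c
#include <float.h>

/* Squared distance between two points. */
static double dist_sq(vec3 a, vec3 b)
{
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

/* clouds[k] holds counts[k] sampled points for torus k.  For every point
 * on torus 0, sum the squared distance to the nearest point on each of
 * the other tori; return the torus-0 point with the smallest sum. */
vec3 best_candidate(vec3 **clouds, int *counts, int num_tori)
{
    vec3 best = clouds[0][0];
    double best_err = DBL_MAX;

    for (int i = 0; i < counts[0]; i++) {
        vec3 p = clouds[0][i];
        double err = 0.0;

        for (int k = 1; k < num_tori; k++) {
            if (counts[k] == 0)
                continue;
            double nearest = DBL_MAX;
            for (int j = 0; j < counts[k]; j++) {
                double d2 = dist_sq(p, clouds[k][j]);
                if (d2 < nearest)
                    nearest = d2;
            }
            err += nearest;      /* squared distance to the nearest point */
        }
        if (err < best_err) {
            best_err = err;
            best = p;
        }
    }
    return best;
}
```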

Now, you need to refine that initial guess to get a higher precision estimate of the lighthouse location. Right now I am using the same approach as above, except just rendering part of the tori at higher resolution. It looks something like this:

In this image, you can see that there are so many tori being calculated that it's hard to distinguish each one. But you can also see the area that looks denser. The red "star" indicates the location of the initial guess, and as a result a second pass was done just in that area, with each torus rendered at much higher resolution. Looking very closely, you can even see that a third pass was done in the center of that patch to eke out the most precision possible with this approach. While this seems to work fairly well, it's also somewhat memory intensive, and very CPU intensive.
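The second and third passes could be structured something like the sketch below. For brevity it re-samples each whole torus at a finer resolution and throws away points outside a shrinking radius around the current estimate, rather than rendering only part of each torus as described above, so it's even more memory-hungry than the real thing, but the shape of the loop is the same. It reuses the sample_torus(), dist_sq(), and best_candidate() sketches above:

```c
#include <stdlib.h>

/* One coarse-to-fine refinement loop (illustrative).  A[k], B[k], and
 * alpha[k] describe torus k as in sample_torus(); estimate is the result
 * of the coarse pass. */
vec3 refine_estimate(const vec3 *A, const vec3 *B, const double *alpha,
                     int num_tori, vec3 estimate,
                     double radius, int steps, int passes)
{
    for (int p = 0; p < passes; p++, radius *= 0.25, steps *= 2) {
        vec3 **clouds = malloc(num_tori * sizeof *clouds);
        int *counts = malloc(num_tori * sizeof *counts);

        for (int k = 0; k < num_tori; k++) {
            int max_pts = (steps + 1) * steps;
            vec3 *pts = malloc(max_pts * sizeof *pts);
            int n = sample_torus(A[k], B[k], alpha[k], steps, steps,
                                 pts, max_pts);

            /* Keep only the patch of the torus near the current estimate. */
            int kept = 0;
            for (int i = 0; i < n; i++)
                if (dist_sq(pts[i], estimate) < radius * radius)
                    pts[kept++] = pts[i];

            clouds[k] = pts;
            counts[k] = kept;
        }

        if (counts[0] > 0)
            estimate = best_candidate(clouds, counts, num_tori);

        for (int k = 0; k < num_tori; k++)
            free(clouds[k]);
        free(clouds);
        free(counts);
    }
    return estimate;
}
```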

So, I'm planning a different approach for refining the initial estimate that avoids a point cloud altogether by employing gradient descent. Basically, it involves looking at the three-dimensional gradient of the least-squares distance to each torus, and iteratively moving the "best estimate" point down this gradient until it reaches a local minimum (which ideally will also be the global minimum).
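A sketch of that idea, using a finite-difference gradient rather than an analytic one (and the same vec3 helpers as above): the error at a candidate point is the sum of squared distances to each torus, and the estimate walks downhill until the step stops helping.

```c
/* Geometric description of one torus: the sensor-pair midpoint M, the
 * unit axis u along the line between the two sensors, the arc radius R,
 * and the arc-center offset c, as derived in the sampling sketch above. */
typedef struct {
    vec3 M, u;
    double R, c;
} torus;

static double v_dot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* Distance from p to the surface swept by revolving the arc's circle
 * about the axis -- close enough to distance-to-torus for this sketch. */
static double torus_dist(vec3 p, const torus *t)
{
    vec3 rel = v_sub(p, t->M);
    double a = v_dot(rel, t->u);                   /* along the axis      */
    vec3 radial = v_sub(rel, v_scale(t->u, a));
    double b = v_len(radial);                      /* distance from axis  */
    return fabs(sqrt(a * a + (b + t->c) * (b + t->c)) - t->R);
}

/* "How good of an answer is the point at (x,y,z)?" -- the sum of squared
 * distances to every torus. */
static double error_at(vec3 p, const torus *tori, int n)
{
    double e = 0.0;
    for (int k = 0; k < n; k++) {
        double d = torus_dist(p, &tori[k]);
        e += d * d;
    }
    return e;
}

/* Finite-difference gradient descent from an initial guess. */
vec3 descend(vec3 p, const torus *tori, int n, double step, int iters)
{
    const double h = 1e-6;                         /* gradient epsilon    */
    for (int i = 0; i < iters; i++) {
        double e0 = error_at(p, tori, n);
        vec3 g = {
            (error_at((vec3){p.x + h, p.y, p.z}, tori, n) - e0) / h,
            (error_at((vec3){p.x, p.y + h, p.z}, tori, n) - e0) / h,
            (error_at((vec3){p.x, p.y, p.z + h}, tori, n) - e0) / h,
        };
        vec3 next = v_sub(p, v_scale(g, step));
        if (error_at(next, tori, n) < e0)
            p = next;                              /* downhill: take it   */
        else
            step *= 0.5;                           /* overshoot: back off */
    }
    return p;
}
```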

Discussions

Lee Cook wrote 02/09/2017 at 12:30 point

Would this approach work with only the relative angles between the two sensors or do you need the absolute values that the Vive gives?

I've been pushing an iterative approach too. I had to abandon the ellipse-fit stuff, as the perspective in real-world data was causing the ellipse to appear bigger, which then pushed out all the circle/ellipse calculations.

Anyway, one of the things that I have had a reasonable amount of success with is refining the initial search values from the light data and sensor normals. A simple one is to run through the sensor normals and reduce the sweep's x/y search boundary to what they can see (expanded a small amount for perspective depending on how large the item is, so your test set-up needs a larger boundary than an HMD). I'm currently playing with weighting the normals by lit-time, assuming the item is small enough that the LH-to-sensor distances don't change significantly over a single sweep, so relative lit-time is more a function of angle. I do a sanity check of the result against the sensor with the largest lit-time "area" (Xtime*Ytime), which I assume is the most face-on sensor.
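In rough code terms it's something like this toy sketch (field names made up, vec3 helpers as in the post above), though the exact weighting is still in flux:

```c
/* Toy sketch of the lit-time weighting: weight each sensor normal by its
 * lit-time "area" (xtime * ytime) and use the weighted average as a rough
 * towards-the-lighthouse direction for bounding the initial search.  The
 * sensor with the largest area is treated as the most face-on one. */
typedef struct {
    vec3 normal;             /* sensor normal in the object's frame */
    double xtime, ytime;     /* lit duration in each sweep          */
} sensor_hit;

vec3 weighted_facing(const sensor_hit *hits, int n, int *most_face_on)
{
    vec3 sum = {0, 0, 0};
    double best_area = 0.0;
    *most_face_on = 0;

    for (int i = 0; i < n; i++) {
        double w = hits[i].xtime * hits[i].ytime;
        sum = v_add(sum, v_scale(hits[i].normal, w));
        if (w > best_area) {
            best_area = w;
            *most_face_on = i;
        }
    }
    return v_scale(sum, 1.0 / v_len(sum));         /* normalized direction */
}
```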


Mike Turvey wrote 02/09/2017 at 16:58 point

The absolute values are needed.  The reason is that you can't tell the angle that the two points make relative to the lighthouse if you only have relative lighthouse angles.  At first I thought you could do that, but I was assuming that I could use lighthouse angles as the azimuth and elevation in a spherical coordinate scheme.  But, as it turns out, you can't do that.  You can consider one of the lighthouse angles to be the azimuth (even though it's arbitrary, x sweep seems the intuitive choice to be azimuth), but the elevation behaves quite differently.  

In spherical coordinates, imagine a constant elevation of, say, 45 degrees.  When you sweep the azimuth in a full circle, you'll get a cone.
In "lighthouse coordinates," imagine a constant Y elevation of 45 degrees.  When you sweep the X in a full circle, you'll get a plane.


More to the point, let's leave the Y elevation at 45 degrees, and imagine an ideal lighthouse with an X value that sweeps from 0 to 180 degrees. At X=90 degrees, everything works as you expect: if you have two vectors pointing out from the lighthouse with Y=0 and Y=45, then there's a 45 degree angle between those two vectors. But as X moves closer to either side, even though the Y values don't change, the actual angle between the two vectors gets smaller. At the extreme, when X=0 or X=180, the angle between Y=0 and Y=45 goes to 0.
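Here's a quick numeric check using a toy model of my own (treat each sweep reading as an offset from the lighthouse's forward axis, i.e. X-90 and Y-90 in the 0-180 terms above, with the sensor direction d = normalize(tan(x), tan(y), 1), so x = y = 0 points straight out of the lighthouse):

```c
#include <math.h>
#include <stdio.h>

/* Toy model: each sweep angle (offset from center) defines a plane, and
 * the sensor direction is the intersection of the two planes. */
static void direction(double x_deg, double y_deg, double d[3])
{
    double x = x_deg * M_PI / 180.0, y = y_deg * M_PI / 180.0;
    double v[3] = { tan(x), tan(y), 1.0 };
    double len = sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    d[0] = v[0] / len; d[1] = v[1] / len; d[2] = v[2] / len;
}

static double angle_between_deg(const double a[3], const double b[3])
{
    double dot = a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    return acos(dot) * 180.0 / M_PI;
}

int main(void)
{
    /* Hold the Y offsets at 0 and 45 degrees and move the X offset away
     * from the center of the sweep: the apparent angle between the two
     * directions shrinks even though the Y readings never change. */
    double x_offsets[] = { 0.0, 30.0, 60.0, 85.0 };
    for (int i = 0; i < 4; i++) {
        double a[3], b[3];
        direction(x_offsets[i], 0.0, a);
        direction(x_offsets[i], 45.0, b);
        printf("X offset %5.1f deg -> angle between Y=0 and Y=45: %5.2f deg\n",
               x_offsets[i], angle_between_deg(a, b));
    }
    return 0;
}
```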


I wouldn't be surprised if the end "best algorithm" is a bit of a mashup of several of the current ones under development. And it sounds like your algorithm might nicely break down into "find initial guess" and "iterate to a better guess." I haven't yet cleaned up my code to clearly distinguish between those two steps, but I plan on making them cleanly separate. If you wanted to just plug in code for "iterate to a better guess," it should be easy once I make those changes.


If there are multiple algorithms that also use iterative gradient descent to get to the "best" solution, then we could merge multiple gradients to hopefully get a more robust/better gradient for a solution. Basically, to do this, all an algorithm would need to provide is "how good of an answer is the point at (x,y,z)?"
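One possible shape for that interface (names are mine, nothing that exists in libsurvive yet): each algorithm exposes a cost function answering that question, and a combined gradient is just the sum of finite-difference gradients over every registered cost.

```c
/* Hypothetical plug-in interface: one cost function per algorithm,
 * evaluated at (x, y, z) with an algorithm-specific context pointer. */
typedef double (*cost_fn)(double x, double y, double z, void *ctx);

void combined_gradient(const cost_fn *costs, void *const *ctxs, int n,
                       double x, double y, double z, double grad[3])
{
    const double h = 1e-6;                 /* finite-difference step */
    grad[0] = grad[1] = grad[2] = 0.0;
    for (int i = 0; i < n; i++) {
        double e0 = costs[i](x, y, z, ctxs[i]);
        grad[0] += (costs[i](x + h, y, z, ctxs[i]) - e0) / h;
        grad[1] += (costs[i](x, y + h, z, ctxs[i]) - e0) / h;
        grad[2] += (costs[i](x, y, z + h, ctxs[i]) - e0) / h;
    }
}
```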
