
HTC Vive lighthouse custom tracking

3D position tracking using our own sensor design, an Altera FPGA for decoding the signals, and custom code for triangulation and pose estimation

HTC Vive lighthouse tracking is super nice in terms of accuracy and cost. While a typical mocap system costs a lot of money, a pair of lighthouses costs only approx. 250 euro. Then of course you need sensors, but those are rather cheap as well (approx. 5 euro). In this log we are going to describe our system, which replicates the functionality of the HTC system, i.e. 3D tracking of custom sensors and pose estimation of objects.

We are a student team from Munich, Germany. Our goal is to make Roboy balance and walk. For this we need accurate tracking. We decided to replicate HTC lighthouse tracking for our purposes and in November 2016 a fascinating journey began.
Check out this excellent review of the HTC lighthouse tracking system.

At first we tried decoding the signals from the sensors using an Intel Edison and an MKR1000, which in the case of the Edison turned out to be impossible and in the case of the MKR was limited to a small number of sensors. On the Edison the hardware interrupts were not handled fast enough, because interrupts are serviced by threads. We also tried using the Edison's MCU, which wasn't fit for the job either.

The MKR was simply overwhelmed by all the interrupts.

We disassembled one of the HTC Vive controllers to get our hands on those sensors. We noticed the HTC controllers use an iCE FPGA, so we figured if they use one, there must be a reason.

We then soldered VCC, GND and signal copper wires (0.1 mm), using enough flux, and covered the sensor with a bit of glue to protect it from accidental damage.

In the previous prototype we routed all signal cables coming from the sensors in parallel. This turned out to be a bad idea: because of inductive coupling the signals pollute each other. In the Vive controller this is dealt with by shielding each signal line with VCC and GND, so that's what we are doing as well.

In the following picture you can see the complete setup:

  1. The custom object, with 4 sensors
  2. The DE0-Nano FPGA
  3. The MKR1000

[Note: This is the old setup. New updated setup is described in section 7.]

Notice that there are only 4 sensor signal cables (grey, blue, yellow, red). The other cables are VCC (purple, orange) and GND (green, brown).

The connection to the MKR is via SPI, where the MKR acts as the master. An additional pin notifies the MKR when new data is available, which triggers the SPI transfer.

Our Vive tracking consists of a couple of modules:

  1. Decoding the sensor signals and calculating the sweep durations (this is done on the fpga)
  2. Transmitting the sensor values via SPI to the MKR1000
  3. Transmitting the sensor values wirelessly via UDP to the host
  4. Triangulation of the lighthouse rays
  5. Distance estimation with respect to a calibrated object
  6. Relative pose correction using a calibrated object

1. Decoding sensor signals

[Note: This is the old decoder. New updated decoder is described in section 7.]

On the DE0-Nano we use a PLL to generate a 1 MHz (1 µs) clock.

Then we feed the sensor signals into one of these lighthouse modules.

The SPI module looks like this:
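
To give an idea of what is later done with these sweep durations, here is a minimal sketch (not the actual FPGA logic) of converting a sweep duration into a sweep angle, assuming a 60 Hz rotor and the beam crossing the lighthouse's forward direction halfway through the revolution:

    // Convert a sweep duration (microseconds between sync pulse and sensor hit)
    // into an angle around the rotor axis. Assumes a 60 Hz rotor, i.e. one
    // revolution every 8333.33 us, with the beam pointing straight ahead
    // halfway through the revolution.
    #include <cmath>

    double sweepDurationToAngle(double sweep_duration_us) {
        const double rotation_period_us = 1e6 / 60.0;
        const double center_us = rotation_period_us / 2.0;
        return (sweep_duration_us - center_us) * 2.0 * M_PI / rotation_period_us;
    }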

2. Transmitting the sensor signals via SPI

[Note: This is the old data format. New data format is described in section 7.]

The MKR acts as the SPI master. Whenever there is new data available (i.e. when the FPGA has decoded a valid sweep), it notifies the MKR via an extra pin. The MKR then starts downloading a 32-bit field, which encodes the data in the following way (a small decoding sketch follows the list):

  • bits 31 - 13: sweep duration (in microseconds)
  • bit 12: valid sweep
  • bit 11: data
  • bit 10: rotor
  • bit 9: lighthouse
  • bits 8 - 0: sensor id
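
A minimal sketch of unpacking this word on the host side (the struct and field names are ours, chosen to mirror the list above):

    #include <cstdint>

    struct SensorSample {
        uint32_t sweep_duration_us; // bits 31 - 13
        bool     valid;             // bit 12: valid sweep
        bool     data;              // bit 11
        bool     rotor;             // bit 10: which rotor
        bool     lighthouse;        // bit 9: which lighthouse
        uint16_t sensor_id;         // bits 8 - 0
    };

    SensorSample unpack(uint32_t word) {
        SensorSample s;
        s.sweep_duration_us = word >> 13;
        s.valid             = (word >> 12) & 0x1;
        s.data              = (word >> 11) & 0x1;
        s.rotor             = (word >> 10) & 0x1;
        s.lighthouse        = (word >> 9) & 0x1;
        s.sensor_id         = word & 0x1FF;
        return s;
    }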

3. Transmitting the 32-bit field via UDP

The host listens for UDP broadcast messages. We are using Google protocol buffers for the custom messages. When the host receives a trackedObjectConfig message, it opens sockets for the sensor and logging messages and sends the respective ports via a commandConfig message to the MKR. The MKR waits for this message and, once it is received, starts sending the sensor values augmented with a millisecond timestamp.

This sort of infrastructure is very convenient once you start using many tracked objects: you just turn the thing on and tracking is initiated. We implemented a YAML reader, which saves and reads information about an object (e.g. the relative sensor locations on the object, or a mesh to be used with it...).
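
The handshake works roughly as sketched below (plain structs stand in for the protobuf messages here, and the ports are just examples; the real system uses Google protocol buffers):

    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdint>
    #include <cstdio>

    struct TrackedObjectConfig { char name[32]; };                // placeholder for the protobuf message
    struct CommandConfig { uint16_t sensor_port, logging_port; }; // placeholder for the protobuf message

    int main() {
        // Listen for broadcasts from freshly booted tracked objects.
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(8000);               // example broadcast port
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(sock, (sockaddr *)&addr, sizeof(addr));

        TrackedObjectConfig cfg{};
        sockaddr_in sender{};
        socklen_t len = sizeof(sender);
        recvfrom(sock, &cfg, sizeof(cfg), 0, (sockaddr *)&sender, &len);
        printf("tracked object '%s' announced itself\n", cfg.name);

        // Reply with the ports the object should stream sensor and log data to.
        CommandConfig cmd{8001, 8002};             // example ports
        sendto(sock, &cmd, sizeof(cmd), 0, (sockaddr *)&sender, len);
        close(sock);
        return 0;
    }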

4. Triangulation...
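
As a rough illustration of this step (not our exact implementation): once both lighthouse rays to a sensor are expressed in a common frame as origin plus direction, the sensor position can be taken as the midpoint of the shortest segment between the two (generally skew) rays, e.g. using Eigen:

    #include <Eigen/Dense>

    // Midpoint of the shortest segment between two rays p1 + s*d1 and p2 + t*d2.
    Eigen::Vector3d triangulate(const Eigen::Vector3d &p1, const Eigen::Vector3d &d1,
                                const Eigen::Vector3d &p2, const Eigen::Vector3d &d2) {
        const Eigen::Vector3d w0 = p1 - p2;
        const double a = d1.dot(d1), b = d1.dot(d2), c = d2.dot(d2);
        const double d = d1.dot(w0), e = d2.dot(w0);
        const double denom = a * c - b * b;        // close to 0 if the rays are parallel
        const double s = (b * e - c * d) / denom;  // parameter along ray 1
        const double t = (a * e - b * d) / denom;  // parameter along ray 2
        return 0.5 * ((p1 + s * d1) + (p2 + t * d2));
    }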


thesis2.pdf

Master's thesis covering our custom lighthouse tracking

Adobe Portable Document Format - 39.48 MB - 03/16/2018 at 14:09


Ligthouse_Tracking_MidT.pdf

Midterm presentation slides of the project. Feel free to take a look :)

Adobe Portable Document Format - 14.53 MB - 02/01/2017 at 12:25


  • TS4231

    Simon Trendel, 05/14/2018 at 23:18

    So eventually we have moved to the much more integrated TS4231 from Triad. The reasons are size, price and assembly time. While Luis' chip works like a charm, the time to assemble the two-sided PCB with 30+ components has been very annoying. The TS4231 only requires 2 caps, a resistor and the photodiode. We assembled 63 of the new sensors in approx. 2 hours. You can find our KiCad layout here.

    The sensors are cheaper now as well, around 1.50 euro per sensor. If you make more, it gets even cheaper.

    The new sensor is approx. 10x9 mm, so it is smaller as well. If you are crazy enough to go for double-sided assembly, you might be able to halve that size.

    One downside of the TS4231 is that it needs to be configured via its two signal lines. On power-up it goes into a state where the envelope signal is output on the data line (which caused major confusion when we started using them). It does that for about a minute or so, then goes to SLEEP. We have written a small Verilog module called ts4231.v that automatically configures a connected sensor to go to the WATCH state, from whatever state it is in. This module essentially does the same as the Arduino sample uploaded by Triad. I don't know why Triad decided to design the sensors like this. It would have been cool if the sensor just output the envelope signal on a single line without going to the SLEEP state. But the way it is designed, you will definitely need two wires per sensor. Which sucks of course, but I guess that's the price for this pretty little fellow, which also gives you the modulated infrared signal on the data line (lighthouse v2, we are coming).

    So far we are very happy with the new sensors. They seem to be a lot more robust against shallow incidence of light and in general less noisy than Luis' design.

    Currently we are designing tracking sleeves with an integrated FPGA + ESP8266 solution, so stay tuned...

  • Masterthesis

    Simon Trendel, 03/16/2018 at 14:09

    So I finished my thesis yesterday. It covers the motor control and communication infrastructure of Roboy 2.0, but also describes in more detail how our custom lighthouse tracking works. I directly compared our system with the HTC Vive by mounting a Vive controller onto our calibration object. We are pretty close, and it will get even better for sure.

    You can find the thesis in the files section.

  • lighthouse coordinate frame

    Simon Trendel, 01/11/2018 at 22:02

    Opening the lighthouse reveals the motor rotation axis

    Side, bottom and front dimensions

    We have to take this into account for the calibration...

  • in your gibbous phase

    Simon Trendel, 01/10/2018 at 23:55

    The sensors are probably held up in customs...

    So far, we haven't seen anyone actually using the mysterious factory calibration values, although it might be very handy to just use the OOTX values... In the end, calibrating your 'camera' is something one has to do all the time. For the lighthouses, unfortunately, there is no nice OpenCV example.

    The plan is to calibrate them with our own correction function (might be close to the original or not, but should do the job). 

    The function we are using is

    // phase: constant angle offset; curve: quadratic bending of the sweep plane;
    // gibmag/gibphase: distortion with respect to the motor angle (see below)
    elevation += phase[lighthouse][VERTICAL];
    elevation += curve[lighthouse][VERTICAL]*pow(sin(elevation)*cos(azimuth),2.0) + gibmag[lighthouse][VERTICAL]*cos(elevation+gibphase[lighthouse][VERTICAL]);
    azimuth += phase[lighthouse][HORIZONTAL];
    azimuth += curve[lighthouse][HORIZONTAL]*pow(cos(elevation),2.0) + gibmag[lighthouse][HORIZONTAL]*cos(azimuth+gibphase[lighthouse][HORIZONTAL]);

     The phase is a constant angle offset. The curve scales a quadratic function that curves the lighthouse plane. The gibbous magnitude and phase describe a distortion with respect to the motor angle.

    We are using a least-squares optimizer to minimize the difference between the corrected angles and the ground-truth angles over the above parameters.
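
    A rough sketch of what such an optimization could look like with Ceres (the parameter layout and solver choice here are illustrative, not necessarily our exact code); each sample holds the measured and the ground-truth elevation/azimuth for one photodiode:

    #include <ceres/ceres.h>
    #include <array>
    #include <cmath>
    #include <vector>

    // One parameter block of 8 values per lighthouse:
    // [phase_v, curve_v, gibphase_v, gibmag_v, phase_h, curve_h, gibphase_h, gibmag_h]
    struct CalibrationResidual {
        CalibrationResidual(double meas_el, double meas_az, double gt_el, double gt_az)
            : meas_el_(meas_el), meas_az_(meas_az), gt_el_(gt_el), gt_az_(gt_az) {}

        template <typename T>
        bool operator()(const T *const p, T *residual) const {
            using std::sin; using std::cos;
            // Apply the correction function from above with the candidate parameters.
            T el = T(meas_el_) + p[0];
            T s  = sin(el) * cos(T(meas_az_));
            el += p[1] * s * s + p[3] * cos(el + p[2]);
            T az = T(meas_az_) + p[4];
            T c  = cos(el);
            az += p[5] * c * c + p[7] * cos(az + p[6]);
            residual[0] = el - T(gt_el_);
            residual[1] = az - T(gt_az_);
            return true;
        }
        double meas_el_, meas_az_, gt_el_, gt_az_;
    };

    // samples: {measured elevation, measured azimuth, true elevation, true azimuth}
    void calibrate(const std::vector<std::array<double, 4>> &samples, double *params) {
        ceres::Problem problem;
        for (const auto &s : samples)
            problem.AddResidualBlock(
                new ceres::AutoDiffCostFunction<CalibrationResidual, 2, 8>(
                    new CalibrationResidual(s[0], s[1], s[2], s[3])),
                nullptr, params);
        ceres::Solver::Options options;
        ceres::Solver::Summary summary;
        ceres::Solve(options, &problem, &summary);
    }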

    The result looks good...in simulation ;P

    Now we gotta build a real-life calibration platform with known angles...

    UPDATE:

    We decided to laser-cut a plate, such that the photodiodes are at known distances.

    Using the seams between the tiles and a laser distance meter, we tried to place the board as precisely as possible with respect to the lighthouses. We calibrated each lighthouse individually using the minimizer mentioned above. However, we adapted the optical model to be closer to reality: instead of using spherical coordinates (i.e. a pinhole camera), the motors are modelled with their respective offsets and as sweeping planes. This is important because otherwise any estimated curve parameter would certainly be wrong under the pinhole model. So far we have left tilt aside.
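
    The gist of the sweeping-plane model is sketched below (the axis conventions and the way the offsets enter are illustrative assumptions, not our exact calibration code): each rotor sweeps a plane through a point on its motor axis, and the ray towards the sensor is the intersection line of the two planes.

    #include <Eigen/Dense>
    #include <cmath>

    // Conventions assumed here: the lighthouse looks along +z, one rotor sweeps
    // about the vertical y axis (azimuth), the other about the horizontal x axis
    // (elevation). o1 / o2 are points on the respective motor axes; with
    // o1 == o2 == 0 this collapses to the pinhole model.
    void sweepPlaneRay(double azimuth, double elevation,
                       const Eigen::Vector3d &o1, const Eigen::Vector3d &o2,
                       Eigen::Vector3d &origin, Eigen::Vector3d &direction) {
        // Unit normals of the two swept planes.
        Eigen::Vector3d n1(std::cos(azimuth), 0.0, -std::sin(azimuth));
        Eigen::Vector3d n2(0.0, -std::cos(elevation), std::sin(elevation));
        direction = n2.cross(n1).normalized();   // intersection line of the planes
        // A point lying on both planes, expressed as a combination of the normals.
        const double b = n1.dot(n2), c1 = n1.dot(o1), c2 = n2.dot(o2);
        const double alpha = (c1 - b * c2) / (1.0 - b * b);
        const double beta  = (c2 - b * c1) / (1.0 - b * b);
        origin = alpha * n1 + beta * n2;
    }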

    After the calibration, the relative pose estimator also finally started to produce reasonable results.

    This in turn results in a relative lighthouse pose correction that is not so bad.

    The pose estimator for the calibration object produces OK results too.

    This truly feels like some light at the end of the tunnel. The pose estimation has never worked properly with real sensors, so the calibration seems to be crucial for any tracking application. The calibration is rather annoying, since it highly depends on how accurately you are able to place the calibration object relative to the lighthouse. One of the institutes at TUM has a Vicon system, so that might be the next step: record a lot of data and get very accurate calibration values. With a lot of data it might also be possible to figure out which permutation of sin/cos/+/- and in which order the original HTC factory values are applied.

  • Pushing simulation while waiting for sensors

    Simon Trendel, 01/02/2018 at 01:00

    So now we are at a state where we want object pose estimation from our sensor data. This is kind of problematic, because the lighthouses are not perfect: they are optical systems with certain manufacturing tolerances. There is a reddit discussion concerning the factory calibration values and their meaning. This video explains the values to some extent, while this video describes some of the strategies to cope with those values.

    First off, we had to implement an OOTX decoder on the FPGA that continuously decodes the frames from both lighthouses. That was fun, and the decoded values are sent via a ROS message to our host.
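
    For reference, here is a host-side sketch of the OOTX framing as it is documented by the community (one data bit arrives per sync pulse): wait for a preamble of seventeen 0 bits followed by a 1, then read 16-bit words, each followed by a sync bit that must be 1; the first word is the payload length, followed by the payload and a CRC32. Our actual decoder is written in Verilog and looks different.

    #include <cstdint>
    #include <vector>

    class OotxDecoder {
    public:
        void feed(bool bit) {              // one decoded data bit per sync pulse
            if (!synced_) {
                if (bit) {
                    if (zeros_ >= 17) { synced_ = true; word_ = 0; bits_ = 0; payload_.clear(); }
                    zeros_ = 0;
                } else {
                    zeros_++;
                }
                return;
            }
            if (bits_ < 16) {              // accumulate the next 16-bit word
                word_ = (word_ << 1) | bit;
                bits_++;
                return;
            }
            if (!bit) { synced_ = false; zeros_ = 1; return; }  // missing sync bit, resync
            payload_.push_back(word_ >> 8);                     // byte order glossed over here
            payload_.push_back(word_ & 0xFF);
            word_ = 0; bits_ = 0;
            // A complete decoder would now use the first word as the payload length,
            // wait for the full payload and verify the trailing CRC32.
        }
    private:
        bool synced_ = false;
        int zeros_ = 0, bits_ = 0;
        uint16_t word_ = 0;
        std::vector<uint8_t> payload_;
    };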

    So far we have been using exclusively our own custom sensors, and we are pretty confident that they produce signals comparable to the original HTC ones. But you never know until you try. Also, Valve has announced their second generation of lighthouses, which will do something called sync-on-beam. We are not sure what exactly that is, but we strongly believe it is related to a modulation of the MHz infrared light to convey information while the beam sweeps over each sensor. This would allow for an almost arbitrary number of lighthouses. Our sensors are not able to decode such a signal; Triad's second-generation sensor amplifier, the TS4231, is. So we ordered those and had the PCBs manufactured.

    Over the holidays our pick-and-place machine is resting. Which is boring of course, but we decided to push simulation further and extend the overall system to be able to track multiple objects. We are using Autodesk's Fusion 360 for CAD design, because it is free for students and it has an extensive Python API. The API allows you to write your own plugins for Fusion. We are working on tendon-driven robots, so those plugins come in very handy, for example when defining the attachment points of the tendons.

    So the first thing we did was to implement a plugin for placing the sensor positions on our robots. When you run the plugin, it lets you choose for which link you want to define the sensor positions, and then you simply click on your model where you want those sensors. I might upload a video later. You can find the plugin here: https://github.com/Roboy/DarkRoomGenerator

    Another plugin we have written is an SDF exporter. It lets you export your robot from Fusion to SDF. Then you can load that model into the Gazebo simulator. This plugin can also export the lighthouse sensor positions. These can then be loaded by our system and used for pose estimation. You can find the plugin here: https://github.com/Roboy/SDFusion

    In the following video you can see a simplified model of the upper body of Roboy 2.0. The green spheres are the simulated sensor positions. Currently our pose estimator tries to match the relative sensor positions (as exported from Fusion) to the triangulated 3D positions. The resulting pose is quite OK, although it fails for some orientations. This could be because of ambiguity, but needs further investigation. (EDIT: the pose estimator works fine, it was just my crappy PC that couldn't keep up with the computation. This caused the simulated sensor values to be updated asynchronously.)

    Because the visual tracking alone is not good enough, we extended our system to also include IMU data. This is done via the extended Kalman filter of the ROS robot_localization package. So now everything is prepared for the real data. Once the new sensors are assembled, we will try to figure out how to use those factory calibration values to get good lighthouse angles.

  • custom sensors

    Simon Trendel, 09/01/2017 at 22:58

    The circuit of the original HTC sensor is out there. You can also buy those sensors from the US. We decided to build our own sensor anyway. As it turned out, our sensor has a pretty cool property. More on that later...

    The HTC sensor works by measuring the envelope signal of modulated light pulses. Luis' sensor is based on cascaded amplification stages and signal-conditioning filters.

    After some breadboard prototyping, Luis tweaked the different amplification levels.


    We etched the circuit and built a more compact prototype.


    In the picture above you can see the cool feature of our sensor: the photodiode can be placed up to 1 meter away from the amplification circuit using coax cable. We are planning on building a full-body morph suit.

    Then we miniaturized the sensor further. Check out the design here.


    We ordered the PCBs in China and assembled them in the makerspace. The result is a 16x12 mm sensor. We moved from the DE0-Nano to the DE10-Nano-SoC. The sensor data is retrieved from the FPGA via the Altera master/slave interface and sent to the host via a ROS message.

  • IMU MPU6050

    Simon Trendel, 03/08/2017 at 23:04

    The next step was to retrieve IMU data from our tracked objects. Even though the DE0-Nano comes with an accelerometer, we are using an external MPU6050. They are quite cheap, around 3 euro, and come with an accelerometer, a gyroscope and the Digital Motion Processor (DMP), which fuses the sensor values into useful quantities such as a quaternion or the gravity vector. The data is retrieved on the ESP8266, which uses I2C to communicate with the MPU6050. The MPU6050 has an extra interrupt pin, which signals data availability. This interrupt pin is routed into the FPGA and the connection is controlled there. This is necessary because we are using GPIO2 on the ESP, which needs to be low on boot from internal flash (if GPIO2 is high, this means boot from SD card). We also progressed on the command socket infrastructure, which allows rudimentary control of the hardware from the GUI (we can now reboot the ESP, for example, or toggle IMU streaming). In the video below you see the quaternion from the MPU6050 DMP estimation streamed into our rviz plugin and visualized with the red cube.
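
    As a minimal sketch of the ESP8266 side, assuming the widely used i2cdevlib MPU6050 DMP library (our actual firmware additionally forwards the quaternion via UDP and handles the interrupt line routed through the FPGA):

    #include <Wire.h>
    #include "MPU6050_6Axis_MotionApps20.h"   // i2cdevlib DMP support

    MPU6050 mpu;
    uint16_t packetSize = 0;
    uint8_t fifoBuffer[64];

    void setup() {
        Serial.begin(115200);
        Wire.begin();
        mpu.initialize();
        if (mpu.dmpInitialize() == 0) {       // 0 means the DMP image loaded fine
            mpu.setDMPEnabled(true);
            packetSize = mpu.dmpGetFIFOPacketSize();
        }
    }

    void loop() {
        if (packetSize && mpu.getFIFOCount() >= packetSize) {
            mpu.getFIFOBytes(fifoBuffer, packetSize);
            Quaternion q;                     // orientation fused by the DMP
            mpu.dmpGetQuaternion(&q, fifoBuffer);
            Serial.print(q.w); Serial.print(' ');
            Serial.print(q.x); Serial.print(' ');
            Serial.print(q.y); Serial.print(' ');
            Serial.println(q.z);
        }
    }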

    The orientation looks very stable, so we hope to skip the bloody EPnP stuff... we will see.

    Below you can see how the MPU6050 is wired up to the esp and the de0-nano.

  • using esp8266 instead of mkr

    Simon Trendel, 02/28/2017 at 23:49

    The MKR1000 has always been an intermediate solution. Even though it is quite small, there is an even smaller WiFi chip, the ESP8266. It has a programmable microcontroller and just needs some resistors, two buttons and a serial programmer to be programmed.

    NOTE: there seems to be a lot of confusion out there about how to correctly wire the little guy in order to program it. Therefore the following should help you get started:

    • do not use the 3.3 V VCC of your serial programmer (USB does not seem to provide enough current for the ESP, which can peak at 200 mA); instead use a voltage supply with enough current. The ESP seems to be very sensitive to incorrect voltage, which leads to undefined behaviour causing major headaches
    • the slave-select pin is GPIO15, which needs to be low on boot
    • use the following wiring scheme:

    When programming the ESP (be it from the Arduino IDE or the command line), hold both buttons pressed, start the download, release reset, then flash. Your bin should start downloading; if not, try again (make sure you pay attention to the NOTE above).

    We removed the SPI core we had been using so far, simply because we didn't trust it. We noticed some glitches in our sensor values, and one suspicion was the SPI core. Anyway... here is how it looks now:

    The ESP acts as the SPI slave (which makes a lot more sense, because the FPGA should control when data is sent out). One SPI frame for the ESP consists of 32 bytes = 256 bits = 8 sensor values. In each lighthouse cycle a frame with up to eight sensor values is transmitted to the ESP via SPI. The code for the ESP hasn't changed much compared to the previous MKR code, except for the SPI slave part and the fact that we now send out eight values via UDP in one go, instead of each sensor value individually.
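
    A small sketch of how such a frame can be pushed out in one UDP packet on the ESP (Arduino core; the buffer handling in our real firmware differs):

    #include <ESP8266WiFi.h>
    #include <WiFiUdp.h>

    WiFiUDP udp;

    // One SPI frame = 32 bytes = 8 sensor values of 32 bit each (layout as in
    // the SPI section above); the whole frame goes out as a single UDP packet.
    void sendFrame(const uint8_t *frame, const IPAddress &host, uint16_t sensor_port) {
        udp.beginPacket(host, sensor_port);
        udp.write(frame, 32);
        udp.endPacket();
    }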


Discussions

Anran wrote 05/06/2022 at 11:06

Excellent work!


CNLohr wrote 11/01/2017 at 03:38

Have you gotten a chance to check out all of our libsurvive research on this?  Got a pretty stable sensor going with just an ESP8266 + TSV992 TIA, able to resolve frequency, phase, etc. really well with very few parts.  Sampling at 80 MHz on that $2 proc = sweet sweet accuracy.


Simon Trendel wrote 12/02/2017 at 14:23

Ah sweet, I haven't been checking on your project for quite a while now. I'm wondering, now that you mention the ESP: what's the upper limit of lighthouse sensors you can use?

For us, it's limited by the number of GPIOs on the FPGA (like 80 or so).

