• Finding the Pose

    Mike Turvey 05/30/2017 at 18:09 0 comments

    I've been working a lot with the libsurvive project to get this project going. It's very well aligned with my goals, and a small community of awesome hackers has converged on it, with some unofficial support from the Valve guys from time to time. My main area of focus has been finding the pose of a tracked object given the angle data from the lighthouse. This has been a much bigger undertaking than I ever imagined. I finally have pose estimation working at a somewhat usable level.


  • Working with the Vive Tracker

    Mike Turvey 02/24/2017 at 18:23 0 comments

    I'm willing to try different approaches to figure out the right mix of easy and cheap for solving this whole tracking problem. HTC was kind enough to send me one of their new trackers to test out. The tracker only supports sending position data out over USB, so the Due isn't a good choice here. Instead, it may be something more like a Raspberry Pi. We'll see. As for the software stack, the tracker should be a great fit with libsurvive. From what I can tell, it will look a lot like a Vive controller, although of course it has a different USB device ID.

    So, I now have two lighthouses and a tracker. But no controllers or HMD. I feel like I'm starting to build a Vive system piecemeal.

    So, for now I'm focusing my efforts on getting this thing to work with libsurvive, as that seems to be the most promising path forward. I'm definitely not giving up on the microcontroller path long term, as I think it has a lot of promise, too. But a solution using this tracker would be much easier for others to duplicate in the near term.

  • Solving for position using tori point clouds

    Mike Turvey 02/08/2017 at 21:46 2 comments

    It's been a while since my last log, but progress is being made. The big problem has been how to convert from point locations & lighthouse angles into a known position of the tracked object in space. The first part of that problem involves figuring out where the lighthouses are in relation to the tracked object. The libsurvive project is also solving much the same problem, so I've been working in the context of that project.

    There are a number of ways to solve this problem, and it's not clear at this point which solution will work the best. The approach I'm taking is as follows:

    • Given any pair of points (sensors) and the angle they subtend as seen from the lighthouse, the set of possible lighthouse locations is defined by a torus.
    • Given a set of tori, look for the spot where all of the tori converge at a single point; that will be the location of the lighthouse.

    Now, that sounds simple enough. But I'm not aware of any simple solution for finding the converging location for multiple tori, particularly when there is noise in the signal so they don't all converge perfectly.

    So, here's my solution:

    For each torus, generate a point cloud. Then, for one of the tori, for each point on the torus, compute a least-squares distance to the nearest point on each other torus. Whichever point has the lowest least-squares distance will be a good estimate of the location of the lighthouse.
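
    To make that search step concrete, here is a minimal sketch in C++ (illustration only, not the libsurvive code): it assumes the point clouds have already been generated, and it uses a brute-force nearest-point search that a real implementation would replace with a spatial index. It also assumes at least two non-empty clouds.

    #include <algorithm>
    #include <cstddef>
    #include <limits>
    #include <vector>

    struct Vec3 { double x, y, z; };

    // Hypothetical container: sampled points on one torus of possible lighthouse positions.
    using TorusCloud = std::vector<Vec3>;

    static double distSq(const Vec3& a, const Vec3& b) {
        double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return dx * dx + dy * dy + dz * dz;
    }

    // Squared distance from p to the closest sample point of one torus cloud.
    static double nearestDistSq(const Vec3& p, const TorusCloud& cloud) {
        double best = std::numeric_limits<double>::max();
        for (const Vec3& q : cloud)
            best = std::min(best, distSq(p, q));
        return best;
    }

    // For each candidate point on clouds[0], sum the squared distance to the nearest
    // point on every other cloud; the candidate with the smallest sum is the estimate.
    Vec3 estimateLighthouse(const std::vector<TorusCloud>& clouds) {
        Vec3 bestPoint = clouds[0][0];
        double bestScore = std::numeric_limits<double>::max();
        for (const Vec3& candidate : clouds[0]) {
            double score = 0;
            for (std::size_t i = 1; i < clouds.size(); ++i)
                score += nearestDistSq(candidate, clouds[i]);
            if (score < bestScore) {
                bestScore = score;
                bestPoint = candidate;
            }
        }
        return bestPoint;
    }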

    Now, you need to refine that initial guess to get a higher precision estimate of the lighthouse location. Right now I am using the same approach as above, except just rendering part of the tori at higher resolution. It looks something like this:

    In this image, you can see that there are so many tori being calculated that it's hard to distinguish each one. But you can also see the area that looks denser. The red "star" indicates the location of the initial guess, and as a result a second pass was done just in that area, with each torus being rendered at much higher resolution. Looking very closely, you can even see that a third pass was done in the center of that patch to eke out the most precision possible with this approach. While this seems to work fairly well, it's also somewhat memory intensive, and very CPU intensive.

    So, I'm planning a different approach for refining the initial estimate that avoids a point cloud altogether by employing gradient descent. Basically, it involves looking at the 3-dimensional gradient of the least-squares distance to each torus, and iteratively moving the "best estimate" point along this gradient until it reaches a local minimum (which ideally will also be the global minimum).
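
    Here is a minimal sketch of that refinement idea (illustration only, with the torus error function left as a parameter): estimate the gradient numerically with central differences and walk downhill, shrinking the step whenever it stops helping.

    #include <cmath>

    struct Vec3 { double x, y, z; };

    // Refine an initial lighthouse estimate by gradient descent on an error function.
    // errorFn is assumed to return the sum of squared distances from a point to each torus.
    template <typename ErrorFn>
    Vec3 refineEstimate(Vec3 p, ErrorFn errorFn, int maxIter = 1000) {
        const double h = 1e-4;   // finite-difference step (meters)
        double step = 0.05;      // initial descent step (meters)
        double err = errorFn(p);
        for (int i = 0; i < maxIter && step > 1e-6; ++i) {
            // Central-difference estimate of the 3D gradient of the error.
            Vec3 g;
            g.x = (errorFn(Vec3{p.x + h, p.y, p.z}) - errorFn(Vec3{p.x - h, p.y, p.z})) / (2 * h);
            g.y = (errorFn(Vec3{p.x, p.y + h, p.z}) - errorFn(Vec3{p.x, p.y - h, p.z})) / (2 * h);
            g.z = (errorFn(Vec3{p.x, p.y, p.z + h}) - errorFn(Vec3{p.x, p.y, p.z - h})) / (2 * h);
            double norm = std::sqrt(g.x * g.x + g.y * g.y + g.z * g.z);
            if (norm < 1e-12) break;                 // flat spot: nothing left to do
            // Try a step downhill along the gradient; halve the step if it doesn't improve.
            Vec3 trial{p.x - step * g.x / norm, p.y - step * g.y / norm, p.z - step * g.z / norm};
            double trialErr = errorFn(trial);
            if (trialErr < err) { p = trial; err = trialErr; }
            else                { step *= 0.5; }
        }
        return p;   // local (and hopefully global) minimum
    }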

  • Intel Edison my friend

    Simon Trendel 11/06/2016 at 23:07 5 comments

    Hi everyone, I'm part of a student team in Munich, Germany. We are trying to use the Vive tracking system for our humanoid legs. I joined this project recently and wanted to share some insights we gained using the sensors with different hardware than the Arduino Due and the TS3633-CM1 modules, namely an Intel Edison and a Genuino 101, with sensors we got from disassembling a Vive controller. This log entry serves as additional information for the interested reader, or for people who might have some microcontrollers lying around. In the future we will work more closely with Mike and in particular try to use the same hardware.

    We wanted to order the TS3633-CM1, but it is currently sold out. We couldn't wait to get started, so we disassembled one of the Vive controllers. Each controller contains 24 sensors. At $130 per controller, that is a lot cheaper than the $6.95 per module Triad wants (24 × $6.95 ≈ $167, plus the ~$100 extra shipping we pay to Germany).

    As you can see, connecting the sensors is quite difficult, because the contacts are tiny.

    For measuring the pulse width, we wanted to use an Intel Edison, because we had one lying around. After hacking for a while and scanning the forums, it became apparent that the interrupts on the Edison are in fact threaded and simply not fast enough for measuring the 62 us - 135 us pulses.

    #include <iostream>
    #include <vector>

    #include <chrono>
    #include <unistd.h>
    #include <mraa.h>
    #include <mraa/gpio.h>

    using namespace std::chrono;

    static volatile int counter_falling = 0, counter_rising = 0;
    static high_resolution_clock::time_point t1, t2;
    static std::vector<long long> timeElapsed;   // measured pulse widths in microseconds
    static bool valid;


    // Interrupt routine for the falling edge of the TS3633 sensor signal
    void IntrValveFalling(void* arg){
        counter_falling += 1;
    }

    // Interrupt routine for the rising edge of the TS3633 sensor signal
    void IntrValveRising(void* arg){
        counter_rising += 1;
    }

    // Interrupt routine for both edges: measure the width of each pulse as the
    // time between a rising edge and the following falling edge
    void IntrValveBoth(void* arg){
        if(valid){
            // falling edge: the pulse just ended, record its width
            counter_falling++;
            valid = false;
            t2 = high_resolution_clock::now();
            microseconds us = duration_cast<microseconds>(t2 - t1);
            timeElapsed.push_back(us.count());
        }else{
            // rising edge: the pulse just started
            counter_rising++;
            t1 = high_resolution_clock::now();
            valid = true;
        }
    }


    int main(int argc, char** argv){
        mraa_result_t rv;
        mraa_init();
        const char* board_name = mraa_get_platform_name();
        fprintf(stdout, "Version: %s\n Running on %s\n", mraa_get_version(), board_name);

        // J17-7 on the Intel Edison Breakout Board
        mraa_gpio_context m_gpio = mraa_gpio_init(6);
        if(m_gpio == NULL){
            std::cout << " cannot open J17-7 pin...\t closing" << std::endl;
            return 1;
        }

        mraa_gpio_dir(m_gpio, MRAA_GPIO_IN);

        rv = mraa_gpio_isr(m_gpio, MRAA_GPIO_EDGE_BOTH, &IntrValveBoth, NULL);
        if(rv != MRAA_SUCCESS) std::cout << "MRAA return code: " << rv << std::endl;

        // J17-8 on the Intel Edison Breakout Board
        mraa_gpio_context m_gpio_2 = mraa_gpio_init(7);
        if(m_gpio_2 == NULL){
            std::cout << " cannot open J17-8 pin...\t closing" << std::endl;
            return 1;
        }

        mraa_gpio_dir(m_gpio_2, MRAA_GPIO_IN);

        std::cout << "reading GPIO J17-7: " << mraa_gpio_read(m_gpio) << std::endl;
        std::cout << "reading GPIO J17-8: " << mraa_gpio_read(m_gpio_2) << std::endl;

        // collect interrupts for 5 seconds, then print what was measured
        sleep(5);
        std::cout << "Counter falling= " << counter_falling << std::endl;
        std::cout << "Counter rising= " << counter_rising << std::endl;

        counter_rising = 0;
        for(auto e : timeElapsed){
            std::cout << "counter: " << counter_rising << " E: " << e << std::endl;
            counter_rising++;
        }

        mraa_gpio_isr_exit(m_gpio);
        mraa_gpio_close(m_gpio);

        mraa_gpio_isr_exit(m_gpio_2);
        mraa_gpio_close(m_gpio_2);
        mraa_deinit();
        return MRAA_SUCCESS;
    }

    There is a way to do it with busy-wait loops, which unfortunately can't be used with multiple sensors. Here is the code:

    #include <stdio.h>
    #include <unistd.h> 
    #include <mraa.h>
    
    int main() {
    	 mraa_init();
    	 mraa_gpio_context pin = mraa_gpio_init(...
    Read more »
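
    The listing above is cut off, so as a rough sketch only (not the original code from this log), a busy-wait measurement on a single mraa GPIO pin could look something like this. It ties up the CPU polling the pin, which is exactly why it doesn't scale to several sensors.

    #include <cstdio>
    #include <chrono>
    #include <mraa.h>
    #include <mraa/gpio.h>

    // Busy-wait pulse width measurement on one pin: spin until the line goes high,
    // then spin until it goes low again, and report the elapsed time in microseconds.
    int main() {
        using namespace std::chrono;
        mraa_init();
        mraa_gpio_context pin = mraa_gpio_init(6);   // J17-7, as in the ISR example above
        if (pin == NULL) return 1;
        mraa_gpio_dir(pin, MRAA_GPIO_IN);

        for (int i = 0; i < 100; ++i) {
            while (mraa_gpio_read(pin) == 0) { }     // wait for the rising edge
            high_resolution_clock::time_point start = high_resolution_clock::now();
            while (mraa_gpio_read(pin) == 1) { }     // wait for the falling edge
            long long us = duration_cast<microseconds>(high_resolution_clock::now() - start).count();
            printf("pulse width: %lld us\n", us);
        }

        mraa_gpio_close(pin);
        mraa_deinit();
        return 0;
    }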

  • Strategy

    Mike Turvey 10/23/2016 at 22:54 0 comments

    I'm pretty convinced at this point that the algorithm to use to find location is EPnP. The authors of the original paper have provided sample code to run the algorithm, which is awesome. But, the sample code depends on OpenCV libraries, so that's a bit of a hitch. I'll have to pull out or rewrite the parts of OpenCV that it uses in order to get this loaded on the Arduino Due.

    So, here's the current plan:

    1) Modify/use the existing EPnP code while running on a desktop to derive the position and pose of the sensor board relative to the lighthouse. Values will be captured and analyzed offline, i.e. this is not realtime, just proof that I can get the code to work.

    2) Build a derivative EPnP library that does not rely on OpenCV. Ensure that it behaves the same as (1).

    3) Load the code from (2) onto the Arduino Due, and have it calculate pose information in realtime. Benchmark, and see if it's "good enough".

    4) If the speed of the algorithm in (3) is too slow, iteratively attempt to optimize the algorithm, likely by trying to push as much math as possible into integer math instead of floating point math. Not sure how easy it will be to do this with EPnP, but probably worth a shot.

    5) Build support for on-device calibration of a world reference frame. I'm expecting this to have a user experience something like:

    1) Place the tracked device where the origin should be & press a button.
    2) Place the tracked device at least a meter away in the +Y direction & press a button.
    3) Place the tracked device about a meter away in the +X direction & press a button.

    At this point, the tracked object should have a well-defined world coordinate system, and it should know how that coordinate system maps to the lighthouse's coordinate system. The Arduino should then be able to consistently spit out the tracked object's location in the world coordinate system.
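
    Here is a rough sketch of the math behind that calibration step (illustration only; the names are made up): build an orthonormal world basis from the three captured positions, then express every new measurement in it.

    #include <cmath>

    struct Vec3 { double x, y, z; };

    static Vec3 sub(Vec3 a, Vec3 b)      { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static double dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3 scale(Vec3 a, double s)  { return {a.x * s, a.y * s, a.z * s}; }
    static Vec3 cross(Vec3 a, Vec3 b)    { return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x}; }
    static Vec3 normalize(Vec3 a)        { return scale(a, 1.0 / std::sqrt(dot(a, a))); }

    // World frame built from the three calibration captures (all in lighthouse coordinates).
    struct WorldFrame {
        Vec3 origin;              // captured at step 1
        Vec3 xAxis, yAxis, zAxis;
    };

    WorldFrame calibrate(Vec3 origin, Vec3 plusY, Vec3 plusX) {
        Vec3 y = normalize(sub(plusY, origin));                   // +Y direction from step 2
        Vec3 xRaw = sub(plusX, origin);                           // +X capture from step 3
        Vec3 x = normalize(sub(xRaw, scale(y, dot(xRaw, y))));    // remove any +Y component (Gram-Schmidt)
        Vec3 z = cross(x, y);                                     // right-handed +Z
        return {origin, x, y, z};
    }

    // Express a point measured in lighthouse coordinates in the world frame.
    Vec3 toWorld(const WorldFrame& f, Vec3 p) {
        Vec3 d = sub(p, f.origin);
        return {dot(d, f.xAxis), dot(d, f.yAxis), dot(d, f.zAxis)};
    }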

    And here's the part I really like: 6) It should spit out location data in standard NMEA sentences. That's the way just about all GPS receivers report location information. Why does that matter? It means you could hook this up to any device that expects a GPS receiver, but have it use lighthouse tracking instead. For example, you could hook it up to a small drone using an off-the-shelf UAV board, and program flight paths using off-the-shelf software, without any modifications.
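
    For reference, here is a minimal sketch of what emitting one such sentence could look like (illustration only, not project code). NMEA sentences are comma-separated ASCII fields ending in an XOR checksum of everything between the '$' and the '*'; the lat/lon/altitude values here would come from mapping the local world coordinates onto a chosen geographic origin.

    #include <cstdio>

    // XOR checksum over the characters between '$' and '*'.
    static unsigned char nmeaChecksum(const char* body) {
        unsigned char cs = 0;
        for (const char* p = body; *p; ++p) cs ^= (unsigned char)*p;
        return cs;
    }

    // Format a GGA sentence from latitude/longitude (decimal degrees) and altitude (meters).
    void printGGA(double latDeg, double lonDeg, double altMeters) {
        char latHemi = latDeg >= 0 ? 'N' : 'S';
        char lonHemi = lonDeg >= 0 ? 'E' : 'W';
        if (latDeg < 0) latDeg = -latDeg;
        if (lonDeg < 0) lonDeg = -lonDeg;
        // NMEA wants ddmm.mmmm / dddmm.mmmm (degrees and decimal minutes).
        int latD = (int)latDeg, lonD = (int)lonDeg;
        double latM = (latDeg - latD) * 60.0, lonM = (lonDeg - lonD) * 60.0;

        char body[96];
        snprintf(body, sizeof(body),
                 "GPGGA,000000.00,%02d%07.4f,%c,%03d%07.4f,%c,1,06,1.0,%.1f,M,0.0,M,,",
                 latD, latM, latHemi, lonD, lonM, lonHemi, altMeters);
        printf("$%s*%02X\r\n", body, (unsigned)nmeaChecksum(body));
    }

    A GPS-expecting autopilot would then just see a serial stream of sentences like $GPGGA,...*47 and never know the fix came from a lighthouse.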

  • Algorithms for Lighthouse Positioning

    Mike Turvey 10/18/2016 at 04:38 0 comments

    I've learned a lot in the last week about Kalman filters, the Direct Linear Transformation, the Perspective-n-Point problem, and more. The big breakthrough was realizing that you can treat the lighthouse and its sensors as just a really high resolution camera that only sees a few points. And as a bonus, you don't have to deal with the usual computer vision problem of identifying "important" features of an image.

    Perspective-n-Point is the problem of solving for the camera position and orientation given n points that the camera is able to see. This is effectively the problem we need to solve. (Technically, we want to know the position of the object relative to the camera, but that's a trivial difference.) There's been lots of research into the problem and a number of algorithms exist. The algorithms seem to fall in two categories: Iterative approaches that improve on an initial approximation, and algorithms that solve for the pose all at once. One of the most efficient strategies is an algorithm called EPnP, or Efficient PnP, which solves the problem in O(n) time. Once you have a good pose, if you can assume that the tracked object only moved a little between observations, it can be appropriate to use the previously calculated pose as the input to an iterative algorithm to get the new pose.
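
    To make the camera analogy concrete, here is a rough sketch (illustration only, not project code) of how the sweep angles could be handed to an off-the-shelf solver like OpenCV 3's cv::solvePnP. The mapping x = tan(horizontal angle), y = tan(vertical angle) onto a normalized image plane is an assumption about the lighthouse geometry.

    #include <cmath>
    #include <vector>
    #include <opencv2/core.hpp>
    #include <opencv2/calib3d.hpp>

    // Solve for the pose from known sensor positions on the object and the measured
    // sweep angles. The returned rvec/tvec transform sensor (object) coordinates into
    // the lighthouse frame, i.e. the pose of the object relative to the lighthouse.
    void solveLighthousePose(const std::vector<cv::Point3f>& sensorPositions,   // object frame, meters
                             const std::vector<cv::Point2f>& sweepAngles,       // radians: (horizontal, vertical)
                             cv::Mat& rvec, cv::Mat& tvec) {
        std::vector<cv::Point2f> imagePoints;
        for (const cv::Point2f& a : sweepAngles)
            imagePoints.push_back(cv::Point2f(std::tan(a.x), std::tan(a.y)));   // assumed projection model

        cv::Mat cameraMatrix = cv::Mat::eye(3, 3, CV_64F);   // normalized "camera", no lens
        cv::Mat distCoeffs;                                  // empty: no distortion

        // EPnP needs at least 4 correspondences; SOLVEPNP_EPNP selects the O(n) algorithm.
        cv::solvePnP(sensorPositions, imagePoints, cameraMatrix, distCoeffs,
                     rvec, tvec, false, cv::SOLVEPNP_EPNP);
    }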

    One concern is that the implementations of EPnP (and I suspect the other algorithms as well) work on floating point values, not integers. The Cortex-M3 in the Arduino Due does not have a floating point unit, and by at least one crude measure, floating point operations take ~40 times longer than integer operations. I'm doubtful that these algorithms would lend themselves to an integer variant of the solution.

    And, just to throw a wrench into all of the above goodness, it's worth noting that the Lighthouse technology isn't quite the same as a camera. That's because the two sweeps of the lasers (to detect the horizontal and vertical angles) do not occur at exactly the same time. In fact, they're ~8ms apart. While a simple algorithm may ignore this, an algorithm targeting maximum precision would need to take it into account. For high precision, integrating the values from an inertial measurement unit (IMU) would also be a good idea (just like the Vive controllers and headset). To integrate all of these different measurement updates into a single pose, Kalman filtering appears to be the way to go.

  • Trigonometry

    Mike Turvey 10/13/2016 at 20:55 12 comments

    So far the algorithm for deriving position from the lighthouse angles and known relative positions of the sensors is eluding me. I'm thinking that the solution will come from setting up a set of equations and then solving for the unknowns. And that's what I've been doing so far with pen and paper. Lots of trigonometry here, as I'm deriving all of my equations from trig relationships, setting up triangles wherever I can (they're everywhere!). At least it's a pretty awesome problem. I probably won't have much more time to work on it until the weekend. Just wanted to drop a note saying "I'm still working on it." More to come...

  • Major code rewrite

    Mike Turvey 10/08/2016 at 09:17 2 comments

    I haven't had as much time to work on this as I'd like. My daughter had hip surgery a couple of days ago, and she and my other kids have been my focus lately. Anyway, the last time I looked at this, I had some weird intermittent glitches that I was having a hard time tracking down. Given that the old code was basically taking the logic for a single sensor and running it six times (once for each sensor in the current setup), it wasn't really optimized and needed a major overhaul anyway. I spent a couple of hours tonight doing that redesign and rewrite.

    The first major change is that instead of having a separate ring buffer for each sensor, I now have a single ring buffer for all sensor data. This means that it's now obvious what order the sensors have been read. Previously, if processing lagged too much (usually because I was printing too much debug info to the serial port), it became impossible to keep the incoming sensor data in sync.
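
    Something like the following is what I mean by a single ring buffer for all sensors (a simplified sketch with made-up names, not the actual firmware): every edge event carries its sensor ID and timestamp, so the consumer sees events in exactly the order they were serviced. On a real microcontroller you would also want to disable interrupts (or use proper memory barriers) around the pop, and count dropped events.

    #include <cstdint>

    // One entry per sensor edge event, pushed from the interrupt handler.
    struct PulseEvent {
        uint8_t  sensorId;    // which of the 6 sensors fired
        bool     rising;      // rising or falling edge
        uint32_t timestamp;   // timer ticks when the ISR serviced the edge
    };

    // Single shared ring buffer; a power-of-two size keeps the index math cheap.
    static const unsigned RING_SIZE = 256;
    static PulseEvent ring[RING_SIZE];
    static volatile unsigned head = 0;   // written by the ISRs
    static volatile unsigned tail = 0;   // read by the main loop

    // Called from each sensor's ISR.
    inline void pushEvent(uint8_t sensorId, bool rising, uint32_t timestamp) {
        unsigned next = (head + 1) & (RING_SIZE - 1);
        if (next == tail) return;                        // buffer full: drop the event
        ring[head] = PulseEvent{sensorId, rising, timestamp};
        head = next;
    }

    // Called from the main loop; returns false when the buffer is empty.
    inline bool popEvent(PulseEvent& out) {
        if (tail == head) return false;
        out = ring[tail];
        tail = (tail + 1) & (RING_SIZE - 1);
        return true;
    }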

    Since we can now easily tell what order the sensors have been read, I changed how the OOTC pulse is detected. Instead of each sensor separately detecting the OOTC pulse, it is now detected cooperatively across all sensors. This will be especially important in getting high precision angle measurements. While all sensors receive the signal at basically the same time and assert their lines together, the microcontroller has to service the interrupts serially, so they get timestamped separately. Now, when an OOTC pulse is detected, we look at all sensors to find which sensor's interrupt was processed first, as well as which sensor was processed first when the OOTC signal was deasserted.

    I also got rid of a little floating point math, which might speed things up a bit. I've also been able to verify that the ring buffer is being serviced frequently enough that there's no current risk of overflow.

    I'm very happy to say that I'm now comfortable enough with the incoming data that I'm ready to start trying to triangulate position given the angles. But, this is my weakest area in this project, so it may take me a while to figure it out. If anybody has any pointers or references, I'd really like to hear them.

  • Reading 6 sensors, (mostly) reliably

    Mike Turvey 10/02/2016 at 06:08 0 comments

    First off-- The code cleanup has gone well. Aside from being better structured, it now supports multiple sensors. Right now 6, but trivial to add more. Here's a short dump of the angles being captured:

    The first thing to note is that sensors 0-4 appear rock solid. But, if you look closely at sensor 5, the X and Y values sometimes flip. Obviously not good.

    I haven't root caused this yet. My initial thought was that it's due to the interrupts piling up for all six sensors when the initial sync pulse (a.k.a. OOTC) is sent. From the docs, it takes 12 clock cycles from an interrupt being asserted in hardware until the interrupt service routine (ISR) code starts up. No mention of how long it takes to return from the ISR, but certainly at least a few clock cycles. Let's assume that the entire ISR takes ~20 cycles to enter, run and exit-- it's a good ballpark figure. Then, if 6 interrupts are asserted at once (as happens when the OOTC pulse is sent), the 6th interrupt will be serviced ~100 clock cycles later than the first interrupt. At 84MHz, that's ~1.2us. While that could be substantial for causing drift in the measured angles, it probably isn't enough to be causing the error above. (Note: We'll fix the above issue by reading the OOTC pulse with only one sensor-- it's a lower priority for now, but will be necessary before we're done.)

    So, another interesting observation is that when this bug hits, X and Y are swapped. That's particularly odd because X and Y are acquired completely independently. It's hard to imagine an issue that would always affect both of them. This should be an interesting issue to root out.

  • Multiple Sensors

    Mike Turvey 10/01/2016 at 08:51 0 comments

    Since I was able to get one sensor working really well, it was clear that the next step is multiple sensors. I've built a rig with 6 sensors. I'm not sure of all of the math yet, but I'm pretty sure it is important to avoid having more than 2 points on a single line or more than 3 points on a single plane.

    In expanding the code to support multiple sensors, I seem to have introduced a few bugs. Need to fix them before I can reliably get angular positions from all the sensors.