Room Based VR Positioning

My idea for a room based VR positioning system.
The goal is tennis-court-scale coverage, to play "real" virtual tennis!

I love the thought that well-made VR environments are just around the corner, with the endless possibilities that brings to engineering, science, the arts, medicine and, of course, gaming.

As a kid I imagined how cool it would be to be able to go to a Niven style “Dream Park” where people could game. Then, after I became an engineer, I would quite often look at the technologies I came across and they would slot into place in my head as part of a system to make it happen. This is one of those things I’ve had rattling around in my head for about the past 17 or 18 years.

I’m putting it on here because I don’t think I’m ever likely to get round to producing my own VR arcade :-) and because I don’t think this approach is patented. I’m hoping that someone will want to take it forward as an open source VR system to push VR adoption along that little bit faster.

If you do use it please comment so I know!

THE PRINCIPLE

The principle behind it is a pair of laser line fans, separated by a known, precise angle and rotating at a rate which is assumed to be constant only for the short time it takes to sweep the object. These fans then sweep across a pair of sensors with a known separation.

For a laser pair (L1 and L2) sweeping down around the horizontal axis, the time between laser L1 hitting the top sensor and hitting the bottom sensor is a function of how far the sensor pair is from the centre of rotation, because the rotation rate is constant while the relative angle subtended by the pair varies with distance.

The time between laser fans L1 and L2 hitting the same sensor is always the same, regardless of distance, because the fan angle is fixed and the rate of rotation is effectively constant within the time-frame of the sweep. Taking the ratio of these two times removes the rotational speed from the equation and gives you a relative distance between the sensor pair and the lasers. Knowing the exact vertical separation between the sensors, and also knowing the exact angle between the laser fans, fixes the distance to a known scale.

Many of these pairs, along with horizontal and vertical sweeping lasers, will give you the distance and orientation of the object cluster with respect to the base station.
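
As a rough illustration of the ranging step, here is a minimal sketch in Java (the simulator's language). The method names, the example timings and the small-angle assumption (sensor baseline roughly perpendicular to the line of sight) are all mine, not part of any existing code:

```java
/** Minimal sketch: relative distance from the two measured sweep intervals. */
public class SweepRanging {

    /**
     * @param fanAngleRad   known angle between laser fans L1 and L2 (radians)
     * @param tPairSec      time between L1 and L2 hitting the SAME sensor
     * @param tBaselineSec  time between L1 hitting the top and the bottom sensor
     * @param baselineM     known vertical separation of the two sensors (metres)
     * @return estimated distance from the rotation centre to the sensor pair (metres)
     */
    static double estimateRange(double fanAngleRad, double tPairSec,
                                double tBaselineSec, double baselineM) {
        // The unknown rotation rate drops out: omega = fanAngle / tPair
        double omega = fanAngleRad / tPairSec;
        // Angle subtended by the sensor pair as seen from the base
        double subtended = omega * tBaselineSec;
        // Small-angle ranging: r ≈ b / (2·tan(θ/2))
        return baselineM / (2.0 * Math.tan(subtended / 2.0));
    }

    public static void main(String[] args) {
        // e.g. 10° fan separation, 120°-at-25Hz sweep rate, 20mm baseline at ~5m
        System.out.println(estimateRange(Math.toRadians(10.0), 3.33e-3, 7.64e-5, 0.020)); // ≈ 5 m
    }
}
```

The point the code makes explicit is that only the ratio of the two intervals matters, so the absolute rotation speed never needs to be known.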

COMPARING THIS APPROACH TO THE HTC VIVE SYNC PULSE:CONSTANT SPEED APPROACH

PROS:

  • The lack of a "pulse" phase means the distances can be much greater.
  • This approach does not need exact rotational speeds, only a consistent speed whilst sweeping over the object, which makes the hardware easier/cheaper.
  • Different rotational speeds (within bands) can be used to identify different bases and axes. This allows multiple bases to sweep without the need to sync them all together.
  • As the only critical window for interference is while a laser is sweeping over the cluster, using different rotational speeds for all the bases reduces interference between them to very infrequent clashes. Then, on the sweep following a clash, they have moved away from each other.
  • Target movement, so long as it's linear whilst the laser sweeps the target, changes both the apparent speed of rotation and the duration between the hits. This essentially compensates for movement.

CONS:

  • With a single base station you still have degrees of freedom in the position of the tracked object. It would need three base stations, or two base stations and an IMU, to give a full pose. Interestingly, this goes away for tracked robots where the ground plane and the sensors' height and orientation relative to that plane are known (i.e. ideal for factory tracking).
  • Because you only get relative angles between the detector positions, it's impossible to get a position without doing a full pose estimation. By contrast, with the Pulse:Constant sweep approach rough angles can be estimated easily and, with triangulation, a rough position too.
  • Target movement, if fast enough, could make one base look like another if the apparent rotational speed change is great enough.

3dPositioningHVScanning-V2.zip

Java Netbeans project to give the relative angles in Az and El from a base to a cluster of sensors. Plot filters based on a (very rough) estimate of detection.

x-zip-compressed - 210.19 kB - 11/07/2016 at 00:07


  • Another post, another poser attempt

    Lee Cook • 03/29/2018 at 11:20 • 0 comments

    It’s been a long while since I worked on this project; life got in the way and, to be frank, I think I burnt out my enthusiasm trying to get the circle:ellipse approach I outlined in my last post to work. The actual ellipse fitting algorithm worked like a charm; however, the type and amount of distortion that perspective would put into the “ellipse” ended up being way more than I was able to efficiently compensate for. Essentially, the circle is transformed by a parallelogram which, unfortunately, moves the original centre line to above/below the point of maximum width:

    Trying to understand this effect led me to a paper by Randall Bartlett, a professor at Auburn University. In the paper (The Bad Ellipse: Circles in Perspective), he confirmed that the minor axis does align with the tilt direction of the plane the circle sits on relative to the viewing point; however, he explained it in terms of vanishing points (giving rise to the parallelogram). It was at this point I ran out of steam and let home and work life take over for a long while.

    Over the past few months there has been a renewed vigour on the @CNLohr LibSurvive project with the Discord chat becoming very active and, while I’m not contributing to Libsurvive, it did actually give me the impetus to try another time to get a working poser for my system. So, after quite a bit of reading on ellipses, vanishing points and other related topics, it finally dawned on me that I should be able to use the vanishing point principle to make a poser.

    The approach is fairly simple: if the base station is moved from its current position in a direct line toward the centre of the sensor cluster, then the sensors would appear to move in a straight line toward/away from the centre point of the object, i.e. they would move toward/away from the vanishing point (think of the stars in a Star Trek warp journey):

    So, if you take an estimate of the bearing with an arbitrary range (red point), giving theoretical points for the sensors (purple), you can then work out what the estimated angles between the sensors are. These angles would remain constant regardless of the distance the base was from the cluster. There is then an iterative process where you take groups of three sensors and, using actual sensor position data along with the angles from the bearing estimate, you find the point which would give that pair of angles.

    Doing this for all the visible sensors will give a number of positions; you then iterate and refine the estimated bearing until the positions converge and the angles match the estimate at the convergence point. So, once you have the bearing, you then alter the range until the angular distances between the points and the bearing point are the same. You can also look at the rotational aspect by looking at the measured angles vs the cluster-relative axes.

    After all this, you should have a range, bearing and base rotation wrt the cluster on that bearing.
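
    For illustration, here is a minimal sketch (in Java) of the scoring step a poser like this would iterate on: place the known cluster geometry at a trial position along the bearing and compare the predicted inter-sensor angles against the measured ones. The names, and the simplification of assuming the cluster orientation is already known (e.g. from an IMU), are mine, not the project code:

```java
/** Minimal sketch of the scoring step for the vanishing-point poser. */
public class PoserResidual {

    /** Angle between two direction vectors from the base (at the origin), in radians. */
    static double angleBetween(double[] a, double[] b) {
        double dot = a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
        double na = Math.sqrt(a[0]*a[0] + a[1]*a[1] + a[2]*a[2]);
        double nb = Math.sqrt(b[0]*b[0] + b[1]*b[1] + b[2]*b[2]);
        return Math.acos(Math.max(-1.0, Math.min(1.0, dot / (na * nb))));
    }

    /**
     * Sum of squared differences between predicted and measured pairwise sensor angles.
     * sensorsBody  sensor positions in the cluster frame (already rotated to world axes)
     * trialCentre  trial cluster centre = bearing direction * trial range
     * measured     measured angle between sensors i and j (radians), i < j
     */
    static double residual(double[][] sensorsBody, double[] trialCentre, double[][] measured) {
        double err = 0.0;
        for (int i = 0; i < sensorsBody.length; i++) {
            for (int j = i + 1; j < sensorsBody.length; j++) {
                double[] di = new double[3], dj = new double[3];
                for (int k = 0; k < 3; k++) {
                    di[k] = trialCentre[k] + sensorsBody[i][k];
                    dj[k] = trialCentre[k] + sensorsBody[j][k];
                }
                double diff = angleBetween(di, dj) - measured[i][j];
                err += diff * diff;
            }
        }
        return err;   // the poser iterates bearing and range to drive this toward zero
    }
}
```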

  • About turn again - The Mathematics

    Lee Cook • 01/18/2017 at 12:30 • 0 comments

    After taking a detour (and dragging a couple of others with me) through the realms of ePnP, I’ve done an about turn and gone back to my original idea of using circles and their apparent ellipses for tracking position and attitude. I’ve already implemented most of the steps and I’m reasonably confident that this approach will work, though I still need to see how well it will sit on a 300MHz Cortex M7. Anyway, here’s the approach as it stands at the moment:

    Take three sensor points in 3D space which have been lit with laser light (i.e. base angles are known). Construct a circle in 3D space using those three points and determine: the centre of the circle, the radius of the circle, the relative angles of the sensors around the circle and the normal to the plane the circle sits on (use a combination of the three sensor normals to determine the outward facing side).
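
    The construction itself is a standard circumcircle calculation; a minimal sketch in Java (illustrative names, not the project code):

```java
/** Minimal sketch: circle through three 3D points (A, B, C) – centre, radius and plane normal. */
public class CircleFrom3Points {

    static double[] sub(double[] p, double[] q)   { return new double[]{p[0]-q[0], p[1]-q[1], p[2]-q[2]}; }
    static double[] cross(double[] a, double[] b) { return new double[]{a[1]*b[2]-a[2]*b[1],
                                                                        a[2]*b[0]-a[0]*b[2],
                                                                        a[0]*b[1]-a[1]*b[0]}; }
    static double   norm2(double[] v)             { return v[0]*v[0] + v[1]*v[1] + v[2]*v[2]; }

    /** Returns { cx, cy, cz, radius, nx, ny, nz }. */
    static double[] circumcircle(double[] A, double[] B, double[] C) {
        double[] a = sub(B, A), b = sub(C, A);
        double[] axb = cross(a, b);
        double denom = 2.0 * norm2(axb);               // zero if the points are collinear
        // Circumcentre offset from A (classic vector formula)
        double[] t1 = cross(b, axb), t2 = cross(axb, a);
        double[] centre = new double[3];
        for (int k = 0; k < 3; k++)
            centre[k] = A[k] + (norm2(a) * t1[k] + norm2(b) * t2[k]) / denom;
        double radius = Math.sqrt(norm2(sub(centre, A)));
        double nLen = Math.sqrt(norm2(axb));           // plane normal from the two edges
        return new double[]{ centre[0], centre[1], centre[2], radius,
                             axb[0]/nLen, axb[1]/nLen, axb[2]/nLen };
    }
}
```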

    At the end of the process you should have something like this which shows a circle constructed from A, B and C:

    Once the 3D circle information is known then start working on the information from the laser sweeps. Use the relative angle/bearings between the sensors to determine the angular magnitude between the sensors (AB, AC and BC). These distances relate to a series of chord lengths around an ellipse with the defined theta angles between them:

    The following equation can be used to determine the length of a chord on an ellipse:
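
    In parametric form (one common way to write it, with a and b the semi-axes and θ1, θ2 the parametric angles of the two endpoints – my notation):

    $$ \ell(\theta_1,\theta_2) = \sqrt{a^2(\cos\theta_2-\cos\theta_1)^2 + b^2(\sin\theta_2-\sin\theta_1)^2} $$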

    The problem is made easier by the fact that, at this stage, there is only the need to match the ratio of the chord lengths with the ratio of the major/minor axis at those angles. This allows the removal of the minor axis as an unknown, as well as the root. However, the problem is made more difficult in that the sensor group may be at any theta angle on the ellipse depending on the attitude of the HMD with respect to the base. This has the effect of “sliding” the sensor points around the ellipse as shown below:

    This basically boils down to a system of three equations where only R (axis ratio) and S (slide angle) are unknown:

    Unfortunately my mathematical skills failed me at this point and I have had to solve this part through iteration. I’m still hopeful that a colleague will take a look at it and come through with a less computationally intensive solution.
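
    For illustration, a minimal brute-force version of that iteration could look like the following (purely a sketch with made-up names and a coarse grid search, not the project code):

```java
/** Minimal sketch: fit the axis ratio R and slide angle S from measured chord RATIOS. */
public class EllipseChordFit {

    /** Chord length on an ellipse with semi-axes (R, 1) between parametric angles t1 and t2. */
    static double chord(double R, double t1, double t2) {
        double dx = R * (Math.cos(t2) - Math.cos(t1));
        double dy = Math.sin(t2) - Math.sin(t1);
        return Math.sqrt(dx * dx + dy * dy);
    }

    /** sep[] = parametric angles of A, B, C around the circle (radians, relative to A);
     *  measuredRatio[] = { AB/BC, AC/BC } from the laser sweep. Returns { R, S }. */
    static double[] fit(double[] sep, double[] measuredRatio) {
        double bestR = 1, bestS = 0, bestErr = Double.MAX_VALUE;
        for (double R = 1.0; R <= 10.0; R += 0.01) {          // axis ratio (minor axis set to 1)
            for (double S = 0.0; S < Math.PI; S += 0.01) {     // slide angle
                double ab = chord(R, S + sep[0], S + sep[1]);
                double ac = chord(R, S + sep[0], S + sep[2]);
                double bc = chord(R, S + sep[1], S + sep[2]);
                double e1 = ab / bc - measuredRatio[0];
                double e2 = ac / bc - measuredRatio[1];
                double err = e1 * e1 + e2 * e2;
                if (err < bestErr) { bestErr = err; bestR = R; bestS = S; }
            }
        }
        return new double[]{ bestR, bestS };
    }
}
```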

    Perspective errors in the ellipse can be reduced by taking advantage of the fact that there are actually two “slide” angles in the ellipse which will fit the chords - exactly PI radians apart. Solve for the slide angle in 0<S<PI and also in the PI<S<2PI ranges then average the two sets of results (still to do in the code).

    Once solved, this gives a major/minor axis ratio, chord lengths (with the minor axis set to one) and the “slide” angle at which the chain of chords starts. We can then start relating the two sets of figures, circle and ellipse, together to figure out the pose:

    • The ratio of the major/minor axes gives the angle the circle has been tilted from perpendicular to the base.
    • The “slide” angle determines how much the HMD has been rotated around the centre of the circle (in the plane of the circle); use an accelerometer to determine up and the correct angle.
    • The ratio of the original chord length vs the calculated one gives the actual length of the minor axis.
    • Scale the minor axis up by the major/minor axis ratio to get the major axis length.
    • Use the major/minor axes to determine the positions of the points wrt a zero and then work back to determine the centre of the measured ellipse from the scan.
    • The centre of the ellipse is a bearing to the centre of the circle.
    • The major axis is the angle subtended by the circle radius length – use both to determine the distance between the base origin and the centre point of the circle (see the sketch after this list).
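
    A minimal sketch of those last two steps (tilt from the axis ratio, and range from the angular size of the major axis against the known circle radius). The method names and the small-angle, no-perspective treatment are mine, not the project code:

```java
/** Minimal sketch of recovering tilt and range from the fitted ellipse. */
public class PoseFromEllipse {

    /** Tilt of the circle's plane away from perpendicular to the base (radians).
     *  A circle viewed at tilt t appears (ignoring perspective) as an ellipse with minor/major = cos(t). */
    static double tiltFromAxisRatio(double majorOverMinor) {
        return Math.acos(1.0 / majorOverMinor);
    }

    /** Distance to the circle centre given the known radius (metres) and the
     *  measured angular length of the semi-major axis (radians). */
    static double rangeFromMajorAxis(double circleRadiusM, double semiMajorAngleRad) {
        return circleRadiusM / Math.tan(semiMajorAngleRad);
    }
}
```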

    This is about as far as I have come so far. Through the use of external sensors and the sensor normals it should be possible to determine the position and orientation of a base wrt the HMD. Depending on...


  • First iteration of a simulator

    Lee Cook • 10/30/2016 at 23:15 • 1 comment

    If there's one thing I've learned over the years it's that you need a decent simulator of the real-world characteristics of your system to develop & debug the control aspects. I've attached a zipped Java Netbeans IDE (8.2) project as the start of mine, which will hopefully help with the development of the maths algorithms...

    It's the first iteration which at the moment defines a spherical sensor cluster:

    Imagine a six sided die blown up to a sphere. At the centre of what was each face, the middle of each edge and the 3-point corners is a sensor.

    The cluster can be placed in a XYZ space, be given an attitude vs down (the output from an accelerometer) and then rotated on the horizontal plane for a heading.

    The program then prints the Az and El angles which would be read from a system similar to Lighthouse; taking the differences between them gives what would be the output from my system:

    The next iteration, hopefully done by the end of next weekend, will fix the inevitable bugs. I'll also expand the functionality to look at the normals of the sensors vs the angle to the base (potentially taking into account distance too) to try to estimate whether the sensor would detect the laser and, if it does, the duration of the pulse. I'll refine the estimates with real-world data as the hardware comes along.
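
    For reference, a minimal sketch of the kind of Az/El calculation described above. The frame conventions (base at the origin, looking along +X, Z up) are my assumption, not necessarily the project's:

```java
/** Minimal sketch: azimuth/elevation of a sensor as seen from a base at the origin. */
public class AzElFromBase {

    /** Returns { azimuthRad, elevationRad } of a point expressed in the base frame. */
    static double[] azEl(double x, double y, double z) {
        double az = Math.atan2(y, x);                  // left/right of boresight
        double el = Math.atan2(z, Math.hypot(x, y));   // up/down from horizontal
        return new double[]{ az, el };
    }

    public static void main(String[] args) {
        double[] ae = azEl(5.0, 0.5, 0.25);            // sensor ~5 m away, slightly right and up
        System.out.printf("Az %.2f deg, El %.2f deg%n",
                Math.toDegrees(ae[0]), Math.toDegrees(ae[1]));
    }
}
```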

  • The maths?

    Lee Cook • 10/19/2016 at 23:50 • 6 comments

    I’ve just put my thoughts down elsewhere while trying to explain what I see as the maths problem behind this project, and I thought I’d share them here too. It’s quite late at the end of my third 15hr work day, so a little slack is appreciated. That said, if what I’m proposing is a complete load of bo**cks feel free to point out the errors – with corrections please! ;-)

    My system can't produce exact bearings/'pixels' like the lighthouse system does, but it can measure the relative angles between the various points and, in theory, relate those angles to an absolute distance.

    It's similar to, or perhaps even the same as(?), the n-point problem, I think.

    There is only one orientation (and, from a perspective point of view, distance) of the target with respect to the laser emitter which, when scanned with the laser line, will give that particular ratio of angles. The problem arises because that orientation could be anywhere on the sweep of the laser, like a toroid around the axis of rotation. However, if you know the attitude of the target with respect to the attitude of the laser sweep axis, it becomes fixed against that axis – there is only one orientation of the target wrt the laser axis which has that particular target pitch and roll.

    The "orientation" reduces down to a relative translation of the target (x,y,z) with a rotation in target yaw/heading - and the whole thing potentially rotated around the vertical axis.

    Having a second set of results from another characterised laser scanner (characterisation done during the room calibration routine), it should be possible to calculate the yaw/heading component of the target and fix the target on the XY plane.

    The whole reduction thing relies on having known attitudes for the emitters and the targets so they can all relate in the same axis.

    Anyway, that's my theory. I've talked to one of the mathematicians at work and he seems interested enough in the system to want to give me a hand getting it working. Perhaps when I give him this talk he'll laugh in my face at my naivety and point out I need 30 scanners to make up for the lack of real bearings.

    I hope not.

  • Shelved no longer!

    Lee Cook • 10/07/2016 at 19:08 • 0 comments

    Ok, I have come to the conclusion that in order for this to become a non-virtual reality (sorry, couldn't resist) I would need to progress it to the point where it generates interest. So... I’ve shelved other projects and this will now take all of whatever time is left after work, a horrendous commute and a family of four kids (i.e. don't expect rapid results!).

    I’ve ordered a load of line lasers, MEMS units and other bits which will be coming to me on a slow boat from China.

    While I’m waiting for those I've been looking at photodiode detector circuits. The parts for this circuit:

    (originally from <here> and copied later <here>) are on order, as it looks like the most promising starting point for the initial input-stage design. Hopefully it will bias out a lot of the ambient light current but let the modulated laser current through.

    It has to be said, I’m nervous about getting back in to proper analogue electronics.



Discussions

alickabrook1 wrote 05/17/2023 at 21:53 point

Good Design


Mike Turvey wrote 05/05/2017 at 14:50 point

Hey Lee, not sure how much you're still interested in this, but there are some folks associated with the libsurvive project building lighthouses, and they're discussing your stuff on the libsurvive Discord channel.  Not sure if you're still subscribed or watching over there, but wanted to let you know.


Lee Cook wrote 05/06/2017 at 10:57 point

Hi Mike, thanks for the heads up.  I'm not following on Discord as much (though I do dip in every now and then) but I get email notifications about all the updates going through Github.

Work has been fairly full on and I've been trying to finish my Mantis CNC machine to help build the parts and route some PCBs (I've designed the M7 processing and I've 3 versions of a transimpedance amplifier to try).

I got completely bogged down trying to find a really simple solution to the maths which would fit on the atmel M7 chip. The ellipse matching didn't really work when I was throwing real-world values at it because the perspective would throw off the results.

I'm thinking some of the stuff you guys are doing with Charles should fit right in.


Mike Turvey wrote 12/16/2016 at 16:24 point

Hey Lee-- glad to hear you're still looking at PnP.  Are you specifically looking at ePnP or a different algorithm?  Looking to write the code from scratch, or adapt one already out there?


Lee Cook wrote 12/16/2016 at 20:22 point

Thanks.

I'm still open to any approach, though I'm going to see if I can do my own initial estimator direct from the sensor data and try different schemes for cherry picking sub-sets of lit sensors for speed/performance.

Initially though I just expect to implement the off-the-shelf EPnP example code to see how the M7 performs and how it could be optimised.


Jonathan Kelly wrote 10/01/2016 at 00:16 point

Wondering, have you looked at https://hackaday.io/project/15496-precision-indoor-positioning ?

Curious to see how that goes.  

What sort of precision and error are you wanting?

If there was something low cost that could reliably fix a position in 3 dimensions +/- 5cm over around 5m I would be pretty interested.


Lee Cook wrote 10/01/2016 at 13:21 point

Yeh, I've been trying to chase him down to see if he is willing to either sell, or specify the BoM and circuit for, the modulated laser that goes with the TS3633 modules he's using.

I've re-done some of the calcs I did a while ago (at that time I was thinking 8MHz clocks!) and added a link to the google sheet, as well as an image of the chart on the sheet, to the project images. At 5m you're looking at a single distance measurement LSB accuracy of 2mm (20mm sensor separation, 32MHz counter, 25Hz/120deg laser sweep).
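
One way to reproduce that 2mm figure (assuming the 120° is swept in each 1/25 s, i.e. 3000°/s, and small angles):

$$ \omega = 120^\circ \times 25\,\mathrm{Hz} = 3000^\circ/\mathrm{s} \approx 52.4\,\mathrm{rad/s}, \qquad \theta \approx \frac{20\,\mathrm{mm}}{5\,\mathrm{m}} = 4\times10^{-3}\,\mathrm{rad} $$

$$ N = \frac{\theta}{\omega}\times 32\,\mathrm{MHz} \approx 2445\ \mathrm{counts}, \qquad \Delta r \approx \frac{r}{N} = \frac{5\,\mathrm{m}}{2445} \approx 2\,\mathrm{mm\ per\ count} $$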

This is a *really* simplistic view; in reality you'd be able to average over multiple sensors and you'd be fusing it with the 6DoF data - think of how GPS gets more accurate over a period of time.

Doing the logic and counters within a CPLD or FPGA would be more development (for me at least) but, on these, you could push to x4 -> x10 times the counter frequency - with proportional effect on accuracy.


Mike Turvey wrote 10/01/2016 at 17:20 point

Hey Lee,

The TS3633 are designed to go with the Vive lighthouses (A.K.A. Base Stations).  While the Vive controllers use dedicated FPGAs for some of the critical timing work and a full computer for translating those timings into position, the Vive is capable of sub millimeter precision in 2 axes, and about 2cm precision in the depth axis.  If you use two lighthouses, I believe it becomes submillimeter in all axes.  All that my project is doing is trying to replace the dedicated Vive VR controllers and full computer with a microcontroller and a custom tracked object (in the form of a constellation of sensors).


polyhistor10 wrote 09/26/2016 at 13:28 point

My idea for localizing is as follows:

- have two laser beacons of known coordinates sweeping the area. Line lasers are very cheap now and they can be modulated to 10kHz.

- each beacon is fitted with a compass module. The sweeping beam is modulated with the bearing value from the compass.

- the target has a light sensor with demodulating software. As the beam hits the target it is able to read the bearing from that beacon.

- the target can be continuously triangulated with the bearings from the two beacons.

I have built this scheme with inconclusive results, mainly because I am struggling with a laser receiver and demodulating software.


Lee Cook wrote 09/26/2016 at 20:38 point

How are you modulating the signal?  You would also need to add at least one (two if the item can go above and below the plane the beacons are on) more transmitter in order to fix the item in space.


polyhistor10 wrote 10/20/2016 at 01:09 point

Lee,

I am only working in 2D. I agree you would need vertical scanning for 3D.

Have you had a look at this system?;

http://www.hizook.com/blog/2015/05/17/valves-lighthouse-tracking-system-may-be-big-news-robotics


Lee Cook wrote 10/20/2016 at 09:04 point

Yeh, I've been following VR fairly closely for a while. It was Lighthouse that tipped me over the edge to go public with my idea, as I figured it was only a matter of time before someone else patented the technique.

Lighthouse is very cool, and they're making the tech free for hackers. Have a look at Mike's project (above); he's using relatively cheap COTS items to track with the lighthouse base station and sensors (TS3633). The Lighthouse method allows him to look at the system as a high resolution camera problem so, in theory, there is a huge amount of examples to draw from.

The downsides to Vive, however, are the need for very accurately matched clocks with minimal drift, or the ability to drift-sync them against a master station (similar to the 1PPS embedded in GPS). It also requires those clocks to be precisely matched with the rotational speed of the lasers and the timing of the sync pulses. Finally, from what I've read, because of those sync pulses they're pretty much at the limit of what they can achieve. In order to scale up the working volume they believe they would need to add additional sync repeaters around the room to match the 20m working distance of the lasers.

My end goal is to have two or three of my base stations around half an *open* tennis court to play full size virtual tennis!


polyhistor10 wrote 10/20/2016 at 14:38 point

My end goal is to make a robotic lawn mower for my rectangular lawns!


Jonathan Kelly wrote 09/24/2016 at 03:28 point

I have also been thinking about ways of localising things in space as part of another project I am working on.

A thought I had is to perhaps borrow ideas from how aircraft VOR navigation beacons work.

This is a bit simplified, but basically aeronautical VORs have a ground station that sends out 2 radio signals: one is a highly directional signal that rotates, like your idea, and sweeps 360° 30 times a second.

The other is a reference signal. The sweeping signal changes its phase from the reference depending on where it is in the sweep (at 0° there is no phase change, when aimed at 90° it has a 90° phase shift, etc.).

The aircraft VOR receiver picks up the beacon and compares the phase of the 2 signals and calculates what angle it is from the beacon.  Aircraft use that to determine what bearing they are from a known point on the ground.

Now... back to your idea. Suppose you had a beacon that sent quickly pulsed light signals - a reference pulse of one colour that pulsed at a constant frequency (say 1kHz) and a rotating pulse (that rotated say 10 times a second) of a different colour, but that adjusted its pulse frequency between say 1kHz and 1.359kHz depending on the angle it was currently aimed at.

The receiver would detect the 2 light signals and compare the pulse frequency between, say, the red constant frequency pulse and the green variable pulse, and thus know the angle it was from the beacon when it detected the green signal. E.g. if it was 200Hz different then it would be 200° from the beacon.

Now suppose you had 3 of these beacons distributed to known points in the room.  

Assuming the receiver could differentiate between the light signals of each beacon, it could work out what angle it was from each beacon and from that could determine by triangulation exactly where it was in the room.

Potentially you could do away with the reference pulse from the beacons if the 0° frequency was able to be accurately measured by the receiver.

My idea is not very thought through and may have big flaws or not be practical but hey... thought it worth saying.


Lee Cook wrote 09/24/2016 at 14:53 point

I think any idea is worth saying.  Thanks for the comment.

The main problem with angular triangulation is that the angles need to be very accurate, around 0.5deg for 1cm accuracy at 1m. Take that to 5m for a reasonable sized room and you're looking at 0.1deg of accuracy. I don't think you'd be able to encode that amount of accuracy at 10Hz with a digital signal. That'd be 18000 different encodes per second (if you doubled the emitters, 36000 if you had a single one with half facing the wall), with 1800 unique over the half sweep - perhaps double if you wanted to guarantee full reception.
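
As a quick check of those numbers (assuming a 180° usable sweep at 10Hz):

$$ 1\,\mathrm{m}\times\tan 0.5^\circ \approx 8.7\,\mathrm{mm}, \qquad 5\,\mathrm{m}\times\tan 0.1^\circ \approx 8.7\,\mathrm{mm}, \qquad \frac{180^\circ}{0.1^\circ}\times 10\,\mathrm{Hz} = 18{,}000\ \mathrm{codes/s} $$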

Really, I think you'd have to provide an analogue signal, perhaps multiple lasers with the power modulated for each over the sweep - say RGB, with each transmitter sending a combination of RG, GB or RB, ramping down one while ramping up the other.

The reception hardware would be quite expensive and would need very well calibrated ADCs with 4096 levels.

Thanks for looking!!!


Lee Cook wrote 09/30/2016 at 11:35 point

I've been looking at the Triad Semi TS3633 since you last posted. Lighthouse modulates its lasers at around 2.5MHz according to the spec - I'm assuming for noise rejection.

Say a 2mm beam width at a 10Hz sweep, 5m distant, would light the photo detector for (I think?) around 6us. In the 2.5MHz range, would that give enough of a signal to accurately detect the delta changes in the frequency?
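
Back-of-envelope for that dwell time (assuming a full 360° rotation at 10Hz):

$$ t_{\mathrm{dwell}} \approx \frac{w}{2\pi f r} = \frac{0.002\,\mathrm{m}}{2\pi \times 10\,\mathrm{Hz} \times 5\,\mathrm{m}} \approx 6.4\,\mu\mathrm{s} $$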


Mike Turvey wrote 10/20/2016 at 21:25 point

Interestingly, 6us is almost exactly how long I'm seeing the pulse as the lighthouse laser sweeps across my sensor.  The TS3633 are sensitive to a 1-5 MHz carrier frequency.  Certainly this helps with noise rejection, but somewhere I saw that the long term plan for supporting many lighthouses in a single space was to not just do TDMA (as is done now, but doesn't scale well beyond about 2 lighthouses), but to add in FDMA.  So, you could have multiple lighthouses operating in the same space, but using different carrier frequencies.  With the sweep pulse lasting 6us, that should give plenty of time for a dedicated ASIC to determine the carrier frequency "channel."


Lee Cook wrote 09/30/2016 at 11:42 point

Sorry, just thinking some more... How about something even closer to the VOR you originally looked at: two lasers, different wavelengths, modulated at the same frequency but with a changing phase. Phase differences may be easier to detect, though at those frequencies the board layouts (or their calibration) become more problematic.

