
First Prototype

A project log for OpenEyes

Open source visual aid platform

cristidragomir97 08/21/2019 at 17:37

Originally, my idea for this system was to take the frames from a depth camera, split them into a 2D grid that corresponds to the arrangement of the vibration motors, and then compute a weighted average of the distances in each cell.

The resulting output would be translated into proportional vibration rates for each motor in the grid.
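Roughly, the per-frame processing looks something like the sketch below. This is just a minimal Python version, assuming a 5x6 grid of 30 motors and a depth frame in millimetres; the plain mean over valid pixels stands in for the weighted average, and the grid size and range are placeholders. The resulting 0..1 proximity value per cell is what gets rendered on the corresponding motor.

```python
import numpy as np

def depth_to_grid(depth_frame, rows=5, cols=6, max_range_mm=4000):
    """Collapse a depth frame into a rows x cols grid and return, per cell,
    a 0..1 proximity value (1 = right in front of the camera) that later
    drives the corresponding vibration motor."""
    h, w = depth_frame.shape
    cell_h, cell_w = h // rows, w // cols
    proximity = np.zeros((rows, cols))

    for r in range(rows):
        for c in range(cols):
            cell = depth_frame[r * cell_h:(r + 1) * cell_h,
                               c * cell_w:(c + 1) * cell_w]
            valid = cell[cell > 0]          # drop pixels with no depth reading
            if valid.size == 0:
                continue
            mean_mm = valid.mean()          # stand-in for the weighted average
            proximity[r, c] = np.clip(1.0 - mean_mm / max_range_mm, 0.0, 1.0)
    return proximity
```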

The software took a couple of hours to develop; building the vibration matrix, however, turned out to be more complex than I expected.

My first idea was to put all the motors on a piece of cardboard (a pizza box, to be completely honest).

After painstakingly soldering all the leads, I proceeded to connect the wires to a couple of PCA9685-based PWM controllers and ran the default test that sweeps all the channels in a wave.
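For reference, that wave test boils down to something like this per controller. A rough sketch only, assuming the Adafruit CircuitPython PCA9685 driver running through Blinka on the board; pin names, the number of sweeps, and the timings are illustrative.

```python
import time
import board
import busio
from adafruit_pca9685 import PCA9685

i2c = busio.I2C(board.SCL, board.SDA)
pca = PCA9685(i2c)
pca.frequency = 200                         # PWM carrier frequency for the motor drivers

for _ in range(3):                          # a few sweeps of the wave
    for ch in range(16):                    # 16 channels per PCA9685
        pca.channels[ch].duty_cycle = 0xFFFF   # motor fully on
        time.sleep(0.1)
        pca.channels[ch].duty_cycle = 0        # and back off
```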

It felt like the whole thing was vibrating; distinguishing individual motors was impossible. Maybe if I tested with one motor at a time? Nope. I could tell it was somewhere in the first two columns, toward the upper side, and that was about it. I also found that between every couple of tests, leads from the motors would break, leaving me with random "dead pixels".


I figured putting all the motors on the same surface was a really bad idea, so I carefully removed them all and started again. 


Fast forward two weeks and a couple of component orders later, and I had a working prototype. The new construction of the vibration matrix proved less prone to vibration leaks and "passed" all the wave tests. The RK3399-based NanoPi NEO4 turned out to be powerful enough to process the feed from the depth camera without dropping frames. Time to integrate the haptic display with the image processing code.

Here's where it became clear that the matrix of motors wasn't a solution.

The origin of a vibration was identifiable when testing with pre-defined patterns, but the real-life feed was a whole different story.

First off, it became clear early on that modulating the intensity of each motor wasn't going to work: a motor running at low intensity simply can't be felt when it's surrounded by one or more running at high intensity.

Instead, I opted for modulating the pulse rate, similar to the way parking sensors in cars work.
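In code, that amounts to mapping each cell's proximity to a pulse period instead of a steady duty cycle. A rough sketch, with placeholder min/max rates and a 50% on-time; a real driver would of course run all 30 channels concurrently rather than blocking on one.

```python
import time

def pulse_period_s(proximity, min_rate_hz=1.0, max_rate_hz=10.0):
    """Map a 0..1 proximity value to a pulse period in seconds:
    far objects pulse slowly, near objects pulse quickly."""
    rate = min_rate_hz + proximity * (max_rate_hz - min_rate_hz)
    return 1.0 / rate

def pulse_motor(channel, proximity, duration_s=1.0):
    """Pulse a single PWM channel at the rate matching its cell's proximity."""
    period = pulse_period_s(proximity)
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        channel.duty_cycle = 0xFFFF      # motor on for half the period
        time.sleep(period / 2)
        channel.duty_cycle = 0           # and off for the other half
        time.sleep(period / 2)
```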

However, tracking the rates of 30 different sources in real time is quite a cognitive load, and it gets tiring after a short while (in other words, wear it long enough and you'll get a headache).

Another issue was that while walking, or simply turning the head, the frames would change faster than the lowest pulse rates of some motors, so those motors never got to vibrate enough for the wearer to get a sense of the distance before that distance changed again.

Tweaking the min/max rates turned out to be a fool's errand, because what seemed ideal in some situations, for example a room full of objects, was a disaster in others, such as hallways and stairs. Adjusting the rates in real time could have been an option, but I assumed it would make for an even more dizzying experience.

On top of everything, depending on the distance and the number of objects in the frame, vibration leak was still an issue. 

I was quite disappointed, to be honest, but I still pitched the concept to an incubator inside my university and I even got accepted. 

Armed with new resources, hope, and a lesson learned, I went back to the whiteboard.
