
Low Cost Tongue Vision

A sensing device for blind people that builds a visual image by stimulating the surface of the tongue

Blind or partially sighted people can face challenges in performing standard daily activities that sighted people take for granted. One fundamental challenge is to build up an awareness of the surrounding environment, in terms of a mental image.

In the absence of normal eyesight, the human brain can learn to use the visual cortex to interpret unusual sensory stimulation as "sight". I intend to address the challenge by creating a low cost device that provides visual information by stimulating the tongue, which is particularly sensitive, using an array of electrodes to create a low-resolution image.

This "vision" device will help the user build up a mental picture of the surroundings, as well as provide cues to aid orientation and spacial awareness.

Background

This is a concept that I first read about a few years ago. At the time I made some plans to build one using a couple of AVR microcontrollers, a VGA CMOS camera and a dual-port RAM, but I never followed it up. It's easier now to use a Raspberry Pi to capture and convert video data, simplifying the custom circuitry needed.

There is a commercial "tongue vision" device available, but it costs $10,000. I aim to create something similar using low-cost hardware, with all design information freely available, so that anybody can make one. The total hardware cost will be less than $100.

The following paper describes how the brain can adapt to use the tongue for vision in this way:

Brain plasticity: ‘visual’ acuity of blind persons via the tongue

This is the commercial device I am aware of (though I haven't actually seen one):

BrainPort V100

I don't know any details about how the commercial unit works, as the only information I have is from the website above, so my design could end up being quite different in terms of the actual signals that are generated.

Overall concept

The project will consist of the following elements:

  1. Interface lollipop - the user will place this inside their mouth, against the tongue. The lollipop will have a rectangular array of contacts on one side that provide electrical stimulation to the tongue.
  2. Wearable camera/control unit - this will connect to the lollipop via a thin wire. The unit will send a stream of live pre-processed image data to the interface lollipop. It will also provide power to the lollipop and include controls to adjust the image intensity. As a wearable device it could be incorporated into an item of clothing, such as a hat.

I intend to base the interface lollipop around the Atmel ATMega328P microcontroller, and the camera/control unit around the Raspberry Pi Zero.

Block diagram

The first block diagram shows the concept as described:

Inside the camera/control unit, the Raspberry Pi will run software to process video data from the camera, convert it into a suitable low resolution format and send it to the interface lollipop. A power supply circuit will be needed to power the Pi from a rechargeable battery and also power the lollipop. The lollipop itself will contain the microcontroller-based interface circuit to drive the array of contact points.
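For illustration, the "convert to low resolution" step could be a simple box average. Here is a minimal sketch in C (the function name, the 128x128 source size and the 14x14 output grid are assumptions for this example, taken from figures that appear later in the logs):

```c
#include <stdint.h>

#define SRC 128   /* captured greyscale frame, SRC x SRC pixels */
#define DST 14    /* low-resolution output grid, DST x DST pixels */

/* Average each block of source pixels into one output pixel. */
void downsample(const uint8_t src[SRC][SRC], uint8_t dst[DST][DST])
{
    for (int oy = 0; oy < DST; oy++)
        for (int ox = 0; ox < DST; ox++) {
            int x0 = ox * SRC / DST, x1 = (ox + 1) * SRC / DST;
            int y0 = oy * SRC / DST, y1 = (oy + 1) * SRC / DST;
            long sum = 0;
            for (int y = y0; y < y1; y++)
                for (int x = x0; x < x1; x++)
                    sum += src[y][x];
            dst[oy][ox] = (uint8_t)(sum / ((x1 - x0) * (y1 - y0)));
        }
}
```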

The second block diagram shows a possible alternative design where the image data is captured via a smartphone and sent to the interface lollipop via bluetooth. This would allow the cost of the control unit to be minimised, as it would only need to provide power and adjustment controls for the interface lollipop.

For this project I will focus on the standalone concept, but will include the option for bluetooth in the interface lollipop in order to add compatibility with the smartphone concept. Eventually, it should be possible for the interface lollipop to be completely wireless, but this will involve putting a battery inside it and I don't intend to do this yet.

Project goals

  1. Demonstrate a working prototype with low cost, easily manufactured design
  2. Develop software to provide the basic functionality
  3. Experiment to find the most effective stimulation methods
  4. Make all design data and software freely available

  • 1 × Atmel ATMega328P-AU AVR microcontroller
  • 1 × SN74LS07DR hex open-collector buffer
  • 2 × SN74LS156D dual 1-of-4 demultiplexer, open-collector
  • 2 × MC14555BDG dual 1-of-4 demultiplexer, CMOS
  • 1 × 12MHz SMD crystal 3.2x5.0mm

View all 11 components

  • Population of the board

    Ray Lynch 10/06/2016 at 22:02

    I've got all the necessary parts for the lollipop interface now, so I've gone from this...

    to this, with 9V battery for scale:

    The quality of the boards (made in China) is really nice. I used a conventional soldering iron, as there aren't too many parts, and the sizes are manageable. I used a file to round off the corners of the PCB and get rid of the sharp edges, and cleaned off the solder flux with brake cleaner (not sure what's in this, but it smells like it should be banned).

    On the BLE interface connector, the microcontroller MOSI, MISO, SCK and /RESET pins are accessible, and can be used to program it. I programmed a quick bit of code to flash one LED, and it lives! I noticed a mistake I'd made in the circuit - I had thought that port C was a full 8-bit I/O port, but the top two bits can only be used as ADC inputs. This meant that two mux control lines weren't accessible.

    Luckily I could connect them to the SDA and SCL lines, which I had reserved for communication with the Raspberry Pi. I hadn't decided whether to use I2C or the UART, so made both SDA/SCL and TX/RX available on the interface pins. The required data transfer rate is rather low, and the UART will suffice, so the SDA/SCL lines can be repurposed as general purpose I/O to control the two missing mux control lines. I added a couple of wires to the top of the board to make the connections and no track cutting was necessary. I've updated the schematic appropriately.

    I've also made a connection cable for the Raspberry Pi, that plugs into the GPIO pins 1-10 with a short bit of IDC connector. The cable connects the 5V, 3.3V, ground, TX and RX lines from the Pi to the lollipop interface board. A 9V battery clip provides the power connector for the lollipop 3-18V CMOS supply.

    A momentary pushbutton switch provides a way to shut down the Pi when running headless. An LED hanging from the 3.3V line is connected to the Pi GPIO4 pin via a resistor, pulling the pin up to 3.3V. The push switch is connected between this pin and ground. Pressing the switch lights the LED, and by pulling the GPIO pin down to ground it can also be detected by the software. If necessary, the software could also turn on the LED when the switch is not pressed, by driving the pin low, though I must avoid driving the pin high to prevent a possible short to ground through the button.
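    As a sketch, the headless shutdown logic on the Pi could be as simple as the following, polling via the sysfs GPIO interface (the paths and polling rate are illustrative, not necessarily what I'll use):

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        /* Export GPIO4 and set it as an input. The pin idles high,
           pulled up to 3.3V through the LED and resistor. */
        system("echo 4 > /sys/class/gpio/export");
        system("echo in > /sys/class/gpio/gpio4/direction");

        for (;;) {
            char buf[4] = {0};
            FILE *f = fopen("/sys/class/gpio/gpio4/value", "r");
            if (!f)
                return 1;
            fgets(buf, sizeof buf, f);
            fclose(f);
            if (buf[0] == '0') {                /* button pressed: pin at ground */
                system("shutdown -h now");      /* run as root, or via sudo */
                break;
            }
            usleep(100000);                     /* poll at 10 Hz */
        }
        return 0;
    }
    ```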

    Here's a mockup image of how the lollipop board might look with the BLE interface fitted (not intended for the current prototype, but to be supported in the future):

    The next steps are to write the microcontroller software, and then get the Raspberry Pi talking to the lollipop interface via the UART connection.

  • Software on the Pi

    Ray Lynch 10/02/2016 at 20:55

    Whilst waiting for the PCBs to arrive, I started on the software that will run on the Pi. This needs to read a sequence of still frames from the camera, convert them to a low resolution and then send them to the interface lollipop. For now I'll concentrate on the reading from the camera, and leave the interfacing part until later.

    The final prototype will run headless, but during development I wanted a basic on-screen user interface that will allow the selection of some test patterns, and the ability to view the processed camera image in real time. As I'll be writing the software in C, I decided to use the very nice OpenVG library from Anthony Starks, which offers a quick way to draw text and graphics on the Pi independent of X:

    A.J. Starks OpenVG library for Raspberry Pi

    The images will be captured using raspistill in signal mode. Every time we send SIGUSR1 to the raspistill process, it will save an image from the camera. The following techniques are used to improve the speed:

    • The -bm flag selects burst mode, where exposure parameters are only set once, at the beginning of capture
    • The -e flag is used to save the image in .bmp format, for quick reading without any CPU-based decompression
    • To save in monochrome, we use "-cfx 128:128"
    • The preview image is sized and positioned to fit in the allocated space on screen
    • The saved image dimensions are set to 128x128 pixels
    • The image is saved into a RAM disk and not on to the SD card (each saved frame is deleted as it's processed)

    Using these techniques, the software can easily process 20 frames per second on a Raspberry Pi 2B, including the simple conversion down to 14x14 pixels. The limiting factor becomes the screen redrawing time. To improve the speed over the ajstarks library functions, the software implements a faster rectangle plotting function that reuses an existing path object, rather than creating and destroying a path for each plot. Similarly, the greyscale colours are cached and used repeatedly.
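    The reused-path rectangle plot looks roughly like this (a sketch against the standard OpenVG/VGU API; the function name is mine and the actual repo code may differ):

    ```c
    #include <VG/openvg.h>
    #include <VG/vgu.h>

    static VGPath rect_path = VG_INVALID_HANDLE;

    /* Plot a filled rectangle, reusing one path object instead of
       creating and destroying a path for every plot. Assumes an OpenVG
       context is already initialised (e.g. by the ajstarks library)
       and the fill paint has been set. */
    void fast_rect(VGfloat x, VGfloat y, VGfloat w, VGfloat h)
    {
        if (rect_path == VG_INVALID_HANDLE)
            rect_path = vgCreatePath(VG_PATH_FORMAT_STANDARD,
                                     VG_PATH_DATATYPE_F, 1.0f, 0.0f,
                                     5, 5, VG_PATH_CAPABILITY_ALL);
        vgClearPath(rect_path, VG_PATH_CAPABILITY_ALL);
        vguRect(rect_path, x, y, w, h);
        vgDrawPath(rect_path, VG_FILL_PATH);
    }
    ```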

    Rather than using system calls to identify the raspistill process, the software forks at startup and the child process execs raspistill, so the parent already holds the child's PID.
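    A minimal sketch of that arrangement, using the flags listed above (the output path and frame pacing are illustrative):

    ```c
    #include <signal.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t cam = fork();
        if (cam == 0) {
            /* Child becomes raspistill: signal mode, burst mode, monochrome
               128x128 BMP frames saved to a RAM disk, running indefinitely. */
            execlp("raspistill", "raspistill",
                   "-s", "-bm", "-t", "0",
                   "-e", "bmp", "-cfx", "128:128",
                   "-w", "128", "-h", "128",
                   "-o", "/run/shm/frame.bmp",
                   (char *)NULL);
            _exit(1);                   /* only reached if exec fails */
        }

        for (;;) {
            kill(cam, SIGUSR1);         /* ask raspistill for one frame */
            usleep(50000);              /* ~20 fps; real code waits for the file */
            /* ...read /run/shm/frame.bmp, downsample, display, delete... */
        }
    }
    ```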

    I've defined several test patterns in the software, which can be individually selected as an alternative to the camera image. There is also a basic function to stretch the contrast of the image, or to convert it to black/white only. The output window shows in real time the data that will be sent to the lollipop interface.
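    For illustration, the contrast stretch and black/white conversion could be as simple as the following sketch (the function names are mine, not necessarily those in the repo):

    ```c
    #include <stdint.h>

    /* Remap pixel values so the darkest becomes 0 and the brightest 255. */
    void stretch_contrast(uint8_t *px, int n)
    {
        uint8_t lo = 255, hi = 0;
        for (int i = 0; i < n; i++) {
            if (px[i] < lo) lo = px[i];
            if (px[i] > hi) hi = px[i];
        }
        if (hi == lo)
            return;                     /* flat image: nothing to stretch */
        for (int i = 0; i < n; i++)
            px[i] = (uint8_t)((px[i] - lo) * 255 / (hi - lo));
    }

    /* Convert to black/white only, around a threshold t. */
    void to_black_white(uint8_t *px, int n, uint8_t t)
    {
        for (int i = 0; i < n; i++)
            px[i] = (px[i] >= t) ? 255 : 0;
    }
    ```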

    With the software running on the Pi, the screen output looks like the screenshot below, with some pretty pictures at the top, then the various test patterns that can be selected, and on the bottom the live preview from the camera and the actual output that will be sent to the interface lollipop:

    Test patterns 6 and 7 are animated, with 6 ramping up in brightness and 7 rotating around the centre point.

    The C source and Makefile are available on github (MIT license):

    Link to github repository for Raspberry Pi software

    ...and now the PCBs have arrived!


  • PCB design, part 2

    Ray Lynch 09/28/2016 at 21:20

    So far the biggest part of the project has been the layout of the interface PCB, and I began working on this a couple of weeks before starting to enter the project details and logs. The layout is now complete! Even as a beginner, I found Eagle quite straightforward to use for producing a tidy layout. The resulting board is about 80x30mm in size, which I think is fine for the prototype. I managed to pack the components reasonably tightly, but I only populated one side of the board, so there might be room for a bit of future size optimisation (although there's plenty of routing on the reverse side).

    I've reserved space to stack the Adafruit BLE module on top of the board without increasing the footprint. I've tried to keep the layout neat and tidy, and added some labelling on the silkscreen layer to identify the pinouts. I also added a small drawing of a platypus - a mammal that can sense electric fields!

    Here's the final (prototype!) board layout:

    To make the Gerber files I followed the Sparkfun tutorial referenced in an earlier log, using their suggested CAM file:

    Gerber generation - tutorial from Sparkfun

    As a sanity check, I wanted to view the Gerber files in a tool other than Eagle. I found gerbv, an open-source Gerber viewer that works really well:

    gerbv - free/open-source gerber viewer

    I used this to check particularly that the solder mask around the vias was dimensioned correctly to produce the annular rings.

    I plan to make all the design and manufacturing files freely available, but first I want to make sure that it actually works. I've placed an order for 5 boards from PCBWay in China. I don't have any experience of this supplier (or any supplier, in fact), but the European manufacturers seem to be quite expensive if you want a short lead time, and to get boards reasonably quickly it's cheaper to get them from China, with airmail shipping.

    I chose lead-free HASL surface finish, which is cheap. ENIG (gold) would be more appropriate for the sense matrix, but I wanted to keep the cost down for these first boards, as I don't know yet whether there are any mistakes in the design that would make the board unusable. So there's no gold finish for now.

    I ordered all the SMD parts from RS Components in the UK, and found you can even buy 0805 capacitors in quantities of 5! I did check that I could easily get all the required parts in the intended packages before I started the PCB layout, to avoid designing for something that was unobtainable.

    With the sense lollipop components and boards on the way, it's time to think about the supporting software on the Raspberry Pi.

  • PCB design, part 1

    Ray Lynch 09/27/2016 at 20:07

    After drawing the lollipop interface schematic in Eagle, I started with the PCB layout. As a beginner, I found the following tutorials from Sparkfun very helpful, to get to know the basics for both through-hole and SMD layout:

    Sparkfun introduction to Eagle

    Sparkfun SMD layout in Eagle

    The challenge here was to make the board as small as possible. After drawing a few vias with standard 0.6 mm drill hole diameter, I decided that a matrix size of 14x14 pixels was optimal to avoid making the board too wide. The routing of signals from the vias is a consideration as well as the space required for the vias themselves. This number of pixels should be more than sufficient to create a usable image.

    In order to make the required layout for the electrodes, I used the polygon tool to create a rectangular strip of copper for each column, on the reverse side of the board, based on a 0.025 inch grid. After placing all the vias within the copper strips (but not connected to them), I made the rows by joining the vias vertically with tracks on the front side of the board.

    To gain the necessary clearances for the outer ring around each via, I set the design rules to specify a track-to-via clearance of 8 mil and a stop mask clearance of 16 mil. Since the mask opening extends 16 mil beyond the via pad while the surrounding copper stops 8 mil short of it, this exposes an annular ring of copper 8 mil thick around each via. When it comes to the production of the PCB, these dimensions need to be respected for the board to function as intended.

    After drawing the complete matrix, the detail of the electrode construction looks like this:

    I began to place the other components. After trying the autorouter I realised that there would be no chance of making a compact layout unless the routing was done manually. Starting with the drivers for the rows and columns, I came up with a fairly compact routing, using both sides of the board. The other parts are in approximate positions with the routing for them still to do. The board is starting to take shape:

  • Interface circuit design

    Ray Lynch 09/21/2016 at 19:10

    To recap, the lollipop interface circuit will consist of:

    • ATMega328P microcontroller
    • MC14555P CMOS high-side row multiplexer
    • 74LS156D open-collector column multiplexer

    The CMOS part will have a variable supply voltage. To allow it to remain compatible with the control signals from the microcontroller, I'll use a buffer IC to perform level conversion. By using a part with open-collector drivers, the outputs can be pulled up to the CMOS voltage:

    • 74LS07 hex non-inverting buffer with open collector outputs

    With the addition of pullup resistors, a few decoupling capacitors and a crystal for the microcontroller to allow accurate timing, it should be possible to design a relatively small board, given the low number of components.

    I would also like the option of a bluetooth interface, to allow pairing with a smartphone. The SPI version of the Adafruit BLE module looks suitable, so I'll try to support this. This is a 3V module, so it makes sense to run the microcontroller at 3V as well - this won't cause any problems with the control signals for the 5V 74LS parts, as a quick look at the datasheets shows that the levels remain compatible.

    Adafruit SPI BLE module

    Although I have some experience of building on stripboard/veroboard, I've never designed my own PCB before. So it will be a challenge to make a well laid-out board while learning how to use the software tools.

    After doing a bit of research, I decided to try Eagle, mainly due to its ubiquity and the availability of a Linux version. I can use the freeware licence with this project, as it's non-commercial and the board size will be below the imposed limits.

    Here's the completed circuit diagram, as drawn in Eagle:

  • Implementing the sense matrix

    Ray Lynch 09/18/2016 at 20:30

    To determine a suitable size for the interface lollipop, I made some samples with cardboard. I decided that the part in contact with the tongue needs to be no larger than 40x30mm, to fit comfortably inside the mouth. But for the initial prototype, it will be difficult to include the sense matrix and the required driving electronics in such a small area. By making the board slightly longer, but still 30mm in width, there should be enough space. It will stick out of the mouth, but this will give something to hold on to and won't look too awkward.

    I decided to design the driving circuitry for a sense matrix resolution of 16x16 pixels. I'm not sure yet whether this number of pixels will fit into the available area, but this is the goal.

    To drive the sense matrix I chose the ATMega328P 8-bit microcontroller, because of its versatility and ease of use. It's available in a TQFP package with 20 I/O pins, plus SPI and UART, and is straightforward to program in assembler or C. It's also widely available at low cost, and can be flashed using a simple parallel cable.

    To drive the 16 rows and 16 columns, I will need to multiplex the limited number of I/O lines. I intend to use standard logic parts to do this. For the high side (rows), I will use a CMOS part, MC14555P, a dual 1-of-4 mux. By using two of these ICs, I can use 6 I/O lines of the microcontroller to individually select 1 of 16 rows. I can set the row voltage to anything within the allowed CMOS supply range, i.e. from 3 to 18V, simply by varying the supply voltage to the mux ICs.

    For the low side (columns), I will use a similar dual 1-of-4 mux, but with open collector outputs. A suitable part is the 74LS156. When an output is high, it will float, rather than being driven to the 5V supply voltage of the 74LS156, so no current will be drawn through the sense matrix. When low, the column will be pulled down to ground and the pixel in the intersecting active row will see a voltage across it.

    To test out the concept, I built a circuit on breadboard using a high side CMOS part and a low side open-collector 74LS part. The parts used were a 4049 (inverters) and 74LS03 (open-collector NANDs) as I had these to hand. The 74LS was powered by a 7805 regulator, and I used a 741 op amp as a voltage follower to generate a variable supply for the 4049. This let me set the CMOS voltage between 3V and 12V or so when connected to a 16V plug-in supply. I held the high and low output wires against my tongue, about 1mm apart.

    I found that this setup worked quite nicely, and by adjusting the CMOS voltage I could set the intensity of the stimulation. At 3V it wasn't noticeable, with the effect starting at about 4-5V. This will depend on the amount of moisture present. Around 6V-7V was best, with 8V being high and 9V rather too high for comfort.

    So the concept of using standard logic parts to drive the rows and columns seems to work, and the use of CMOS high side and open-collector low side allows the stimulation voltage to be easily controlled. Now we need to connect these to the microcontroller.
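    Looking ahead, the scan loop on the microcontroller might be something like this sketch (the pin assignments and strobe polarities are purely illustrative, not the final board mapping):

    ```c
    #define F_CPU 12000000UL            /* 12MHz crystal */
    #include <avr/io.h>
    #include <util/delay.h>

    #define N 16                        /* 16x16 matrix */

    static uint8_t frame[N][N];         /* pixel intensities, 0..255 */

    /* Select one of 16 rows on the high-side CMOS muxes:
       2 shared address bits plus 4 enables = 6 lines. */
    static void select_row(uint8_t r)
    {
        PORTB = (PORTB & ~0x03) | (r & 0x03);
        PORTD = (PORTD & 0x0F) | (uint8_t)(0x10 << (r >> 2));
    }

    /* Pulse one of 16 columns low via the open-collector muxes;
       the pulse width sets the perceived intensity. */
    static void pulse_column(uint8_t c, uint8_t level)
    {
        PORTC = (PORTC & ~0x03) | (c & 0x03);
        PORTD |= (uint8_t)(0x01 << (c >> 2));
        while (level--)
            _delay_us(1);
        PORTD &= 0xF0;                  /* release all columns */
    }

    int main(void)
    {
        DDRB |= 0x03; DDRC |= 0x03; DDRD = 0xFF;
        for (;;)                        /* refresh the whole frame forever */
            for (uint8_t r = 0; r < N; r++) {
                select_row(r);
                for (uint8_t c = 0; c < N; c++)
                    if (frame[r][c])
                        pulse_column(c, frame[r][c]);
            }
    }
    ```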

  • Sense matrix concept

    Ray Lynch 09/15/2016 at 20:41

    The interface lollipop needs an array of contacts that will stimulate the surface of the tongue. How will this be implemented? I plan to use standard PCB features without any exotic small dimensions, to allow for cheap and easy manufacturing.

    To avoid having an enormous number of lines to control each "pixel" individually, I will make an array of rows and columns, where each row and each column has a separate driver. Only one row and one column will be driven at a time. The intersection of the active row and active column will switch on a particular "pixel".

    I plan to implement this using a 2-sided PCB with vias for each "pixel". Each column of vias will be connected vertically on the non-contact side of the board. On the contact side, horizontal rows of copper will surround each row of vias, but the vias will be isolated from the copper by a thin annular gap. By selecting an appropriate diameter for the soldermask removal, a copper ring will be exposed around each via. The soldermask will cover the rest of the contact side of the board.

    The vias (each column) will be driven high when selected, and the copper rings (each row) will be pulled low. The voltage across the gap from via to ring will stimulate the tongue (the voltage will be quite low!).

    When a particular row and column are driven, there will also be a voltage drop between all the vias in the column, and all the rings in the row. But the distance between them will be much greater than the small separation on the intersecting "pixel". My hope is that the surrounding "low level" pixel stimulation will be negligible, but it remains to be seen how effective this method is. There is plenty of scope for adjusting the voltage and the pulse duration to an appropriate level to get a good result at the target pixel, while limiting the spurious effect along the associated row and column.



Discussions

janavikhochare99 wrote 12/25/2018 at 12:31

Hi, this is Janavi. I was keen to make a similar project to yours, but I have no idea how to make the lollipop PCB. If possible, could you help us with the design and its construction?


Jonathan Morton wrote 03/22/2018 at 10:15

Hi Ray

Really interested in this project.  How is it going?

I'm hoping to make a similar device except with an accelerometer for balance.  I think my lollipop will only need 6 or so contacts.  I was really hoping to find one commercially available. No luck so far.


nori wrote 02/08/2017 at 15:01

Hello, I am an MA student at the University of the Arts London. I'm really interested in your project; could you give me your email address? There is something I'd like to discuss over email. Thank you. Ceci


tomas wrote 11/15/2016 at 18:36

Hi, I really like the project. It's been a while since your last log. Did you make any progress?


Mark Jeronimus wrote 10/06/2016 at 14:38

Only having high-side on one ring and low-side on another ring…

This will probably start tasting really bad after a while due to electrolysis.

What this needs to do is apply AC (square wave) to each ring pair using 2×16 tri-state buffers.


Ray Lynch wrote 10/06/2016 at 21:29

Hi Mark, interesting, I'm curious to find out how it feels, and tastes. Once the prototype is working I'll experiment a bit with different drive methods to see what can be improved, and I'll definitely bear this in mind. Thanks!


Ray Lynch wrote 09/08/2016 at 20:39

Hi, thanks a lot for the comment! I'm hoping that the concept will allow for more than simply brightness perception. For now the main objective is to get a working system that presents image data from the camera, but then it could easily be extended to "display" different kinds of information. For example, perhaps the depth of a point on an image could be represented by its intensity, and this should definitely be possible. Let's see!


leokeba wrote 09/08/2016 at 01:02

Hi, I think the concept is really great and powerful. However, if you want it to be a real useful tool for blind people to use every day for tasks like navigation and spatial awareness, I think brightness perception is waaaaay too "abstract" and a really poor use of the concept. Maybe it would be cool to sense something flat like a picture or a drawing through it, but most of that would be for aesthetic purposes. Depth perception, on the other hand, would be incredibly much more useful.
There's this project that you could maybe look into, but it seems that the source code will never be published. http://hackaday.com/2014/11/03/stereo-vision-and-depth-mapping-with-two-raspi-camera-modules/
Personally, I would try something with a Kinect. I read somewhere that it could achieve around 10 fps with a Raspberry Pi 3, which would be great if you could extract depth information and route it to the depth matrix in realtime.

Anyway, good luck with this project, I'm really looking forward to seeing how you will build that tongue sense matrix :p

