Touch sensors help people understand speech and sound

Amir Amedi and colleagues at Reichman University have released a study describing touch-based technology that helps people understand speech and sound – and, in the future, may help them detect where sounds are coming from.

The sensory substitution device (SSD) delivers speech simultaneously as audio and as fingertip vibrations corresponding to low frequencies extracted from the speech input.

Forty non-native-English-speaking individuals with normal hearing were asked to repeat distorted sentences that simulated hearing through a cochlear implant. In some cases, fingertip vibrations corresponding to the lower speech frequencies were added to the sentences. To deliver these vibrations, an audio-tactile SSD was developed to convert sound frequencies into vibrations.
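The paper does not publish the device's signal-processing pipeline, but the core idea – extract the low-frequency content of speech and use it to drive a fingertip vibration – can be sketched as follows. The filter design, the 300 Hz cutoff, and the amplitude mapping are illustrative assumptions, not the authors' implementation.

```python
import math

def low_pass(signal, sample_rate, cutoff_hz):
    """First-order IIR low-pass filter (simple RC approximation).

    Keeps the low-frequency speech content (e.g. the fundamental
    frequency) and attenuates higher frequencies.
    """
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = dt / (rc + dt)
    out = [signal[0]]
    for x in signal[1:]:
        out.append(out[-1] + alpha * (x - out[-1]))
    return out

def to_vibration_amplitude(low_freq_signal):
    """Map filtered samples to normalized vibration intensities in [0, 1]."""
    peak = max(abs(s) for s in low_freq_signal) or 1.0
    return [abs(s) / peak for s in low_freq_signal]

# Example: a 100 Hz tone (speech-fundamental range) mixed with 3 kHz content.
sr = 16000
t = [i / sr for i in range(sr // 10)]  # 100 ms of samples
speech_like = [math.sin(2 * math.pi * 100 * x)
               + 0.5 * math.sin(2 * math.pi * 3000 * x) for x in t]
filtered = low_pass(speech_like, sr, cutoff_hz=300)  # cutoff is an assumption
vibration = to_vibration_amplitude(filtered)
```

In an actual device, the normalized intensities would be streamed to a vibration actuator on the fingertip in sync with the audio playback.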

The level of understanding increased over a 45-minute training period accompanied by visual feedback. Participants were then able to understand a new set of sentences in a noisier environment and under difficult conditions. Performance improved significantly when they received a corresponding vibration in addition to the audio.

Amedi believes that “the adult brain can also learn, in a relatively simple way, to use one combination of senses or another to better understand situations. This assumption is consistent with the institute’s previous findings showing that the brain is not divided into separate areas of specialization according to the senses, but rather according to the performance of tasks.”

Postdoctoral researcher Katarzyna Ciesla said that the next phase of the research is being carried out with people who are hearing-impaired or completely deaf. The sensory intervention will be individually tailored to each participant: a combination of sound and vibration, or, for deaf participants, vibration alone before cochlear implantation. The aim is to establish their understanding of speech with the help of a changing vibration.
