Hearing loss is a problem that affects a substantial part of the population, both young and old. As many as 36 million American adults report some degree of hearing loss, and while a vast number of them would benefit greatly from a hearing aid, only 20% of the people who should wear one actually do.
One of the major complaints about hearing aids is that they don't always allow the wearer to distinguish the sounds they want to focus on (such as people speaking to them) from distracting background sounds. However, a new technology that incorporates "neural networks" is allowing many hearing-impaired people to hear and recognize speech almost as well as people with normal hearing do.
A team of hearing scientists at Ohio State University has partnered with computer engineers to address the problem of filtering words out of distracting background sounds, and they may have come up with a viable solution. The new technology uses neural networks to improve test subjects' ability to distinguish spoken words from other sounds. Thanks to these networks, test subjects achieved up to 90% word recognition – far higher than the 10% enabled by older hearing aids.
A computer algorithm developed by DeLiang "Leon" Wang, Professor of Computer Science and Engineering at Ohio State University, analyzes the sounds detected by the hearing aid, picks out speech patterns, and removes the interfering background noise. The algorithm examines all of the incoming sound, searching for the speech that dominates everything behind it. Interference comes in two forms: other people talking in the background (called "noisy speech") and stationary noise, which includes traffic sounds, air conditioners, background music, and the like. Both forms are dominated by the noise around them, whereas the foreground speech the listener wants to hear dominates its surroundings. The algorithm identifies the speech that dominates the surrounding noise and filters out the rest of the sound.
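The "dominance" idea described above can be illustrated with a toy sketch: split a noisy recording into time-frequency units and keep only the units where the target speech is stronger than the noise (a binary mask). This is an illustrative assumption, not Wang's actual published algorithm; in a real hearing aid the clean speech and noise are unknown, so the mask must be estimated by a trained neural network rather than computed directly as done here.

```python
# Toy illustration of dominance-based filtering (not the actual OSU system).
# We build a mask that keeps time-frequency units where a stand-in "speech"
# signal is stronger than a stand-in "noise" signal, then apply it to the mix.
import numpy as np

def stft_mag(x, frame=256, hop=128):
    """Magnitude spectrogram via a simple Hann-windowed FFT over frames."""
    win = np.hanning(frame)
    n_frames = 1 + (len(x) - frame) // hop
    frames = np.stack([x[i * hop:i * hop + frame] * win
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))

rng = np.random.default_rng(0)
t = np.arange(16000) / 8000.0
speech = np.sin(2 * np.pi * 440 * t) * (t < 1.0)  # stand-in for foreground speech
noise = 0.8 * rng.standard_normal(len(t))          # stand-in for stationary noise
mix = speech + noise

S, N = stft_mag(speech), stft_mag(noise)
mask = (S > N).astype(float)     # 1 where speech dominates the noise, else 0
filtered = stft_mag(mix) * mask  # suppress the noise-dominated units

print(mask.mean())  # fraction of units where speech dominates
```

In this sketch the mask is "ideal" because we can compare the clean signals directly; the research challenge the article describes is teaching a network to predict such a mask from the noisy mixture alone.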
The technology has proven to be remarkably effective, and a number of patents have already been taken out on it. It enables people to comprehend about 85 percent of foreground speech despite background babble or conversations, up from 25 percent with previous technology. Interestingly, a listening test administered at Ohio State University showed that the technology worked better than expected: students without hearing impairment actually scored lower on the test than subjects with hearing loss.
The sky is the limit when it comes to potential uses for this technology. It can be integrated into smartphones, Bluetooth headsets, and other communication devices. Now that the technology has begun to solve the "cocktail party problem" of too many background conversations on top of background noise, this breakthrough could give the hearing impaired a real chance to communicate effectively.