AI-powered glasses could be a game-changer for people with hearing loss

A project building on a 2015 University of Stirling study could harness AI to support those with hearing loss

An elderly man adjusts a hearing aid
Hearing loss impacts more than 1.2 million adults in the United Kingdom

A University of Stirling researcher is part of a UK-wide team studying 'hearing glasses' that could dramatically improve how people with hearing loss experience sound by combining lip-reading technology and artificial intelligence, potentially enhanced by the power of cloud computing.

The COG-MHEAR programme is funded by the Engineering and Physical Sciences Research Council and led by Professor Amir Hussain at Edinburgh Napier University.

A 5G-cloud-based noise cleaning strand of the work builds on a pioneering 2015 study into audio-visual hearing aids led by Professor Hussain when he was at the University of Stirling. This strand is co-led by Heriot-Watt University and involves Dr Ahsan Adeel from the University of Stirling’s Faculty of Natural Sciences, working alongside researchers from the University of Edinburgh and Edinburgh Napier University.

A potential application could build on the concept of a 5G-IoT cloud-based audio-visual hearing aid proposed in a paper in 2018 by Dr Adeel, Associate Professor in Artificial Intelligence in the University of Stirling’s Computing Science and Mathematics Division, and Professor Hussain.

This could help with hearing loss by creating a device that filters out background noise in real time, even in loud environments. It may use a small camera built into the glasses to track the speaker’s lip movements, while a smartphone app could use 5G to send both audio and visual data to a powerful cloud server.

There, artificial intelligence could isolate the speaker’s voice from surrounding noise and send the cleaned-up sound back to the listener's hearing aid or headphones almost instantly.
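The round trip described above can be illustrated with a deliberately simplified sketch. Everything in it is an assumption for illustration: the "voices" are pure tones, and the visual cue is reduced to a frequency band for the speaker being looked at, so the "cloud AI" is just a spectral mask. The real system would run trained audio-visual deep networks on speech and lip video, not a band-pass filter.

```python
import numpy as np

SAMPLE_RATE = 16_000

def make_voice(freq_hz, duration_s=0.5):
    """Stand-in for a speaker's voice: a pure tone at freq_hz."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    return np.sin(2 * np.pi * freq_hz * t)

def cloud_enhance(mixed, target_band_hz):
    """Toy 'cloud AI': keep only the frequencies the visual cue points
    at, zero the rest, and resynthesise the waveform. A real system
    would use a trained audio-visual deep network instead of a mask."""
    spectrum = np.fft.rfft(mixed)
    freqs = np.fft.rfftfreq(len(mixed), d=1 / SAMPLE_RATE)
    lo, hi = target_band_hz
    mask = (freqs >= lo) & (freqs <= hi)
    return np.fft.irfft(spectrum * mask, n=len(mixed))

# Two people talking at once: the wanted speaker at 300 Hz and an
# interfering speaker at 1200 Hz.
target = make_voice(300)
interferer = make_voice(1200)
mixed = target + interferer

# The glasses' camera 'sees' which speaker the wearer is looking at;
# here that visual cue is collapsed into a frequency band.
cleaned = cloud_enhance(mixed, target_band_hz=(250, 350))

# The cleaned signal should be far closer to the target than the mix.
err_before = np.mean((mixed - target) ** 2)
err_after = np.mean((cleaned - target) ** 2)
print(f"error before: {err_before:.3f}, after: {err_after:.4f}")
```

The point of the sketch is the data flow, not the signal processing: the heavy lifting (`cloud_enhance`) happens server-side, and only the cleaned waveform travels back to the wearer.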

A superpower

Professor Mathini Sellathurai from Heriot-Watt is co-leading the wireless 5G-cloud-based noise cleaning strand of the work. She said: “We’re not trying to reinvent hearing aids. We’re trying to give them superpowers.

“You simply point the camera or look at the person you want to hear.

“Even if two people are talking at once, the AI uses visual cues to extract the voice of the person you’re looking at.”

This approach, known as audio-visual speech enhancement, takes advantage of the close link between lip movements and speech.

While some noise-cancelling technologies already exist, they struggle with overlapping voices or complex background sounds, something this system aims to overcome.

Professor Sellathurai added: “There are only a few big companies that make hearing aids, and they have limited support in noisy environments. We want to break that barrier and help more people, especially children and older adults, access affordable, AI-driven hearing support.”

A new approach to an old problem

More than 1.2 million adults in the UK have hearing loss severe enough to make ordinary conversation difficult, according to the Royal National Institute for Deaf People.

Hearing aids can help, but most are limited by size and processing power and often struggle in noisy places like cafés, stations or workplaces.

One option is to shift the heavy processing work to cloud servers – some as far away as Stockholm – allowing the researchers to apply powerful deep-learning algorithms without overloading the small, wearable device.

“There’s a slight delay, since the sound travels to Sweden and back,” said Professor Sellathurai, “but with 5G, it’s fast enough to feel instant.”
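A back-of-the-envelope budget shows why the delay can still feel instant. All the figures below are illustrative assumptions, not measurements from the project: an assumed fibre path length, typical 5G air-interface latency, and a notional cloud processing time.

```python
# Rough latency budget for the cloud round trip described above.
# Every number here is an illustrative assumption, not a measured figure.
C_FIBER_KM_PER_MS = 200        # light in optical fibre travels ~200 km/ms

distance_km = 1_400            # assumed Edinburgh-to-Stockholm fibre path
propagation_rtt_ms = 2 * distance_km / C_FIBER_KM_PER_MS  # there and back
radio_access_ms = 10           # assumed 5G uplink + downlink air latency
processing_ms = 15             # assumed cloud inference time per audio chunk

total_ms = propagation_rtt_ms + radio_access_ms + processing_ms
print(f"estimated round trip: {total_ms:.0f} ms")
```

Under these assumptions the round trip lands in the tens of milliseconds, short enough that the cleaned audio still feels synchronised with the speaker's lips.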

From lab to life

The COG-MHEAR team led by Professor Hussain is working on multiple fronts, from on-device edge AI to complementary cloud AI, aiming to balance performance with sustainability.

The researchers are working towards a functional version of the glasses as well as other form factors. They are also speaking to hearing aid manufacturers about future partnerships and hoping to reduce costs so the devices can be more widely available.

Professor Sellathurai believes the cloud-based model could one day be made public, allowing anyone with a compatible device to connect and benefit.

The COG-MHEAR team also includes co-investigators from the universities of Glasgow, Edinburgh, Manchester and Nottingham, as well as key collaborators from the University of Oxford and Academia Sinica in Taiwan. The project’s industry collaborators include Nokia Bell Labs and Sonova AG.
