Harnessing AI to improve hearing technology

Researcher
Professor David McAlpine
Writer
Georgia Gowing
Date
24 March 2023
A new research partnership with global tech company Google will explore the use of artificial intelligence to optimise the way hearing devices work, including seeking to tackle the long-standing problem of listening in noisy environments by ‘hyper-personalising’ hearing aids and cochlear implants to each user’s unique hearing pattern.

Not perfect: Hearing devices are very efficient at amplifying sounds but struggle to distinguish between them.

The collaboration is part of Google’s Digital Future Initiative, and it brings together Google, Macquarie University Hearing, Cochlear, National Acoustic Laboratories, NextSense and the Shepherd Centre.

Hearing loss affects about 3.6 million Australians, and it can have wide-ranging health implications, influencing everything from an individual’s educational and employment opportunities to social isolation and the likelihood of developing dementia in later life.

Academic Director of Macquarie University Hearing, Professor David McAlpine, says despite hearing loss being so widespread, it is a severely undertreated problem, and one that often goes undetected.

“About a third of people who have hearing aids don’t use them, and one of the reasons for this is that current technologies don’t work for every person in every situation,” Professor McAlpine says.

“In normal hearing, the brain uses its 30,000 neural connections from the ear to sift through the sounds we’re hearing, helping us focus on those we want to concentrate on – the classic ‘cocktail party problem’.

“The sensory cells of the inner ear, which are the most vulnerable to damage from noise and ageing, amplify sound and make different sounds distinguishable from one another. This is difficult for hearing technology to reproduce.

“Hearing aids are highly effective at amplifying sounds to make them audible again, but they struggle to distinguish between sounds.

“In noisy environments, such as bars or restaurants, that means different competing sounds are all amplified to the same degree, making it hard for us to separate out a conversation from the background noise.

“Voice recognition technology has the same challenges, which explains why the digital assistant on your phone might suddenly fire up for what seems like no reason or play Ariana Grande when you asked for AC/DC.”

Tackling noise in public spaces

Hearing aids and cochlear implants require adjustments, training, and a period of rehabilitation to ensure the settings are tailored to an individual’s needs.

Professor McAlpine says one thing we do not always do well with hearing technologies is match them to each person’s individual experience of hearing loss and what they want to achieve with their devices.

Party problem: People with hearing devices can struggle to hear in noisy environments.

For hearing aids in particular, someone who copes well with their device settings at home in a relatively quiet environment might struggle in loud public spaces. This can lead them either to reduce their social activities or to abandon the device altogether because they find wearing it stressful and exhausting.

“It’s a simple fact that people won’t use these technologies if they don’t fit their lived experience,” he says.

“One of the first things we want to explore is whether machine-learning algorithms can replicate things like the National Acoustic Laboratories’ ‘NAL-NL2’ formula used by audiologists worldwide when fitting someone with a hearing aid.

“An automated process based on an individual’s listening performance – beyond the relatively simple audiogram that is the current clinical tool for fitting hearing aids – would reduce the number of return visits and the amount of tweaking required when someone gets a new device.

“Ideally, we want to map the performance of a hearing-impaired individual’s inner ear and listening brain, compare this to a model of normal hearing, and use this information to optimise the settings of their device, thereby restoring their hearing to normal or near-normal performance.

“This mapping would be dynamic, adapting to the environment, and reducing the need to adjust to new hearing devices, as the profile would be transferable.”
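Prescription formulas of the kind Professor McAlpine mentions map a person’s audiogram (hearing thresholds in dB HL at a handful of test frequencies) to per-frequency amplification targets, which is why they are a natural first target for machine learning. As a rough illustration of the idea only – the structure below mirrors linear prescription rules, but every constant is invented for this sketch and is not a real NAL or NAL-NL2 coefficient – such a rule might look like:

```python
# Illustrative sketch of an audiogram-based gain prescription.
# The shape (an overall term plus a per-frequency term) mirrors linear
# prescription rules; the constants are invented for illustration and
# are NOT the real NAL/NAL-NL2 coefficients.

FREQS_HZ = [250, 500, 1000, 2000, 4000]

# Hypothetical per-frequency correction constants (dB).
K_DB = {250: -15, 500: -8, 1000: 0, 2000: 1, 4000: -2}

def prescribe_gain(audiogram: dict[int, float]) -> dict[int, float]:
    """Map hearing thresholds (dB HL) to insertion-gain targets (dB)."""
    # Overall term driven by the average loss in the speech range.
    avg_loss = (audiogram[500] + audiogram[1000] + audiogram[2000]) / 3
    overall = 0.15 * avg_loss
    gains = {}
    for f in FREQS_HZ:
        # Roughly a third of the threshold loss at each frequency, plus
        # the overall term and a per-frequency correction; gain is never
        # prescribed below 0 dB.
        gains[f] = max(0.0, overall + 0.31 * audiogram[f] + K_DB[f])
    return gains

# Example: a sloping high-frequency hearing loss.
audiogram = {250: 20, 500: 30, 1000: 40, 2000: 55, 4000: 70}
print(prescribe_gain(audiogram))
```

In this toy version the prescription depends only on the audiogram; the proposal described above would instead learn the mapping from richer measures of an individual’s listening performance, so the targets could adapt to the listener and the environment rather than to thresholds alone.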

New hope for individualised technology

This approach could be used to treat all sorts of hearing disorders such as tinnitus (ringing in the ears) and hyperacusis (extreme sensitivity to sound).

In theory, it could also help optimise any listening system, including voice recognition systems and ‘hearables’ like noise-cancelling headphones, which help improve listening performance for people with clinically normal hearing but who struggle to hear in background noise.

“This is a tremendously exciting initiative at Macquarie University’s Australian Hearing Hub, bringing together leading experts from the commercial, academic, not-for-profit, and government sectors to tackle the most pressing challenges for people living with hearing loss, and their families,” Professor McAlpine says.

“It could help to transform hearing health in Australia and worldwide, delivering ground-breaking research and innovations, new technologies, therapies, and interventions to support communication, wellbeing and social connectedness.

“These are ambitious goals that cannot be achieved in isolation, and we look forward to seeing what we can accomplish together.”

Simon Carlile is Google Australia’s research lead on the project.

“Hearing technology helps people around the world connect with people and their surroundings, but there are many more people who could benefit,” he says.

“As part of Google’s Digital Future Initiative, this exciting collaboration will help us explore new ways to design and improve machine-learning models that better fit the needs of the individual listener – and develop a more precise and accessible approach to hearing care.”

Professor David McAlpine

David McAlpine is Distinguished Professor of Hearing, Language and the Brain at Macquarie University, and Academic Director of Macquarie University Hearing.

Macquarie University Hearing, Cochlear, NextSense, NAL and the Shepherd Centre are all members of the Australian Hearing Hub. The Hearing Hub is located on Macquarie University’s Wallumattagal Campus and was established in 2013 to drive innovation and collaboration in health and technology.
