Globally, 1.5 billion people live with some form of hearing loss. For decades, Australia has led the way in building more accessible hearing technology. The cochlear implant was developed in our own backyard and has become the gold standard in clinical protocols, diagnostics and treatments for people living with severe to profound sensorineural hearing loss. For 75 years, NAL has set global standards for assessing hearing impairment, developed hearing healthcare innovations, and created the prescription software most widely used by audiologists today. And through NextSense, Australia has one of the world’s largest and most established cochlear implant programs, and has pioneered the use of telepractice.
But there is more work to be done to ensure the underlying technology is accessible and useful for everyone.
As part of our Digital Future Initiative, we’re announcing new partnerships to explore AI solutions and new possibilities for hearing healthcare. The collaboration brings together five organisations across the healthcare service delivery, research and technology sectors: Cochlear, Macquarie University Hearing, National Acoustic Laboratories (NAL), NextSense and The Shepherd Centre.
Together, we’ll focus on new applications of AI and machine learning to develop listening and communication technologies, overcome their current challenges – and pave the way for more customised hearing healthcare.
Our first project seeks to personalise hearing models so that hearing aids and other listening devices can better address individual listening needs.
This technology could be particularly beneficial for people using listening devices in complex listening environments – such as busy restaurants, group brainstorms or live orchestral performances. The overlapping sounds in these settings can make it strenuous or overwhelming for people using such devices to process and decipher the many sources of sound.
A person with a cochlear implant having a conversation in a transit station (a complex listening environment)
This project will explore new applications of AI to better identify, categorise and segregate sound sources. Ultimately, this might make it easier for people using assistive listening devices to follow a conversation or activity as the technology could help to prioritise sounds, such as a person speaking – and filter out others, such as background noise.
The collaboration is intended to invest in the extraordinary talent of Australians, and to continue Australia’s proud track record of hearing technology innovation. To help lead this effort on the ground, we are delighted to welcome Simon Carlile, a distinguished world leader in this field, as he returns to join Google Research Australia.
Prof Greg Leigh AO (NextSense), Dr Simon Carlile (Google Research), Prof David McAlpine (Macquarie University), Dr Zachary Smith (Cochlear), Prof Catherine McMahon (Macquarie University), Dr Aleisha Davis (Shepherd Centre), Dr Malcolm Slaney (Google Research), Sam Sepah (Google Research), Dr Brent Edwards (National Acoustic Laboratories)
This initiative builds on Google’s long-standing commitment to make the world more accessible for people who are deaf or have hearing loss. Over the years, we have launched a range of accessibility tools on Android, including Live Transcribe, Lookout, Sound Amplifier, Live Caption and an improved TalkBack. Additionally, Project Relate is a first-of-its-kind Android app that aims to help people with non-standard speech communicate and be better understood. And with Project Euphonia, we’re working to help people with atypical speech be better understood.
With our partners, we look forward to building on this work and designing tools with and for people who are deaf or hard of hearing. Because as long as there are barriers, we still have work to do.