ALS Therapy Development Institute and Google use artificial intelligence to improve speech recognition for people with ALS.
People with ALS confront enormous challenges as they adjust to changes in their physical abilities. One of the most significant is a loss of independence: people living with ALS often need help with everyday tasks like getting dressed or switching on a light. As ALS progresses, the loss of muscle function eventually takes away a person’s ability to walk, write, speak, swallow, and breathe, shortening their life span. Beyond losing their independence, people lose the ability to express their true selves. Just as they come to need more assistance, many also develop dysarthria, or impaired speech, which makes it harder to ask for help.
“Once bulbar symptoms set in, people started to have a harder time understanding me.” –Noel LeVasseur, person with ALS
During a series of wide-ranging discussions between the ALS Therapy Development Institute (ALS TDI) and Google about ALS, conversations repeatedly circled back to this specific problem of impaired speech. Michael Brenner, PhD, and Julie Cattiau at Google confirmed that this was the type of problem Google could really help with.
“Speech recognition should work for everybody.” – Michael Brenner, Research Scientist at Google
In recent years, the clinical operations team at ALS TDI has found that even people with severe dysarthria (speech impairment) can be understood by close friends and family members. When Michael and Julie at Google learned this, they wanted to explore the idea that, with enough data, an artificial intelligence (AI) tool could learn how to interpret an impaired voice. Google has a strong track record of building tools that recognize speech and translate language. They saw this as an opportunity to train their standard speech recognition algorithms to identify impaired speech in much the same way they learn to understand accents. The premise was that existing AI tools simply hadn’t heard enough ALS-affected voices to recognize them reliably.
Dr. Brenner argued, “Speech recognition should work for everybody.” People with dysarthria and other physical limitations should have access to written and spoken communication: email, the internet, social media, independent access to reading, television operation, and more.
To build tools that facilitate verbal communication, ALS TDI has been recruiting people with ALS who are willing to record their voices. Some have recorded hundreds or even thousands of specific phrases to train and optimize Google’s AI-based algorithms, so that mobile phones and computers can more reliably recognize and transcribe what they say. This might allow people with ALS to independently send text messages or issue spoken commands to Google Home devices.
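The article doesn’t describe Google’s training pipeline, but the standard way to measure whether a speech model is improving on recorded phrases like these is word error rate (WER): the number of word insertions, deletions, and substitutions needed to turn the model’s transcript into the reference phrase, divided by the reference length. As a minimal, hypothetical sketch (not Google’s implementation):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming table for edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + substitution)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# A model that drops one word from a four-word phrase scores 0.25:
print(word_error_rate("turn on the lights", "turn on lights"))  # → 0.25
```

Lower is better: a perfect transcription scores 0.0, and fine-tuning on a speaker’s recorded phrases would aim to drive their WER down toward that.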
Photo credit: Tamara Lackey Photography
"If I ever need help, I want to be able to say, ‘OK, Google, broadcast call for help!’, have it understand me and send the message." - Andrea Lytle Peet, person with ALS
The more voice samples integrated into the Google AI model, the better the model will perform. To that end, anyone living with ALS is encouraged to participate in ALS TDI’s Precision Medicine Program. The program will leverage Google’s speech recognition technology to build assistive applications that people with ALS can use to communicate again. Click here to read more about Google's use of AI to better understand impaired speech.
“We want to thank all of our participants living with ALS who have already given their time and energy by recording their voices.” – Maeve McNally, Senior Director, Clinical Operations at ALS TDI
Please fill out this short form to volunteer and record a set of phrases.
If you would like to donate to ALS TDI to support their research to find effective treatments for ALS, click here.