
A key part of the ALS Therapy Development Institute’s (ALS TDI) approach to research is collaboration. We are always seeking to partner with other organizations on projects that will bring us closer to our goal of developing treatments for people with ALS. In recent years, one of our most prominent collaborations has been with our partners at Google.

Project Euphonia Collaboration

Project Euphonia Origins
In 2018, members of Google’s artificial intelligence research team began working on an initiative to leverage their AI speech recognition technologies to create tools for people with impaired speech. To get started, they needed a wealth of data to “train” their machine-learning algorithms to better interpret the voices of people with dysarthria. ALS TDI, meanwhile, had a large archive of voice recordings collected from participants in our Precision Medicine Program (PMP). Google’s team reached out to ALS TDI about initiating a collaboration to analyze the data, and Project Euphonia was born.

Initially, the project focused on analyzing these recordings of people with ALS to see if AI could detect the presence of the disease, potentially serving as an early diagnostic tool. However, the researchers soon began to see other practical applications for improving the daily lives of people living with ALS.

“The engineers who were working on this project thought, ‘hold on a second… if we have those speech samples, could we actually do more with it?’” recalled Google’s Julie Cattiau, a product manager for Project Euphonia, in a recent interview on ALS TDI’s Endpoints podcast. “Could we try to help people communicate more easily by improving the accuracy of speech recognition for them?”

Project Euphonia Evolution
Google already had many products that use speech recognition – things like voice-to-text software on Android phones or voice-activated Google Home smart speakers. However, because they were developed using data from people without impaired speech, the algorithms behind these programs often struggled to understand the words of people with conditions like ALS. By working to make these products work better – and by simultaneously developing new tools specifically to aid people’s communication – the researchers felt they could help people with speech issues achieve greater independence throughout their lives. In 2019, the program expanded to include crowdsourced data from people with other conditions that cause speech impairment, such as traumatic brain injury, Down syndrome, and stroke.

“We've worked with a lot of trusted testers over the years, and we know that often people will have the types of conditions that can cause speech to be impaired, sometimes also have mobility impairments,” said Cattiau. “And so, we want to make sure that they have access to tools such as the Google Assistant or smart speakers to be able to do tasks such as closing the door and turning the lights on and off, or playing some music, using their voice and without necessarily having to stand up or ask for help. So, to achieve those two goals, our approach has been to make speech technology work better, to personalize it for each individual and to, as a result, allow them to access this kind of technology every day.”

In 2019, ALS TDI and Google’s collaboration was highlighted in an episode of Age of A.I., a YouTube original series hosted by Robert Downey Jr. Age of A.I. Episode 2 features former NFL linebacker Tim Shaw, who is battling ALS, as he works with a team at Google to help restore his ability to communicate, testing the prototype of Project Euphonia for the first time.

Last year, the first publicly available tool from Project Euphonia, called Project Relate, was released in an ongoing beta test. The app, currently only available for Android phones, transcribes the speech of people with dysarthria to serve a number of different communication needs. It can transcribe speech and copy it into other apps on a device, read the transcription out loud to help the user communicate with others more clearly, and even recognize voice commands to operate other smart devices – turning on lights or playing a song, for example.

Recent ALS TDI and Google Collaboration

In addition to helping launch Project Euphonia, ALS TDI has continued to work with Google to analyze data from the PMP and create tools for people with ALS. In early 2022, they released a preprint paper co-authored by ALS TDI scientists and Project Euphonia researchers outlining a new AI-based tool for scoring ALS symptom severity.

This algorithm can listen to phrases recorded by someone with ALS and automatically assign an objective score based on the ALSFRS-R speech category. In addition to sharing the paper, the authors also made the code for the tool publicly available for other researchers to study – and, potentially, continue to improve upon.
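The preprint itself describes the model; purely as a loose illustration of the output side, here is a hypothetical helper (not from the paper) that maps a continuous severity estimate, as a speech model might produce, onto the ALSFRS-R speech item, which runs from 4 (normal speech) down to 0 (loss of useful speech):

```python
def severity_to_alsfrs_speech(severity: float) -> int:
    """Map a hypothetical severity estimate in [0, 1] to the ALSFRS-R
    speech sub-score, where 4 is normal speech and 0 is loss of
    useful speech. This is an illustrative sketch, not the published tool.
    """
    if not 0.0 <= severity <= 1.0:
        raise ValueError("severity must be in [0, 1]")
    # Invert and scale: severity 0.0 -> score 4, severity 1.0 -> score 0.
    return round(4 * (1.0 - severity))
```

For example, a severity estimate of 0.0 yields the maximum score of 4, while 1.0 yields 0. The real system works from recorded phrases and learned acoustic features; this sketch only shows the final mapping onto the clinical scale.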

What to do next: