Voice recognition technology can be an extremely useful tool for navigating modern life – especially for those with limited mobility. However, for people with conditions that can also affect speech, like ALS, taking advantage of these technologies can be challenging – if not impossible. A beta app from Google called Project Relate is working to change this, while also making everyday communication easier for people with dysarthria, or non-standard speech. 

How Project Relate Works

Like most modern voice technology, Project Relate relies on machine learning models trained on large amounts of data. Automatic speech recognition (ASR) systems learn to understand spoken language from millions of recordings of people’s speech. However, most of these examples come from people with so-called “typical” voices. As a result, people with dysarthria – including those with ALS or other conditions such as stuttering, traumatic brain injury, or Down syndrome – often find that common voice recognition tools such as Google Assistant, Apple’s Siri, or Amazon’s Alexa don’t work for them.

The team behind Project Relate believed there was a solution to this issue – the challenge was getting the right kind of data.

“We hypothesized that the basis of what we do for standard speech recognition could be applied to something like dysarthric speech if the model could see enough examples of that speech,” says Project Relate’s Technical Program Manager Pan-Pan Jiang. “So, the issue wasn't really the machine learning part. It was on the data side.”

The Google team gathered more than a million recordings from over 1,000 people with various kinds of dysarthria, then used this data to train an algorithm specifically designed to understand atypical speech.
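To make that idea concrete, here is a minimal, hypothetical sketch of adapting a pre-trained ASR model with additional speech examples, using the open-source wav2vec 2.0 model from the Hugging Face transformers library. It is not Google’s pipeline; the model choice, placeholder data, and hyperparameters are assumptions for illustration only. The same basic adaptation step applies whether the extra examples come from a large corpus of dysarthric speech or, as described below, from a single user’s own recordings.

```python
# Hypothetical sketch, not Google's code: adapt a pre-trained open-source ASR
# model (wav2vec 2.0 via Hugging Face transformers) using extra recordings.
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# Placeholder data: in practice these would be (waveform, transcript) pairs
# recorded by speakers with dysarthric speech, sampled at 16 kHz.
recordings = [(torch.randn(16_000), "TURN ON THE LIGHTS")]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for waveform, transcript in recordings:
    inputs = processor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
    labels = processor.tokenizer(transcript, return_tensors="pt").input_ids
    loss = model(inputs["input_values"], labels=labels).loss  # CTC loss
    loss.backward()          # nudge the model toward the new speech examples
    optimizer.step()
    optimizer.zero_grad()
```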

The Project Relate app then allows users to further train this algorithm to understand their own unique speech. After downloading the app, each user is asked to record 500 phrases. Processing this data can take several days, after which the user receives a custom ASR model that can understand their speech. That model unlocks the app’s features, including:
  • Listen: A tool that converts the user’s speech to text and displays it on their phone’s screen. This can be used to make oneself understood in face-to-face conversations, on video calls by sharing the app’s screen, and for a variety of writing tasks like composing an email.
  • Repeat: This feature restates what the user has said in a clear, computerized voice, allowing users to make themselves audibly understood by people who might not be used to their way of speaking. (A rough sketch of how Listen and Repeat fit together appears after this list.)
  • Assistant: Using the customized model, this feature lets users engage Google Assistant on Android devices, which might otherwise struggle to understand commands from someone with dysarthric speech. Through the Assistant, users can ask questions – like, “What is the weather?” or, “What time is it?” – and control their devices by voice to do things like ask for directions or play a song.
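As a rough illustration of how Listen and Repeat fit together, the sketch below chains a generic speech-to-text call into a text-to-speech call using the open-source speech_recognition and pyttsx3 Python packages. This is a conceptual stand-in only; unlike Relate, it has no model personalized to the speaker behind it.

```python
# Conceptual stand-in for the "Listen" and "Repeat" ideas, not Project Relate's
# implementation. Uses off-the-shelf libraries and a generic recognizer.
import speech_recognition as sr
import pyttsx3

recognizer = sr.Recognizer()
tts = pyttsx3.init()

with sr.Microphone() as source:
    print("Speak now...")
    audio = recognizer.listen(source)

# "Listen": convert speech to text and display it.
text = recognizer.recognize_google(audio)  # generic recognizer, not personalized
print(f"Transcript: {text}")

# "Repeat": re-say the transcript in a clear, synthesized voice.
tts.say(text)
tts.runAndWait()
```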

Many of these features could be especially useful for people with conditions like ALS that limit both speech and mobility. According to a recently published paper, Relate can even continue to understand speech as it deteriorates, so long as users keep making new recordings as their voice changes.

However, there are some limitations to who can use Project Relate. Users must be over the age of 18, the app currently supports only English, and it is only available on Android phones. While there are no plans to develop the app for other platforms such as iOS, similar apps, such as VoiceItt, are available for people with iPhones.

The ALS Research Collaborative (ARC), Project Euphonia, and Project Relate

Project Relate began as part of a larger initiative known as Project Euphonia – a collaboration between Harvard and Google researchers to use artificial intelligence to help people with dysarthria. At the time of the project’s launch in 2018, the Euphonia team knew they would need large amounts of data to train their algorithms to better understand a wider variety of dysarthric speech. To jumpstart their research, Google turned to an organization that had already collected a wealth of voice recordings from people with dysarthric speech – the ALS Therapy Development Institute (ALS TDI).

ALS TDI had a large archive of voice recordings collected from participants in the ALS Research Collaborative (ARC) study (which, at the time, was known as the Precision Medicine Program). Google’s team reached out to ALS TDI about initiating a collaboration to analyze this data, and Project Euphonia was born. Initially, the project focused on analyzing these recordings to see whether AI could detect the presence of ALS, with an eye toward using it as an early diagnostic tool. However, the researchers soon began to see other practical applications for improving the daily lives of people with non-standard speech.

“[Project Euphonia] started as something completely different,” says Pan-Pan. “But we quickly asked ourselves, ‘is there something we can actually do where we could help people right now?’ We thought, ‘let's tackle this and work on something where we can use what Google is really good at, which is machine learning, automatic speech recognition, and speech-to-speech conversion, and use what is in our wheelhouse and try to improve our technology for people with [non-standard] speech.’”

In 2019, the program expanded to include crowdsourced data from people with other conditions that cause atypical speech.

Pan-Pan says that the contributions of people with ALS – both participants in the ARC study and volunteers who helped test the app in its early stages – were essential to its development.

“This community has been so crucial for us in building up our research and development,” she says. “We're so grateful to everyone who has participated in Project Euphonia, and now testing the Relate app. We love working with you.”

Signing Up for Project Relate

Project Relate is currently in beta – meaning the app is available to users, but the Relate team is still working to improve it and is looking for feedback. If you or someone you know would like to try the app, you can download it and find more information here.

What to do Next: