Many of us take for granted that our home voice assistants can actually help us out and understand what we are asking them to do. But for millions of users, voice assistants are entirely useless and the opposite of convenient.
USA Today reports that more than 3 million people living in the U.S. are unable to successfully use voice assistants like Alexa, Siri or Google because of a stutter or another form of speech impediment. You may not realize just how much you depend on this type of tech unless you, like millions of Americans, are forced to live without it.
For many of us, a simple trip to a new store or friend’s home is made easier by the help of a voice navigator. But for people like Jacquelyn Joyce Revere, a screenwriter from Los Angeles, talking to Siri in the car just doesn’t work. Revere explains, “When this stuff first started coming out, I was all over it. In LA, I need GPS all the time, so this seemed like a more convenient way to live the life I want to live.”
But each of the many times Revere, who has a stutter, attempts to use a tech assistant, she is reminded how useless it is to her. She says, “Every time I try to use it is another nail in the coffin, another reminder that this technology wasn’t made for me.” And with the proliferation of automated phone services, Revere says she often spends upwards of 40 minutes on hold, only to be hung up on when the automated operator does not wait long enough for her to get her words out.
Though it is obvious big tech is not quite ready to cater to this population, giants like Google, Amazon and Apple are working on ways to better understand atypical speech. Typing commands to these assistants is one option for those who struggle with the voice interface. And Google’s Project Euphonia is looking at ways to recognize more types of speech disabilities. The team is now working to gather enough samples from people who have impaired speech so that the tech can learn how to interact with this population. Project Euphonia researcher Michael Brenner explains that they are not quite where they want to be yet, but are working on it. Brenner says, “We don’t want to overpromise because we don’t know what is possible, but we want to help people. It would be ideal to have enough samples to have a general model, but we don’t. We want to find out how to do things that are useful with the speech we have.”
What do you think of the current research being done to make tech voice assistants more user-friendly for people with speech impediments?
Do you struggle speaking with assistants like Alexa and Siri?