Updated: Jul 2, 2020
In today's article, you will learn how voice-activated assistants such as Siri and Alexa, now an integral part of modern life, along with other well-known features, were originally designed for a different purpose.
It is no surprise that we are surrounded by technology whose sole purpose is to make our lives easier. Many of these inventions are quite new. For example, the voice-activated virtual assistant Siri first appeared in 2011, less than nine years ago; Alexa followed three years later; and Google Assistant ("Ok Google, how old are you?") turns four this year, in 2020.
However, all of these voice-activated features have something in common: where they come from. They share a so-called ancestor, an assistive technology used to aid people with learning disabilities.
In the 1950s, Bell Laboratories created the first speech recognition system, which could recognize only numbers. With funding from the U.S. Department of Defense in the 1970s, speech-to-text recognition took a big leap forward. Now it could recognize up to 1,011 words! With... long... pauses... between... each... word. As you can see, the era of voice-activated assistants still remained a dream.
It was not until the 1990s that more people had access to speech recognition devices. Dragon Dictate, later known as Dragon Naturally Speaking, was purchased by hospitals and educational institutions to help people with physical and learning disabilities transform their speech into text. According to the Yale Center for Dyslexia and Creativity, struggling learners, such as students with dyslexia or dysgraphia and reluctant writers with a history of missed homework, found it much easier to dictate their ideas than to commit them to paper. As the applications of this technology grew, Google, Verizon, Apple, and other brand names became interested in adding the useful feature of speech recognition to mobile devices as we know them now.
Where are we today? In the quest to aid people in everyday life, companies offer more and more varied assistive technology applications: better word recognition rates, text-to-speech recognition, slide-to-text typing, and even "typing with the eyes". There is also, as explained by specialists at the W&M Students Accessibility Services department, brain wave technology, which allows people who are paralyzed due to cerebral palsy (CP) or traumatic brain injury (TBI) to input data by focusing on symbols on the screen. Brain wave technology attracts a lot of interest from the entertainment industry as well, as toy manufacturers and video game companies seek to make their products more attractive and interactive.
We will surely see many new ways to use assistive technology in the coming years: how about brain wave technology that cooks dinner, anyone? And while we are grateful to our voice-activated virtual assistants for directions, shopping, and entertainment, what is truly breathtaking is the idea that assistive technology can offer someone a higher quality of life.
Dragon Naturally Speaking. The Yale Center for Dyslexia and Creativity. Retrieved on 07/01/2020 from http://dyslexia.yale.edu/resources/tools-technology/tech-tips/dragon-naturally-speaking/
The Evolution of Text-to-Speech Voice Assistive Technology. W&M Dean of Students, Students Accessibility Services. Retrieved on 07/01/2020 from https://www.wm.edu/offices/deanofstudents/services/studentaccessibilityservices/resources1/technologyspotlight/texttospeech/index.php