How DIVA makes Google Assistant more accessible
My 21-year-old brother Giovanni loves to listen to music and movies. But because he was born with congenital cataracts, Down syndrome and West syndrome, he is non-verbal. This means he relies on our parents and friends to start or stop music or a movie.
Over the years, Giovanni has used everything from DVDs to tablets to YouTube to Chromecast to meet his entertainment needs. But as new voice-driven technologies emerged, they brought a new set of challenges, because they required him to use his voice or a touchscreen. That's when I decided to find a way to let my brother control his music and movies on voice-driven devices without any help. It was a way for me to give him some independence and autonomy.
Working alongside my colleagues in the Milan Google office, I set up Project DIVA, which stands for DIVersely Assisted. The goal was to create a way for people like Giovanni to trigger commands to the Google Assistant without using their voice. We looked at many different scenarios and methods people could use to trigger commands, like pressing a big button with their chin or their foot, or by biting it. For several months we brainstormed different approaches and presented them at accessibility and tech events to get feedback.
We had a bunch of ideas on paper that looked promising. To turn them into something real, we took part in an Alphabet-wide accessibility innovation challenge and built a prototype, which went on to win the competition. We noticed that many assistive buttons on the market come with a 3.5mm jack, the same connector found on most wired headphones. For our prototype, we created a box that connects those buttons and converts the signal coming from the button into a command sent to the Google Assistant.
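For the technically curious, the basic idea can be sketched in a few lines of code. The sketch below is only illustrative, not the actual DIVA hardware or firmware: it assumes a Raspberry Pi running the gpiozero library, an assistive button wired from its 3.5mm jack to GPIO pin 17, and a made-up send_assistant_command helper standing in for the real Assistant integration.

```python
# Illustrative sketch only: a Raspberry Pi stands in for the DIVA box.
# The assistive button's 3.5mm plug is wired to a GPIO pin; pressing it
# fires one fixed text command toward the Assistant. The Assistant call
# is a placeholder, not the real Assistant Connect integration.
from signal import pause

from gpiozero import Button  # reads the switch closure from the 3.5mm jack

BUTTON_PIN = 17  # assumed wiring: tip of the jack to GPIO 17, sleeve to ground
COMMAND = "play music on the living room speaker"  # example command


def send_assistant_command(text: str) -> None:
    """Placeholder: forward a text command to the Assistant.

    A real build would route this through the Assistant integration;
    printing keeps the sketch self-contained and runnable.
    """
    print(f"Sending to Assistant: {text!r}")


button = Button(BUTTON_PIN)  # the assistive switch behaves like a push button
button.when_pressed = lambda: send_assistant_command(COMMAND)

pause()  # wait for presses until the program is interrupted
```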
To move from a prototype to reality, we started working with the team behind Google Assistant Connect, and today we are announcing DIVA at Google I/O 2019.
The real test, however, was giving this to Giovanni to try out. When he touches the button with his hand, the signal is converted into a command sent to the Assistant. Now he can listen to music on the same devices and services our family and all his friends use, and his smile tells the best story.
Getting this to work for Giovanni was just the start for Project DIVA. We started with single-purpose buttons, but the same approach could be extended to more flexible and configurable scenarios. We are now investigating attaching RFID tags to objects and associating a command with each tag. That way, a person might have a cartoon puppet trigger a cartoon on the TV, or a physical CD trigger the music on their speaker.
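In spirit, that idea is just a lookup table from tag IDs to commands. The sketch below is hypothetical, not something we ship: the tag IDs and commands are invented, wait_for_tag simulates a reader by accepting IDs typed on the keyboard, and send_assistant_command is the same placeholder as in the button sketch.

```python
# Hypothetical sketch of the RFID idea: each tag ID maps to one command,
# so tapping a tagged object (a puppet, a CD case) triggers that command.
# Tag IDs and commands are made up for illustration.

TAG_COMMANDS = {
    "04A3B2C1": "play cartoons on the living room TV",
    "04D4E5F6": "play music on the kitchen speaker",
}


def wait_for_tag() -> str:
    """Placeholder for an RFID reader; returns the next scanned tag ID."""
    return input("Scan tag (type its ID): ").strip().upper()


def send_assistant_command(text: str) -> None:
    """Placeholder for the same Assistant hook as in the button sketch."""
    print(f"Sending to Assistant: {text!r}")


if __name__ == "__main__":
    while True:
        tag_id = wait_for_tag()
        command = TAG_COMMANDS.get(tag_id)
        if command is None:
            print(f"Unknown tag {tag_id!r}; no command configured.")
            continue
        send_assistant_command(command)
```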
Learn more about the idea behind the DIVA project at our publication site, and learn how to build your own device at our technical site.