Machine learning to make sign language more accessible
The text in the video above reads as follows: Welcome to SignTown! An interactive experience where you can learn sign language with a little help from AI. Like how to order at a restaurant ('milk tea?'). Or checking into a hotel and requesting shampoo or soap. How does it work? All it takes is a webcam and machine learning to detect your body poses, facial expressions and hand movements. Give it a try now at www.sign.town
Google has spent over twenty years helping to make information accessible and useful in more than 150 languages. And our work is definitely not done, because the internet changes so quickly. About 15% of searches we see are entirely new every day. And when it comes to other types of information beyond words, in many ways, technology hasn’t even begun to scratch the surface of what’s possible. Take one example: sign language.
The task is daunting. There are as many sign languages as there are spoken languages around the world. That’s why, when we began exploring how we could better support sign language, we started small by researching and experimenting with what machine learning models could recognize. We spoke with members of the Deaf community, as well as linguistic experts, working closely with our partners at The Nippon Foundation, The Chinese University of Hong Kong and Kwansei Gakuin University. We began combining several ML models to recognize sign language as a sum of its parts — going beyond just hands to include body gestures and facial expressions.
After 14 months of testing with a database of videos for Japanese Sign Language and Hong Kong Sign Language, we launched SignTown: an interactive application that runs in a desktop web browser and uses the camera.
SignTown is an interactive web game built to help people learn about sign language and Deaf culture. It uses machine learning to evaluate how well the user performs the signs they learn in the game.
Project Shuwa
SignTown is only one component of a broader effort to push the boundaries of technology for sign language and Deaf culture, named “Project Shuwa” after the Japanese word for sign language (“手話”). Future areas of development we’re exploring include building a more comprehensive dictionary across more sign and written languages, as well as collaborating with the Google Search team on surfacing these results to improve search quality for sign languages.
Advances in AI and ML now allow us to reliably detect hands, body poses and facial expressions using any camera inside a laptop or mobile phone. SignTown uses the MediaPipe Holistic model to identify keypoints from raw video frames, which we then feed into a classifier model to determine which sign is the closest match. This all runs inside the user's browser, powered by TensorFlow.js.
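As a rough illustration of that pipeline, the sketch below wires MediaPipe Holistic's browser API to a TensorFlow.js classifier. The model URL, the sign labels and the per-frame classification are assumptions for illustration, not SignTown's actual code; a real sign recognizer would typically look at a sequence of frames, since signs are dynamic, and could also include the face landmarks.

```typescript
import { Holistic, Results, NormalizedLandmarkList } from "@mediapipe/holistic";
import * as tf from "@tensorflow/tfjs";

// Hypothetical model URL and labels, for illustration only.
const CLASSIFIER_URL = "https://example.com/sign-classifier/model.json";
const SIGN_LABELS = ["milk tea", "shampoo", "soap"];

// Flatten an optional landmark list into a fixed-length [x, y, z, ...] vector,
// zero-padded when MediaPipe did not detect that body part in the frame.
function toFeatures(landmarks: NormalizedLandmarkList | undefined, count: number): number[] {
  const out = new Array(count * 3).fill(0);
  (landmarks ?? []).forEach((p, i) => {
    out[i * 3] = p.x;
    out[i * 3 + 1] = p.y;
    out[i * 3 + 2] = p.z;
  });
  return out;
}

async function run() {
  const video = document.querySelector("video")!;
  const classifier = await tf.loadLayersModel(CLASSIFIER_URL);

  // MediaPipe Holistic reports pose, face and hand landmarks for each frame.
  const holistic = new Holistic({
    locateFile: (file) => `https://cdn.jsdelivr.net/npm/@mediapipe/holistic/${file}`,
  });

  holistic.onResults((results: Results) => {
    // 33 pose keypoints plus 21 keypoints per hand, each with x/y/z coordinates.
    // The 468 face landmarks could be appended the same way.
    const features = [
      ...toFeatures(results.poseLandmarks, 33),
      ...toFeatures(results.leftHandLandmarks, 21),
      ...toFeatures(results.rightHandLandmarks, 21),
    ];
    // Feed the keypoints to the classifier and pick the closest matching sign.
    const input = tf.tensor2d([features]);
    const scores = classifier.predict(input) as tf.Tensor;
    const best = scores.argMax(-1).dataSync()[0];
    console.log("Closest matching sign:", SIGN_LABELS[best]);
    input.dispose();
    scores.dispose();
  });

  // Feed video frames to Holistic; it invokes onResults with the keypoints.
  const processFrame = async () => {
    await holistic.send({ image: video });
    requestAnimationFrame(processFrame);
  };
  requestAnimationFrame(processFrame);
}

run();
```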
At Google I/O 2021, we open-sourced the core models and tools so that developers and researchers can build their own custom models. That means anyone who wants to train and deploy their own sign language model can do so.
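For a sense of what training your own model could look like on top of those keypoints, here is a minimal TensorFlow.js sketch of a small sign classifier. The layer sizes, label count and training settings are illustrative assumptions, not the open-sourced Project Shuwa training pipeline.

```typescript
import * as tf from "@tensorflow/tfjs";

// Feature length matches the flattened pose + two-hand keypoints used above:
// (33 + 21 + 21) landmarks x 3 coordinates each.
const NUM_FEATURES = (33 + 21 + 21) * 3;
const NUM_SIGNS = 3; // e.g. "milk tea", "shampoo", "soap" (illustrative)

// A small dense network over per-frame keypoint vectors. A production model
// for dynamic signs would usually consume a window of frames instead.
function buildClassifier(): tf.Sequential {
  const model = tf.sequential();
  model.add(tf.layers.dense({ inputShape: [NUM_FEATURES], units: 128, activation: "relu" }));
  model.add(tf.layers.dropout({ rate: 0.2 }));
  model.add(tf.layers.dense({ units: NUM_SIGNS, activation: "softmax" }));
  model.compile({ optimizer: "adam", loss: "categoricalCrossentropy", metrics: ["accuracy"] });
  return model;
}

// features: one keypoint vector per example; labels: integer sign indices.
async function train(features: number[][], labels: number[]): Promise<tf.Sequential> {
  const xs = tf.tensor2d(features);
  const ys = tf.oneHot(tf.tensor1d(labels, "int32"), NUM_SIGNS);
  const model = buildClassifier();
  await model.fit(xs, ys, { epochs: 30, batchSize: 32, validationSplit: 0.1 });
  // Save the weights so an in-browser detector can load them later.
  await model.save("downloads://sign-classifier");
  return model;
}
```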
At Google, we strive to help build a more accessible world for people with disabilities through technology. Our progress depends on collaborating with the right partners and developers to shape experiments that may one day become stand-alone tools. But it’s equally important that we raise awareness in the wider community to foster diversity and inclusivity. We hope our work in this area with SignTown gets us a little closer to that goal.