Before I got into the accessibility field, I worked as an art therapist where I met people from all walks of life. No matter the reason why they came to therapy, almost everyone I met seemed to benefit from engaging in the creative process. Art gives us the ability to point beyond spoken or written language, to unite us, delight, and satisfy. Done right, this process can be enhanced by technology—extending our ability and potential for play.
One of my first sessions as a therapist was with a middle school student on the autism spectrum. He had trouble communicating and socializing with his peers, but in our sessions together he drew, built elaborate scenes with clay, and made music.
Another key moment for me was when I met Chancey Fleet, a blind technology educator and accessibility advocate. I was learning how to program at the time, and together we built a tool to help her plan a dinner event. It was a visual and audio diagramming tool that paired with her screen reader technology. This collaboration got me excited about the potential of technology to make art and creativity more accessible, and it emphasized the importance of collaborative approaches to design.
This sentiment has carried over into the accessibility research and design work that I do at the NYU Ability Project, a research space where we explore the intersection of disability and technology. Our projects bring together engineers, designers, educators, artists and therapists within and beyond the accessibility community. Like so many technological innovations that have begun as assistive and rehabilitative tech, we hope our work will eventually benefit everyone. That’s why when Google reached out to me with an opportunity to explore ideas around creativity and accessibility, I jumped at the chance.
Together, we made Creatability, a set of experiments that explore how creative tools, such as drawing and music, can be made more accessible using web and AI technology. The project is a collaboration with creators and allies in the accessibility community, including Jay Alan Zimmerman, a composer who is deaf; Josh Miele, a blind scientist, designer, and educator; Chancey Fleet, a blind accessibility advocate and technology educator; and Barry Farrimond and Doug Bott of Open Up Music, a group focused on empowering young disabled musicians to build inclusive youth orchestras.
The experiments explore a diverse set of inputs, from a computer mouse and keystrokes to your body, wrist, nose, or voice. For example, you can make music by moving your face, draw using sight or sound, and experience music visually.
The key technology we used was a machine learning model called PoseNet that can detect key body joints in images and videos. This technology lets you control the experiments with your webcam, simply by moving your body. It's powered by TensorFlow.js, a library that runs machine learning models on-device in your browser, which means your images are never stored or sent to a server.
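To give a sense of what this looks like in code, here is a minimal sketch of working with PoseNet output. The first two commented lines follow the documented `@tensorflow-models/posenet` browser API (`posenet.load()` and `estimateSinglePose()`); the `getKeypoint` helper and the mock pose object are illustrative additions of mine, not part of the Creatability codebase, though the pose shape matches what PoseNet returns.

```javascript
// In the browser, a pose is estimated from a webcam frame roughly like this
// (assumes the @tensorflow-models/posenet package is loaded):
//   const net = await posenet.load();
//   const pose = await net.estimateSinglePose(videoElement, { flipHorizontal: true });

// PoseNet returns a list of named keypoints ("nose", "leftWrist", ...) with
// confidence scores. A small helper to pull out one joint, e.g. to drive an
// instrument with your nose, might look like:
function getKeypoint(pose, part, minScore = 0.5) {
  const kp = pose.keypoints.find((k) => k.part === part);
  // Ignore low-confidence detections; return the (x, y) pixel position or null.
  return kp && kp.score >= minScore ? kp.position : null;
}

// Example with a mock object shaped like PoseNet's output:
const mockPose = {
  score: 0.9,
  keypoints: [{ part: "nose", score: 0.8, position: { x: 120, y: 80 } }],
};
console.log(getKeypoint(mockPose, "nose")); // { x: 120, y: 80 }
console.log(getKeypoint(mockPose, "leftWrist")); // null (not detected)
```

Because all of this runs in the browser via TensorFlow.js, the video frames never leave the user's machine.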
We hope these experiments inspire others to unleash their inner artist regardless of ability. That’s why we’re open sourcing the code and have created helpful guides as starting points for people to create their own projects. If you create a new experiment or want to share your story of how you used the experiments, you can submit to be featured on the Creatability site at g.co/creatability.