Natively Adaptive Interfaces: A new framework for AI accessibility
We believe technology is at its best when it works for everyone. That’s especially true when it comes to accessibility. For too long, people have had to adapt to technology — we want to build technology that adapts to them.
That’s the idea behind Natively Adaptive Interfaces (NAI), an approach that uses AI to make accessibility a product’s default, not an afterthought. The goal of our research is to build assistive technology that is more personal and effective from the beginning.
How Natively Adaptive Interfaces work
Instead of building accessibility features as a separate, “bolted-on” option, NAI bakes adaptability directly into a product’s design from the outset. For instance, an AI agent built with the NAI framework can help you accomplish tasks under your guidance and oversight, intelligently reconfiguring itself to deliver a more accessible, personalized experience. In the research prototypes that helped validate this framework, a main AI agent interprets your overall goal and then works with smaller, specialized agents to handle specific tasks, like making a document more accessible by adjusting the UI and scaling text. For example, it might generate audio descriptions for someone who is blind or simplify a page’s layout for someone with ADHD.
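To make that orchestration pattern concrete, here is a minimal sketch in Python of a main agent delegating to specialized agents based on a user’s accessibility profile. The agent names, profile fields and routing logic are illustrative assumptions for this example, not the actual NAI implementation.

```python
# Illustrative sketch only: the agent names, profile fields and routing logic
# are hypothetical examples, not the actual NAI implementation.
from dataclasses import dataclass


@dataclass
class UserProfile:
    """Accessibility preferences the main agent uses to pick specialized agents."""
    needs_audio_descriptions: bool = False   # e.g. a blind or low-vision user
    prefers_simplified_layout: bool = False  # e.g. a user with ADHD
    text_scale: float = 1.0                  # 1.0 = default text size


def audio_description_agent(document: str) -> str:
    # In a real system this might call a multimodal model to describe visuals.
    return f"[audio descriptions generated for visuals in {document!r}]"


def layout_simplification_agent(document: str) -> str:
    # Placeholder for an agent that strips clutter and reorders content.
    return f"[simplified layout of {document!r}]"


def text_scaling_agent(document: str, scale: float) -> str:
    return f"[text of {document!r} scaled to {scale:.0%}]"


def main_agent(goal: str, document: str, profile: UserProfile) -> list[str]:
    """Interpret the user's overall goal, then delegate to specialized agents."""
    steps = [f"Goal understood: {goal}"]
    if profile.needs_audio_descriptions:
        steps.append(audio_description_agent(document))
    if profile.prefers_simplified_layout:
        steps.append(layout_simplification_agent(document))
    if profile.text_scale != 1.0:
        steps.append(text_scaling_agent(document, profile.text_scale))
    return steps


if __name__ == "__main__":
    profile = UserProfile(prefers_simplified_layout=True, text_scale=1.5)
    for step in main_agent("Make this report easier to read", "quarterly_report.html", profile):
        print(step)
```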
This often creates a “curb-cut effect,” where a feature designed for a specific need ends up being helpful for everyone. A voice-controlled app designed for someone with motor disabilities, for instance, can also help a parent holding a child.
Building with and for people with disabilities
The NAI framework is guided by a core principle: “Nothing about us, without us.” Developers collaborate with the disability community throughout the design and development process, ensuring the solutions they create are both useful and usable. With support from Google.org, we’re funding leading organizations that serve disability communities — like the Rochester Institute of Technology’s National Technical Institute for the Deaf (RIT/NTID), The Arc of the United States, RNID and Team Gleason — to build adaptive AI tools that solve real-world friction points for their communities.
Grammar Lab is an AI-powered tutor developed by RIT/NTID English Lecturer Erin Finton and built with Gemini models. A collaborative effort between RIT/NTID engineers, students and Google, Grammar Lab transforms years of RIT/NTID’s and Erin’s specialized curriculum into an adaptive tool that uses AI to create individualized multiple-choice questions centered on students’ skills and language goals in both American Sign Language (ASL) and English. This helps students strengthen their language foundations in both languages with greater independence and confidence. We recently highlighted this tool in a film produced for us by BBC StoryWorks Commercial Productions, showcasing how it helps Erin better support her students’ learning.
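As a rough illustration of the general pattern, the sketch below asks a Gemini model for one individualized multiple-choice question. Grammar Lab’s actual prompts, model choice and curriculum data are not described here, so the prompt wording and model name are assumptions for this example only.

```python
# Hypothetical sketch: illustrates prompting a Gemini model for an adaptive
# multiple-choice question. The prompt, model name and setup are assumptions,
# not how Grammar Lab is actually built.
from google import genai  # pip install google-genai

client = genai.Client()  # reads the API key from the environment

prompt = (
    "You are an English grammar tutor for a Deaf student whose first language "
    "is American Sign Language (ASL). The student is working on subject-verb "
    "agreement at an intermediate level. Write one multiple-choice question "
    "with four options, mark the correct answer, and add a short explanation "
    "that contrasts the English structure with how the idea is expressed in ASL."
)

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=prompt,
)
print(response.text)
```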
We're excited by the innovative efforts being led by nonprofits and believe that by continuing to build in collaboration with the disability community, we can help make the world a more accessible place.