For decades, accessibility in tech has been a bit of a “bolt-on” feature: a set of menus or settings that users had to find and toggle after the product was already built. Google Research is looking to change that with a new framework called Natively Adaptive Interfaces (NAI). The goal is simple but ambitious: instead of forcing people with disabilities to adapt to technology, the technology should natively adapt to them.
Announced by Sam Sepah, AI Accessibility Research Program Manager at Google Research, the NAI framework uses multimodal AI agents to make accessibility a product’s default state.
How NAI works: The power of specialized agents
The core of the NAI approach is a system of an "orchestrator" agent and specialized "sub-agents." Rather than a user navigating a complex maze of static menus to find accessibility tools, a main AI agent understands the user's overall goal and coordinates with smaller, specialized agents to reconfigure the interface in real time (a rough code sketch of the pattern follows the example below).
For example, if a user with low vision opens a document, the NAI framework doesn’t just offer a zoom button. Instead:
- An Orchestrator agent recognizes the document type and the user’s context.
- Sub-agents then work to scale text, adjust UI contrast, or even generate real-time audio descriptions of images.
- For a user with ADHD, the system might proactively simplify the page layout to reduce cognitive load and highlight key information.
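
To make the orchestrator/sub-agent idea more concrete, here is a minimal, purely hypothetical Python sketch of the pattern. Google has not published an NAI API, so every class, field, and setting name below is an invented stand-in for how a main agent might read a user's context and hand interface changes off to specialized adaptation agents.

```python
# Purely illustrative sketch of the "orchestrator / sub-agent" pattern described
# above. Google has not published an NAI API, so every name here is a hypothetical
# stand-in showing how a main agent might read a user's context and delegate
# interface changes to specialized agents.
from dataclasses import dataclass, field


@dataclass
class UserContext:
    """What the orchestrator knows about the user and the current task."""
    needs: set = field(default_factory=set)   # e.g. {"low_vision"} or {"adhd"}
    content_type: str = "document"


class TextScalingAgent:
    def applies(self, ctx):
        return "low_vision" in ctx.needs

    def adapt(self, ui):
        ui["font_scale"] = 1.8
        ui["contrast"] = "high"


class AudioDescriptionAgent:
    def applies(self, ctx):
        return "low_vision" in ctx.needs and ctx.content_type == "document"

    def adapt(self, ui):
        ui["image_descriptions"] = "generate_audio"


class LayoutSimplifierAgent:
    def applies(self, ctx):
        return "adhd" in ctx.needs

    def adapt(self, ui):
        ui["layout"] = "simplified"
        ui["highlight_key_info"] = True


class Orchestrator:
    """Recognizes the user's context and routes it to whichever sub-agents apply."""

    def __init__(self, sub_agents):
        self.sub_agents = sub_agents

    def reconfigure(self, ctx):
        ui = {}  # stands in for the live interface state
        for agent in self.sub_agents:
            if agent.applies(ctx):
                agent.adapt(ui)
        return ui


if __name__ == "__main__":
    orchestrator = Orchestrator(
        [TextScalingAgent(), AudioDescriptionAgent(), LayoutSimplifierAgent()]
    )
    print(orchestrator.reconfigure(UserContext(needs={"low_vision"})))
    # {'font_scale': 1.8, 'contrast': 'high', 'image_descriptions': 'generate_audio'}
```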
This adaptive behavior creates what researchers call the "curb-cut effect." Just as curb cuts designed for wheelchair users ended up benefiting parents with strollers and travelers with luggage, AI interfaces built to handle the most demanding accessibility needs often produce a better, more personalized experience for everyone.
A critical pillar of the NAI framework is its collaborative development. Google.org is providing funding to leading organizations, including the Rochester Institute of Technology’s National Technical Institute for the Deaf (RIT/NTID), The Arc, and Team Gleason, to ensure these tools solve real-world friction points for the disability community.
One standout example of this is Grammar Lab, an AI-powered tutor built with Gemini models. Developed by RIT/NTID lecturer Erin Finton, the tool creates individualized learning paths for students in both American Sign Language (ASL) and English. It uses AI to generate bespoke multiple-choice questions that adapt to a student’s specific language goals, allowing them to strengthen their foundations with greater independence.
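
Grammar Lab's internals are not public, but the general recipe of prompting a Gemini model for goal-specific multiple-choice questions is easy to illustrate. The sketch below uses Google's publicly available google-genai Python SDK; the prompt wording, model choice, and learning-goal parameters are assumptions for illustration, not Grammar Lab's actual implementation.

```python
# Illustrative only: Grammar Lab's implementation is not public. This sketch uses
# Google's public `google-genai` SDK (pip install google-genai) to show the general
# idea of generating a multiple-choice grammar question tailored to a learner's
# stated goal. The prompt text and model choice are assumptions.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # assumes a key from Google AI Studio


def generate_question(language_goal: str, level: str) -> str:
    prompt = (
        f"Write one multiple-choice English grammar question for a Deaf or "
        f"hard-of-hearing college student whose current learning goal is "
        f"'{language_goal}' at a {level} level. Give four options labeled A-D, "
        f"mark the correct answer, and add a one-sentence explanation."
    )
    response = client.models.generate_content(
        model="gemini-2.0-flash",  # any current Gemini text model would work here
        contents=prompt,
    )
    return response.text


print(generate_question("subject-verb agreement", "intermediate"))
```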
The road ahead
While NAI is currently a research framework, we are already seeing its DNA in public-facing products. The recently launched Gemini in Chrome side panel and the “Auto Browse” agentic features are first steps toward a web that understands context and acts on a user’s behalf.
As we move closer to the launch of Google's Project Aluminium, the NAI framework provides a glimpse into a future where our operating systems aren't just platforms for apps, but active collaborators that adjust to our unique abilities in real time.