These developers are changing lives with Gemma 3n
When Gemma 3n was released, we hoped developers would use its on-device, multimodal capabilities to make a difference in people’s lives. With more than 600 projects submitted to the Gemma 3n Impact Challenge on Kaggle, the community delivered on that promise.
Today, we’re excited to introduce the winners:
First Place: Gemma Vision
Gemma Vision is an AI assistant designed for visually impaired people. The developer’s brother, who is blind, played a vital role in ensuring features were genuinely helpful for the blind community.
Because holding a phone can be impractical while using a cane, the system was designed to process visuals from a phone camera strapped to the user’s chest. Functions can be triggered using an 8BitDo Micro controller or voice commands, allowing users to perform actions without navigating touchscreen menus.
This project also won the Special Technology Prize for Google AI Edge, a platform for deploying models on-device. The app runs Gemma 3n through the MediaPipe LLM Inference API and uses features like streamed responses from the flutter_gemma package to keep the experience fluid.
Second Place: Vite Vere Offline
Vite Vere helps foster autonomy for people with cognitive disabilities. Originally developed using the Gemini API, this project leveraged Gemma 3n to make the digital companion work offline. By transforming images into simple instructions that the device’s local text-to-speech engine can then read aloud, the app enables users to navigate daily tasks.
Third Place: 3VA
For decades, Eva, a brilliant graphic designer with cerebral palsy, was limited to simple commands like “want food now.” This project fine-tuned Gemma 3n to translate pictograms into rich expressions that better reflect Eva’s voice. The team trained the model locally using Apple’s MLX framework, demonstrating a cost-effective way to develop personalized Augmentative and Alternative Communication (AAC) technology.
Fourth Place: Sixth Sense for Security Guards
Unlike traditional video monitoring systems that just detect motion, this project used Gemma 3n to provide human-level context and distinguish benign events from genuine threats. A lightweight YOLO-NAS model detects initial movement and forwards candidate frames to Gemma 3n for analysis, letting the system handle high-bandwidth video feeds (up to 16 cameras at 360 fps) in real time.
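The core of this two-stage design is that the cheap detector acts as a gate, so the expensive multimodal model only ever sees a small fraction of the raw stream. A minimal sketch of that gating idea (the names, `Frame` structure, and threshold below are illustrative assumptions, not the team’s code):

```python
from dataclasses import dataclass

# Hypothetical threshold; the real pipeline gates on YOLO-NAS detections.
MOTION_THRESHOLD = 0.5

@dataclass
class Frame:
    camera_id: int
    motion_score: float  # stand-in for a detection confidence from YOLO-NAS

def frames_for_vlm(frames: list[Frame]) -> list[Frame]:
    """Forward only high-motion frames to the multimodal model, so
    Gemma 3n processes a small fraction of the raw video stream."""
    return [f for f in frames if f.motion_score >= MOTION_THRESHOLD]

feed = [Frame(0, 0.1), Frame(0, 0.8), Frame(1, 0.05), Frame(1, 0.9)]
print(len(frames_for_vlm(feed)))  # → 2: only the high-motion frames pass
```

In a real deployment the gate would run per camera at frame rate, while Gemma 3n consumes the filtered queue asynchronously.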
The Unsloth Prize: Dream Assistant
Voice assistants frequently fail users with speech impairments. This project used Unsloth, a library for efficient fine-tuning, to train Gemma 3n on an individual’s audio recordings. The result is a custom AI assistant that understands the user’s unique speech patterns and enables voice control over device functions.
The Ollama Prize: LENTERA
This project demonstrates how to bring AI to disconnected regions by transforming affordable hardware into offline microservers. Lentera broadcasts a local WiFi hotspot, allowing users to connect their devices to an educational hub running Gemma 3n via Ollama, a platform for local model deployment.
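Because the hub runs Gemma 3n behind a standard Ollama server, connected devices can query it with the ordinary Ollama chat API. A minimal sketch, assuming the hub has pulled a `gemma3n` model tag (the helper names and prompt are illustrative, not from the LENTERA project):

```python
def build_lesson_request(question: str) -> dict:
    """Package a student's question as an Ollama chat request for the
    hub's local Gemma 3n model."""
    return {
        "model": "gemma3n",  # assumes `ollama pull gemma3n` was run on the hub
        "messages": [{"role": "user", "content": question}],
    }

def ask_hub(question: str) -> str:
    """Send the question to the hub's Ollama server. Not called here:
    requires `pip install ollama` and a running server with the model pulled."""
    import ollama
    reply = ollama.chat(**build_lesson_request(question))
    return reply["message"]["content"]
```

All inference happens on the hub itself, so students’ devices need only a browser or thin client on the local WiFi network, with no internet connection at any point.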
The LeRobot Prize: Graph-based Cost Learning and Gemma 3n for Sensing
Robotic exploration is often bottlenecked by the time spent sensing rather than moving. To solve this, the team built a novel “scanning-time-first” pipeline on top of LeRobot, a robotics framework developed by Hugging Face. This project used Gemma 3n to create plans while an inductive graph-based matrix completion (IGMC) model predicted latencies, demonstrating the viability of embodied AI at the edge.
The Jetson Prize: My (Jetson) Gemma
Integrating AI into our physical environment requires systems that are both responsive and energy-efficient. This project used a smart CPU-GPU hybrid processing strategy to deploy a context-aware voice interface on an NVIDIA Jetson Orin, demonstrating how helpful AI can move beyond screens to assist users in the real world.
From accessibility to crisis response, these projects show what's possible with Gemma 3n. Many others deserve recognition, so join us as we highlight a developer story every day on @googleaidevs over the coming month.