Protecting vulnerable audiences is at the heart of AI Safety efforts
As India looks to harness AI to supercharge its economic momentum, a safe AI ecosystem is the necessary foundation for achieving this. In other words, safety is the infrastructure for transformational AI, not an add-on. With more Indians coming online each month, our ability to realise the multiplicative power of AI depends entirely on how safe each user feels when using the internet.
Scams in India are becoming more sophisticated, fuelled by organised networks that now use tactics like digital arrest, screen-sharing fraud, and voice cloning to target high-value financial transactions. These attacks erode trust: in one’s own judgment and in the internet itself. The only effective response is a protection system that is faster than the scammer, tireless in detection, and built directly into the technology people rely on every day.
Today, ahead of the upcoming AI Impact Summit 2026 in New Delhi, we shared updates on Google’s multi-pronged approach: harnessing AI to protect vulnerable audiences from online harm, building robust privacy and cybersecurity tools for enterprises, and developing AI models that are representative, equitable and inclusive. This is how we are turning the idea of safety as core infrastructure into concrete protections for people and businesses in India.
Updates to on-product protection
Our devices are at the centre of our digital lives, so it’s crucial to build robust protections directly into the products and services we use daily.
- Real-Time Scam Detection on phone calls: Scam Detection, powered by Gemini Nano and rolling out on Pixel phones, analyses calls in real time and flags potential scams entirely on-device, without recording audio or transcripts or sending data to Google. The feature is off by default, applies only to calls from unknown numbers (not saved contacts), plays a beep to notify participants, and can be turned off by the user at any time.
- Protection for Financial Apps: We are piloting a new feature in India in collaboration with financial apps Google Pay, Navi and Paytm to combat screen-sharing scams. Devices running Android 11+ now show a prominent alert if a user opens one of these apps while screen sharing on a call with an unknown contact. The feature provides a one-tap option to end the call and stop screen sharing, protecting users from potential fraud.
- Play Protect and Google Pay: Google Play Protect has blocked over 115 million attempts to install sideloaded apps that request sensitive permissions frequently abused for financial fraud in India. This is complemented by Google Pay, which displays over 1 million warnings for fraudulent transactions every week, actively protecting the backbone of our digital economy.
- Systemic Protection: We are pioneering Enhanced Phone Number Verification (ePNV), a new Android-based security protocol that replaces SMS OTP flows with a secure, consented, SIM-based check to raise the floor for sign-in security.
- SynthID Partnerships: We are providing early access to the SynthID Detector and API, Google’s watermarking technology for identifying synthetically generated content, to strategic partners including academics, researchers, and publishers such as Jagran, PTI, and India Today.
Making Cybersecurity and Privacy more resilient and accessible
Google is working to leverage technology to protect users and the web and ensure AI works harder for the defender than for the attacker:
- Self-Defending Systems: We’re launching CodeMender, our new code-security agent that identifies zero-day vulnerabilities in code and patches them autonomously. CodeMender follows on previous agentic security successes, including Big Sleep and OSS-Fuzz, a free fuzzing platform for critical open-source projects.
- Secure AI Agents for Startups: Google for Startups is helping founders build secure AI agents through initiatives like the AI Agent Masterclass with IntelligentAgents and the Agentic AI Roadshows with Nasscom, which aim to train and equip roughly 500 early-stage startups across India to build agentic AI solutions for their use cases. We are training them on the Secure AI Framework (SAIF) 2.0, Google’s new framework for securing AI agents.
- Privacy-Enhancing Technologies (PETs): We are continuing to make industry-leading investments in privacy technologies that power more secure, private, and personalized experiences. Just this year we released a number of new products and open-source libraries, including Private AI Compute, Parfait and VaultGemma, which enable the wider ecosystem to build state-of-the-art privacy-preserving AI. We have also published guidance and recommendations for governments and practitioners to share information and promote responsible use across the ecosystem.
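Privacy-preserving AI libraries of this kind commonly build on differential privacy. As a minimal conceptual sketch (not Google’s implementation; the function names, parameters and toy dataset below are illustrative), the classic Laplace mechanism releases an aggregate count with calibrated noise so that no single record can be inferred from the output:

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Sample Laplace(0, scale) noise via the inverse-CDF method."""
    u = rng.random() - 0.5          # uniform in [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float, rng: random.Random) -> float:
    """Release a count with epsilon-differential privacy.

    A count query has sensitivity 1 (adding or removing one record changes
    it by at most 1), so Laplace noise of scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

# Hypothetical usage: count even-valued records without exposing any individual.
rng = random.Random(0)
data = list(range(100))             # toy dataset; the true even count is 50
noisy = private_count(data, lambda x: x % 2 == 0, epsilon=1.0, rng=rng)
```

A smaller `epsilon` means stronger privacy but a noisier answer; production systems typically layer much more machinery (privacy accounting, secure aggregation, hardware isolation) on top of this core idea.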
Empowering kids, teens and the elderly with age-appropriate digital literacy tools
We complement product defenses with scaled user-education drives, research partnerships, and financial commitments. For kids, teens and families, we are guided by three principles: protect kids online, respect families’ choices, and empower young people with age-appropriate experiences so they can explore and grow safely.
Bringing LEO to India
We are bringing our flagship program LEO (Learn and Explore Online) to India in December 2025. LEO trains teachers, practitioners and parents, building foundational knowledge on how to use Google’s parental tools and create age-appropriate online experiences.
Equipping teachers and students
The Super Searchers program is our flagship information literacy initiative. It equips users to critically evaluate information—including AI search results and generative content—using tools like SynthID to identify synthetically generated media. So far in 2025, we've directly trained over 17,000 teachers and 10,000 students. Through this 'train the trainer' model, the initiative has reached over 1 million end-users across India. Critically, we are now actively expanding this program to better equip vulnerable user groups, including low-income communities, women, and seniors, ensuring everyone can evaluate and find trustworthy information.
Protecting new users and senior citizens
Our DigiKavach campaigns, featuring the message “Mauka Ganwao, Paise Bachao” (“Let the opportunity go, save your money”), have reached over 250 million people.
The “Sach Ke Sathi, DigiKavach for Seniors” program, run in partnership with Jagran, delivers in-person safety training to over 5,000 seniors across 25 Indian cities and has reached over 1 million end-users.
We are actively investing in educating people on how to spot scams, no matter what platform they're on. Our latest effort is the Be Scam Ready game. Based on inoculation theory, this interactive game immerses users in real-life fraud scenarios in a safe setting, helping them develop the critical thinking skills needed to avoid scams in the real world.
Community Support
Through the APAC Digital Futures Fund, Google.org is providing $1 million to 5 leading think tanks and universities across APAC to conduct essential research and foster meaningful dialogue on the opportunities and challenges of AI. In India, the CyberPeace Foundation will receive $200,000 to support capacity building, deliver AI-driven cyber-defense tools to fight fraud, scams and deepfakes, create safer digital learning environments for children and teens, and strengthen responsible governance aligned with the IndiaAI Mission. As part of this effort, CyberPeace will work closely with the developer and startup community to develop potential tools through a series of hackathons and competitions.
Ecosystem partnerships
Google is working with regulators, academia and civil society groups to improve online safety and AI governance.
Google worked with the RBI to publish a public list of authorized Digital Lending Apps and their associated NBFCs, improving enforcement clarity. This update offers significant protection against malicious entities, safeguarding both trusted developers and users by raising the standard: all financial apps must now be locally verified, traceable, and compliant.
Building on a shared vision of pioneering AI safety and governance standards, particularly for the Indian context, we are formalizing a strategic expansion of our work with IIT Madras and CeRAI (Center for Responsible AI) in the following critical areas:
- First, we are working with MLCommons and CeRAI to develop the Hindi-language AILuminate Safety Benchmark, so that AI safety evaluation is accessible and relevant across languages.
- Second, through its Amplify Initiative, Google will join CeRAI in contributing to the AI Safety Commons, under the auspices of the Safe and Trusted AI working group of the AI Impact Summit, to build robust and diverse datasets.
- Third, we are partnering with CeRAI on the Secure AI Framework; CeRAI will evangelize the framework through industry events and training at IIT Madras.
Building AI for the Global South
India’s scale, multilingual reality, and device diversity create a testing ground few countries can match. The security models and frameworks forged here, which address challenges like high multilingualism, varying digital literacy, and wide-ranging device sophistication, provide a blueprint for equitable AI adoption across the Global South.
As India adopts AI, Google is committed to working in lockstep to mitigate its risks with awareness and speed, providing users, enterprises and the government with a robust digital foundation for economic momentum and human development.