Google’s Safety Charter for India’s AI-led Transformation

Today, we unveiled Google’s Safety Charter for India’s AI-led transformation.
India’s digital economy has grown steadily, thanks to the flywheel created by affordable devices and access, online citizen services, ease of payments and a bustling digital marketplace of goods and services.
The invisible connective tissue that keeps this flywheel moving is trust. The digital sphere can be an engine of growth only as long as the people who use it feel safe doing so.
As AI becomes more deeply woven into our digital lives, it’s crucial that we, as responsible stewards of India’s tech industry, ensure that the trust users place in India’s digital ecosystem remains intact.
Towards this, we’re excited to share Google’s Safety Charter for India’s AI-led transformation.
Under this charter, we share how AI is helping eliminate vulnerabilities in enterprise software, how Google’s investments in its products and programs are keeping users safe, and the various ways in which AI is closing the gap between attackers and defenders.
Think of it as our blueprint for tackling the online world’s new challenges, in collaboration with the wider ecosystem. The charter is built on three key themes: user safety, cybersecurity and responsible AI.

Keeping the end user safe from online frauds and scams
Online scams in India are rapidly evolving and becoming increasingly complex. Reports show a significant surge in cyber fraud, with scammers leveraging advanced techniques like AI-generated content, deepfake videos, and voice cloning to create highly convincing fraudulent schemes.
UPI-related frauds reportedly cost Indians over ₹1,087 crore in 2024, and industry estimates project that Indian entities could lose up to ₹20,000 crore to cybercrime in 2025 if the problem is left unchecked.
Scam resilience is a combination of on-product protections and user awareness. Under the DigiKavach program, we combine policies and built-in technological protections that help us prevent, detect, and respond to harmful and illegal content, at scale and with depth.
DigiKavach - Mitigating financial fraud in the ecosystem and our products
The DigiKavach campaign has been building user awareness and resilience against online fraud in India, raising awareness about common frauds and scams among 177 million users and counting.
Building on this impact and furthering our collaboration with the Ministry of Home Affairs, Google has officially partnered with the Indian Cyber Crime Coordination Centre (I4C) to strengthen user awareness of cybercrime, rolling out in a phased approach over the coming months.
- Ads: In 2024, we removed 247 million ads and suspended 2.9 million accounts for violating Google Ads policies. For example, our Financial Services Verification policy requires advertisers promoting financial services to comply with state and country regulations, which has dramatically reduced fraudulent financial ads.
- Search:
  - By integrating advanced AI, including LLMs, we now catch 20 times more scammy pages before they can cause harm.
  - Globally, targeted protections have slashed attacks impersonating customer service and government sites by over 80% and 70% respectively.
- Android:
  - Google Messages now offers enhanced protection from scam texts, protecting users from over 500 million suspicious messages a month using AI-powered Scam Detection.
  - We have also issued more than 2.5 billion warnings about opening URLs from unknown senders. This detection happens entirely on-device to keep conversations private (see the illustrative sketch after this list).
- Google Play:
  - Globally, Google Play Protect scans over 100 billion installed apps daily for malware across billions of devices.
  - Since our Play Protect pilot rolled out in India in October 2024, it has blocked nearly 6 crore (60 million) attempts to install high-risk apps that could have led to device infections, stopping more than 220,000 unique apps on over 13 million devices.
- Google Pay:
  - Google Pay displayed 4.1 crore (41 million) warnings against transactions suspected to be scams, safeguarding Indian users.
- Gmail:
  - Gmail automatically blocks more than 99.9 percent of spam, phishing, and malware, protecting over 2.5 billion inboxes globally.
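To make the on-device approach concrete (the Google Messages point above), here is a minimal, hypothetical sketch of how a scam-text screen might run locally on a phone. It is not Google’s Scam Detection implementation: the function names, keyword list and rules below are illustrative assumptions only, and real systems rely on trained on-device models rather than keyword matching.

```python
# Minimal, hypothetical sketch of on-device scam-text screening.
# NOT Google's Scam Detection implementation; the heuristics and names
# below are illustrative assumptions only.
import re
from dataclasses import dataclass

URL_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)

# Toy signals a local screen might weigh; real systems use trained
# on-device models rather than keyword lists.
SCAM_PHRASES = (
    "your account will be blocked",
    "kyc update required",
    "claim your prize",
    "pay a small fee to receive",
)

@dataclass
class ScreeningResult:
    suspicious: bool
    reasons: list

def screen_message(text: str, sender_known: bool) -> ScreeningResult:
    """Score a single message locally; neither the text nor the verdict leaves the device."""
    reasons = []
    lowered = text.lower()

    # Links from unknown senders are a classic warning sign.
    if not sender_known and URL_PATTERN.search(text):
        reasons.append("link from unknown sender")

    # Common pressure phrases used in financial scams.
    for phrase in SCAM_PHRASES:
        if phrase in lowered:
            reasons.append(f"scam-like phrase: '{phrase}'")

    return ScreeningResult(suspicious=bool(reasons), reasons=reasons)

if __name__ == "__main__":
    result = screen_message(
        "KYC update required. Click https://example.test/verify now",
        sender_known=False,
    )
    print(result)
```

The design point the sketch illustrates is the one in the bullet above: every signal is computed and acted on locally, so the content of a conversation never has to leave the device to be protected.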

Advancing Cybersecurity for public and enterprise infrastructure
Securing public and private digital infrastructure is more critical than ever, as these systems present strategic targets for bad actors.
Google products are built to be secure-by-design and secure-by-default. Beyond our own products, we aim to protect people, businesses, and governments by sharing our expertise and continuously working to advance the state of the art in cybersecurity.
This translates into action in several ways:
- Sharing threat intelligence with the ecosystem: The Google Cloud M-Trends report focuses on cybersecurity trends and incident-response investigations conducted by Mandiant. It provides statistics and analysis of threats observed in the past year, including targeted attacks, ransomware incidents, and cloud compromises.
- Pushing the boundaries of possibility with AI: Our Project Zero team, in collaboration with Google DeepMind, discovered a previously unknown, exploitable vulnerability in SQLite – the first public example of an AI agent finding such a critical memory-safety issue in real-world software.
- Enabling SMBs to become secure-by-default: Through Google.org, we are providing US$5 million in support to The Asia Foundation (in addition to an earlier $15 million commitment) to expand the APAC Cybersecurity Fund’s reach, enabling the pilot of more than 10 new cybersecurity clinics, including strategic partnerships with Indian universities to strengthen the cybersecurity capabilities of local MSMEs and students.
- Post-quantum cryptography research: Google is collaborating with IIT Madras to push the boundaries of Post-Quantum Cryptography (PQC), pioneering smarter privacy controls and advancing seamless, more secure online interactions through next-generation post-quantum anonymous tokens.
Building AI Responsibly
As AI becomes more capable, we’re building advanced safeguards to support our robust content policies, performing rigorous testing and monitoring, and providing easy-to-use tools that put you in control of evaluating information.
Our Approach to Responsible AI:
- Values-Based AI: Our Cloud solutions embed responsible AI principles directly into enterprise and government digitization efforts, helping organizations innovate responsibly. Our internal risk taxonomy, grounded in our AI Principles, ensures a shared understanding of responsible AI risk across teams.
- Rigorous Testing & AI-Assisted Red Teaming: We rigorously test our models and infrastructure at every layer, employing adversarial testing and AI-Assisted Red Teaming where AI agents compete against each other to identify and mitigate risks.
- Content Responsibility: We are investing in tools to help identify AI-generated content. Our SynthID technology embeds an imperceptible, digital watermark directly into AI-generated content, with over 10 billion pieces of content already watermarked. We also require creators to disclose AI-generated content on YouTube and label synthetic images in Google Search. Our 'double-check' feature in Gemini helps users identify potentially inaccurate statements by quickly cross-referencing with Google Search.
- Building for the Indian context: Our Gemini Language Testing and the IndicGenBench initiative specifically assess and fine-tune language models for more accurate and effective use across 29 Indic languages.
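As a concrete illustration of the AI-assisted red teaming mentioned above, here is a minimal, hypothetical sketch of an attacker-versus-target loop with an automated judge. It does not reflect Google’s internal tooling: the agent roles, the stubbed model calls and the toy scoring rule are assumptions made purely for illustration.

```python
# Minimal, hypothetical sketch of an AI-assisted red-teaming loop.
# NOT Google's internal tooling; the roles, stubs and scoring rule below
# are illustrative assumptions only.
from typing import List

def attacker_generate(seed_topic: str, round_idx: int) -> str:
    """Stand-in for an 'attacker' model proposing an adversarial prompt."""
    return f"[round {round_idx}] adversarial prompt probing: {seed_topic}"

def target_respond(prompt: str) -> str:
    """Stand-in for the model under test answering the prompt."""
    return f"response to ({prompt})"

def judge_is_unsafe(prompt: str, response: str) -> bool:
    """Stand-in for an evaluator model flagging a policy-violating exchange."""
    return "probing: payment fraud" in prompt  # toy rule, purely for illustration

def red_team(seed_topics: List[str], rounds: int = 3) -> List[dict]:
    """Pit the attacker against the target for several rounds and collect findings."""
    findings = []
    for topic in seed_topics:
        for i in range(rounds):
            prompt = attacker_generate(topic, i)
            response = target_respond(prompt)
            if judge_is_unsafe(prompt, response):
                findings.append(
                    {"topic": topic, "prompt": prompt, "response": response}
                )
    return findings

if __name__ == "__main__":
    for finding in red_team(["payment fraud", "phishing lures"]):
        print(finding)
```

In a real pipeline each stub would be a call to a separate model, and the collected findings would feed back into mitigation work such as safety fine-tuning and policy updates.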
Safety is a shared responsibility
Google’s surfaces and platforms are just one component of a vast digital ecosystem, and it is crucial that the entire ecosystem becomes smarter and more resilient in lockstep to plug points of failure.
This requires cross-sector collaboration and exchange of information, spanning law enforcement, banks, civil society groups, and the government.
Consortiums like the Global Signals Exchange, a global clearinghouse for bad-actor signals, and Partnering for Protection, which streamlines scam reporting from financial institutions globally, are examples of centralized intelligence sharing intended to strengthen the entire web. We also work closely with government bodies like the Department of Telecommunications, the Ministry of Home Affairs, and the Securities and Exchange Board of India, understanding that this requires a whole-of-society approach.
Google’s Safety Charter for India’s AI-led transformation is our attempt to share how we are leveraging AI’s incredible potential to secure the foundation of India’s digital economy: trust. It is equally a call for the wider ecosystem to rally, partner and collaborate with us. Safety is a shared responsibility, and we at Google are committed to sharing the best of our expertise and experience towards this effort.