Our new report details the latest ways threat actors are misusing AI.
["What does AI mean for retail?", "How did Nano Banana get its name?", "How can AI help me plan travel?"]

Over the last few months, Google Threat Intelligence Group (GTIG) has observed threat actors using AI to gather information, create highly realistic phishing scams and develop malware. While we haven’t observed direct attacks on frontier models or generative AI products from advanced persistent threat (APT) actors, we have seen and mitigated frequent model extraction attacks (a type of corporate espionage) from private sector entities around the world — a threat that other businesses with AI models will likely face in the near future.

Today we released a report that details these observations and how we’ve taken action, including disabling associated accounts to disrupt malicious activity. We’ve also strengthened both our security controls and our Gemini models against misuse.

Read the full report on the Google Cloud Threat Intelligence blog.
