Over the last few months, Google Threat Intelligence Group (GTIG) has observed threat actors using AI to gather information, create highly realistic phishing scams, and develop malware. While we haven't observed direct attacks on frontier models or generative AI products from advanced persistent threat (APT) actors, we have seen and mitigated frequent model extraction attacks (a type of corporate espionage) from private sector entities all over the world — a threat other businesses with AI models will likely face in the near future.
Today we released a report that details these observations and how we’ve taken action, including by disabling associated accounts in order to disrupt malicious activity. We’ve also strengthened both our security controls and Gemini models against misuse.
Read the full report on the Google Cloud Threat Intelligence blog.