The Keyword
Safety & Security

Meet the team responsible for hacking Google


Creating safe and secure products for everyone is the top priority for Google's security teams. We work across the globe to keep up with current threats, improve security controls, detect and prevent attacks, and eliminate entire classes of vulnerabilities by driving new and better frameworks. Our teams also actively monitor adversaries, making sure we have the intelligence needed to prepare for malicious activity and targeted campaigns against Googlers or the people who use our services every day.

Today, we would like to shine a spotlight on one security team at Google — the Red Team — that supports all of these efforts in a way that might initially seem counterintuitive: by hacking Google.

The term “Red Team” comes from the military, where it described exercises in which a designated team played an adversarial role (the “Red Team”) against the “home” team, which would seek to adapt to the Red Team’s activities and counteract them. Over the years, the terminology found its way into the information security (InfoSec) space.

Google’s Red Team is a team of hackers that simulates a variety of adversaries, ranging from nation states and well-known Advanced Persistent Threat (APT) groups to hacktivists, individual criminals, or even malicious insiders. Whatever actor is simulated, we mimic their strategies, motives, goals, and even their tools of choice — placing ourselves inside the minds of hackers targeting Google.

The benefits of Red Team exercises

Running these simulations provides value in several ways. To start, it gives the teams tasked with detecting and responding to real attackers a unique opportunity to identify improvements, and it lets us determine whether an attack could have been detected earlier or responded to faster. Along with security and subject matter experts on rotation, the collective industry experience and diverse backgrounds of the Red Team’s members allow us to identify blind spots that can turn into actionable improvements.

From 20% project to established team

The Red Team started in 2010 as a “20% project” — an internal initiative where Googlers are free to pursue projects we feel are worth investing time in outside of our day-to-day responsibilities. The team quickly proved its worth, and leadership recognized its positive impact on Google’s infrastructure and the value in applying a hacker mindset to problems in the security space. Since then, the Red Team has become an integral part of the security engineering function, running multiple exercises in parallel and collaborating across multiple continents.

Collaborative adversity

While Red Team exercises conducted at Google simulate an actor that is in most cases hostile and/or disruptive, there is a very clear distinction between the simulated threat and the engineers who play that role. The simulated threat actor pursues nefarious goals; the Red Team engineers playing them are Googlers who keep people’s safety in mind.

There is very close collaboration between the team simulating the attackers and the teams acting as defenders (e.g., Threat Analysis Group (TAG) and Detection/Response teams), who might identify suspicious activities and respond to them. Since multiple exercises are happening at any given time, we differentiate between several types of exercises and the response after detection. For most exercises, one of our primary goals is to test detection and make it as efficient as possible for defenders to verify that a signal is associated with an exercise. By doing this, we avoid tying up resources that could otherwise be used to thwart malicious activities targeting people using our services or our wider infrastructure. In other exercises, we want to make sure that the entire process of identifying, isolating, and ejecting the attackers works as intended and that we are able to improve processes.
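As a rough sketch of how that kind of deconfliction can work in practice: an exercise registers its indicators (source hosts, tool hashes, and so on) up front, and a responder handling an alert checks the indicator against the registry before escalating. Everything below — the class names, fields, and sample values — is hypothetical illustration, not Google's actual tooling.

```python
# Hypothetical deconfliction registry: red-team exercises register their
# indicators so defenders can quickly check whether an alert maps to a
# sanctioned exercise. All names and values here are illustrative.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Exercise:
    exercise_id: str
    lead: str                                   # who to contact for verification
    indicators: set = field(default_factory=set)  # e.g. source hosts, tool hashes


class DeconflictionRegistry:
    def __init__(self):
        self._by_indicator = {}

    def register(self, exercise: Exercise) -> None:
        for indicator in exercise.indicators:
            self._by_indicator[indicator] = exercise

    def match(self, indicator: str) -> Optional[Exercise]:
        """Return the exercise that claimed this indicator, if any."""
        return self._by_indicator.get(indicator)


registry = DeconflictionRegistry()
registry.register(
    Exercise("EX-2023-07", "red-team-lead", {"host-rt-01", "sha256:abc123"})
)

# A responder triaging an alert checks the source against the registry first.
match = registry.match("host-rt-01")
if match:
    print(f"Alert traced to exercise {match.exercise_id}; contact {match.lead}")
else:
    print("No exercise match; treat as potentially real activity")
```

A fast negative lookup matters as much as a positive one: if the indicator is not registered, responders treat the activity as real and escalate immediately.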

Safety First

Given the sensitive nature of the work the Red Team does, safety protocols are key, and all exercises are overseen by senior engineers. Making sure an exercise is conducted in a safe and responsible manner is as important as any other goal the team is trying to achieve. This may mean forgoing realistic simulation in favor of spending more time making sure each action is documented, no sensitive data is accessed without proper oversight, and laws and regulations are obeyed — which is traditionally not something APT groups are overly concerned about. For the Red Team, accurately simulating the technical capabilities of highly advanced threat actors in a safe and responsible way is core to their mission.

For exercises focusing on detection, actions taken by the team are accessible to defenders at any time, ensuring that we can quickly rule out an external actor acting maliciously. Even when this is not necessary, the team reports its activities in detail to address any new findings discovered during the exercise.
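One way such an always-available action record could be kept is a hash-chained, append-only log: each entry commits to the previous one, so defenders reviewing it can both see every action and confirm the record has not been altered. This is a minimal sketch under those assumptions, not a description of Google's internal systems.

```python
# Hypothetical tamper-evident action log for a red-team exercise.
# Each entry's hash covers its contents plus the previous entry's hash,
# forming a chain that verification can walk end to end.
import hashlib
import json
import time


class ActionLog:
    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._prev_hash = self.GENESIS

    def record(self, operator: str, action: str) -> dict:
        entry = {
            "ts": time.time(),
            "operator": operator,
            "action": action,
            "prev": self._prev_hash,
        }
        # Hash a canonical serialization of the entry (sorted keys).
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._prev_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any edited or reordered entry breaks it."""
        prev = self.GENESIS
        for e in self.entries:
            body = {k: e[k] for k in ("ts", "operator", "action", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True


log = ActionLog()
log.record("rt-operator", "enumerated test host inventory")
log.record("rt-operator", "deployed simulated implant on host-rt-01")
print("log intact:", log.verify())
```

Because every entry is written before the next action is taken, defenders can consult the log mid-exercise, and after-action reports can rely on it as a complete, verifiable timeline.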

Fostering change

In addition to testing and helping improve detection and response capabilities, we also actively research and identify new attack vectors based on adversarial research. It is critical to the Red Team's mission to ensure that any newfound attack surface is shared with both the responsible product teams and the larger security team as soon as possible so that Google can adapt defensive controls and implement improvements to remediate the root cause.

Since its inception over a decade ago, the Red Team has adapted to a constantly evolving threat landscape and been a reliable sparring partner for defense teams across Google. Yet, new challenges await every day and the Red Team continually works to make the job – the job of hacking Google – harder. It’s a challenge we happily accept to keep people safe.
