Transparency in the shadowy world of cyberattacks
The following is adapted from remarks delivered by Kent Walker, President of Global Affairs, at the International Conference on Cyber Security 2022 on July 19, 2022.
Thank you for the chance to be a part of this important conversation about cybersecurity.
At Google we’re proud to say that we keep more people safe online than anyone else in the world. But that wasn’t always the case.
So let me start by telling you a story about how we got it wrong, and two things we all can learn from that experience. My dad always told me that it was cheapest to learn from the other guy’s mistake. So let me tell you about one of ours.
As some of you may recall, in late 2009, Google was the victim of a major cybersecurity attack, code-named Operation Aurora.
We’ve long had some of the most attacked websites in the world. But Aurora was something special.
Aurora was an attack attributed to the Chinese government, a significant security incident that resulted in the theft of intellectual property from Google.
But Aurora wasn’t just any security incident. And it wasn’t just against Google.
As part of our investigation we discovered that several other high-profile companies were similarly targeted. Other companies either hadn’t discovered the attacks, or hadn’t wanted to disclose them. When I was a federal prosecutor specializing in technology crimes, one of the biggest challenges we encountered was getting companies to go public or even come to the authorities.
So we felt it was important to talk about the attack–to tell the world about its impact, the methods of the hackers, and the sectors at risk.
We worked with the US Government to share threat vectors and vulnerabilities.
And we didn’t stop there: After Aurora, we launched an entire team called Project Zero to find and promptly disclose previously undiscovered zero-day vulnerabilities in our own and other companies’ software, raising the security bar for everyone.
And today, Google’s Threat Analysis Group, or TAG, works to counter a range of persistent threats from government-backed attackers to commercial surveillance vendors to criminal operators. TAG does regular public disclosures of foreign state actor attacks, including doing the difficult work of attribution.
So I’d say that the first lasting lesson from the Aurora attack is the need to weave openness and transparency into the fabric of a cybersecurity response. It’s not always comfortable work–we’ve had to have some tough conversations with partners and with our own teams along the way–but it’s necessary to move the industry forward and ensure bugs are getting fixed fast, before they can be exploited in the wild.
In the ensuing years, we’ve developed principles to ensure we can share learnings about vulnerabilities, cyberattacks (such as attacks on elections), and disinformation campaigns responsibly, transparently, and helpfully with the public, with our partners, and with law enforcement.
And the US government has in turn stood up its own process to facilitate more information sharing with industry partners in order to expedite patches that safeguard us all.
But the value of transparency isn’t the only reason I bring up the Aurora story.
Aurora not only taught us the need to embrace transparency, it also taught us a second, even more important lesson: What works and what doesn’t when it comes to security architecture.
It’s possible to over-index on info sharing alone.
Focusing on the fundamentals of software security is in some ways even more important if we’re going to raise all of us above the level of insecurity we see today.
We curate and use threat intelligence to protect billions of users–and have been doing so for some time. But you need more than intelligence, and you need more than security products–you need secure products.
Security has to be built in, not just bolted on.
Aurora showed us that we (and many in the industry) were doing cybersecurity wrong.
Security back then was often “crunchy on the outside, chewy in the middle.” Great for candy bars, not so great for preventing attacks. We were building high walls to keep bad actors out, but if they got past those walls, they had wide internal access.
The attack helped us recognize that our approach needed to change–that we needed to double down on security by design.
We needed a future-oriented network, one that reflected the openness, flexibility, and interoperability of the internet, and the way people and organizations were already increasingly working.
In short, we knew that we had to redesign security for the cloud.
So we launched an internal initiative called BeyondCorp, which pioneered the concept of zero trust and defense in depth and allowed every employee to work from untrusted networks without the use of a VPN. Today, organizations around the world are taking this same approach, shifting access controls from the network perimeter to the individual and the data.
If you fast forward to today’s hybrid-cloud environment, zero trust is a must.
At the core of zero trust is the idea that security doesn’t have a defined border. It travels with the user and the data. For example, as the Administration pushes for multi-factor authentication for government systems, we’re automatically enrolling users in two-step verification, so that a tap on their phone confirms it’s really them when they sign into our products.
Practically, this means that employees can work from anywhere in the world, accessing the most sensitive internal services and data over the internet, without sacrificing security. It also means that if an attacker does happen to break through defenses, they don’t get carte blanche to access internal data and services.
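To make that concrete, here is a minimal, hypothetical sketch of a per-request, zero-trust access decision. It is not our actual implementation, and every name in it is invented for illustration; the point is simply that the decision keys off user identity, device posture, and data sensitivity rather than where on the network the request comes from.

```python
# Illustrative sketch only: a per-request, zero-trust access decision.
# The class and field names here are hypothetical, not any real Google API.
from dataclasses import dataclass


@dataclass
class AccessRequest:
    user_id: str
    mfa_verified: bool           # second factor confirmed for this session
    device_is_managed: bool      # device meets patch/attestation policy
    resource_sensitivity: str    # e.g. "low" or "high"


def allow(request: AccessRequest) -> bool:
    """Decide based on user identity, device posture, and data sensitivity,
    never on where the request happens to sit on the network."""
    if not request.mfa_verified:
        return False
    if request.resource_sensitivity == "high" and not request.device_is_managed:
        return False
    return True


# A verified user on a managed laptop can reach a sensitive internal service
# from anywhere on the internet; the same user on an unmanaged device cannot.
print(allow(AccessRequest("alice", True, True, "high")))   # True
print(allow(AccessRequest("alice", True, False, "high")))  # False
```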
The most impactful thing a company, organization, or government can do to defend against cyberattacks is to upgrade their legacy architecture.
Is it always easy? No. But when you consider that legacy architecture, with its millions upon millions of lines of proprietary code, harbors thousands of bugs, each one a potential vulnerability, it’s worth it.
And beyond replacing existing plumbing, we need to be thinking about the next challenges, and deploying the latest tools.
In the same way the world is racing to upgrade encryption to deal with the threat of quantum decryption, we need to be investing in cutting-edge technologies that will help us keep ahead of increasingly sophisticated threats.
The good news is that cybersecurity tools are evolving quickly, from artificial intelligence capabilities, to advanced cryptography, to quantum computing.
If today we talk about security by design, what comes next is security through innovation–security designed with AI and machine learning in mind–designed to counter bad actors using new tools to evade filters, break into encrypted communications, and generate customized phishing emails.
We’ve got some of the best AI work in the business, and we’re testing new approaches and using some of our leading-edge AI tools to detect malware and phishing at scale. AI allows us to see more threats faster, while reducing human error. AI, graph mining, and predictive analytics can dramatically improve our ability to identify and block phishing, malware, abusive apps, and code from malicious websites.
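To give a feel for the general shape of that approach (and only that; this is a toy sketch, not our production systems, trained on a handful of invented messages), a basic text classifier learns patterns from labeled examples and then scores new messages automatically:

```python
# Illustrative sketch only: a toy machine-learned phishing classifier.
# The training messages below are invented for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Your account will be suspended, verify your password here",
    "Urgent: confirm your banking details to avoid closure",
    "Lunch at noon tomorrow?",
    "Attached are the meeting notes from Tuesday",
]
train_labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

# Turn message text into term-frequency features, then fit a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Score a new message; a real system would train on vastly more data
# and combine many more signals than message text alone.
print(model.predict(["Please verify your password immediately"]))
```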
We look forward to sharing more of our findings so that organizations and governments can prepare. After all, this is no time for locking down learnings or successful techniques. Bad actors are not just on the lookout for ways to exploit your unknown vulnerabilities. As with Hafnium and SolarWinds, they are looking for the weak link in the security chain that lets them springboard from one attack to another. A vulnerability at one organization can do damage to entire industries and infrastructures.
Cybersecurity is a team sport, and we all need to get better together, building bridges not just within the security communities, but also between the national security community and academia and Silicon Valley.
Having started with one story, let me leave you with another—cybersecurity and Russia’s war in Ukraine.
A lot has changed in our approach since Aurora. And perhaps no example illustrates that shift more clearly than our response to the war in Ukraine.
Russia’s invasion sparked not just a military and economic war, but also a cyber war and an information war. In recent months, we have witnessed a growing number of threat actors–state actors and criminal networks–using the war as a lure in phishing and malware campaigns, embarking on espionage, and attempting to sow disinformation.
But this time, we were ready with a modern infrastructure and a process for monitoring and responding to threats as they happened.
We’ve sent thousands of warnings to users targeted by foreign-state actors–a practice we pioneered after Aurora. And in the vast majority of cases, we’ve blocked the attacks.
We launched Project Shield, bringing not just journalists, but vulnerable websites in Ukraine under Google’s security umbrella against DDoS attacks. While you can DDoS small sites, it turns out that it’s pretty tough to DDoS Google. We disrupted phishing campaigns from Ghostwriter, an actor attributed to Belarus. And we helped the Ukrainian government modernize its cyber infrastructure, fortifying it against attack.
We are proud that we were the first company to receive the Ukrainian government’s special peace prize in recognition of these efforts.
But the work is far from done.
Even now, we’re seeing reports that the Kremlin could be planning to ratchet up attacks and coordinated disinformation campaigns across Eastern Europe and beyond in an attempt to divide and destabilize Western support for Ukraine. In fact, just today, our TAG team published a new report on activity from a threat group linked to Russia’s Federal Security Service, the FSB, as well as threat actors using phishing emails to target government and defense officials, politicians, NGOs, think tanks, and journalists.
And, looking beyond Russia and Ukraine, we see rising threats from Iran, China, and North Korea.
Google is a proud American company, committed to the defense of democracy and the safety and security of people around the world.
And we believe cybersecurity is one of the most important issues we face.
It’s why we’re investing $10 billion over the next five years to strengthen cybersecurity, including expanding zero-trust programs, helping secure the software supply chain, and enhancing open-source security.
It’s why we’ve just created a new division–Google Public Sector–focused on supporting work with the US government. And it’s why we are always open to new partnerships and projects with the public sector.
In recent years, we’ve worked with the FBI’s Foreign Influence Task Force to identify and counter foreign influence operations targeting the U.S. We’ve worked with the NSA’s Cybersecurity Collaboration Center. And we’ve joined the Joint Cyber Defense Collaborative to help protect critical infrastructure and improve collective responses to incidents on a national scale.
Getting our whole digital economy on the front foot is essential. And there’s some encouraging progress. For example, we were glad to see last week’s Cyber Safety Review Board report deeply investigating the log4j vulnerability and making important recommendations about how to improve the ecosystem.
We need more of that.
Looking ahead, our collective ability to prevent cyberattacks will come, not only from transparency, but from a commitment to shoring up our defenses — moving away from legacy technology, modernizing infrastructure, and investing in cutting-edge tools to spot and stop tomorrow’s challenges.
We can’t beat tomorrow’s threats with yesterday’s tools. We need collective action to shore up our digital defenses. But by drawing on America’s collective abilities and advantages, we can achieve a higher level of collective security for all of us.
Thank you.