Evolving expectations of what’s possible
The following is adapted from remarks delivered by Kent Walker, President of Global Affairs, at IAPP Global Summit 2026, the world’s largest annual gathering of digital responsibility professionals.
Today I want to talk about meeting people’s needs — and how their fast-evolving expectations are shaping what we build.
But let’s start with some context.
We now have AI models that are 300 times more efficient than the state of the art from just two years ago.
Not 300 percent — 300 times.
Today’s models don’t just make predictions, they can…
- …work independently
- …take different actions in different environments
- …course correct when they run into dead ends
That’s leading to scientific breakthroughs — and even breakthroughs in how we make breakthroughs — across medicine, energy, materials science, and more.
Just as importantly, these models can now be helpful in ways that weren’t possible before.
People want technology that gets them and helps them in the moment.
Larry and Sergey always imagined Search evolving from responding to suggesting to helping.
And for the first time, we can now deliver.
To give you a sense of what I’m talking about, here’s a video from earlier this year when we first brought Personal Intelligence to AI Mode in Search in the United States.
We’ve come a long way since the days of ten blue links.
Personal Intelligence connects contextual information from apps like Gmail and Google Photos. This helps make Search feel not just more proactive, but more tailored to you.
The old line is that the future is here; it’s just not evenly distributed yet. Technological innovations are letting us deliver on the vision of giving everyone a personal assistant.
We’re seeing remarkable progress in this AI era.
But our guiding philosophy remains the same: Focus on the people using the technology and all else follows from that.
And our approach to rolling out new capabilities also remains the same:
- We start with a limited set of trusted testers to understand what people want.
- Once we’ve heard back from them, we start responsibly rolling out services to more people, getting feedback at every stage.
- That feedback — whether people like and benefit from the tool — shapes where we go next.
Right now, people are asking for more personalized experiences.
They don’t just want a chatbot — they want a trusted assistant who can help with daily tasks.
I understand that — I do too.
When you have an assistant that doesn’t just say things, but that can connect the dots and do things for you, it opens up a new world of possibilities.
For example, in Ukraine, which has become one of Europe’s most digitally advanced countries, we worked with the government to build the national AI assistant, Diia.AI.
Diia goes beyond answering questions — it provides government services tailored to each person’s needs — all within a chat interface.
So if you tell the agent: "I need an income certificate," you get it directly in your personal account on the Diia portal, with an email notification as soon as it’s ready.
As with any new technology, Personal Intelligence will be what we make of it.
Earlier I said people want a “trusted assistant.”
The trust part is key. Because above all else: People tell us they want to be in the driver’s seat.
So let’s talk about how industry and regulators can work together to meet people’s expectations and ensure they are always in control.
We start by focusing on the people using these tools: listening to them, meeting them where they are, providing a level of protection tailored to what they want to do, and ensuring their information isn’t used in ways they don’t want.
We have to carefully assess where friction makes sense and where it doesn’t.
Sometimes a person might tell their agent to buy something and clearly want the agent to use relevant information to do that seamlessly, without interruptions or pop-ups.
But sometimes, say when someone has stored sensitive health data, they’ll rightly expect that information won’t be disclosed at all, or will be shared only with tight limits on how, when, and to whom.
From a development perspective, it’s nuanced, but we can give these assurances in a few ways:
- First: By providing controls over agents’ access, and by making it easy for people to toggle connections on or off.
- Second: By setting up guardrails for sensitive areas. For example, Gemini generally avoids making proactive assumptions about sensitive topics.
- Third: By training agents only on what is needed to provide a quality service and improve usefulness over time.
We also protect privacy by ensuring our approach is grounded in self-determination and meets people's intuitive expectations for data protection in a given context.
To do that, just-in-time notices, consents, and dashboards have their place — an important place — but they can’t be the whole solution.
So we'll need to get creative with new ways of delivering transparency and control without overwhelming people.
We'll need to listen and learn to understand people's privacy expectations in context.
And we'll need to innovate, to deliver these protections in a way that supports new AI experiences.
Technology can work intuitively for people, delivering what they need, when and how they need it, with the protections they expect. That’s what will fuel trust and the broad, safe use of these new tools.
Safeguards will be more important than ever.
And helpfully, laws already take the concept of reasonable expectations into account, though reasonableness has always been an evolving standard.
What will reasonable look like in this new era?
How do we account for varying expectations?
And how do we understand and meet people’s rapidly evolving needs?
We’ll want to talk with all of you about the answers to these questions. And we’ll want to hear from the billions of people who use our products every day, voting with their voices and their feet, teaching us what works for them.
This isn’t just privacy by default, or even privacy by design — it’s privacy by innovation.
AI labs and developers should compete as much on demonstrable privacy techniques as they do on quality. As you all appreciate, privacy is an aspect of quality.
Privacy-enhancing technologies can give us valuable insights and make products more responsive, prevent data misuse and harm, help businesses access new markets, and reinforce trust in digital services and data flows critical to the global economy.
And governments can help by supporting global benchmarks and standards to reinforce trust in new technologies, and offering regulatory incentives and compliance benefits so that businesses feel that PETs are worth the investment.
Let me wrap up by telling you something you already know.
This is the biggest platform shift of our generation.
The magic is available now. But if we want to ensure people can fully tap into the benefits of this remarkable technology, then we need to work together to build a future where our data protection frameworks evolve along with our tools.
Where our privacy rules translate for the new generation of customized and context-aware services that people want.
That will take an ongoing dialogue. So let’s keep the conversation going and get it right. Together.