3 things privacy professionals should consider at the intersection of AI and data privacy

Over the past year, the development and growth of AI have both captured the public imagination and expanded our sense of technology's capacity to be helpful. They have also, importantly, sparked policy conversations about how to balance innovation with robust data privacy protections.

In the United States alone, we ended 2023 with seven new state privacy laws enacted, each of which will impact AI development. While we don't know all that 2024 will bring, we can be certain that new regulatory requirements related to privacy, as well as laws specifically focused on this burgeoning field, are imminent.

Our team runs Checks, Google's compliance platform that helps app development teams simplify privacy and regulatory compliance. As we contemplate new requirements in the year ahead, here are three areas we believe privacy professionals should pay attention to:

1. The use of public personal data in training models

AI models are often trained on massive datasets of public data, which can include personal information such as names, addresses and phone numbers. As AI models become more sophisticated, existing privacy laws will need to evolve to cover new, previously uncontemplated ways in which personal data can be collected and processed. This may also mean reconsidering established definitions of key terms, such as what constitutes processing or transparency, or even when data still counts as personal.
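
To make the concern concrete, here is a minimal sketch in Python of the kind of screening a training pipeline might apply to public text before it reaches a model. Everything here is an illustrative assumption: real pipelines rely on far more robust PII detection, such as trained entity recognizers, precisely because simple patterns like these miss names, addresses and much else.

    import re

    # Illustrative patterns only; production systems typically use trained
    # PII detectors rather than regexes.
    PII_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "phone": re.compile(r"(?:\+\d{1,2}\s?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
        "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def redact(text: str) -> str:
        """Replace matched identifiers with typed placeholders before training."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label.upper()}]", text)
        return text

    sample = "Contact Jane at jane.doe@example.com or (555) 123-4567."
    print(redact(sample))
    # -> "Contact Jane at [EMAIL] or [PHONE]."
    # Note that the name "Jane" slips through, which is exactly why
    # regex-only screening is insufficient on its own.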

2. Harmonizing privacy regulations with new AI regulation

Governments around the world are increasingly focused on oversight of artificial intelligence, but it is not yet clear how new AI laws and regulations will interact with existing privacy laws. That is why it's vitally important to consider the practical application of new AI policy initiatives, including where they overlap with existing law. The recent White House Executive Order on the development and use of AI, which emphasizes safety, security, innovation and equity, includes several privacy-related initiatives and highlights how closely AI and privacy policy interact.

3. Protecting children’s privacy

The Children's Online Privacy Protection Act (COPPA) governs the collection and use of data from children under the age of 13 in the U.S. It has been in place for over 20 years, and the rules promulgated under it are periodically updated by the Federal Trade Commission (FTC). Last month, the FTC released a notice of proposed rulemaking that addresses feedback from commenters, including companies, advocacy groups and creators. These proposed changes are particularly important to consider given the expanded use of AI in products and services, both those that are child-directed and those that arguably aren't fully child-directed. AI developers should consider carefully whether and how they incorporate children's data when building and deploying AI systems, and how those systems may interact with children.
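
As an illustration of that last point, here is a minimal sketch of one gating step a team might apply before user data enters a training set. The record shape, field names and consent flag are all hypothetical, and real COPPA compliance involves far more than a filter like this, including verifiable consent mechanisms, notice and retention limits.

    from dataclasses import dataclass
    from datetime import date

    # Hypothetical record shape; field names are illustrative, not a real schema.
    @dataclass
    class UserRecord:
        user_id: str
        birth_date: date
        parental_consent: bool  # verifiable parental consent on file

    COPPA_AGE = 13  # COPPA covers U.S. children under 13

    def age_on(birth: date, today: date) -> int:
        """Whole years elapsed between birth and today."""
        return today.year - birth.year - ((today.month, today.day) < (birth.month, birth.day))

    def eligible_for_training(rec: UserRecord, today: date) -> bool:
        """Keep adults; keep children's records only with parental consent."""
        return age_on(rec.birth_date, today) >= COPPA_AGE or rec.parental_consent

    records = [
        UserRecord("u1", date(2015, 6, 1), parental_consent=False),
        UserRecord("u2", date(1990, 1, 15), parental_consent=False),
    ]
    training_set = [r for r in records if eligible_for_training(r, date(2024, 1, 1))]
    print([r.user_id for r in training_set])  # -> ['u2']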

While the accelerated pace of innovation means these may be only the tip of the regulatory iceberg in 2024, there are clear steps that companies and the privacy professionals responsible for AI development can take to prepare for what lies ahead.

  • Make privacy part of your company's DNA, now. Retrofitting privacy practices is difficult, but it's never too late to start. Make privacy a core tenet of your product and business model by developing clear internal privacy principles and policies, and by incorporating those principles into every level of product development and deployment. If you're a Google Cloud customer, there are resources to help you conduct data protection impact assessments (DPIAs). As you incorporate AI into your products and services, conduct risk assessments using frameworks such as Google's Secure AI Framework (SAIF) to ensure the company implements appropriate protections. Companies should also prioritize employee privacy training and invest in technology that makes implementing privacy practices more efficient and effective.
  • Build a compliance-aware culture. Ensure the entire company understands how crucial privacy is to the organization's success. Foster an environment where people feel comfortable raising issues, and equip them with the resources to address those issues. Privacy compliance is everyone's responsibility, and consistent training and internal communications should regularly reinforce that message.
  • Use AI to simplify compliance. While the accelerated incorporation of AI into your offerings may make it hard to keep up, AI can also make compliance simpler. The regulatory landscape will continue to shift for the foreseeable future, so look for AI-powered compliance solutions, like Checks, that help you manage changing requirements, increase transparency across your teams, and react more efficiently as changes come. Knowing precisely what data you are collecting and using can also be challenging, so AI solutions that provide a clear view into data management will help you respond nimbly to new requirements; a minimal sketch of this kind of data scan follows this list. AI can be an asset, not a hindrance, in staying compliant.
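
On that last point, here is a minimal sketch of a scan that could seed a data inventory, assuming a CSV export of collected records. The field-name hints and value patterns are simple heuristic assumptions standing in for the ML-backed data mapping a compliance platform provides; the file name is a placeholder.

    import csv
    import re
    from collections import Counter

    # Hypothetical heuristics: a real data-mapping tool would use trained
    # classifiers, not just field names and regexes.
    SENSITIVE_NAME_HINTS = ("email", "phone", "name", "address", "dob", "ssn")
    VALUE_PATTERNS = {
        "email-like": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "phone-like": re.compile(r"\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}"),
    }

    def inventory(path: str) -> Counter:
        """Count columns whose name or values suggest personal data."""
        flagged: Counter = Counter()
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                for column, value in row.items():
                    if any(hint in column.lower() for hint in SENSITIVE_NAME_HINTS):
                        flagged[f"{column} (name hint)"] += 1
                    for label, pattern in VALUE_PATTERNS.items():
                        if value and pattern.search(value):
                            flagged[f"{column} ({label})"] += 1
        return flagged

    # 'collected_data.csv' is a placeholder for whatever export you audit.
    for finding, count in inventory("collected_data.csv").most_common():
        print(f"{finding}: {count} rows")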

By taking proactive steps to prioritize privacy in company culture and product design, organizations can navigate shifting regulations smoothly. The regulatory environment will keep changing, but those who prioritize transparency, collaboration and smart technology are positioning themselves well for what lies ahead in 2024 and beyond.
