
On policy development at YouTube


How do we determine what policy updates are needed?

The world moves quickly and our policies need to keep up. That’s why we regularly review our policies to make sure that — similar to the laws that govern civil society — they reflect the changes that occur both on and off our platform. To be clear: the vast majority of content on YouTube does not violate our guidelines. But we still check for gaps that may have opened up or hunt for emerging risks that test our policies in new ways.

As we work to keep our policies evolving with the current landscape, we focus on one major goal: preventing egregious real-world harm. This doesn’t mean that we remove all offensive content from YouTube; we generally believe that open debate and free expression lead to better societal outcomes. But we’re careful to draw the line around content that may cause egregious harm to our users or to the platform.

This can include physical harm. For example, when claims linking 5G technology to the spread of COVID-19 resulted in damage to cell towers across the United Kingdom, we moved quickly to make such content violative under our policies. Or it could mean significant harm to democratic institutions, which is why we don’t allow claims that aim to mislead people about voting, including by promoting false information about voting times, places or eligibility requirements.

We also work closely with NGOs, academics, and relevant experts from a range of perspectives and countries to inform this policy review. They help flag new concerns, or bring a deep understanding of complex topics that are subject to constant change. For example, we established our COVID-19 misinformation policy at the start of the pandemic alongside health authorities like the Centers for Disease Control and Prevention and the World Health Organization. Later, as their guidance shifted to ease mask and social distancing requirements, we updated our policies on content that questioned the efficacy of masks and social distancing.


How do we decide where to draw "the line"?

Once we’ve identified an area where a policy update is needed, our Trust & Safety team comes in to develop a tailored solution. We start by assessing a few things. How prevalent is this specific type of harmful content on YouTube, and what is its potential to grow? And how is it handled under our current Community Guidelines?

Then we watch dozens or even hundreds of videos to understand the implications of drawing different policy lines. Drawing a policy line is never about a single video; it’s about thinking through the impact on all videos: which would be removed and which could stay up under the new guideline. Following this comprehensive review, the team shares various options for policy lines, detailing examples of videos that would be removed or allowed under each (as well as different enforcement actions, like removal vs. age-restriction).

A top choice is selected from those draft options and then goes through further rounds of assessment. At this stage, we’re looking to understand whether the proposal can meaningfully achieve a few key goals:

  • Mitigate egregious real-world harm while balancing a desire for freedom of expression.
  • Allow for consistent enforcement by thousands of content moderators across the globe.

If we’re satisfied that we’re hitting these targets, an executive group made up of leads across the company reviews the proposal. Final sign-off comes from the highest levels of leadership, including YouTube’s Chief Product Officer and CEO. If at any point there is consistent disagreement between teams about where we’ve drawn the line, the policy is sent back to the drawing board.


Do we try to get ahead of emerging issues?

People often think about content moderation as reactive in nature — that we only take content down when it’s flagged by our systems or people. In reality, the bulk of our work focuses on the future. There’s a long process that’s designed to give our teams visibility into emerging issues before they reach, or become widespread on, our platform.

That valuable visibility is driven by our Intelligence Desk, a team within YouTube’s Trust & Safety organization. These specialized analysts identify potentially violative trends — whether new vectors of misinformation or dangerous internet challenges — and the risks they pose. They’re also regularly monitoring ongoing threats like extremist conspiracy theories, both tracking their prevalence across media and evaluating how they morph over time.

These insights then feed into thinking through how current or future policies would manage these new threats. For example, based on evidence gathered by the Intelligence Desk, we updated our hate and harassment policies to better combat harmful conspiracy theories on our platform.


How do people and machines work together to enforce our policies?

Even once models are trained to identify potentially violative content, content moderators remain essential throughout the enforcement process. Machine learning surfaces potentially violative content at scale and nominates it for review against our Community Guidelines. Content moderators then confirm whether or not the content should be removed.

This collaborative approach helps improve the accuracy of our models over time, as they continuously learn and adapt based on content moderator feedback. It also means our enforcement systems can manage the sheer scale of content uploaded to YouTube (over 500 hours of content every minute) while still digging into the nuances that determine whether a piece of content is violative.
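To make this division of labor concrete, here is a minimal, hypothetical sketch of a human-in-the-loop review pipeline: a model flags uploads above a confidence threshold, a human moderator makes the final call, and every decision is recorded as a label that can feed back into model training. The names, threshold, and data structures below are illustrative assumptions, not a description of YouTube's actual systems.

```python
# Hypothetical sketch of a human-in-the-loop enforcement pipeline.
# All names and thresholds are illustrative assumptions.

from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Video:
    video_id: str
    # Score in [0, 1] from a trained classifier estimating how likely
    # the content is to violate a guideline.
    violation_score: float


@dataclass
class ReviewQueue:
    """Holds machine-flagged videos until a human moderator decides."""
    pending: list[Video] = field(default_factory=list)
    training_labels: list[tuple[str, bool]] = field(default_factory=list)

    def flag(self, video: Video, threshold: float = 0.7) -> None:
        # The model only nominates content for review; it takes no action itself.
        if video.violation_score >= threshold:
            self.pending.append(video)

    def review(self, moderator_decision: Callable[[Video], bool]) -> list[str]:
        """Apply human judgment to each flagged video.

        Returns the IDs of videos the moderator confirmed as violative, and
        records every decision as a label for future model retraining.
        """
        removed = []
        for video in self.pending:
            is_violative = moderator_decision(video)
            self.training_labels.append((video.video_id, is_violative))
            if is_violative:
                removed.append(video.video_id)
        self.pending.clear()
        return removed


# Usage: the model flags one of two uploads; the moderator confirms it.
queue = ReviewQueue()
queue.flag(Video("abc123", violation_score=0.92))
queue.flag(Video("def456", violation_score=0.35))  # below threshold, not queued
confirmed = queue.review(lambda v: v.video_id == "abc123")
print(confirmed)              # ['abc123']
print(queue.training_labels)  # decisions fed back to improve the model
```

The key design point this sketch tries to capture is that the model's output is a nomination, not a verdict: removal only happens after a human decision, and those decisions become the feedback that improves the model over time.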

For example, a speech by Hitler at the Nuremberg rallies, presented with no additional context, may violate our hate speech policy. But if the same speech were included in a documentary decrying the actions of the Nazis, it would likely be allowed under our EDSA (Educational, Documentary, Scientific, Artistic) guidelines, which make exceptions for otherwise violative material when enough context is included, as in an educational video or historical documentary.

This distinction can be difficult for a model to recognize, while a content moderator can more easily spot the added context. This is one reason why enforcement is a fundamentally shared responsibility, and it underscores why human judgment will always be an important part of our process. For most categories of potentially violative content on YouTube, a model simply flags content to a content moderator for review before any action is taken.