On policy development at YouTube
Since the earliest days of YouTube, we've had Community Guidelines that establish what's allowed on our platform. These rules of the road allow creative expression to flourish while prioritizing the protection of the entire YouTube community from harmful content. This balance is critical for allowing new voices to participate and for promoting the sharing of ideas. It's also necessary to ensure YouTube's long-term success as a business, as our advertising partners fundamentally do not want to be associated with harmful content.
A few questions have regularly cropped up around how we decide where to draw these lines or why it can take so long to develop and launch new policies (for a broader overview of how we think about our responsibility efforts, see here). So in this blog, we’re shedding more light on how we develop our policies and the processes that go into enforcing them.
How do we determine what policy updates are needed?
The world moves quickly and our policies need to keep up. That’s why we regularly review our policies to make sure that — similar to the laws that govern civil society — they reflect the changes that occur both on and off our platform. To be clear: the vast majority of content on YouTube does not violate our guidelines. But we still check for gaps that may have opened up or hunt for emerging risks that test our policies in new ways.
As we work to keep our policies evolving with the current landscape, we stay focused on one major goal: preventing egregious real-world harm. This doesn't mean that we remove all offensive content from YouTube; we generally believe that open debate and free expression lead to better societal outcomes. But we're careful to draw the line around content that may cause egregious harm to our users or to the platform.
This can include physical harm. For example, when claims linking 5G technology to the spread of COVID-19 resulted in damage to cell towers across the United Kingdom, we moved quickly to make those claims violative. Or it could mean significant harm to democratic institutions, which is why we don't allow claims that aim to mislead people about voting, including by promoting false information about voting times, places, or eligibility requirements.
We also work closely with NGOs, academics, and relevant experts from all sides and different countries to inform this policy review. They help flag new concerns or bring a deep understanding to complex topics that change frequently. For example, we established our COVID-19 misinformation policy at the start of the pandemic alongside health authorities like the Centers for Disease Control and Prevention and the World Health Organization. Later, as their guidance shifted to ease mask and social distancing restrictions, we updated our policies around content that questioned the efficacy of masks and social distancing.
How do we decide where to draw "the line"?
Once we've identified an area where a policy update is needed, our Trust & Safety team comes in to develop a tailored solution. We start by assessing a few things. How prevalent is this specific type of harmful content on YouTube (and what's its potential to grow)? And how is it managed under our current Community Guidelines?
Then we watch dozens or even hundreds of videos to understand the implications of drawing different policy lines. Drawing a policy line is never about a single video; it's about thinking through the impact on all videos: which would be removed and which could stay up under the new guideline. Following this comprehensive review, the team shares various options for policy lines, making sure to detail examples of videos that would be removed or approved under each (as well as different enforcement actions, like removal vs. age-restriction).
A top choice is selected from those draft options and then goes through further rounds of assessment. At this stage, we’re looking to understand whether the proposal can meaningfully achieve a few key goals:
- Mitigate egregious real-world harm while balancing a desire for freedom of expression.
- Allow for consistent enforcement by thousands of content moderators across the globe.
If we’re satisfied that we’re hitting these targets, an executive group made up of leads across the company reviews the proposal. Final sign-off comes from the highest levels of leadership, including YouTube’s Chief Product Officer and CEO. If at any point there is consistent disagreement between teams about where we’ve drawn the line, the policy is sent back to the drawing board.
Who provides input on policy development and enforcement?
Throughout the policy development process, we partner closely with a range of established third-party experts on topics like hate speech or harassment. We also work with various government authorities on other important issues like violent extremism and child safety.
Experts help us forecast how global events could cause harmful content to spread across our platform, whether by uncovering gaps in our systems that might be exploited by bad actors or by providing recommendations for new updates. And as with COVID-19, they provide input that helps us adapt policies in situations where guidance can change quickly.
These partnerships are also especially critical to support policy enforcement for regional issues, where language or cultural expertise is often needed to properly contextualize content. For example, we worked closely with experts in 2021 during the coup d'état in Myanmar to identify cases where individuals were using speech to incite hatred and violence along ethno-religious lines. This allowed us to quickly remove the violative content from our platform.
Do we try to get ahead of emerging issues?
People often think about content moderation as reactive in nature — that we only take content down when it’s flagged by our systems or people. In reality, the bulk of our work focuses on the future. There’s a long process that’s designed to give our teams visibility into emerging issues before they reach, or become widespread on, our platform.
That valuable visibility is driven by our Intelligence Desk, a team within YouTube’s Trust & Safety organization. These specialized analysts identify potentially violative trends — whether new vectors of misinformation or dangerous internet challenges — and the risks they pose. They’re also regularly monitoring ongoing threats like extremist conspiracy theories, both tracking their prevalence across media and evaluating how they morph over time.
These insights then feed into thinking through how current or future policies would manage these new threats. For example, based on evidence gathered by the Intelligence Desk, we updated our hate and harassment policies to better combat harmful conspiracy theories on our platform.
How do we make sure policies are enforced consistently?
The implementation of a new policy is a joint effort between people and machine learning technology. In practice, that means successfully launching and enforcing a policy requires people and machines to work together to achieve consistently high levels of accuracy when reviewing content.
We start by giving our most experienced team of content moderators enforcement guidelines (a detailed explanation of what makes content violative) and asking them to differentiate between violative and non-violative material. If the new guidelines allow them to achieve a very high level of accuracy, we expand the testing group to include hundreds of moderators across different backgrounds, languages and experience levels.
At this point, we begin revising the guidelines so that they can be accurately interpreted across the larger, more diverse set of moderators. This process can take a few months, and is only complete once the group reaches a similarly high degree of accuracy. These findings then help train our machine learning technology to detect potentially violative content at scale. As we do with our content moderators, we test models to understand whether we’ve provided enough context for them to make accurate assessments about what to surface for people to review.
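The guideline-testing loop above can be sketched in a few lines. This is a toy illustration, not YouTube's actual tooling: the function names, the sample labels, and the accuracy threshold are all hypothetical.

```python
# Toy sketch (hypothetical data and threshold): measuring how accurately a
# group of moderators applies draft enforcement guidelines, by comparing
# their verdicts against agreed-upon "golden" labels for a set of videos.

def accuracy(verdicts, golden):
    """Fraction of moderator verdicts that match the golden label."""
    assert len(verdicts) == len(golden)
    matches = sum(v == g for v, g in zip(verdicts, golden))
    return matches / len(golden)

# Golden labels: True = violative, False = non-violative.
golden = [True, False, True, True, False, False, True, False]
moderator_verdicts = [True, False, True, False, False, False, True, False]

score = accuracy(moderator_verdicts, golden)
print(f"accuracy: {score:.2%}")  # 7 of 8 match -> 87.50%

# Hypothetical bar: only expand testing to the wider moderator pool once
# the group clears a high accuracy threshold; otherwise revise guidelines.
THRESHOLD = 0.95
print("expand test group" if score >= THRESHOLD else "revise guidelines")
```

In this sketch the group falls short of the (made-up) 95% bar, so the guidelines would go back for revision before the test group grows.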
After this testing period, the new policy can finally launch. But the refinement continues in the months that follow. Every week, our Trust & Safety leadership meet with quality assurance leads from across the globe (those responsible for overseeing content moderation teams) to discuss particularly thorny decisions and review the quality of our enforcement. If needed, guideline tweaks are then drafted to address gaps or to provide clarity for edge cases.
How do people and machines work together to enforce our policies?
Once models are trained to identify potentially violative content, the role of content moderators remains essential throughout the enforcement process. Machine learning identifies potentially violative content at scale and nominates content that may be against our Community Guidelines for review. Content moderators then confirm whether the content should be removed.
This collaborative approach helps improve the accuracy of our models over time, as models continuously learn and adapt based on content moderator feedback. And it also means our enforcement systems can manage the sheer scale of content that’s uploaded to YouTube (over 500 hours of content every minute), while still digging into the nuances that determine whether a piece of content is violative.
For example, a speech by Hitler at the Nuremberg rallies with no additional context may violate our hate speech policy. But if the same speech were included in a documentary that decried the actions of the Nazis, it would likely be allowed under our EDSA (Educational, Documentary, Scientific, Artistic) guidelines. EDSA takes into account otherwise violative material where enough context is included, like an educational video or historical documentary.
This distinction may be more difficult for a model to recognize, while a content moderator can more easily spot the added context. This is one reason why enforcement is a fundamentally shared responsibility — and it underscores why human judgment will always be an important part of our process. For most categories of potentially violative content on YouTube, a model simply flags content to a content moderator for review before any action may be taken.
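The flow described in this section (a model flags, a moderator decides) can be sketched as a simple two-stage pipeline. Everything below is a hypothetical illustration: the threshold value, function names, and sample scores are invented, not YouTube's real systems.

```python
# Toy sketch (all names and thresholds are hypothetical) of human-in-the-loop
# enforcement: a model scores each upload, likely violations are queued for
# human review, and only a moderator's decision triggers removal.

FLAG_THRESHOLD = 0.7  # assumed cutoff: scores above this go to a human

def triage(uploads, model_score):
    """Queue for review every upload the model scores as likely violative."""
    return [video for video in uploads if model_score(video) >= FLAG_THRESHOLD]

def enforce(review_queue, moderator_says_violative):
    """Remove only what a human moderator confirms; the rest stays up."""
    return [video for video in review_queue if moderator_says_violative(video)]

# Illustrative run with made-up scores and decisions.
scores = {"a": 0.95, "b": 0.40, "c": 0.80}
queue = triage(scores, lambda v: scores[v])   # "a" and "c" flagged for review
removed = enforce(queue, lambda v: v == "a")  # moderator confirms only "a"
print(queue, removed)
```

The design point is that the model never removes anything in this sketch; it only narrows the review queue, which mirrors the division of labor the section describes.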
How do we measure success?
We’re driven in all of our work to live up to our Community Guidelines and further our mission to allow new voices and communities to find a home on YouTube. Success on this front is hard to pin down to a single metric, but we’re always listening to feedback from stakeholders and members of our community about ways we can improve — and we continuously look to provide more transparency into our systems and processes (including efforts like this blog).
To measure the effectiveness of our enforcement, we release a metric called our violative view rate, which looks at how many views on YouTube come from violative material. From July through September of this year, that number was 0.10% – 0.11%, which means that for every 10,000 views, between 10 and 11 were of content that violated our Community Guidelines.
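The violative view rate arithmetic works out as follows. The 0.10%–0.11% range comes from the text above; the raw view counts in this sketch are invented for illustration only.

```python
# Violative view rate (VVR): the share of all views that land on violative
# content. The totals below are made up; the resulting rate simply falls in
# the 0.10%-0.11% range cited above.

def violative_view_rate(violative_views, total_views):
    return violative_views / total_views

vvr = violative_view_rate(violative_views=1_100, total_views=1_000_000)
print(f"{vvr:.2%}")                             # 0.11%
print(f"{vvr * 10_000:.0f} per 10,000 views")   # 11 per 10,000 views
```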
We also track the number of appeals submitted by creators in response to videos that are removed (an option available to any creator on YouTube), as this helps us gain a clearer understanding of the accuracy of our systems. For example, during the same time period mentioned above, we removed more than 5.6 million videos for violating our Community Guidelines and received roughly 271,000 removal appeals. Upon review, we reinstated about 29,000 of those videos.
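As a back-of-the-envelope check, the removal, appeal, and reinstatement counts above imply the following ratios. These derived percentages are our own arithmetic on the published figures, not metrics YouTube reports directly.

```python
# Ratios derived from the figures in the text (Q3 of that year).
removed    = 5_600_000  # videos removed for Community Guidelines violations
appealed   = 271_000    # removal appeals received
reinstated = 29_000     # videos reinstated after appeal review

appeal_rate      = appealed / removed      # share of removals that were appealed
reinstate_rate   = reinstated / appealed   # share of appeals that succeeded
overall_reversal = reinstated / removed    # share of removals ultimately reversed

print(f"{appeal_rate:.1%}, {reinstate_rate:.1%}, {overall_reversal:.1%}")
# -> 4.8%, 10.7%, 0.5%
```

Read this way, fewer than 5% of removals are contested, and roughly one appeal in ten succeeds, which is the kind of signal the text says feeds back into accuracy benchmarking.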
And while metrics like appeals, reinstatements, and our violative view rate don’t offer a perfect solution to understand consistency or accuracy, they’re still pivotal in benchmarking success on an ongoing basis.
Community Guidelines are concerned with language and expression — two things that, by their very nature, evolve over time. With that shifting landscape, we’ll continue to regularly review our policy lines to make sure they’re drawn in the right place. And to keep our community informed, we’ll be sharing further how we’re adapting in the months ahead.