
How we're helping creators disclose altered or synthetic content

Image: a pop-up asking creators to disclose whether their content is altered or synthetic.

Image: example of a label on the video player.

The new label is meant to strengthen transparency with viewers and build trust between creators and their audience. Some examples of content that requires disclosure include:

  • Using the likeness of a realistic person: Digitally altering content to replace the face of one individual with another’s, or synthetically generating a person’s voice to narrate a video.
  • Altering footage of real events or places: Such as making it appear as if a real building caught fire, or altering a real cityscape to make it appear different than in reality.
  • Generating realistic scenes: Showing a realistic depiction of fictional major events, like a tornado moving toward a real town.

Image: example of a label on the video player.

Of course, we recognize that creators use generative AI in a variety of ways throughout the creation process. We won’t require creators to disclose if generative AI was used for productivity, like generating scripts, content ideas, or automatic captions. We also won’t require disclosure when synthetic media is clearly unrealistic or the changes are inconsequential. These cases include:

  • Clearly unrealistic content, such as animation or someone riding a unicorn through a fantastical world
  • Color adjustment or lighting filters
  • Special effects like background blur or vintage effects
  • Beauty filters or other visual enhancements

You can see a longer list of examples in our Help Center. For most videos, a label will appear in the expanded description, but for videos that touch on more sensitive topics — like health, news, elections, or finance — we’ll also show a more prominent label on the video itself.