YouTube is updating its harassment and cyberbullying policies to clamp down on content that “realistically simulates” deceased minors or victims of deadly or violent events describing their deaths. The Google-owned platform says it will begin striking such content on January 16.
The policy change comes as some true crime content creators have been using AI to recreate the likeness of deceased or missing children. In these disturbing instances, creators are using AI to give child victims of high-profile cases a childlike “voice” to describe their deaths.
In recent months, content creators have used AI to narrate numerous high-profile cases, including the abduction and death of British two-year-old James Bulger, as reported by the Washington Post. There are also similar AI narrations about Madeleine McCann, a British three-year-old who disappeared from a resort, and Gabriel Fernández, an eight-year-old boy who was tortured and murdered by his mother and her boyfriend in California.
YouTube will remove content that violates the new policies, and users who receive a strike will be unable to upload videos, live streams, or Stories for one week. After three strikes, the user’s channel will be permanently removed from YouTube.
The new changes come nearly two months after YouTube introduced policies surrounding responsible disclosures for AI content, along with new tools to request the removal of deepfakes. One of those changes requires users to disclose when they’ve created altered or synthetic content that appears realistic. The company warned that users who fail to properly disclose their use of AI will be subject to “content removal, suspension from the YouTube Partner Program, or other penalties.”
In addition, YouTube noted at the time that some AI content may be removed if it’s used to show “realistic violence,” even if it’s labeled.
In September 2023, TikTok launched a tool that allows creators to label their AI-generated content, after the social app updated its guidelines to require creators to disclose when they are posting synthetic or manipulated media that shows realistic scenes. TikTok’s policy allows it to take down realistic AI images that aren’t disclosed.