Meta isn’t the only company grappling with the rise of AI-generated content and its impact on their platforms. YouTube quietly rolled out a policy change in June that allows people to request the removal of AI-generated or other synthetic content that mimics their face or voice. Such requests go through YouTube’s privacy request process, an expansion of the company’s responsible AI approach first introduced in November.
Rather than having content flagged as misleading, as with deepfakes, YouTube wants affected parties to request its removal directly as a privacy violation. According to YouTube’s recently updated Help documentation, the company requires first-party claims, with a few exceptions: when the affected individual is a minor, lacks access to a computer, is deceased, or in other such circumstances.
However, simply submitting a removal request does not necessarily mean that the content will be removed. YouTube warns that it will make its own judgment on the complaint based on a variety of factors.
For example, the company may consider whether the content has been disclosed as altered or AI-generated, whether it uniquely identifies an individual, and whether it could be considered parody, satire, or something otherwise of value and in the public interest. The company also notes that it may consider whether the AI-generated content features a public figure or other well-known individual, and whether it shows them engaging in “sensitive behavior” such as criminal activity, violence, or endorsing a product or political candidate. The latter is particularly concerning in an election year, when AI-generated endorsements could sway votes.
YouTube says it will also give the person who uploaded the content 48 hours to act on the complaint. If the content is removed within that window, the complaint is closed; otherwise, YouTube will begin reviewing it. The company clarifies that removal means taking the video down from the site entirely and, if necessary, stripping the individual’s name and personal information from the video’s title, description, and tags as well. Uploaders can also blur people’s faces in their videos, but they cannot simply make a video private to satisfy a removal request, since a private video can be reverted to public at any time.
The company has not widely announced the policy change, though. In March, it introduced a tool in Creator Studio that lets creators disclose when realistic-looking content was made with altered or synthetic media, including generative AI. More recently, it began testing a feature that would let users add crowdsourced notes providing additional context about videos, such as whether they are intended as parody or are misleading in some way.
YouTube isn’t averse to AI, and has experimented with generative AI itself, including a comment summarization tool and a conversational tool for asking questions about a video or getting recommendations. However, the company has forewarned that merely labeling AI content as such won’t necessarily protect it from removal, as it still has to comply with YouTube’s Community Guidelines.
In the event of a privacy complaint over AI-generated material, YouTube will not penalize the original content creator.
“For creators, if you receive a notification of a privacy complaint, keep in mind that privacy violations are separate from Community Guidelines violations, and receiving a privacy complaint will not automatically result in a strike,” a company representative wrote last month in a post on the YouTube Community site, where the company updates creators directly on new policies and features.