Meta to label more AI-generated content

Meta will begin labeling more content generated by artificial intelligence (AI) and will remove AI content only when it violates other policies, according to an update released Friday.

Meta said it plans to start labeling AI-generated content next month, and will stop removing content “solely on the basis of our manipulated video policy” in July.

The company said the timeline is meant to give users time to understand the self-disclosure process before it stops removing the “smaller subset of manipulated media.”

The update follows the company’s February announcement that it was working with industry partners on standards to identify AI content, as well as a recommendation from Meta’s Oversight Board to update its policy.

The Oversight Board, which is funded by a grant from Meta but run independently of the company, underscored the urgency for Meta to address its AI policies ahead of elections in the U.S. and abroad in 2024.

“We agree that providing transparency and additional context is now the better way to address this content. The labels will cover a broader range of content in addition to the manipulated content that the Oversight Board recommended labeling,” Monika Bickert, vice president of content policy for Meta, said in a blog post.

If digitally created or altered images are deemed to create a “particularly high risk of materially deceiving the public on a matter of importance,” Meta “may add a more prominent label so people have more information and context.”

If content generated with AI violates other Meta policies — such as those on voter interference, bullying and harassment, or violence and incitement — the company will still remove it, Bickert wrote.

The policy adds to rules Meta and other tech companies have put in place to address AI as the technology grows more prevalent and sophisticated. In January, Meta began enforcing a policy requiring advertisers to disclose when they digitally create or alter a political ad.