The update, announced Wednesday morning, comes amid the rise of generative AI technology that can create realistic depictions of a public figure's voice or likeness.
Advertisers will have to disclose whenever a social issue, electoral or political ad contains a “photorealistic image or video, or realistic sounding audio” that was digitally created or altered in potentially deceptive ways, Meta said in a blog post.
The rule will apply in cases where AI is used in an ad to:
- “Depict a real person as saying or doing something they did not say or do,”
- “Depict a realistic-looking person that does not exist or a realistic-looking event that did not happen, or alter footage of a real event that happened,”
- “Depict a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.”
Advertisers do not have to disclose the use of digital alterations in ways that are “inconsequential or immaterial to the claim” raised in the ad.
Meta, which recently began rolling out its own generative AI tools for advertisers, also said in a note on Monday that political advertisers would not be allowed to use the new features.
“We believe this approach will allow us to better understand potential risks and build the right safeguards for the use of Generative AI in ads that relate to potentially sensitive topics in regulated industries,” the company said.
Read more in a full report at TheHill.com.