(Reuters) – Google said on Monday it will require advertisers to disclose election ads containing digitally altered content that depicts real or realistic-looking people or events, in its latest move to fight election misinformation.
The update to the disclosure requirements under its political content policy will require advertisers to select a checkbox in the "altered or synthetic content" section of their campaign settings.
The rapid growth of generative AI, which can generate text, images and videos in seconds in response to prompts, has raised concerns about its potential misuse.
The rise of deepfakes, content that is convincingly manipulated to distort someone’s image, has further blurred the lines between real and fake.
Google said it will generate an in-ad disclosure automatically for certain formats, such as feeds and Shorts on mobile phones and in-stream ads on computers and TVs. For other formats, advertisers will be required to provide a "prominent disclosure" that is visible to users.
Google said that “acceptable disclosure language” will vary depending on the context of the ad.
In April, during India’s ongoing general elections, two fake videos of Bollywood actors appearing to criticize Prime Minister Narendra Modi went viral online. The two AI-generated videos asked people to vote for the opposition Congress party.
Separately, Sam Altman-led OpenAI said in May it had disrupted five covert influence operations that sought to use its AI models in “deceptive activity” online, in an “attempt to manipulate public opinion or influence political outcomes.”
Meta Platforms said last year it would require advertisers to disclose whether artificial intelligence or other digital tools were used to alter or create political, social or election-related ads on Facebook and Instagram.