Google Will Require Advertisers To Add Disclosures For AI-Generated Election Ads

Antonio Pequeño IV

TOPLINE

Google plans to require verified election advertisers to make “clear and conspicuous” disclosures when their advertisements contain AI-generated content, the tech giant said in a blog post, instituting the change as artificial intelligence becomes more commonplace in the lead-up to the 2024 presidential election.

KEY FACTS

The update to Google’s political content policy will arrive in mid-November and will require disclosures to be placed in clear locations on images and videos.

Audio content is also covered by the disclosure requirements, though AI-generated content that is “inconsequential to the claims made in the ad,” such as color correction, image resizing and cropping, is exempt.


Videos uploaded to YouTube that aren’t paid advertising are also exempt, even if the content is uploaded by political campaigns.

Advertisers who want to run election ads through Google are already required to complete a verification process that collects basic information from applicants.


A mix of human review and automated tools will be used to enforce the disclosure requirements.

KEY BACKGROUND

Google is one of several tech companies that have faced scrutiny over their handling of misinformation. YouTube, which Google acquired in 2006, faced backlash after announcing this year that it would stop taking down content containing false claims about the 2020 presidential election—when former President Donald Trump baselessly claimed voter fraud was rampant in an election he still considers rigged. YouTube said in a blog post it made the decision to protect its “community” and provide “a home for open discussion and debate.” The company was also criticized in 2020 for being slow to remove or label videos containing election misinformation.


TANGENT

X, formerly known as Twitter, does not have specific guidelines for AI-generated ad content. Meta, the owner of Instagram and Facebook, does not have similar policies either, though it does have a ban against “manipulated media” such as deepfakes—videos that typically use digital alterations to copy someone’s likeness and convey false information.


FURTHER READING

YouTube will stop removing false claims about 2020 election fraud (NBC News)
