Monday, June 17, 2024

From the AP: FEC moves toward potentially regulating AI deepfakes in campaign ads

More potential regulations in response to a perceived problem.

- Click here for the article.  

The Federal Election Commission has begun a process to potentially regulate AI-generated deepfakes in political ads ahead of the 2024 election, a move advocates say would safeguard voters against a particularly insidious form of election disinformation.

The FEC’s unanimous procedural vote on Thursday advances a petition asking it to regulate ads that use artificial intelligence to misrepresent political opponents as saying or doing something they didn’t — a stark issue that is already being highlighted in the current 2024 GOP presidential primary.

Though the circulation of convincing fake images, videos or audio clips is not new, innovative generative AI tools are making them cheaper, easier to produce, and more likely to manipulate public perception. As a result, some presidential campaigns in the 2024 race — including that of Florida GOP Gov. Ron DeSantis — are already using them to persuade voters.

The Republican National Committee in April released an entirely AI-generated ad meant to show the future of the United States if President Joe Biden is reelected. It employed fake but realistic photos showing boarded-up storefronts, armored military patrols in the streets, and waves of immigrants creating panic.

In June, DeSantis’ campaign shared an attack ad against his GOP primary opponent Donald Trump that used AI-generated images of the former president hugging infectious disease expert Dr. Anthony Fauci.

SOS America PAC, which supports Miami Mayor Francis Suarez, a Republican, also has experimented with generative AI, using a tool called VideoAsk to create an AI chatbot in his likeness.

Thursday’s FEC meeting comes after the advocacy group Public Citizen asked the agency to clarify that an existing federal law against “fraudulent misrepresentation” in campaign communications applies to AI-generated deepfakes.

The panel’s vote shows the agency’s intent to consider the question, but it will not decide whether to actually develop rules governing the ads until after a 60-day public comment window, which is likely to begin next week.

In June, the FEC deadlocked on an earlier petition from the group, with some commissioners expressing skepticism that they had the authority to regulate AI ads. Public Citizen came back with a new petition that identified the fraudulent misrepresentation law and argued that the FEC does have jurisdiction.

A group of 50 Democratic lawmakers led by Rep. Adam Schiff also wrote a letter to the FEC urging the agency to advance the petition, saying, “Quickly evolving AI technology makes it increasingly difficult for voters to accurately identify fraudulent video and audio material, which is increasingly troubling in the context of campaign advertisements.”

Republican Commissioner Allen Dickerson said in Thursday’s meeting that he remained unconvinced the agency had the authority to regulate deepfake ads.

“I’ll note that there’s absolutely nothing special about deepfakes or generative AI, the buzzwords of the day, in the context of this petition,” he said, adding that if the FEC had this authority, it would mean it also could punish other kinds of doctored media or lies in campaign ads.

Dickerson argued the law doesn’t go that far, but noted the FEC has unanimously asked Congress for more authority. He also raised concerns the move would wrongly chill expression that’s protected under the First Amendment.

Public Citizen President Robert Weissman disputed Dickerson’s points, arguing in an interview Thursday that deepfakes are different from other false statements or media because they fraudulently claim to speak on a candidate’s behalf in a way that’s convincing to the viewer.

“The deepfake has an ability to fool the voter into believing that they are themselves seeing a person say or do something they didn’t say,” he said. “It’s a technological leap from prior existing tools.”

Weissman said that acknowledging deepfakes as fraud also resolves Dickerson’s First Amendment concerns: while false speech is protected, fraud is not.