As long as there are elections, there will be attempts by some to manipulate the outcome. In earlier generations, election rigging consisted of simpler and more brazen schemes: efforts to stuff ballot boxes, buy votes, or distribute advertisements, flyers, and newsletters with false information. But as election technology has advanced, so too have the means of election interference. Today, we find ourselves confronted with a far more pernicious and sophisticated threat to free and fair elections: deepfakes.
The recent breakthroughs in generative artificial intelligence (AI) have made it possible for anyone’s voice or image to be manipulated to say or depict something that never happened. The resulting digitally manipulated audio and video are referred to as “deepfakes.” While Photoshop doctoring has been around for the last few decades, there is something much more jarring and believable about hearing a candidate say something in what sounds like their own words.
Anyone can now sit down at their computer and, in a few clicks, use publicly available platforms to create deepfakes. Such digital work was once the exclusive province of CGI wizards working on big-budget blockbuster movies. The current low cost and ease of creating deepfakes is particularly troubling in the context of elections: any individual with basic tech skills and malicious intent can now sow chaos and potentially influence the outcome of an election. The deepfake could be a video of a president postponing the election due to a cybersecurity attack, a candidate accepting a bribe, or an election official stuffing a ballot box. These deepfakes have the potential to be seen by millions of people around the globe within hours via social media and can easily find fertile ground in an electorate that already harbors a growing distrust of its political leaders. In close elections, a seemingly plausible and well-timed deepfake could make the difference between who wins and who loses.
The threat is not just theoretical. Deepfakes have started popping up in elections in the United States and around the world. In 2023, a Twitter account began circulating a fake audio recording of a top Chicago mayoral candidate condoning police violence. That same year, during the Republican presidential primary race, Ron DeSantis’ campaign shared a video showing fake images of Donald Trump hugging Dr. Anthony Fauci. Currently, the New Hampshire Attorney General’s office is investigating robocalls featuring what sounds like the voice of President Biden discouraging voters from participating in the primary election.
The threat of deepfakes in our elections has spurred policymakers to respond. At least a dozen states have passed legislation to require political ads to label deepfake content, and Alabama went one step further by banning deepfakes in campaign ads. Similarly, Governor Newsom recently signed two new laws sponsored by Common Cause to limit the spread — and in certain contexts ban the distribution — of deepfakes intended to manipulate voters and influence elections.
AB 2655, authored by Assemblymember Marc Berman, bans the distribution on major online platforms of AI-generated deepfakes that falsely portray a candidate, elected official, or elections official for a limited time before and after an election. AB 2839, authored by Assemblymember Gail Pellerin, bans the use of deepfakes falsely portraying candidates, elected officials, and elections officials in offline political communications such as campaign mailers and radio and TV ads. Both laws provide exceptions for candidates who use AI-generated images in their own ads, provided the content is labeled as such. There are also provisions for parody: under AB 2655, deepfake content can be distributed via social media if it includes a disclosure label, and under AB 2839, deepfakes used as parody in offline communications are allowed without any disclosure statement. NextGen Policy supported both bills.
Unfortunately, AB 2839 is already being challenged in the courts on the grounds that it violates First Amendment protections. The creator of a YouTube video lampooning Vice President Kamala Harris using what appears to be an AI-generated version of her voice has filed suit, alleging that the law's labeling requirement for parody videos is too onerous. A U.S. District Court judge has agreed, for the moment, issuing a preliminary injunction that blocks implementation of the law while the case proceeds.
State legislators aren’t the only ones trying to address this problem. The Federal Communications Commission (FCC) has stepped up as well. Not only did the FCC unanimously vote to ban AI-generated voices in robocalls (as was alleged to have occurred in New Hampshire this year), but it also announced in late July that it would advance a proposal to require disclosure of AI used in political ads on TV and radio.
At the same time, social media companies are also looking for ways to tackle the challenges posed by deepfakes. TikTok and Meta (the parent company of Instagram, Threads and Facebook) will begin labeling AI content, and YouTube will require creators to disclose when videos use AI.
Information, debate, and the free expression of ideas are the lifeblood of democracy. We must expose and disarm the threat of deepfakes created with the clear intention of deceiving voters and manipulating elections, while also protecting the First Amendment right to express ideas and opinions. Policy and regulatory actions often lag a step, or in some cases a mile, behind the latest technological innovations. For once, however, it appears that policymakers and tech companies are united in tackling the problem of election-related deepfakes before they can jeopardize something as fundamental as free and fair elections.
Thanks for reading,
Amy Hamblin