
FIRE opposes Virginia’s proposed regulation of candidate deepfakes


Last year, California passed restrictions on sharing AI-generated deepfakes of candidates, which a court then promptly blocked for violating the First Amendment. Virginia now looks to be going down a similar road with a new bill to penalize people for merely sharing certain AI-generated media of political candidates.

This legislation, introduced as SB 775 and HB 2479, would make it illegal to share artificially generated, realistic-looking images, video, or audio of a candidate to “influence an election,” if the person knew or should have known that the content is “deceptive or misleading.” Violators face a civil penalty or, if the sharing occurred within 90 days before an election, up to one year in jail. Only by adding a conspicuous disclaimer to the media can a person avoid these penalties.

The practical effects of this ban are alarming. Say a person in Virginia encounters a deepfaked viral video of a candidate on Facebook within 90 days of an election. They know it’s not a real image of the candidate, but they think it’s amusing and captures a message they want to share with other Virginians. It doesn’t have a disclaimer, but the person doesn’t know it’s supposed to, and doesn’t know how to edit the video anyway. They decide to repost it to their feed.

That person could now face jail time.

The ban would also impact the media. Say a journalist shares a deepfake that is directly relevant to an important news story. The candidate depicted decides that the journalist didn’t adequately acknowledge “in a manner that can easily be heard and understood by the average listener or viewer, that there are questions about the authenticity of the media,” as the bill requires. That candidate could sue to block further sharing of the news story.


These scenarios illustrate the startling breadth of SB 775/HB 2479’s regulation of core political speech, which makes the bill unlikely to survive judicial scrutiny. Laws targeting core political speech face a very high bar to passing constitutional muster, even when they involve false or misleading speech. That’s because there’s no general First Amendment exception for misinformation, disinformation, or other false speech. And for good reason: A general exception could too easily be used to suppress dissent and criticism.


There are narrow, well-defined categories of speech not protected by the First Amendment — such as fraud and defamation — that Virginia can and does already restrict. But SB 775/HB 2479 is not limited to fraudulent or defamatory speech.

Laws that burden protected speech related to elections must clear a very high bar to pass constitutional muster. This bill doesn’t meet that bar. It restricts far more speech than necessary to prevent voters from being deceived in ways that could affect an election, and there are other ways to address deepfakes that would burden much less speech. For one, other speakers or candidates can (and do) simply point them out, eroding their potential to deceive.

The First Amendment safeguards expressive tools like AI, allowing them to enhance our ability to communicate with one another without facing undue government restrictions.

We urge the Virginia General Assembly to reject this legislation. If the bill reaches his desk, Virginia Gov. Glenn Youngkin should veto it.
