
8:55 a.m., Wednesday. Welcome back to Professor Farahany’s AI Law & Policy Class.
In Quebec, a man named Jean Laprade was just fined C$5,000 for filing legal documents with AI-generated fake citations. Justice Luc Morin described the move as “highly reprehensible,” declaring that filing procedures are a “solemn act” requiring “rigorous human control” over AI-generated content.
Laprade apologized, saying the documents were “probably not perfect,” but that he wouldn’t have been able to properly represent himself without artificial intelligence.
What the court didn’t address? That what Laprade discovered wasn’t a bug. It was math.
Meanwhile, OpenAI researchers published a paper in September 2025 proving that AI hallucinations—nonsensical or inaccurate outputs from AI models—are mathematically inevitable. Not bugs to be fixed, but fundamental features of how AI works.
And globally, hundreds of deepfake laws have been enacted in the last two years. In the US alone: 15 in 2023, 80 in 2024, and over 25 so far in 2025. Yet a federal judge temporarily enjoined California’s approach, calling it “a hammer instead of a scalpel,” while commentators express concerns about the overbreadth of these laws and their clash with the First Amendment.
Today, we’re tackling whether you can govern something that’s impossible to fix and constitutionally dangerous to regulate. And our answers today may make you uncomfortable. We’ll see why AI lies (it’s math, not malice), why California’s attempt to ban deepfakes crashed into the First Amendment, and why Sarah—our fictional candidate destroyed by a viral fake video—helps us see why every regulatory chokepoint for synthetic media falls short. Fair warning: we cover a lot of material today. But if you can understand why both the math and the Constitution are working against the governance of synthetic media (including deepfakes and cheapfakes), you’ll understand why this is such a hard problem.
Consider this your official seat in my class—except you get to keep your job and skip the debt. Every Monday and Wednesday, you asynchronously attend my AI Law & Policy class alongside my Duke Law students. They’re taking notes. You should be, too. And remember that the live class is 85 minutes long. Take your time working through this material.
Just joining us? Go back and start with Class 1 (What is AI?), Class 2 (How AI Actually Works), and check out the full syllabus. You especially need Class 14 on deepfakes (Part 1) to understand today’s material.
I. Hallucinations vs. Deepfakes: It’s All in the Intent
Everyone take out your phones. Open ChatGPT. I want you to type exactly this:
“How many Es are in DEEPSEEK?”
In my live class, I watched the room fill with confusion as different answers emerged on different screens. The correct answer is 4. Count them yourself: D-E-E-P-S-E-E-K. Four Es.
Your Duke Law counterparts got 3, 4, even 2 across different attempts.
These aren’t broken phones or old versions. This is ChatGPT, right now, unable to count letters reliably.
The research shows this gets worse. When asked “How many Ds are in DEEPSEEK?”—the correct answer is 1—DeepSeek-V3 itself answered “2” or “3” across ten trials. Meta AI and Claude 3.7 Sonnet performed similarly.
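For contrast, here’s how a deterministic program answers the same question (a minimal Python sketch): same input, same output, every time. A language model isn’t executing anything like this.

```python
# Deterministic letter counting: the answer never wobbles.
word = "DEEPSEEK"
print(word.count("E"))  # 4
print(word.count("D"))  # 1
```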
One of the live students said he was unsurprised because the model is just predicting the next token. That’s exactly right. And there’s a mathematical reason why these models will never be perfect at that prediction.
The Mathematical Proof of Inevitable Error
On September 4, 2025, researchers from OpenAI and Georgia Tech published research that really should have been front-page news given the ubiquity of AI usage in the world. They proved—mathematically—that AI hallucination rates have irreducible lower bounds.
The paper’s core finding:
(generative error rate) ≳ 2 · (IIV misclassification rate)
Let’s translate that. Even if an AI system can correctly classify statements as true or false 95% of the time when checking them, it will produce false information at least 10% of the time when it’s generating new content.
The act of creation doubles the error rate.
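As a quick sanity check on that arithmetic (a minimal sketch; the 5% figure is just the illustrative misclassification rate implied by the 95% above, not a measured number):

```python
# The paper's lower bound: generative error rate >= 2 x IIV misclassification rate.
iiv_misclassification_rate = 0.05  # the model classifies true/false correctly 95% of the time
generative_error_floor = 2 * iiv_misclassification_rate
print(f"Generative error rate is at least {generative_error_floor:.0%}")  # -> at least 10%
```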
The researchers identified three mathematical factors that create these lower bounds: information that appears rarely in the training data (so the model approximates its output), patterns that exceed the capacity of current architectures, and computational intractability (some problems are mathematically hard to solve, so the model guesses rather than computes the output).
While the researchers said that better engineering can reduce errors, errors will never be eliminated, because the bounds are mathematical.
Whether you’re a budding or current lawyer, or just someone interested in AI, I want you to really understand this issue of hallucinations—and also why the law primarily targets deepfakes rather than all misinformation (hallucinations included).
AI hallucinations are inevitable unintentional errors, while synthetic media (including deepfakes and cheapfakes) are deliberately fabricated media. The key difference lies in intent. Hallucinations are bugs in the system. Deepfakes can be used as weapons.
Which helps us understand why the hundreds of new laws in this space focus on intent rather than the falsehood of the information itself. And, as we’ll see, why the First Amendment has a role to play in all of this.
II. The Legal Landscape of Synthetic Media
Before we dive into legal responses to synthetic media, we should be clear about the harms we’re trying to address through governance of synthetic media (and then ask whether the laws that are cropping up actually address those harms).
In my live class, the on-call students for today named issues like non-consensual pornography (which we saw on Monday makes up 90-95% of deepfakes), election interference, financial fraud, reputational damage, and erosion of trust in media.
And we see that these harms primarily cluster around four categories: psychological damage and reputational harm, undermining democracy, economic losses, and erosion of trust in media.
But here’s where constitutional law creates a fundamental challenge: Are all these harms equally worth restricting speech to prevent? Remember, we’re in First Amendment territory now.
The First Amendment doesn’t just protect true speech—it protects false speech too. As the Supreme Court held in United States v. Alvarez (2012), even deliberate lies receive First Amendment protection unless they fall into narrow exceptions like defamation, fraud, or incitement. The Court recognized that “some false statements are inevitable if there is to be an open and vigorous expression of views in public and private conversation.”
This means the government can’t simply ban speech because it’s false or harmful. Content-based restrictions on speech—laws that regulate what can be said based on the subject matter or message—are presumptively unconstitutional and trigger strict scrutiny, the most demanding standard of judicial review. So when we examine each stage of deepfake regulation, we’re not just asking “would this work?” We’re asking “would this survive constitutional scrutiny?”
Meet Sarah: A Case Study
Let me introduce you to our hypothetical political candidate who will help us understand these six stages. Sarah is running for state legislature. Three weeks before the election, a deepfake video appears showing her apparently saying racist remarks at what looks like a private fundraiser. The video goes viral—2 million views in 48 hours.
The video is fake. But Sarah’s campaign is in free fall (Here’s my Sora2 version of it):
On Monday, we examined the DHS report on The Increasing Threat of Deepfake Identities, which maps six intervention points for mitigating deepfakes: Intent, Research, Creating the Model, Dissemination, Detection, and Victim Response.

Now let’s walk through Sarah’s case at each stage, examining what over 100 new laws are trying to do—and asking the hard questions about whether they’ll work and whether they’re constitutional.
As we go through each stage, think about three issues:
- Effectiveness: Would this intervention actually have stopped Sarah’s deepfake?
- Constitutionality: Does this approach avoid government overreach and survive strict scrutiny? Does the law use the least restrictive means to achieve a compelling government interest?
- Innovation Balance: Does this strike the right balance between preventing harm and allowing beneficial uses?
Stage 1: Intent—Criminal Deterrence
If we make creating and sharing deepfakes a crime, people won’t do it. Make the punishment severe enough, and you deter bad actors before they start.
Someone (we don’t know who yet) decided to create a fake video of Sarah. Could existing federal, state, or international laws have stopped them?
The U.S. federal TAKE IT DOWN Act makes it a crime to knowingly publish intimate visual depictions without consent and, for minors, to use online services to publish intimate depictions with intent to abuse, humiliate, or harass.
Would this help Sarah? No. This law only covers “intimate visual depictions”—not political deepfakes.
Most U.S. state deepfake laws, like California’s SB 926, won’t address Sarah’s predicament either, because they focus on the intentional distribution of AI-generated or deepfake intimate imagery.
But Hawaii’s SB 2687 could (if there is distribution in Hawaii): it prohibits distributing materially deceptive media in reckless disregard of the risk of harming a candidate’s reputation during an election cycle, and provides that a court may issue a temporary or permanent injunction or restraining order to prevent further harm to the plaintiff.
(And while Sarah is here in the United States, it’s worth noting that laws like France’s Penal Code Article 226-8-1 (adopted in 2024) and the UK’s Online Safety Act 2023 and Sexual Offences Act 2003 (with proposed 2025 amendments) also target intent, but for non-consensual sexual deepfakes rather than election interference.)
Now pause here and think about whether this mitigation strategy of criminalizing deepfake creators is effective. Will the person who made Sarah’s deepfake—knowing they face criminal penalties—decide not to do it? Or will they just be more careful about hiding their identity? (Your Duke Law counterparts thought just the latter.) And what about the fact that criminal penalties only work if you can catch and identify the perpetrator? Sarah’s deepfake was posted from an anonymous account, using a VPN, from a public library computer. Who do you prosecute? And does this address the psychological and reputational harm she is facing?
And then there’s the constitutional problem. Criminalizing the creation or distribution of synthetic media is a content-based restriction that must survive strict scrutiny. The government must prove that criminal penalties are the least restrictive means of preventing harm.
In Reed et al. v. Town of Gilbert (2015), the Supreme Court held that “a law that is content-based on its face is subject to strict scrutiny regardless of the government’s benign motive.” Even laws aimed at preventing election interference or protecting victims face this demanding standard.
Some criminal deepfake laws may survive if narrowly tailored—for example, laws specifically targeting non-consensual intimate images where there’s a compelling interest in preventing sexual exploitation. But broad criminal bans on “deceptive” political content? Those face serious constitutional problems.
Consider California AB 2839, which created criminal liability for distributing “materially deceptive” political deepfakes within 120 days before an election. Federal Judge John Mendez struck it down in October 2024, writing: “Most of AB 2839 acts as a hammer instead of a scalpel, serving as a blunt tool that hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas which is so vital to American democratic debate.”
The judge recognized that AI poses serious risks, but emphasized that “counter speech is a less restrictive alternative to prohibiting videos... no matter how offensive or inappropriate someone may find them.” This is the core First Amendment principle: more speech, not less, is the remedy for false speech.
So even if criminal penalties could deter deepfakes, they must clear this constitutional hurdle. And as Judge Mendez noted, existing defamation law already provides remedies for false statements that harm reputation—we don’t need new categories of prohibited speech.
Stage 2: Research—Organizational Preparedness
If organizations are prepared, training their employees to spot deepfakes and standing ready to respond quickly, could the harms (such as economic and reputational ones) be mitigated?
Imagine Sarah’s campaign had trained its staff to monitor for synthetic media and had a crisis response plan in place. When the video dropped, they could have responded within hours instead of days. This stage means training organizations in deepfake detection, adopting threat monitoring systems, planning incident response, and developing partnerships with fact-checking organizations.
The economic harms from deepfakes are already mounting. Arup lost $25 million to a deepfake CFO. Ferrari faced attempted fraud via voice clone (which failed when the executive cleverly asked verification questions).
Would preparedness prevent harm? Even if Sarah’s campaign detected the deepfake in one hour instead of one day, the video already had 100,000 views. But perhaps they could have mounted a counter-campaign much earlier to mitigate the fallout.
Stage 3: Creating the Model—Developer Responsibilities
Instead of chasing bad actors after the fact, what if we make it harder to create deepfakes in the first place? At this stage, laws target developer responsibilities, requiring AI tools to refuse certain requests, to embed visible and invisible watermarks into synthetically created content, and to maintain audit trails.
Consider Sarah’s case. Someone used an AI tool to generate that fake video of Sarah. What if the tool had refused the request? What if it automatically watermarked the output as “AI-generated”? What if it logged who created what? Would that have addressed the harm to her reputation?
Three different regulatory approaches have emerged that target this stage (along with a number of voluntary efforts like C2PA and CAI). The EU AI Act requires providers to mark outputs in a machine-readable format so they are detectable as artificially generated (Article 50(2)), while deployers must disclose deepfakes (Article 50(4)). The EU doesn’t ban deepfakes—it requires labeling: companies building AI systems must embed technical markers, and the people using those systems must disclose when they create synthetic media.
China’s Administrative Measures (effective September 2025) require clear identification with visible labels that ordinary users see (like “AI生成” meaning “AI-generated”) plus technical identifiers like metadata or watermarks that platforms detect.
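To make “machine-readable marking” concrete, here’s a minimal sketch using Pillow that embeds (and reads back) an “AI-generated” flag as plain PNG metadata. The key names and file paths are mine and purely illustrative, and this is not a real C2PA manifest; real provenance standards cryptographically sign the marker so it can’t be silently altered.

```python
from PIL import Image, PngImagePlugin

def tag_as_synthetic(in_path: str, out_path: str) -> None:
    """Embed a simple machine-readable 'AI-generated' marker in PNG metadata."""
    img = Image.open(in_path)
    info = PngImagePlugin.PngInfo()
    info.add_text("ai_generated", "true")            # hypothetical key name
    info.add_text("generator", "example-model-v1")   # hypothetical value
    img.save(out_path, pnginfo=info)

def read_marker(path: str) -> dict:
    """Return the PNG text chunks, where the marker (if any) lives."""
    return dict(Image.open(path).text)
```

Notice how easy it would be to strip a plain-text marker like this—which previews the enforcement gap we’ll hit in a moment.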
But Denmark and the Netherlands have taken a completely different approach, treating deepfakes as intellectual property violations. Denmark provides 50-year post-mortem protection extending performers’ rights to digital imitations. The Netherlands provides 70-year post-mortem protection applying copyright-contract rules. Under this approach, creating a deepfake of Sarah without permission violates her rights in her own likeness—which makes it not about fraud or harm, but about property.
Bernt Hugenholtz critiques this approach, arguing it commodifies persona and creates a licensing market for violations, when privacy, media, and election law would better fit the actual harms. The harm isn’t that someone “stole” Sarah’s likeness—it’s that they destroyed her reputation.
Is this an effective chokepoint in the mitigation chain? Does it appropriately balance First Amendment concerns against the harms from synthetic media?
Whoever made Sarah’s deepfake could use an open-source tool with no restrictions, access a tool from a country requiring no watermarks, strip out watermarks from a compliant tool, or write custom code. Would EU or China’s labeling requirements stop this? You can regulate commercial AI tools, but you can’t regulate open-source code, academic research, or determined individuals.
Stage 4: Platform Duties—The Dissemination Chokepoint
Even if we can’t stop deepfakes from being created, could we stop them from spreading, by making platforms—like YouTube, X, Facebook, Instagram, TikTok, Truth Social—the chokepoints?
In Sarah’s case, let’s assume that the deepfake is posted to X. Within one hour, 50,000 people have shared it. By hour 12, it has appeared on every major platform. Sarah’s campaign reports it. Now the clock starts ticking:
- Hour 1: 50,000 shares.
- Hour 3: it spreads to YouTube, Facebook, and TikTok.
- Hour 6: Sarah discovers it, files reports.
- Hour 12: 500,000 views.
- Hour 24: 2 million views.
- Hour 48: under some laws, platforms must remove it by now.
- Hour 72: three weeks until election, Sarah polls 8 points down.
The U.S. TAKE IT DOWN Act requires platforms to remove content within 48 hours of notice—but only for “intimate visual depictions,” not Sarah’s political deepfake. California passed election laws requiring platforms to remove election-related deepfakes or face penalties. One of them, AB 2655, the “Defending Democracy from Deepfake Deception Act,” required “large online platforms” to remove “materially deceptive content” during election season based on takedown requests from anyone. Judge Mendez struck it down on Section 230 grounds, finding that the platforms are shielded from liability and “don’t have anything to do with these videos that the state is objecting to.” While there’s ongoing debate about Section 230’s scope, courts have consistently held that it bars state laws that would impose liability on platforms for failing to remove user-generated content.
France’s pending Bill No. 675 would fine users up to €3,750 and platforms up to €50,000 for failing to label AI-altered images. China pairs labeling duty with platform traceability, where unlabeled content triggers either user declaration or a “suspected synthetic” label.
You fight exponential spread with linear enforcement. By the time platforms remove content at the 48-hour mark, millions have already seen it. The harm has happened. And this raises a bigger question: do we trust Instagram, YouTube, X, and Truth Social to decide what’s true enough to be shared?
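Here’s a toy model of that mismatch (the starting point and doubling rate are illustrative, not a claim about Sarah’s actual video): views compound every few hours while the takedown clock runs on a fixed 48-hour deadline.

```python
# Toy model: exponential spread vs. a fixed 48-hour takedown deadline.
views, doubling_hours = 50_000, 6  # illustrative starting point and growth rate
for hour in range(0, 49, doubling_hours):
    marker = "  <- takedown deadline under a 48-hour rule" if hour == 48 else ""
    print(f"hour {hour:2d}: ~{views:,} views{marker}")
    views *= 2
```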
Stage 5: Detection and Verification—Disclosure Requirements
Instead of removing deepfakes, just label them. Let people know what they’re watching is synthetic. Trust voters to make informed decisions.
In Sarah’s case, imagine the video included a prominent disclaimer: “This video contains AI-generated content.” Would 2 million viewers react differently?
Utah’s SB 131 requires audio to state specific words at the beginning and end, with visuals displaying labels “during the segments that include synthetic media.” Indiana’s HB 1133 requires campaign communications with fabricated media to include distinct disclaimers. Michigan’s HB 5141 requires “wholly or largely” AI-generated political ads to carry clear disclosure. New York’s S 9678 creates a disclosure duty for deceptive political media, while Colorado’s HB24-1147 prohibits deepfakes about political candidates close to an election unless there is a clear and conspicuous disclosure that the content is synthetic media.
The EU AI Act (Art. 50(4)) requires deployer/user disclosure that content is artificially created or manipulated but doesn’t ban deepfakes. China requires visible plus invisible labels, bans removal of labels, and uses a “suspected synthetic” fallback if source is unclear.
But the devil is in the details. A small disclaimer in the corner? Most viewers don’t notice it, and the video still goes viral. A big warning before playback? Fewer people will watch, but those who do may think, “There’s something true here they don’t want you to see.” Conspiracy theories flourish. And can you really tell when a deepfake video you’ve watched has left you with an unconscious bias?
And disclosure requirements face their own First Amendment challenges. In NIFLA v. Becerra (2018), the Supreme Court held that compelled speech—forcing speakers to convey messages they wouldn’t otherwise convey—is subject to scrutiny, though the level depends on whether the requirement is content-based or content-neutral.
Which means that effective disclosures must be prominent enough to work, but prominent disclosures may unconstitutionally burden expression. A small disclaimer gets ignored. A large disclaimer suppresses the speech itself.
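To see that trade-off in the pixels themselves, here’s a minimal Pillow sketch (the file names and wording are mine, purely illustrative) that burns a visible banner onto a single frame. Shrink the rectangle into a corner and most viewers will miss it; stretch it across the frame and it starts to compete with the underlying speech.

```python
from PIL import Image, ImageDraw

# Burn a visible disclosure banner onto one video frame (illustrative only).
frame = Image.open("frame.png")
draw = ImageDraw.Draw(frame)
banner_height = 40  # larger for prominence, smaller for subtlety
draw.rectangle([0, 0, frame.width, banner_height], fill="black")
draw.text((10, 12), "This video contains AI-generated content", fill="white")
frame.save("frame_labeled.png")
```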
In my live class, one student also pointed out a paradox: once we mandate disclosures, won’t bad actors use false disclosures to discredit real videos?
Stage 6: Victim Response—Remedies and Takedown
When all else fails, give victims tools to fight back. Let them sue, demand takedowns, get injunctions.
Let’s assume that after 72 hours, Sarah’s campaign team tracks down who posted it. There are three weeks left until election. She’s eight points down. What legal remedy helps now?
The federal TAKE IT DOWN Act defines “digital forgery” as intimate visual depictions created through AI, requires platforms to remove within 48 hours, and allows FTC enforcement—but once again, it only covers “intimate visual depictions,” not Sarah’s political deepfake. Some states try to help. Idaho’s HB 664 (FAIR Elections Act) allows injunctive relief against deceptive election deepfakes. Florida’s CS/HB 919 mandates disclaimers, creates criminal and civil penalties, and provides expedited hearings. But “expedited” still means days or weeks, not hours.
The First Amendment doesn’t prohibit all civil remedies—defamation law has coexisted with free speech for centuries. But defamation law requires proof of falsity, fault, and harm. Creating new statutory causes of action for “deepfakes” that bypass these traditional requirements risks creating an end-run around First Amendment protections.
As David Loy of the First Amendment Coalition noted when California’s laws were enacted: “If something is truly defamatory, there’s a whole body of law and established legal standards for how to prove a claim for defamation consistent with the First Amendment. The government is not free to create new categories of speech outside the First Amendment.”
This is crucial: Laws that create expedited takedown procedures or lower the proof burden for deepfake claims may unconstitutionally chill protected speech. Satire, parody, and political commentary—all core First Amendment activities—could be swept up in broad anti-deepfake laws.
And it’s true that prosecutors already have tools without new deepfake laws, such as wire fraud statutes for deepfake-based scams, stalking laws for harassment campaigns, the Computer Fraud and Abuse Act for unauthorized access, and FTC consumer protection authority for unfair or deceptive practices. Could any help Sarah? Maybe. But they all require identifying the perpetrator, building a case, and going through a legal process.
By the time Sarah gets a court order, the election is over. Legal remedies require time. Deepfakes require speed. The mismatch is fundamental.
III. Sarah’s Election Day—The Uncomfortable Tension
It’s now election day. Sarah lost by 6 points. Exit polls show the deepfake video was a major factor—32% of voters said it influenced their decision not to vote for her.
Looking back at all six stages, let’s be honest with ourselves. Which stage could have actually prevented Sarah’s harm?
- Stage 1 (Criminal deterrence): The person who made it used an anonymous account and VPN. They were never identified, and never prosecuted.
- Stage 2 (Organizational preparedness): Sarah’s campaign didn’t have the budget for 24/7 monitoring.
- Stage 3 (Tool restrictions): The deepfake was made with open-source software that had no restrictions.
- Stage 4 (Platform duties): By the time platforms removed it, 2 million people had seen it.
- Stage 5 (Disclosures): The person who made it didn’t add any labels.
- Stage 6 (Victim remedies): Sarah filed for emergency relief. The hearing was scheduled for two days after the election.
Given this, in the live class I asked your Duke Law counterparts: if you had to pick ONE stage in which to invest all of our governance efforts, which would it be, and why? Before I tell you how they voted, what do you think?
I’d probably pick the last answer. And if that’s where you’ve landed, you’re thinking about this the way I am. Your Duke Law counterparts? I asked them to select the stage they’d focus on. The majority chose to intervene at Stage 3, while several chose Stages 2, 4, and 5.
This leaves us with an uncomfortable tension. If mathematical constraints make hallucinations inevitable, and constitutional constraints make comprehensive regulation nearly impossible, how do we protect people like Sarah?
The answer might be what Justice Louis Brandeis wrote in 1927: “If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.”
More speech. Not less. Even when it’s insufficient. Even when it’s slow. Even when Sarah loses. Because the alternative—empowering government to decide what speech is “true” enough to be shared—creates dangers that may outlast any single election.
Your Homework
- The Thought Experiment: You’re appointed to draft federal deepfake legislation. Draft the core requirements of your law. Which of the six stages do you target? What exceptions do you include? How do you handle the speed mismatch? You have three constraints:
- Constitutional: Must survive First Amendment scrutiny
- Effective: Must actually reduce harm, not just create compliance paperwork
- Balanced: Must not chill beneficial uses of AI
If you’re a paid subscriber (thank you!), share your approach in the comments. Let’s crowdsource better governance together.
- Click “like” if you liked this post, share it with one person to bring more people into the conversation, and upgrade to paid if you learned something new and want to buy me a cup of coffee.
Class dismissed.
The entire class lecture is above, but for those of you who found today’s lecture valuable, and want to buy me a cup of coffee (THANK YOU!), or who want to go deeper in the class, the class readings, video assignments, and virtual chat-based office-hours details are below.