
This past week, Google released their new Gemini 2.5 Flash Image model (aka “nano banana”), now integrated into the Gemini app. People have been showing off pretty stunning results swapping faces, colors, themes and more. The character consistency across images - you can alter outfits or backgrounds without distorting faces - enables more sophisticated applications, but it also makes the model more useful for fakery, deep or otherwise. It’s fast too: innocuously “editing” this 2019 news image took me about 30 seconds:

Google adds a visible watermark and an invisible SynthID watermark to the images as safety measures. Watermarking is far from perfect, however. Researchers at the University of Waterloo presented a paper at the IEEE Symposium on Security and Privacy in May 2025, showing a tool they built that can circumvent watermarks, including SynthID. Policing creation will be hard or impossible as tools like Google’s gain inevitable traction. So as I’ve said a few times previously, a lot will depend on how well distribution platforms can detect, and how exactly they treat, confirmed or suspected AI-doctored content.
To wit - Senator Amy Klobuchar (pictured above) wrote an opinion piece in the New York Times last week about experiencing the dangers of AI deepfakes firsthand. After a fabricated video of her making vulgar comments about actress Sydney Sweeney went viral on multiple platforms, Klobuchar argues that current legal protections are inadequate. She advocates for stronger federal legislation via the bipartisan NO FAKES Act, which she is co-sponsoring. It would give Americans the right to demand social media companies remove deepfakes of their likeness while preserving First Amendment protections. She also calls for labeling of AI-generated content. “The internet has an endless appetite for flashy, controversial content that stokes anger,” she adds.
GovTrack.us gives the bill only a 5% chance of being enacted in its current form, but it’s worth looking at some of the bill’s details and some recommendations for improving it!
Very Large Platforms can and should do more
Before I explain more about the bill and what might need to change, I’ll flag that the EFF has raised the concern that NO FAKES compliance costs would be a significant burden on new and smaller online services. It is plausible that the Act could be altered to bifurcate requirements by platform size, similar to how the EU's Digital Services Act (DSA) designates "Very Large Online Platforms" (VLOPs) - those with over 45 million monthly active users in the EU. Similarly, under NO FAKES, larger platforms could be subject to the full range of compliance obligations, including greater transparency and efficacy reporting, with smaller firms subject to a lighter set of requirements.
“Nurture Originals, Foster Art, and Keep Entertainment Safe”
NO FAKES - S.1367 in the Senate (and H.R.2794) - would create a federal right to control digital replicas of a person’s voice and visual likeness.
At its core, the bill establishes a “digital replication right.” It classifies the right as property, non‑assignable during life, and licensable (exclusive or non‑exclusive). The bill’s definition of a digital replica is a newly created, computer‑generated, highly realistic representation readily identifiable as an individual’s voice or visual likeness, either in a work where the person did not appear, or as a materially altered version of a real performance. It expressly excludes authorized remixing, mastering, and similar uses permitted by the copyright holder.
It’s unlawful to make available an unauthorized digital replica, or to distribute a product/service primarily designed or marketed to create unauthorized replicas of specifically identified people. Platforms face liability only after they receive a compliant notice or court order, or willfully avoid such knowledge. Non‑platform actors must have actual knowledge (or willful avoidance) that the replica is unauthorized.
To safeguard speech, NO FAKES excludes bona fide news, public affairs, sports, and documentary/historical/biographical uses (with conditions), as well as commentary, criticism, scholarship, satire, and parody, and fleeting/negligible uses. Disclaimers (“this is AI”) are not a defense.
The bill pairs rights with platform safe harbors. Providers that adopt a repeat‑violator policy and, upon valid notice, remove or disable access to flagged material as soon as is technically and practically feasible qualify for protection. For user‑generated video/audio services and digital music providers, the safe harbor also requires a limited stay‑down: removing matching re‑uploads that share a digital fingerprint of the notified work (including future uploads that match). Platforms must designate an agent with the Copyright Office and notify both the right holder and uploader of takedowns.
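To make the stay‑down mechanics concrete, here’s a minimal sketch (with a made‑up fingerprint format and match threshold - not any platform’s actual system) of how new uploads might be checked against fingerprints from prior valid notices. Real services use far more robust audio/video fingerprinting, but the matching loop looks roughly like this:

```python
# Sketch of "stay-down" matching: compare a fingerprint of each new upload
# against fingerprints from prior valid notices. The fingerprint here is a
# stand-in 64-bit vector; production systems use robust media fingerprinting.
from dataclasses import dataclass

@dataclass
class Notice:
    notice_id: str
    fingerprint: int  # e.g. a 64-bit perceptual hash of the noticed work

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def matches_noticed_work(upload_fp: int, notices: list[Notice],
                         max_distance: int = 6) -> Notice | None:
    """Return the first notice whose fingerprint is within the match threshold."""
    for notice in notices:
        if hamming_distance(upload_fp, notice.fingerprint) <= max_distance:
            return notice
    return None

# A re-upload that differs by a couple of bits still matches.
notices = [Notice("N-001", 0b1011001011110000101100101111000010110010111100001011001011110000)]
reupload_fp = notices[0].fingerprint ^ 0b101  # slightly altered copy
hit = matches_noticed_work(reupload_fp, notices)
print(hit.notice_id if hit else "no match")   # -> N-001
```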
Courts can grant injunctions, award pretty hefty statutory damages of $5,000+ per instance/work (or actual damages plus profits), and add punitive damages for willful misconduct. The bill preempts overlapping state claims about digital replicas in expressive works (with exceptions for pre‑existing state laws, and for statutes targeting sexual or election‑related deepfakes).
What I Like (but could be tightened)
- A federal, output‑focused right against unauthorized “digital replicas” is the right backbone, with two refinements: (a) narrow digital replica to a realistic, machine‑assisted depiction that an ordinary viewer/listener would likely mistake for the real person, and (b) expressly exclude caricature, obvious stylization, and other non‑photorealistic depictions.
- National uniformity is welcome clarity: one set of federal rules instead of a patchwork of 50 state laws would be helpful, but only if Congress keeps it narrow. Federal law should override state law only where it specifically conflicts on AI-generated faces and voices. Existing state laws about using someone's real photo in ads, fake celebrity endorsements, or other non-AI privacy violations should stay exactly as they are, through an explicit "savings clause" that protects state authority over traditional cases.
- Explicit First Amendment carve‑outs are essential guardrails, yet they should be sharpened by codifying a clear safe harbor for expressive works (news, documentary, commentary, criticism, parody/satire, biography), adding explicit protections for incidental and de minimis uses, and asking whether the news content serves a legitimate public interest and isn't deliberately misleading.
- Intermediary safe harbors tied to notice‑and‑counter‑notice are the right compliance architecture, but takedown requests should require real identification of who's complaining, specific links to the problematic content, timestamped proof, and a sworn statement (see the hypothetical notice schema sketched after this list). Anyone whose content gets removed must have a clear way to dispute it, platforms must restore content quickly if the dispute seems valid, and there should be penalties for people who knowingly file false complaints. Platforms should also publish detailed reports showing how many takedowns they receive, how often they act on them, and how many are reversed on appeal.
- Modest, bounded protection for digital replicas after a person’s death is a reasonable compromise, as long as it is limited to commercial advertising/endorsement uses, the term is clarified and shortened, and estates must prove clear authority and identity before enforcement.
- Research and infrastructure safe harbors are smart for firms that are building and testing tools to prevent, trace, or detect deepfakes (e.g., content credentials like C2PA, watermarking, hashing/fingerprinting, authenticity logs, and deepfake detectors), but it should be made clearer that these carve‑outs don’t excuse distribution of harmful replicas (posting, advertising, or sending them to targets), which should void the safe-harbor protection. Naturally, large technology companies that operate on multiple sides of this problem will have to tread carefully.
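Here’s a hypothetical sketch of what a well‑formed takedown notice could look like as structured data, per the notice‑and‑counter‑notice bullet above. The field names are mine, not the bill’s, and a real intake form would need more (jurisdiction, authority documents, and so on):

```python
# Hypothetical schema for a compliant takedown notice: verified claimant
# identity, specific URLs, timestamped proof, and a sworn statement.
# Field names are illustrative only, not drawn from the NO FAKES text.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TakedownNotice:
    claimant_name: str            # verified legal name of the rights holder
    claimant_contact: str         # address for service of any counter-notice
    authority: str                # "self", "licensed agent", "executor", etc.
    content_urls: list[str]       # specific links to the allegedly unlawful items
    description: str              # how the material replicates the claimant's likeness
    observed_at: datetime         # timestamped proof of when it was found
    sworn_statement: bool         # good-faith attestation under penalty of perjury
    evidence_urls: list[str] = field(default_factory=list)  # screenshots, originals

    def is_facially_complete(self) -> bool:
        """Cheap completeness check a platform might run before human review."""
        return bool(self.claimant_name and self.content_urls and self.sworn_statement)

notice = TakedownNotice(
    claimant_name="Jane Doe",
    claimant_contact="jane@example.com",
    authority="self",
    content_urls=["https://example.com/video/123"],
    description="AI-generated video replicating my voice and face",
    observed_at=datetime(2025, 9, 1, 14, 30),
    sworn_statement=True,
)
print(notice.is_facially_complete())  # -> True
```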
What I’d Change (more substantive fixes, worthy of discussion)
- Stand up a real claimant‑verification standard. Bad actors will make fake claims. We’ll need KYC and other identity/authority verification. Platforms have presumably learned some of these lessons already. On Amazon’s Brand Registry, for example, bad actors learned to file or acquire registrations for brands already in use by others on the platform and then wield those rights with Amazon to exclude competitors or extract concessions. Any central likeness registry or platform-mediated process is equally vulnerable unless it requires high‑assurance identity proofing, documented authority - especially for the deceased (e.g., power of attorney, executor letters) - auditable provenance of claims, and penalties for fraud (account bans and robust referrals to regulators).
- Create a duty of care for model, tool, and API providers, not just end‑users. Providers should implement reasonable measures: content‑credential support by default; API keys with rate‑limits; anomaly/abuse monitoring; log retention for potential auditing; rapid key revocation; and human escalation paths. All tool providers need to do “Know‑your‑customer” or “Know-your-developer” work, and must implement risk limits or gates for developers wanting to use bulk or scaled features (a sketch of such a gate appears after this list).
- Draw a bright line between training and outputs. Make it clear that training on lawfully acquired data isn’t a “replica”, and that liability hinges on outputs that are likely to be mistaken for a real person. On the latter, require visible synthetic‑media disclosures when a real person is depicted, with reasonable UX prominence. As an aside, I’d love to see large platforms like Meta or Google regularly publish testing or surveys of consumer comprehension of synthetic-media labeling, perhaps alongside or within their ‘transparency reports’!
- Tighten the scope of preemption to the digital replica problem. Preempt only conflicting state rules on digital replicas; preserve state right‑of‑publicity and privacy torts for non‑digital conduct. Include robust federal anti‑SLAPP provisions allowing defendants to quickly dismiss meritless lawsuits and recover attorney fees. This can deter frivolous lawsuits claiming digital replica violations when the use might actually be protected speech.
- Fix consent & contracting to prevent coercive or perpetual assignments of identity. Prevent people from being tricked or pressured into signing away their digital identity forever. Ban companies from claiming they own someone's face or voice just because they hired them, and require contracts about digital likenesses to be written in plain English with mandatory waiting periods before they take effect. For entertainment industry deals, there should be certain basic protections that can't be waived no matter what someone agrees to sign. We need to protect child performers and other vulnerable workers who might not understand what they're agreeing to, or have little choice in the matter. This would also support unions in negotiating industry-wide standards for how AI versions of people can be used, rather than leaving individual workers to fend for themselves against powerful studios and labels.
- Strengthen takedown process quality to handle abuse at scale. Current takedown systems (like YouTube's) are easily gamed and often wrong; the goal is to make them more accurate, transparent, and fair - especially important as AI makes it easier both to create problematic content and to abuse takedown processes at massive scale. This is one of the more challenging areas, so I expect it to be debated by trust & safety professionals. Some ideas here include:
- Instead of letting people file vague complaints, require specific information and proof when claiming someone used their likeness illegally.
- Don't automatically block or fingerprint content just because someone complained. Only use automated systems to prevent re-uploads after a human or proper process has actually determined the content was problematic.
- Set strict time limits for how quickly platforms must review appeals and restore wrongly removed content.
- Limit how much platforms can rely on bots to make decisions. Force human review for a certain percentage of cases (a percentage that could be adjusted over time based on industry-wide error rates, etc.).
- Give both accusers and creators clear interfaces/dashboards to track their cases and see what's happening as the process plays out.
- Require platforms to publicly report on bad actors who exploit the system - like people who hack accounts to file fake takedowns, serial abusers of the complaint process, or coordinated campaigns to manipulate the system.
- Disclosure & labeling that actually helps consumers. Require clear, proximate labels for synthetic depictions in ads and paid promotion. Allow platform UX experimentation but set a minimum effectiveness bar (e.g., visibility and persistence standards) and, as mentioned before, require the “very large” platforms to conduct and publish testing of label comprehension and effectiveness among real users, including vulnerable sub-populations most at risk, like the elderly.
- Resource the system, measure it, and standardize it. Give the FTC and Justice Department proper funding to handle these cases, and create a special help desk for small creators who can't afford lawyers. Require yearly reports on how well the system is working - tracking how often it gets things wrong, how long cases take, and how much it reduces lawsuits. Build in automatic expiration dates so Congress has to regularly review and fix problems. Allow platforms to share fingerprints of confirmed illegal AI replicas (with privacy protections) so they can automatically catch the same content elsewhere, similar to how they currently share hashes of known child sexual abuse imagery via PhotoDNA (a small sketch of this follows below).
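On that last point, here’s a tiny sketch of the sharing idea, assuming platforms exchange only hashes of confirmed violating replicas (never the media itself). I’ve used an exact SHA‑256 hash for simplicity; a real system would use a perceptual fingerprint, as in the stay‑down sketch earlier, so near‑duplicates also match:

```python
# Sketch of cross-platform fingerprint sharing: platforms contribute hashes
# of confirmed violating replicas to a shared list and check new uploads
# against it. Names and structure are hypothetical, not PhotoDNA's design.
import hashlib

class SharedReplicaHashList:
    def __init__(self):
        self._hashes: dict[str, str] = {}  # content hash -> case reference

    def contribute(self, media_bytes: bytes, case_ref: str) -> None:
        """A platform adds the hash of a replica confirmed through due process."""
        self._hashes[hashlib.sha256(media_bytes).hexdigest()] = case_ref

    def lookup(self, media_bytes: bytes) -> str | None:
        """Another platform checks an upload; returns the case reference on a hit."""
        return self._hashes.get(hashlib.sha256(media_bytes).hexdigest())

shared = SharedReplicaHashList()
shared.contribute(b"<confirmed replica bytes>", "CASE-2025-0042")
print(shared.lookup(b"<confirmed replica bytes>"))  # -> CASE-2025-0042
print(shared.lookup(b"<unrelated upload>"))         # -> None
```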
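And circling back to the duty‑of‑care bullet above, here’s a minimal, hypothetical sketch of a provider‑side gate on a generation API: per‑key rate limits, an audit log retained for later review, and rapid key revocation. The class and thresholds are mine, not any vendor’s actual API:

```python
# Hypothetical provider-side gate: per-key rate limits, abuse/audit logging,
# and rapid key revocation. Thresholds and names are illustrative only.
import time
from collections import defaultdict, deque

class ApiKeyGate:
    def __init__(self, requests_per_hour: int = 100):
        self.requests_per_hour = requests_per_hour
        self.revoked: set[str] = set()
        self.history: dict[str, deque] = defaultdict(deque)  # key -> request timestamps
        self.audit_log: list[tuple[float, str, str]] = []    # (time, key, event)

    def revoke(self, api_key: str, reason: str) -> None:
        self.revoked.add(api_key)
        self.audit_log.append((time.time(), api_key, f"revoked: {reason}"))

    def allow_request(self, api_key: str) -> bool:
        now = time.time()
        if api_key in self.revoked:
            self.audit_log.append((now, api_key, "denied: key revoked"))
            return False
        window = self.history[api_key]
        while window and now - window[0] > 3600:  # drop requests older than one hour
            window.popleft()
        if len(window) >= self.requests_per_hour:
            self.audit_log.append((now, api_key, "denied: rate limit"))
            return False  # candidate for anomaly/abuse review and human escalation
        window.append(now)
        return True

gate = ApiKeyGate(requests_per_hour=2)
print([gate.allow_request("key-abc") for _ in range(3)])  # -> [True, True, False]
gate.revoke("key-abc", "bulk unauthorized replica generation reported")
print(gate.allow_request("key-abc"))                      # -> False
```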
Sorry this post was so long, but it’s an important and challenging problem and I have no doubt we will all spend a lot more time discussing it! I’d love to hear your feedback and thoughts.