A Primer on Cross-Border Speech Regulation and the EU’s Digital Services Act

Some U.S. politicians have recently characterized European platform and social media regulation laws as “censorship” of speech in the U.S. If this claim were true, it would be a very big deal. As someone with a twenty-year history of resisting cross-border Internet speech suppression demands, I would be up in arms.
It isn’t true, though. This blog post explains why. It starts with a big-picture overview of how Internet lawyers and online platforms deal with varying national speech laws. Then it reviews the EU law currently at the center of this controversy, the Digital Services Act (DSA). Finally, it goes over some actual examples of real or attempted cross-border speech restrictions. Super wonks might find nothing new in this summary, but newcomers to cross-border speech regulation will likely find it a helpful touchstone and guide. If you’re neither, though, all you really need to know is this: Nothing about the EU’s Digital Services Act (DSA) requires platforms to change the speech that American users can see and share online.

The Big Picture

The DSA is a major overhaul to EU platform regulation. It has some parts that I like, including clearly defined platform immunities and unprecedented rights for platforms’ users to understand and challenge content removal decisions. It has parts I worry about, including grants of regulatory authority that could have been better drafted to prevent abuse. Still, the EU equivalent of constitutional law should ensure that regulators can’t use the DSA to suppress lawful but state-disapproved speech. Recent political signs are also encouraging. When an EU Commissioner actually tried to abuse power – by falsely claiming that the DSA gave him authority to restrict legal speech on platforms like X – he swiftly found himself without a job.
EU Executive Vice President Henna Virkkunen recently explained the DSA’s territorial reach in a letter to U.S. Representative Jim Jordan. The law, she said, “applies exclusively within the European Union to all services provided therein” and “has no extraterritorial jurisdiction in the U.S. or any other non-EU country.” (Bold-formatted text here and throughout this post is my addition.) I’ll dig deeper into the DSA’s statutory language below, but this is the bottom line.
Using national law to restrict what people see inside a country’s territory is normal. Every country does it, including the U.S. This territory-based restriction on speech is the basic framework that Internet law has settled on – sometimes through explicit legislation or court rulings, more often by unacknowledged convention.
In practice, big platforms typically geo-block users within a country from seeing content that is known to be illegal in that country. This approach has been endorsed by the EU’s highest court, and is also a standard commercial practice for things like territorially limited copyright licenses. Users who truly want to see material that is blocked in their country can generally use VPNs to do that, for better or for worse. The same VPN tools used by people seeking seriously illegal content and by your friends and family members who want to see sports or TV dramas from other countries are also essential for researchers, business travelers concerned about security, and persecuted groups in authoritarian regimes. The law about VPNs and tools to bypass geo-blocks is complicated and evolving. But the European Court of Human Rights has said that Russia violated Internet users’ free expression rights by trying to prevent them from getting access to VPNs.
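For readers who want a concrete picture of how geo-blocking works, here is a minimal sketch of the basic pattern: the same piece of content is served or withheld depending on the viewer’s inferred country, rather than being deleted globally. Every name here (the blocklist, the lookup table, the functions) is invented for illustration; real platforms use commercial IP-geolocation databases and far more elaborate policy engines.

```python
# Hypothetical sketch of per-country geo-blocking (all names invented).
# Illustrates the pattern only: content stays up globally but is withheld
# from viewers whose IP address maps to a country where it is illegal.

# Per-country blocklists: content IDs known to be illegal in that country.
BLOCKLIST = {
    "DE": {"post-123"},                 # e.g. illegal under German law
    "FR": {"post-123", "post-456"},
}

def country_from_ip(ip_address: str) -> str:
    """Stand-in for an IP-geolocation lookup (normally a GeoIP database)."""
    fake_geo_table = {"203.0.113.5": "DE", "198.51.100.7": "US"}
    return fake_geo_table.get(ip_address, "UNKNOWN")

def is_visible(content_id: str, ip_address: str) -> bool:
    """Serve content unless it is blocklisted for the viewer's country."""
    country = country_from_ip(ip_address)
    return content_id not in BLOCKLIST.get(country, set())

# The same post is withheld from a German viewer but visible to a U.S. one.
print(is_visible("post-123", "203.0.113.5"))   # German IP -> False
print(is_visible("post-123", "198.51.100.7"))  # U.S. IP -> True
```

This also shows why VPNs defeat geo-blocking: the block keys off the viewer’s apparent IP address, so routing traffic through another country changes the answer.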
Every country, including the U.S., has its own laws about what speech is illegal. The U.S. bans some speech for being defamatory, fraudulent, or too likely to incite violence, for example. The details of these speech-restricting laws vary considerably across borders. In the U.S., different states have their own defamation laws. In the EU, Member State laws diverge when it comes to Holocaust denial, publication of Mein Kampf, the proper balance between news reporting and individual privacy rights, and more.
The U.S. has historically been more permissive about speech than most countries, but that seems to be changing. Over the past century, U.S. courts interpreted the First Amendment to protect a wide array of expression – like hate speech, falsehoods, and invasion of privacy – that might be illegal in other countries. The U.S. isn’t always on the vanguard of speech rights, though. At times Sweden has allowed more pornography, for example. And Brazil can be more permissive of parodies. The U.S. in 2025 has also taken a sharp turn toward increasing state restrictions on speech. Major media organizations have settled frivolous defamation lawsuits brought by President Trump, paying out millions of dollars for speech and news reporting that was almost certainly legal. News media and advertisers have also given up on First Amendment rights in order to secure merger approval from the FTC and FCC.
International human rights law does a lot of hidden work to smooth over countries’ inevitable differences in speech law. We may not like each other’s choices, but international free expression lawyers generally don’t bother fighting much about them as long as a country’s rules are democratically enacted, reviewable by courts for any violations of rights, and fall within the range of rules permitted under international human rights law. Plenty of state censorship violates even the more flexible standards of human rights law. International free speech advocates often reserve their firepower for that.
People around the world complain that platforms censor their speech at the behest of the U.S. government. The U.S. mostly doesn’t have laws that explicitly require platforms to limit speech in other countries, though sanctions or export control laws like those administered by the Office of Foreign Assets Control are possible examples, and ICE engaged in some troubling domain name seizures in the 2010s. But U.S.-based platforms often globally enforce U.S. laws anyway. That practice is likely backed by assumptions that U.S. courts might, if asked, say that our laws governing things like child abuse content or copyright really do require global compliance by companies that are based in this country. Free speech activists in other countries often complain that their own lawful speech has been suppressed under U.S. laws, particularly the Digital Millennium Copyright Act. Experienced European lawyers also often have complaints about the U.S.’s historically aggressive extraterritorial enforcement of non-speech laws, like antitrust laws.
Internet jurisdiction law is full of interesting and unresolved questions about national courts’ and lawmakers’ authority to regulate speech outside their borders. I’ll say more about this in the final section of the post. Few or none of these interesting edge-case questions are about the DSA, though.

The EU Digital Services Act

The EU’s Digital Services Act does not currently require platforms to suppress speech outside of EU borders. If hypothetical future regulators tried to interpret the law differently, European courts would have reason to stop them. This section will go into a potentially tedious degree of detail about what the DSA actually says.
To be clear, however, it is possible – and has always been possible – that individual EU countries might try to mandate global content removals under their own laws. Examples have to date been rare, but certainly not unknown. Two such cases have gone to the EU’s highest court, the Court of Justice of the EU (CJEU). It ruled that EU-level laws – including the DSA’s predecessor, the eCommerce Directive – did not require global removal of platform content, but also did not prevent individual EU countries from issuing global takedown orders to the extent consistent with national and international law.
The DSA is relatively grabby about asserting jurisdiction and saying that the DSA applies to companies based in other countries. But its compliance obligations for those companies are only for their services in the EU. The DSA’s language is fairly dense EU legalese, but the point is communicated relatively clearly in Recital 7: “rules should apply to providers of intermediary services irrespective of their place of establishment or their location, in so far as they offer services in the Union, as evidenced by a substantial connection to the Union.”
In the context of varying laws between EU Member States, DSA Recital 36 says that “[t]he territorial scope of such orders … should not exceed what is strictly necessary[.]” The Commission has explained that this means “Where a content is illegal only in a given Member State, as a general rule it should only be removed in the territory where it is illegal.” (That hedging “as a general rule” language leaves room for the Member State authority that the CJEU said already existed.)
The Articles of the DSA say the same thing at greater length.
  • “This Regulation lays down harmonised rules on the provision of intermediary services in the internal market” (meaning the market area of the EU). Art. 1.2.
  • “This Regulation shall apply to intermediary services offered to recipients” when those recipients are commercially established in or “located in” the EU. Art. 2.1.
    • A service is “offered” in the EU if it enables users in the EU to use an online service that “has a substantial connection to” the EU. Art. 3(d).
    • The service has a “substantial connection” to the EU either (1) because it has a place of commercial establishment in the EU, or (2) based on “factual criteria” such as having a significant number of users in the EU or actively targeting its service toward the EU or a particular Member State. Art. 3(e).
    • If a platform has a service version that clearly is “targeted” to the EU or a Member State (like a Lithuanian version of YouTube), it will be much easier to argue that the company’s other service versions are not targeted there. Indicators of nationally “targeting” a service can include use of a “language or a currency generally used in that Member State,” the “possibility of ordering products or services, or the use of a relevant top-level domain” for that country (like .fr for France), “provision of local advertising or advertising in a language used in that Member State, or from the handling of customer relations” including offering customer service in the local language. But the “mere technical accessibility of a website” within the EU is not, on its own, enough to establish a substantial connection. Recital 8.
One of the main concerns identified by recent DSA critics is one I have myself raised in the past: the worry that platforms will voluntarily change their global speech rules to accommodate powerful regulators in particular countries. As a conspicuous recent example of such political appeasement, Meta loudly announced a shift to speech rules preferred by President Trump in January 2025.
When a platform globally implements speech rules, though, that is typically for economic reasons – not political ones. Platform operations are simpler, cheaper, and face lower risk of errors and technical failures if the company can use the same content moderation systems everywhere in the world. That is why major platforms’ speech rules under Terms of Service or Community Guidelines are almost always global. It is also why platforms may have economic reason to expand their own speech prohibitions, rather than pay lawyers to identify which content violates national laws around the world.
Historically, U.S. platforms’ rules often exported American speech norms and legal standards, to the dismay of many users in other countries. Today, Europe may be, as a practical matter, a greater net exporter of speech rules. But that’s not because Europe is requiring anything, or trying to use its laws to shape speech in other places. It’s just the unsurprising result of market forces and technical and economic decisions made within platforms.
Like any law, the DSA could be interpreted more aggressively by future enforcers – particularly if they believed that content like incitement to violence outside of Europe was leading to greater risks inside the region. Both the letter of the DSA and Virkkunen’s strong recent statements should make that harder, though. And, just as in the U.S., such an interpretation would be contestable in court based on both free expression rights and standards of international comity.

Real and Attempted Cross-Border Speech Restrictions

Every now and then, a regulator or national court asserts that it does have authority to demand global takedowns of speech. This is relatively rare, mostly because courts and regulators don’t want to be on the other side of that issue – they don’t want another country trying to regulate speech inside their borders. The jurisdictional doctrine of “comity” is the formal legal tool courts use to acknowledge the sovereignty and divergent laws of other countries. If courts decide that comity concerns don’t tie their hands, though, things can get very interesting.
Some of the hardest cross-border issues in this regard are not about censorship, but surveillance. Important litigation, legislation, and international negotiations have focused on whether law enforcement can demand user data from platforms in other countries. Those legal issues are dissimilar in subtle but important ways from the ones about speech and censorship. (If that interests you, check out pages 26-31 in the proceedings of my Stanford conference on these topics.)
  • Brazil: The most conspicuous global takedown orders in the past year have come from Brazil. It can be difficult to untangle the substantive disputes over speech in these cases – which often involve content related to former Brazilian President Jair Bolsonaro, and to upheaval and serious political violence in the country – from questions about courts’ formal authority to issue the orders. Several platforms are said to have received global takedown orders from Brazil. Critics of those orders argue – legitimately, in my opinion – that the procedure by which these orders were issued is irregular and would not meet due process standards in many countries, given the unusual amount of authority exercised by a single Supreme Court Justice, Alexandre de Moraes. At the same time, vociferous criticisms of his orders from U.S. politicians often appear to be motivated by political alignment with Bolsonaro and his supporters. Brazilian developments include:
    • U.S. stripping visas from Brazilian Supreme Court justices who had sided with de Moraes, but not other justices
    • Trump Media and Rumble attempted lawsuit against Justice de Moraes in U.S. federal court (complaint, my social media thread, dismissal)
    • Brazil’s protracted showdown with X (unclear if/how much this related to compliance outside Brazil – the main issue was compliance inside Brazil)
    • Google lawsuit with LatAm Airlines. The airline obtained a global takedown order from a lower Brazilian court in a dubious-sounding defamation case; Google is challenging it in the U.S.
  • High courts in the EU and Canada: Three of the most important settled cases are a Canadian Supreme Court ruling upholding a global injunction directed at Google, and two Court of Justice of the EU cases holding that EU laws did not prevent countries in the EU from issuing global injunctions, if those injunctions were consistent with national and international law.
    • Google v. Equustek (Canada): The Canadian Supreme Court in 2017 affirmed a global takedown order to Google in this trade secret case. The Court rebuffed concerns about consequences for speech, reasoning that the intellectual property dispute at issue didn’t affect speech and that its legal resolution was likely to be similar around the world. Plaintiffs should not, it said, have to go to court in every country. Importantly, in its comity analysis, the Court said that if Google could establish a hard conflict – meaning that complying with Canada’s order would force Google to violate another country’s laws – then Canadian courts might amend the global order. Google subsequently obtained a U.S. court order affirming that it had no obligation to remove the content under U.S. law. But since U.S. law did not prohibit removing the content, a lower Canadian court concluded that no hard conflict existed, and declined to amend the global order. (I had a role in this case at an earlier stage as a Google lawyer.)
    • Facebook Ireland v. Glawischnig-Piesczek (EU): This 2019 case involved a proposed global takedown of speech on Facebook under Austrian defamation law. The case was noteworthy in part because the speech at issue – calling a prominent politician a “corrupt oaf” and a “traitor” – would be legal in many other countries, including some European countries. (My article on the case has more detail.) Legally, the only question to the CJEU was whether the eCommerce Directive – the DSA’s predecessor law – prohibited global takedown orders. The Court said it did not. Thus, the possibility of a global injunction would depend on principles of international law and Austria’s national law. As far as I know, Austrian courts ultimately did not issue a global order in this case, though they may have done so in an unrelated copyright case.
    • Google v. CNIL (EU): In this 2019 case, the French data protection regulator argued that Google should be compelled to remove or de-list search results globally to comply with European “Right to Be Forgotten” law. The CJEU held that EU-level law in the GDPR and previous Data Protection Directive did not require global de-listing. However, it noted, EU law did not prevent authorities in Member States from issuing global orders under national law in specific cases. (I worked on this general issue for Google until 2014, but I don’t recall any role in this specific dispute.)
  • Australia: Australia has often been on the cutting edge of global speech jurisdiction cases, going back to the Dow Jones case of 2002. Most recently, in 2024, a court rejected the eSafety Commissioner’s attempt to compel X to globally delete posts. X had geo-blocked the material – posts linking to footage of a high-profile knife attack – for Australian users. The court held that geoblocking was sufficient under the language of the applicable Australian statute, and noted that global enforcement would fundamentally clash with the “comity of nations[.]”
  • India and other countries: When last I checked, some lower courts in India had ordered U.S. platforms to remove content globally, but those cases were ongoing and have likely since been appealed. I wish I had time to track those cases! Presumably other countries have gotten in on the global injunction game, too. If someone has a good list I will add it here later.
  • United States:
    • As a source of global takedowns: As mentioned above, I can’t think of clear examples of U.S. law expressly requiring global removals, unless export control laws or DNS seizures count. But U.S. courts might well conclude that U.S. law on things like copyright, child abuse material, or terrorism nonetheless requires global removal by U.S.-based companies. In practice, platforms often apply U.S. law globally.
    • As a target of global takedowns: U.S. courts and lawmakers have a long history of skepticism toward speech-suppression orders from foreign courts. The federal SPEECH Act affirmatively bars U.S. courts from enforcing foreign “defamation” orders (with “defamation” defined very broadly) if those orders do not comport with the First Amendment, Section 230, or jurisdiction rules. U.S. platforms including Yahoo in 2000, Trump Media and Rumble in 2025, Google in 2017 and 2025, and 4chan and Kiwifarms in 2025 have complained to U.S. courts about foreign orders to remove content. These are odd cases, because none of them involve a foreign court or party asking for enforcement by U.S. courts. Some of these U.S. claims have been dismissed for that reason. Realistically, the power behind takedown orders comes from the countries that issue them. Authorities in those countries can threaten to arrest platform employees (as Brazil has repeatedly done), seize assets, or direct national ISPs to cut off U.S. platforms’ access to Internet users and revenue in their territory. A U.S. court ruling confirming that U.S. authorities will not enforce the foreign order is, in my experience, important only if it can be used collaterally in some other proceeding.
    • As affected by platforms’ compliance with other countries' laws or pressures: The relatively obscure 2014 Zhang v. Baidu case illustrates an issue in U.S. law that my students often find troubling. The plaintiffs were U.S.-based proponents of democracy in China. They alleged that Baidu, China’s main search engine, had suppressed their website in search results at the behest of the Chinese government. The Baidu court ruled that the platform had a First Amendment right to set its own content policies and exclude speech or speakers. (The Supreme Court has since affirmed that conclusion in Moody.) This was the case, Baidu held, even if the platform’s editorial choice was to do the Chinese government’s bidding. A similar issue was lurking but unresolved in the Supreme Court TikTok case: If a U.S. company chooses to let a foreign government shape its editorial decisions or ranking algorithm, what leeway does Congress have to override that choice? These cases, like Murthy and other rulings around the world, tee up hard questions about platform volition and state power. If platforms want to do the government’s bidding, does that mean no state action has occurred?
The law of Internet speech jurisdiction gets snarlier, and more fundamental to Internet users’ rights, the deeper you dig. But the EU’s Digital Services Act is not the problem. American politicians should stop pretending it is.