This newsletter is an ~8-minute read and includes 81 links. I ❤️ to hear from readers: please leave comments or reach out via email.
Last week’s Faked Up analysis of AI nudifier ads on Meta was featured in The Washington Post! More about that further down.
THIS WEEK IN FAKES
The US Department of Justice seized 32 domains associated with the Doppelganger influence operation. Google’s Reimagine is a ticking misinformation bomb (see also FU#15). An Australian MP made deepfakes of the country’s prime minister to prove a point. Nikki Haley released a grudge. Police in Springfield, Ohio, said there was no evidence to back up a Facebook post claiming immigrants were eating local pets (to little avail. Very little avail). Musk may get summoned by British MPs over hateful misinformation. Plus: can you beat my 9/10 score at this deepfake-spotting quiz?
TOP STORIES
DISINFORMATION AND BRAZIL’S X-IT
You may have seen Brazil’s ban of X attributed to disinformation. In the immediate sense, that’s not quite right. As the think tank InternetLab put it in an emailed brief, the ban follows X’s non-compliance with a legal order related to the “intimidation and exposure of law enforcement officers” connected to the Supreme Court’s inquiry into the Jan. 8 attacks.
At the same time, the ban is the end result of five years of judicial actions targeting disinformation about the Supreme Court and the electoral process.
To try and understand how online disinformation removals work in Brazil, I consulted fact-checkers Cristina Tardáguila and Tai Nalon, tech law scholars Carlos Affonso de Souza and Francisco Brito Cruz, and law student Vinicius Aquini Goncalves.
Supreme Court justice Alexandre de Moraes. Source: Antonio Augusto/Secom/TSE
The legal grounds
The 2014 Marco Civil da Internet, Brazil’s Internet Bill of Rights, makes internet providers liable for harmful content on their platforms if they don’t remove it following a court order. As Souza told me, the law is “not a guidance … It is binding, so judges need to apply that.” At the same time, he thinks it is “in dire need of some updates, especially concerning issues of content moderation.”
A legislative attempt to provide this update came in 2020, with the “fake news” bill (PL/2630). The draft bill would have defined several terms, including inauthentic accounts, fact-checking, and disinformation (content that is “verifiable, unequivocally false or misleading, out of context, manipulated or forged, with the potential to cause individual or collective damage”).
For a variety of reasons, including real flaws in scope, the fake news bill never passed, leaving digital disinformation undefined and unregulated by Brazilian legislators.
The judicial branch filled this void. In 2019, the Supreme Court opened an inquiry into online false news about the institution and its members. According to legal scholars Emilio Peluso Neder Meyer and Thomas Bustamante, this relied on an “unusual interpretation” of the court’s internal rules: because it can “investigate crimes committed inside the tribunal’s facilities,” it can also investigate crimes on the internet.
Even as this inquiry pursued what several of my sources viewed as legitimate harms, it was also described to me as “highly unusual” and “very heterodox.”
Another key element of the online anti-disinformation puzzle is the October 2022 resolution by the Supreme Electoral Court (TSE). This gave the TSE president the unilateral power to request the takedown of disinformation identical to content already removed under prior court orders, and the ability to fine platforms ~$20K for every hour the content stays online past the second hour after notification.
In February of this year — with local elections coming up in October — the TSE also banned the use of deepfakes in political campaigns.
The TSE building in Brasilia. Source: Alberto Ruy/Secom/TSE
The takedowns
The first thing to note is that takedown requests made by the Supreme Court and the TSE are confidential. “The press can’t see anything; it’s very opaque,” Tardáguila says.
What little information we have on individual takedowns comes from the recipients of the orders, like X. Brito Cruz says even that information is incomplete because it doesn’t contain the full reasoning of the court.
In addition, decisions are often taken at the account level rather than at the individual URL level (Aos Fatos published an overview of some of the targeted X accounts here).
Back in April, Lupa reviewed social media content related to 37 TSE takedown requests released by X for a report by the US House Judiciary Committee. To date, I think it is the most comprehensive independent analysis of the merit of individual takedowns that is available. Together with what is being selectively disclosed in the Alexandre Files, this gives us a very partial picture of the content: debunked theories about voter fraud, misleading attacks against President Lula, and high-voltage criticism of the Supreme Court. Clearly, not all of this is disinformation; but then again, not all of it was actioned on those grounds.
Other transparency reports by targeted platforms provide a sense of the scale of requests.
TikTok claims to have removed 222 links in response to 90 court orders in 2022, the year of the most recent presidential election. This pales in comparison with the 66,000 videos the platform claims to have deleted of its own volition for violating its electoral disinformation policies. However, without data on the relative reach of these two sets, the figures aren’t directly comparable.
Google’s transparency report is also helpful in that it clusters takedowns by reason. Electoral law was the grounds for 36% of removal requests in the six months to December 2022; that figure was just 3% in H2 2023.
Looking at the data by number of items removed, the overall share of electoral takedowns shrinks but remains nontrivial: Google reported 1,043 items removed in the second half of 2022. But given that some disinformation requests may fall under the defamation category, that, too, is an incomplete picture.
Aos Fatos, for one, is trying to compensate by monitoring disinformation- and AI-related keywords in judicial decisions to track how the deepfake ban is being applied by regional electoral courts (across all platforms and the open web, not just X). Nalon says they are building an automated system with a view to better monitoring the 2026 presidential election.
But at least at this stage, the information shared by Brazil’s courts and gleaned from platform transparency reports is scarce and scattered. This makes it very hard to honestly assess the scale and fairness of Brazil’s anti-disinformation decisions.
As Brito Cruz told me, “the lack of transparency is a concern for all of us.”
RT WAS NOT IN IT FOR THE ROI
The US Department of Justice claims that Kostiantyn Kalashnikov and Elena Afanasyeva, two employees of Russian state-controlled media outlet RT, channeled nearly $10 million to “covertly finance and direct a Tennessee-based online content creation company.” The company has been identified as Tenet Media, run by far-right Canadian influencer Lauren Chen and her husband.
In turn, the indictment alleges, Tenet paid hundreds of thousands of dollars to sign American right-wing influencers including Benny Johnson, Dave Rubin and Tim Pool (each claimed victimhood).
The indictment is a riveting read, and CBS, NBC and WaPo have done a great job covering the fallout. But if you take only three things out of it, let them be these: