The attempted assassination of former President Trump on Saturday marked a grim day in the country’s history. It also left in its wake an information vacuum that endures more than two days later. What was the shooter’s motive? What shaped his ideology? Was he working alone?
Social media abhors a vacuum, and over the past 48 hours users of every big platform have raced to fill it with news, commentary, analysis, conspiracy theories, fabrications and jokes. But where platforms once attempted to intervene swiftly to prevent obvious falsehoods from spreading, this weekend found trust and safety teams playing a much-diminished role.
This was most obvious on X, formerly Twitter, where top trending spots after the shooting were held by hashtags promoting the idea that the event had been staged. (There is no evidence to support this.) In 2020, Twitter led its peers in labeling the former president’s tweets about mail-in ballots as misleading. But after Elon Musk bought the company and rebranded it as X, the company laid off most of its trust and safety workers and stopped adding labels to posts from elected officials.
Musk has since courted right-wing users by hosting audio events with Republican candidates, warning users that “cisgender” is considered a slur on X, and continuously posting Republican talking points about immigration and other issues. Given that ideological shift, it was notable that the conspiracy theories spreading on X over the weekend came from the left: terrified that the shooting would give Trump an insurmountable advantage in the election, liberals began posting that Trump’s campaign itself must have been behind the incident.
But the right had conspiracy theories of its own to offer, and they played out across Trump-backed Truth Social and other online spaces friendly to conservatives. Here’s Taylor Lorenz in the Washington Post:
On X, Trump’s Truth Social and the pro-Trump message board Patriots.win, the shooting was portrayed without evidence as a failed execution attempt by shadowy Democrats or an “inside job” by the “deep state” to protect its grip on Washington. Some right-wing posters with millions of online followers shared theories that the Secret Service’s failure to stop the attack was preplanned, or that the agency had been weakened or distracted by diversity initiatives. Musk himself questioned whether the error was “deliberate.”
Right-wing influencers and provocateurs, including Trump’s longtime confidant Roger Stone, shared names and photos alleging that the shooter was in fact an anti-Trump protester, an “antifa extremist” or — in an odd turn — an Italian soccer journalist. They also widely shared a video from an online troll who said he fired the bullets because he hates Republicans, and that he got away with the attack. Conservative conspiracy theorist Mike Cernovich also alleged that the shooting was part of an FBI plot to inspire “copycat attacks.”
It’s tempting to lay all the blame for this at platforms’ feet. During the Trump administration, platforms faced enormous pressure from lawmakers, regulators, and journalists to restrict the spread of theories like these, and their retreat from that work surely helped the weekend’s falsehoods travel further. But the platforms are only part of the story.
And yet of all the takes that landed over the weekend, none stuck with me more than this post from comedian Josh Gondelman. “I know people are saying not to spread conspiracy theories right now,” he wrote on Saturday in a viral X post, “but I would like to read them.”
Gondelman’s post, which has received nearly 70,000 likes, resonated because it speaks to a hugely important and rarely discussed aspect of the misinformation problem: the enormous consumer demand for it. The rise of social media and the parallel decline of mainstream journalism have enabled us to create what researcher Renee DiResta calls “bespoke realities”: custom versions of the truth that reflect what we already want to believe. As David French wrote last year in the New York Times: “We’re misinformed not because the government is systematically lying or suppressing the truth. We’re misinformed because we like the misinformation we receive and are eager for more.”
This is particularly true for the attempted assassination of Trump, which instantly became the world’s biggest news story despite the fact that very little was known about what had happened. Unlike many mass shooters, Trump’s would-be assassin apparently left no manifesto or trail of social media posts. He had a Discord account, but it was mostly dormant and seems not to have been used to plan his crime. He wore a shirt bearing the name of Demolition Ranch, a pro-gun YouTube channel, but the channel largely eschews politics in favor of making viral videos of tanks shooting things.
Given the importance of the story and the near-total lack of information about the events leading up to the shooting, it was inevitable that people would speculate wildly. And especially in the immediate aftermath, it’s not clear that platforms should have done much to stop them. Citizen reports about mass shootings and other major news events have often turned out to be true, after all, and asking platforms to divine the truth about what happened in real time, and to stop potentially false hashtags from being promoted, seems just as likely to suppress true speech as falsehoods.
Platforms shouldn’t take a totally hands-off approach, of course. If the shooting had been falsely attributed to some minority group, for example, and a platform’s users were attempting to foment violence against that group, platforms should intervene. But in this case, however wrong users may have been in their conspiracy theorizing, nothing promoted on X’s trending page was ultimately any crazier than the ideas promoted on Fox News every day. And it seems worth noting that a film built on an earlier conspiracy theory about the Kennedy assassination, Oliver Stone’s JFK, was nominated for eight Academy Awards.
As theories about the shooting flew, tech CEOs including Mark Zuckerberg, Sundar Pichai, Tim Cook, Satya Nadella, and Andy Jassy condemned the violence. (Musk went a step further, endorsing Trump and donating to his campaign.) It was a welcome affirmation of the rule of law, and a reminder of the role tech platforms often played in supporting democratic ideals during the Trump administration.
Whatever happens in the weeks ahead, it’s clear that those ideals will soon be tested again and again. And unlike in 2020, platforms showed this weekend that they are increasingly comfortable sitting on the sidelines of contentious news stories, content to let users seek out whichever versions of the truth most appeal to them.
On Monday, Trump named Ohio Sen. J.D. Vance, a former venture capitalist, as his running mate. Vance is an investor in Rumble, the right-wing YouTube alternative where conspiracy theories thrive. Vance, like Trump, has also endorsed the repeal of Section 230 of the Communications Decency Act, the law that grants platforms legal immunity in most cases for what their users post.
Vance has said that smaller platforms should retain their legal immunity, to help them compete against larger companies. If his view becomes law, the internet and its bespoke realities would splinter once again. Large platforms, facing new legal liability, would restrict more speech for fear of getting sued. Smaller ones, still immune, could continue hosting and promoting a much broader range of speech, including the conspiracy theories that so many Americans cherish.
The day before the attempted assassination, Meta said it would roll back the remaining restrictions on Trump’s Facebook and Instagram accounts, which his campaign is now using heavily. Trump’s accounts had been suspended for two years after he led an insurrection at the Capitol that left several people dead and 174 police officers injured.
From roughly 2017 to 2023, a broad consensus held among social platforms that tech policy could and should be used to promote high-quality information and support democratic principles. By this summer, though, that consensus had broken down. The trust and safety era has peaked and is now in decline. Tech executives seem increasingly resigned to the idea that Trump will once again be president. And whatever ultimately happens in the 2024 election, it’s now clear that how it plays out online will look very different than it did in 2020.
Sponsored
Extremely Hardcore Sale
The e-book version of Extremely Hardcore: Inside Elon Musk's Twitter is on sale this week for $1.99. Buy your copy today! It's the inside story of how Twitter ceased to exist, as told by the people who worked there.
Governing
- “Nearly all” AT&T customers were affected by a data breach that allowed criminals to steal phone records, the company says. (Zack Whittaker / TechCrunch)
- OpenAI illegally prohibited its employees from warning regulators about the dangers of its technology, whistleblowers alleged to the SEC. (Pranshu Verma, Cat Zakrzewski and Nitasha Tiku / Washington Post)
- Some in the OpenAI safety team reportedly felt pressured to speed through a new testing protocol to meet a launch date in May, going against a safety promise the company made to the White House. (Pranshu Verma, Nitasha Tiku and Cat Zakrzewski / Washington Post)
- Meta is expanding its policy to remove more posts attacking “Zionists” when the term is used to refer to Jewish people or Israelis broadly, it says. (Kurt Wagner / Bloomberg)
- Watermelon cupcakes from a club for Muslim workers at Meta reportedly sparked internal turmoil over the company’s response to the war in Gaza. (Paresh Dave and Vittoria Elliott / WIRED)
- The Oversight Board is considering three new cases involving Meta’s removal of footage of a terrorist attack in Moscow posted on Facebook. (Oversight Board)
- A Mississippi judge ordered a woman to shut down her social media accounts accusing her daughter’s schoolmates of bullying her to death. (Will Oremus / Washington Post)
- X does not have to pay $500 million to former employees in a severance suit, a district judge ruled, as their claims were not covered under ERISA. (Robert Burnson / Bloomberg)
- X’s blue checkmark policy (allowing users to buy verification) violates the EU’s Digital Services Act, regulators say, because it causes users to question the authenticity of accounts. Sorry, but this is very silly — if X wants to have a bad verification system, that does not require regulatory intervention. (Javier Espinoza / Financial Times)
- A Q&A with Imran Ahmed, founder of the Center for Countering Digital Hate, on how the spread of disinformation threatens our shared sense of reality. (Jason Parham / WIRED)
- Data from Disney’s internal Slack channels, including discussions of ad campaigns, studio technology and interview candidates, were leaked online. (Sarah Krouse and Robert McMillan / Wall Street Journal)
- A new bipartisan bill in the Senate, the COPIED Act, is aiming to protect artists, songwriters and journalists from having their work used to train AI models without their consent. (Aisha Malik / TechCrunch)
- The FTC banned anonymous messaging app NGL from serving minors, alleging that the platform exaggerated its ability to use AI to combat cyberbullying. (Cristiano Lima-Strong / Washington Post)
- The FCC’s reinstatement of net neutrality rules was temporarily halted by an appeals court as it considers legal challenges. (David Shepardson / Reuters)
- UK regulators are probing Apple, Google and PayPal’s digital wallets and looking into the competitiveness and risks of the technology. (Aisha S Gani / Bloomberg)
- Amazon launched an anti-union charm offensive at UK warehouses as a key UK workers’ vote approaches. (Delphine Strauss / Financial Times)
- The EU’s AI Act is now in the Official Journal, which means the bloc’s rolling deadlines for AI developers take effect in August. (Natasha Lomas / TechCrunch)
- Microsoft is settling an antitrust complaint in the EU over its cloud computing licensing practices for 20 million euros. (Foo Yun Chee / Reuters)
- Apple agreed to open up its tap-and-pay technology to other providers for free for a decade, avoiding possible fines from EU regulators. (Samuel Stolton and Jennifer Surane / Bloomberg)
Industry
- OpenAI shared a new classification system with employees – a five-tier scale for tracking the company’s progress toward building artificial general intelligence. (Rachel Metz / Bloomberg)
- A new project, internally codenamed “Strawberry,” is OpenAI’s latest attempt to build an AI that can navigate the internet autonomously and do “deep research,” an internal document shows. (Anna Tong and Katie Paul / Reuters)
- Sam Altman and Arianna Huffington say generative AI can help millions of people. But the industry has become a faith-based one, this author argues, with the risk that people are being strung along by promises. (Charlie Warzel / The Atlantic)
- Microsoft and Apple will no longer have observer seats on OpenAI’s board. (Camilla Hodgson and George Hammond / Financial Times)
- Microsoft gave up its observer seat because it no longer believed its role was necessary, according to a letter to OpenAI. It's also increasingly under antitrust scrutiny over its AI investments. (Ina Fried / Axios)
- X is falling short of its TV and video goals, as several high-profile deals fizzled out. (Kurt Wagner / Bloomberg)
- Meta is reportedly planning to release the largest version of its Llama 3 model on July 23. (Sylvia Varnham O’Regan and Stephanie Palazzolo / The Information)
- Google is ranking AI-plagiarized content above original news articles in Search, despite adjusting its policies to target AI spam. (Reece Rogers / WIRED)
- Gemini 1.5 Pro is making Google’s robots smarter, the company says, by improving their navigation and planning. (Jess Weatherbed / The Verge)
- YouTube Shorts has a bunch of new features, including a TikTok-style AI voice narration function. (Wes Davis / The Verge)
- The Apple Vision Pro is now available in the UK, Canada, France, Germany and Australia. (Tim Hardwick / MacRumors)
- Apple approved UTM SE, the first PC emulator app for iOS, weeks after rejecting it in the EU. (Wes Davis / The Verge)
- Amazon’s AI chatbot Rufus is now available for US customers in its app. (Sarah Perez / TechCrunch)
- Newsletter platform Ghost has now federated its first newsletter, a milestone in its push to join the fediverse. (Sarah Perez / TechCrunch)
- AI can help boost creativity for people who are less naturally creative, a new study suggests, but dampens creativity for the group as a whole. (Devin Coldewey / TechCrunch)
- A look at Metafilter and how it seems to capture the philosophy of the early Internet days, on its 25th anniversary. (Steven Levy / WIRED)
- Asking regular social media accounts to “Ignore all previous instructions” can reveal AI bots in disguise. (David Ingram / NBC News)
Those good posts
We're going to be staying away from posts about the assassination here.
Talk to us
Send us tips, comments, questions, and an end to political violence: casey@platformer.news and zoe@platformer.news.