It was around noon on Friday, January 8, 2021. I was sitting at a folding table in my apartment, in Harlem; my laptop displayed a blank, untitled Google document. Ambulance sirens wailed outside—the sound of the pandemic. I closed my eyes, inhaled, and placed my hands on my knees; my palms pressed into the fabric of my joggers. For years, I’d been doing breathing exercises at yoga class to still my mind and calm my nervous system. I drew upon those lessons now, as I exhaled through my nose. With the next inhale, I took notice of the thoughts spinning through my brain. I was a senior policy official at Twitter, and it had been the worst workweek of my decade-long career: I’d endured a terrifying insurrection, and was now desperate to prevent another.
Days before, on January 6, when a violent mob stormed the Capitol, Twitter had played a part, as a forum for Donald Trump, the aggrieved president, and his radical fans. As rioters descended, I advised Twitter’s leadership; based on my arguments, the company decided to put Trump in a time-out and warn him that he faced permanent suspension. Now he was emerging from his forced silence to defend the actions of his supporters and to say that he wouldn’t attend the inauguration. Many took that as a refusal to commit to a peaceful transition of power.
With the force of my next exhale, I sank my shoulders away from my ears. I’d gone to journalism school and to law school; for years, I’d been obsessed with figuring out how rules based on the technology of the printing press would evolve for the social-media age. I’d wanted to be in the middle of the action. Now it was go time. I began to type. What appeared on-screen was a set of assessments and arguments: I linked to tweets in which Trump’s phrase “American Patriots” was interpreted as glorifying violence at the Capitol, and I showed how Twitter users were planning future armed protests—including a second attack on the Hill and on state capitol buildings on January 17. Then I drafted a recommendation: “We have determined that these Tweets are in violation of the Glorification of Violence Policy and the user @realDonaldTrump should be immediately permanently suspended from the service.”
I spent the next five hours in chaotic Google Meet calls with Twitter’s leadership, elaborating on my stance. Then it was out of my hands. The sun began to set; I closed my laptop and moved to my sofa. I had no idea which way the decision would go. I ordered delivery from my neighborhood pizza spot. As I dipped a garlic knot in red sauce, my work phone lit up with a Slack notification. I grabbed a paper towel and clicked the link I was sent. It didn’t carry my byline, but the words I was reading in Twitter’s blog post announcing Trump’s suspension were, with minor changes, mine.
Before joining Twitter, I’d watched from afar as social platforms struggled with their involvement in major world events. There was the disinformation-riddled 2016 presidential election in the United States. Then came the 2017 Rohingya genocide in Myanmar. Twitter and Facebook were accused of accelerating violence by failing to curb hate speech on their platforms. Companies began investing in Trust and Safety divisions, which would establish new limits on what people could and could not say. I joined Twitter’s Trust and Safety team in 2019; we were responsible for writing and enforcing rules for the most prominent users. No one was higher-profile than Trump.
In 2020, I watched as the pandemic, Black Lives Matter protests, and the presidential campaign radicalized the American right. Fringe ideas—the violent overthrow of the government, a second civil war—had long reverberated within Twitter’s echo chamber; now they drifted into the mainstream. As the election neared, Trump’s tweets were a near-daily topic of conversation among my colleagues. That September, at the first presidential debate, Chris Wallace, the moderator, asked whether Trump would denounce white-supremacist groups such as the Proud Boys; Trump refused, saying instead, “Stand back and stand by.” Inside the Trust and Safety teams at Twitter, that call to action set off alarm bells.
Everyone agreed that a presidential directive aimed at a designated hate group went too far, but the words Trump used were arguably abstract, and it was hard to make definitive claims about their intent. My team and I drafted a policy-and-strategy memo meant to address coded language that incites violence; we recommended that Twitter remove the most egregious phrases while still leaving room for commentary and analysis. That November, we presented our proposed policy to Del Harvey, who was then Twitter’s head of Trust and Safety, for her sign-off. Along with our guidelines, we included hundreds of tweets amounting to coded incitement. I was clear about what I believed would happen if Twitter didn’t intervene: “People are going to start shooting each other,” I warned.
Harvey came back with her answer: she told us not to implement the policy or remove any of the tweets we’d identified. Her view, she said later, was that she wanted to avoid overreach: a process by which tweets containing flagged terms would be removed even when they were used in an innocuous context, like someone drinking before a concert and saying they were “locked and loaded” for the show. (No one on my team ever saw examples of people on Twitter talking about drinking that way, nor did we advocate taking down posts of that nature.) Over the next few months, my team could do little but watch as the tenor of Twitter’s conversation grew increasingly tumultuous. Trump lost the election, then summoned his disillusioned supporters to DC to protest the certification of the vote on January 6. “Be there, will be wild!” he proclaimed. They got the message.
On the morning of the certification, I paced the steps between my folding table and my sofa, waiting for something bad to happen. When I logged in to work, I anxiously followed the progression of the day on Twitter. The Capitol was breached; people began calling for Mike Pence’s execution. I was rushed into a meeting. My bosses were in a panic. “Make the insurrection stop,” I remember being told. I dusted off the never-implemented policy about coded language and updated it for the current crisis. My team provided examples of tweets that were inciting violence and gave recommendations for how to defuse the situation offline. Within hours, moderators had guidance that allowed us to act.
Throughout the day, I’d forced myself to breathe. Afterward, I was left in a tearful daze. My worst predictions had come true in grimly spectacular fashion; five people died in connection with the attack on the Capitol. Once Trump’s time-out ended and he picked up where he’d left off, riling his base, I knew that I couldn’t let it happen again.
When I did the work of Trust and Safety at Twitter—and subsequently, at Twitch—I never thought of myself as a “real” journalist, despite my training in the field and the volumes of writing I produced. But later, when I came forward as a Twitter whistleblower and testified before Congress about the lead-up to January 6, I saw a tweet by a former J-school classmate: he said he was proud to see me using our journalism education to do exactly what we had been taught to do—hold power to account. I realized that my Trust and Safety team functioned much like a newsroom, striving to respond quickly to new information, interrogate power imbalances, and challenge dominant narratives in pursuit of truth. And we wrote our asses off to make it all happen.
Looking back, that period was the heyday of our work. When Elon Musk took over Twitter, in late 2022, he purged the Trust and Safety team; other tech companies followed suit. Last year, Trust and Safety departments at Amazon, Google, and Meta were hit with layoffs. And Donald Trump was reinstated to social media platforms. The moves against Trust and Safety felt like a crusade: tech investors labeled my colleagues and me “The Enemy.” Musk publicly attacked and mocked Trust and Safety workers, me included. Going into the 2024 election season, all of this has left social media—and, by extension, American democracy—more fragile than it has ever been in the digital era. We face adversaries hell-bent on destroying the integrity of our elections, forces that have used the past two presidential cycles to hone their manipulation strategies. Russian election-influence operations have become astonishingly effective. There is evidence that China has ramped up its social media efforts to interfere in the upcoming election. Iran, a late arriver in past cycles, will likely shape its strategy around the war in Gaza. And unprecedented levels of disinformation may be supercharged by the rise of artificial intelligence. I am convinced, too, that we’ll see a false story of a presidential candidate’s death circulate, of the kind that Russian state outlets pushed this spring about King Charles.
The demise of Trust and Safety teams at social media companies does not mean the end of our work, however. There is a mighty band of humans who remain on the front lines of information warfare as defenders of democracy. Some in the industry have found a thriving new home at companies producing the next generation of AI and Web3 technologies. Others are doing the work of technology accountability as academic researchers, independent writers, and members of government. As for me: I’m back at journalism school, now as a senior fellow at Columbia’s Tow Center, where I spend my time writing critiques of misinformation on social media platforms and making recommendations for meaningful reform and regulation that can come from outside tech companies.
I haven’t given up on Trust and Safety. In fact, I believe that, as my own pathway shows, journalism school is the perfect place to train the next generation of accountability experts. Just recently, when police descended on protesters encamped at Columbia, student journalists sprang into action to bear witness and fact-check in real time. Watching them work brought me back to January 8. It confirmed to me that in the moments when history is being made, a foundation in journalism builds the muscle memory to breathe—and to go write the words that will change the trajectories of our collective futures, online and off.
Anika Collier Navaroli is a senior fellow at the Tow Center for Digital Journalism at Columbia University and a Public Voices Fellow on technology in the public interest with the OpEd Project. She previously held senior policy positions at Twitter and Twitch.