Fixing the Web by... giving up (at Meta)?

I started writing this piece at least once a day for the last 10 days, but kept having to start over because Meta and Mark Zuckerberg would announce something big and important the next day, and I felt that not including the latest announcement would leave the piece incomplete. At this point, the big announcements seem to be slowing down, and I've already seen so much excellent reporting on the changes. I'll summarize some of that reporting and link to the best of it here, but also offer (at least) one observation I haven't seen written anywhere else yet.
“New research shows that lead consumption increases belief in God, support for Donald Trump, and trust in Mark Zuckerberg” is now allowable on Facebook and Instagram, despite the fact that those assertions are not supported by any data I’ve seen. Likewise, users are now welcome to post that “women are household objects,” “immigrants are trash,” and “a trans person isn’t a he or she, it’s an it.” And, to remove any appearance of the company caring about truth, Meta decided to deprecate its network of independent third-party fact-checkers and rely instead on Random People on the Internet to label content on its platforms, “adding context” to claims like the ones in this post’s lede. What could possibly go wrong?
Now, as a behavioral data scientist, I am committed to the data, regardless of how easy it is to get sucked into the drama and politics of these decisions. So, I’ll be the first to admit that the fact-checking program had problems.
The main problem was not political bias against conservatives: some of the fact-checkers in the network were themselves conservative organizations (e.g., the Weekly Standard and the Daily Caller), and conservative users tended to agree with specific fact-checks (even when the fact-checker was liberal). Yet perception matters more than reality, and Republicans do perceive that fact-checkers, in the abstract, are liberally biased. So it made political sense to eliminate the program, even though its biggest critics agreed with many of the specific fact-checking decisions, because the program had been crafted into a political bogeyman.
Instead, the problems with the fact-checking program were more technical, falling into two main buckets:
  1. Fact-checkers were paid per fact-check, which incentivized checking the easiest items in the massive queue. For example, it’s much easier to confirm that some celebrity is still alive than to confirm whether a novel vaccine platform is effective in protecting people from a novel virus sweeping the globe. So fact-checkers would rush to check the former for quick compensation and leave the more complex claims for other fact-checking organizations. (I will say that while I worked at Meta, we did try to combat this by increasing compensation for more complex checks… though it didn’t eliminate the problem.)
  2. Most views on a piece of content occur within the first day it is posted, so anything done to it outside that window has very little impact: the majority of the people who will ever see it have already been exposed, even if it is later deemed false. That said, there is still value in labeling content, because some people will see it after the initial tsunami of views, and the label may also matter when AI models are inevitably trained on data scraped from social media. Without the label, a large language model ingesting that data may be more inclined to regurgitate the false claim to unsuspecting users somewhere else. (The sketch below illustrates how much of the audience a slow label misses.)
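To make that first-day dynamic concrete, here is a minimal back-of-the-envelope sketch. It assumes daily views decay exponentially with a one-day half-life; the half-life and the numbers are illustrative assumptions for the sake of the argument, not Meta’s actual traffic data.

```python
# Back-of-the-envelope sketch: how much of a post's lifetime audience
# is already exposed before a fact-check label lands?
# Assumes daily views decay exponentially -- an illustrative assumption,
# not Meta's actual traffic curve.
import math

def share_seen_before_label(label_delay_days: float, half_life_days: float = 1.0) -> float:
    """Fraction of total lifetime views occurring before the label is applied,
    if views(t) ~ exp(-lambda * t) with the given half-life."""
    lam = math.log(2) / half_life_days
    return 1 - math.exp(-lam * label_delay_days)

for delay in (0.5, 1, 2, 3, 7):
    print(f"label after {delay:>3} day(s): "
          f"{share_seen_before_label(delay):.0%} of lifetime views already happened")
```

Under that (hypothetical) one-day half-life, a label applied three days after posting reaches only the last ~12% of the post’s lifetime audience, which is why turnaround time matters at least as much as accuracy.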
So, fact-checking was never going to solve the whole problem, or maybe even most of it… but it’s still important and can be effective if used properly. I personally don’t feel strongly that fact-checkers need to confirm whether celebrities are alive or whether some author really said a quote attributed to them. But fact-checking high-risk, high-reach content that could cause real harm should be done.
Yet this is something Meta historically wasn’t doing anyway. Instead, it kept a “whitelist” of politicians who were exempt from fact-checking, and it created a “newsworthiness exception” that allowed politicians to say whatever hateful or incorrect things they wanted without recourse. For example, following the murder of George Floyd, then-President Donald Trump posted “when the looting starts, the shooting starts.” Twitter decided this was too likely to incite offline violence and hid the tweet behind a warning notice. Mark Zuckerberg justified leaving it up on Facebook by saying, “the reference is clearly to aggressive policing -- maybe excessive policing -- but it has no history of being read as a dog whistle for vigilante supporters to take justice into their own hands.” Yet David Gillis, a director of product design at Facebook, responded that Trump’s message “encourages extrajudicial violence and stokes racism.”
And that was back when Meta had many subject matter experts and content moderators to push back against this kind of content. With the past week’s decisions paired with several rounds of layoffs of trust and safety specialists, this kind of hateful and incorrect content will come under far less scrutiny and will most likely thrive (even more than it did in the past, which is all too impressive). Perhaps Zuckerberg’s latest attempt at copycatting Elon Musk will end the way Musk’s Twitter takeover did, with a similar 80% decline in the company’s valuation and nearly a 50% decline in revenue. Though, Zuckerberg may be able to rest a little easier knowing that President-elect Trump’s administration may be less inclined to continue the government’s ongoing antitrust lawsuit against Meta now that Meta has given Trump’s FCC nominee exactly what he wanted.

What I’m reading

If you haven’t subscribed to my YouTube channel yet, please do. It’ll be a tremendous help to me, and I’ll be forever grateful to you.