Editor’s summary
The COVID-19 pandemic was exacerbated by poor utilization of vaccines caused by the spread of misinformation. Fortunately, the impact of flagrant vaccine misinformation on Facebook was greatly attenuated once such posts were flagged and debunked as false by third-party fact-checkers. However, ambiguous misinformation remained unflagged. Allen et al. examined a gray area that eluded fact-checkers: factually accurate but deceptive content (see the Perspective by van der Linden and Kyrychenko). This unflagged content cast doubts on vaccine safety or efficacy and was 46-fold more consequential for driving vaccine hesitancy than flagged misinformation. Tech companies assert that unflagged content must be protected by First Amendment free speech rights. However, this study’s causal evidence that unflagged content posed the greatest threat to public health carries policy implications for government regulation needed to save lives. —Ekeoma Uzogara
Structured Abstract
INTRODUCTION
Researchers and public health officials have attributed much of the low uptake of the COVID-19 vaccine in the US to misinformation on social media. However, it is unclear whether misinformation had (i) the widespread exposure and (ii) the causal impact on vaccination intentions required to meaningfully alter the trajectory of US vaccination efforts. Moreover, content that raises questions about vaccines without containing outright falsehoods (which we term “vaccine-skeptical”) might also play a role in driving vaccine refusal. In this work, we examine the extent to which misinformation flagged by fact-checkers on Facebook, as well as unflagged but nonetheless vaccine-skeptical content, contributed to US COVID-19 vaccine hesitancy.
RATIONALE
We posit that two conditions must be met for content to have widespread impact on people’s behavior: People must see it, and, when seen, it must affect their behavior. That is, we define “impact” as the combination of exposure and persuasive influence.
We apply this framework to quantify the impact that (mis)information on Facebook had on COVID-19 vaccination intentions in the US by combining experimental estimates of persuasive effects with Facebook exposure data. To begin, we conducted two experiments (total N = 18,725) on the survey platform Lucid measuring the causal effect of 130 vaccine-related headlines on vaccination intentions. We then used Facebook’s Social Science One dataset to measure exposure to all 13,206 vaccine-related URLs that were popular on Facebook during the first 3 months of the vaccine rollout (January to March 2021). Finally, we developed a pipeline that incorporates the wisdom of crowds and natural language processing (NLP) to predict the persuasive effect of each Facebook URL from our survey estimates.
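The framework above boils down to a simple per-URL computation: a story's total impact is its predicted persuasive effect per exposure multiplied by how many people saw it. A minimal sketch, assuming a hypothetical per-URL data layout (all field names and numbers below are illustrative, not the study's actual estimates):

```python
# Toy illustration of impact = persuasive effect × exposure, summed over URLs.
# "predicted_effect_pp" is a hypothetical field: the model-predicted change in
# vaccination intentions (percentage points) per view. Numbers are made up.

def estimate_impact(urls):
    """Total impact: per-view persuasive effect weighted by number of views."""
    return sum(u["predicted_effect_pp"] * u["views"] for u in urls)

# A flagged item with a larger per-view effect but little reach...
flagged = [{"predicted_effect_pp": -0.004, "views": 1_000_000}]
# ...versus an unflagged item with a smaller per-view effect but huge reach.
unflagged = [{"predicted_effect_pp": -0.001, "views": 50_000_000}]

print(estimate_impact(flagged))    # smaller total impact despite stronger per-view effect
print(estimate_impact(unflagged))  # larger total impact, driven by exposure
```

The point of the sketch is the asymmetry the paper reports: differences in exposure can swamp differences in per-view persuasiveness.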
RESULTS
Analyzing our survey experiments, we found that while exposure to fact-checked misinformation can cause vaccine hesitancy, the degree to which a story implies health risks from vaccines best predicts negative persuasive influence. Our first experiment showed that misinformation containing false claims about the COVID-19 vaccine reduced vaccination intentions by 1.5 percentage points (P = 0.00004). Our second experiment tested both true and false claims and found that content suggesting that the vaccine was harmful to health reduced vaccination intentions, regardless of the headline’s veracity.
Examining exposure on Facebook, we found that flagged misinformation URLs received 8.7 million views during the first 3 months of 2021, accounting for only 0.3% of the 2.7 billion vaccine-related URL views during this time period. In contrast, stories that were not flagged by fact-checkers but that nonetheless implied that vaccines were harmful to health—many of which were from credible mainstream news outlets—were viewed hundreds of millions of times.
We then used our crowdsourcing and NLP procedure to extrapolate the treatment effects of the 130 items to the 13,206 vaccine-related Facebook URLs. The URLs flagged as misinformation by fact-checkers were, when viewed, more likely to reduce vaccine intentions (as predicted by our model) than unflagged URLs. However, after weighting each URL’s persuasive effect by its number of views, the impact of unflagged vaccine-skeptical content dwarfed that of flagged misinformation. Subsetting to those URLs predicted to induce hesitancy, we estimated that unflagged vaccine-skeptical content lowered vaccination rates by 2.28 percentage points {confidence interval (CI): [−3.4, −0.99]} per US Facebook user, compared with 0.05 percentage points (CI: [−0.07, −0.05]) for flagged misinformation—a 46-fold difference. Even though flagged misinformation had more of an impact when viewed, the differences in exposure were so large that they almost entirely determined the ultimate impact. For example, a single vaccine-skeptical article published by the Chicago Tribune titled “A healthy doctor died two weeks after getting a COVID vaccine; CDC is investigating why” was seen by >50 million people on Facebook (>20% of Facebook’s US user base) and received more than six times as many views as all flagged misinformation combined.
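The headline 46-fold figure follows directly from the two per-user estimates reported above. A quick check of the arithmetic (using the paper's rounded point estimates; the variable names are ours):

```python
# Per-user change in vaccination intentions (percentage points), as reported:
unflagged_pp = -2.28   # unflagged vaccine-skeptical content
flagged_pp = -0.05     # fact-checker-flagged misinformation

ratio = unflagged_pp / flagged_pp
print(round(ratio))    # prints 46: the ~46-fold difference in impact
```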
CONCLUSION
We find that flagged misinformation does causally lower vaccination intentions, conditional on exposure. However, given the comparatively low rates of exposure, this content had much less of a role in driving overall vaccine hesitancy compared with vaccine-skeptical content, much of it from mainstream outlets, that was not flagged by fact-checkers. Our work suggests that while limiting the spread of misinformation has important public health benefits, it is also critically important to consider gray-area content that is factually accurate but nonetheless misleading.
Summary figure: Impact of flagged misinformation versus unflagged content.
Abstract
Low uptake of the COVID-19 vaccine in the US has been widely attributed to social media misinformation. To evaluate this claim, we introduce a framework combining lab experiments (total N = 18,725), crowdsourcing, and machine learning to estimate the causal effect of 13,206 vaccine-related URLs on the vaccination intentions of US Facebook users (N ≈ 233 million). We estimate that the impact of unflagged content that nonetheless encouraged vaccine skepticism was 46-fold greater than that of misinformation flagged by fact-checkers. Although misinformation reduced predicted vaccination intentions significantly more than unflagged vaccine content when viewed, Facebook users’ exposure to flagged content was limited. In contrast, unflagged stories highlighting rare deaths after vaccination were among Facebook’s most-viewed stories. Our work emphasizes the need to scrutinize factually accurate but potentially misleading content in addition to outright falsehoods.
Fig. 1. Effect of vaccine-related headlines on vaccination intentions as a function of perceived harm.
Fig. 2. Exposure to vaccine-related content that was publicly shared >100 times on Facebook during the first 3 months of 2021.
Fig. 3. Treatment effect on vaccination intentions as a function of the crowdsourced aggregate score.