New Yorker’s ‘Social Media Is Killing Kids’ Article Waits 71 Paragraphs To Admit Evidence Doesn’t Support The Premise


from the the-new-yorker:-where-nuance-and-tradeoffs-go-to-die dept

These days, there’s a formula for articles pushing unproven claims of harm from social media. Start with examples of kids harming themselves, and insist (without evidence) that it wouldn’t have happened but for social media. Throw some shade at Section 230 (while misrepresenting it). Toss out some policy suggestions without grappling with what they would actually mean in practice, and never once deal with the actual underlying issues regarding mental health.
It’s become so easy. And so wrong. But it fits the narrative.
I enjoy Andrew Solomon’s writing and especially found his book, Far From the Tree, an exceptional read. So, when I saw that he had written a big story for the New Yorker on social media and teens, I had hoped that it would approach the subject in a manner that laid out the actual nuances, trade-offs, and challenges, rather than falling for the easy moral panic tropes.
Unfortunately, it fails woefully in that endeavor, and in the process gets a bunch of basic facts wrong. For all of the New Yorker’s reputation for great fact-checking, in the few stories where I’ve been close enough to know what’s going on, the fact-checking has been… terrible.
The whole article is somewhere around 10,000 words long, so I’m not going to go through all of the many problems with it. Instead, I will highlight some major concerns with the entire piece.
The typical moral panic tropes
The article follows the typical trope of many similar pieces, framing everything around a series of truly tragic and absolutely heart-wrenching stories of teenagers who died by suicide. These are, absolutely, devastating stories. But the supposed “connection” to social media is never actually established.
Indeed, reading some of the profiles reminded me of my own friend from high school, who took his own life in an era before social media. My friend had written notes, which came out later, about his feelings of inadequacy, depression, and loneliness that sound similar to what’s described in the article.
It is undeniable that we do not do enough to help everyone improve their mental health. We are especially terrible at teaching young people how much everyone’s mental health matters, and how it can surprise and subvert you.
But the vast majority of the article is just tragic story after tragic story, each followed by an “and then we found out they used Instagram/TikTok/etc., where they (1) saw some bad content or (2) expressed themselves in ways that vaguely suggested they were unhappy.”
Again, though, none of that establishes a causal relationship, and the real issues are much more complex. Nearly everyone uses social media today. Teenagers writing angsty, self-pitying works is… part of being a teenager. As for social media surfacing similar posts, well, that’s also way more complicated. For some, seeing that others are having a rough time of it is actually helpful and makes them realize they’re not alone. For others, it can be damaging.
The problem is that there are so many variables here that there’s no single approach you can point to as the reasonable one.
Take, for example, eating disorder content. Reading through some of it is absolutely awful and horrifying. But it’s also nearly impossible to moderate. When platforms have tried, teen girls have very quickly worked out code words and other language to get around any such moderation. Furthermore, researchers found that when such content appeared on major platforms (i.e., Instagram and TikTok), it often included comments and responses from people trying to guide those engaged in such practices toward recovery resources. When the content moved to darker parts of the web, that was much less likely.
That is to say, all of this is complicated.
But, Solomon barely touches on that. Indeed, he only kinda throws in a “well, who can really say” bit way, way down in the article, after multiple stories all painting the picture that social media causes kids to take their own lives.
Unless you read to the 71st paragraph (no kidding, I counted), you won’t find out that the science on this doesn’t really support the claims that social media is the driving force causing kids to be depressed. Here are the 71st and 72nd paragraphs of the piece, which few readers are likely to ever actually reach, after a whole bunch of stories of people dying by suicide and parents blaming social media:
Solomon gives a few paragraphs to the researchers saying, “hey, this is more complicated and the evidence doesn’t really support the narrative,” including a bit about how social media and smartphones can actually be super helpful to some kids:
He even notes that everyone blaming social media companies for these things can actually make it much harder to then use the technology to put in place interventions that actually have been shown to help:
But literally two paragraphs later, the article is back to blaming social media companies for the grief families feel for kids who died.
There are interesting, thoughtful, nuanced stories to tell about all of this that explain the real tradeoffs, the ways in which kids are different from one another, and how they can deal with different issues in different ways. There are stories to be told (like that bit quoted above) about how companies are being pushed away from doing useful interventions because of the moral panic narrative.
But Solomon does… none of that?
The policy recommendations
This part was equally frustrating. Solomon does discuss policy, but in the most superficial (and often misleading, if not outright incorrect) way. Of course, there’s a discussion about Section 230, but it’s… wrong?
So, that first paragraph claims that social media companies can’t “be held responsible for the harm they cause,” but again, that’s wrong. The problem is that it’s not clear in these cases what is actually “causing” the harm. The research, again, does not support the claim that it’s all from social media. And even if you argue that the content on social media is contributing, then again, is it the social media app itself, or the content? How do you disambiguate that in any useful manner?
For years before the internet, people would blame fashion and celebrity magazines for giving young girls an unhealthy image of what women should look like. But we didn’t have giant articles in the Condé Nast-owned New Yorker bemoaning the First Amendment for allowing Condé Nast to publish titles like Vogue, Glamour, Jane, Mademoiselle, and others.
Yet here, Solomon seems to think, wrongly, that the main issue is Section 230, rather than the lack of actual traceability. Indeed, while he mentions some lawsuits challenging Section 230, he leaves out how the courts have struggled with that lack of traceability from the platform itself to the alleged harm. And that’s kinda important?
It’s the sort of thoughtful nuance you would hope a publication like the New Yorker would engage in, but it doesn’t here.
Its description of Section 230 is also just super confused.
The bookstore/publisher distinction is a weird twist on the typical wrong claim of “platform/publisher,” but it’s no less wrong.
And here’s where it would help if the New Yorker’s fact-checkers did, well, any research. They could have read Jeff Kosseff’s book on Section 230, which starts off by explaining an early lawsuit that involved a bookstore. The question of whether a website is more like a “bookstore” or a “publisher” simply doesn’t matter: the whole point of Section 230 is that a website isn’t held liable for third-party content even when it’s treated as the publisher of that content.
That’s the part everyone misses, and the part Solomon’s piece only further confuses readers about.
As for the claim that Section 230 feels “distant from today’s digital reality,” even the authors of Section 230 have called bullshit on that. You’d think maybe the fact-checkers at the New Yorker might have asked them? Here are Ron Wyden and Chris Cox disputing the claim that the internet is somehow so different that 230 no longer applies.
But what would they know?
The failure to understand the policy issues goes deeper than misunderstanding Section 230.
The article effectively endorses the concept of a “duty of care” for social media services. This is the kind of solution lots of people who have never dealt with the nuances or tradeoffs think is clever, and which everyone who has any responsibility for an internet service knows is one of the dumbest ideas imaginable.
The second paragraph’s “gotcha” is frustrating because it’s so stupid. Of course the law doesn’t require a duty of care, nor could it do so effectively, because the underlying problems are about speech. This is the point that Solomon fails to grapple with.
As we discussed above, with almost every kind of “bad content,” the reality is way more complicated than most people believe. With “eating disorder” content, removing it made eating disorders worse. With “terrorist content,” the very first thing taken down was from a human rights group tracking war crimes. There are studies detailing similar difficulties in dealing with suicidal ideation, the very topic Solomon centers the article on. Efforts to remove such content haven’t always been effective, and sometimes target people expressing distress, taking away their voices in ways that could be dangerous.
All it really does is sweep content under the rug, pushing it into darker corners. That’s not dealing with the root causes of depression and mental health challenges. It’s saying “we don’t talk about that.”
Because a duty of care only serves to put potential liability on websites, they will take the lazy way out and remove all discussion of eating disorders, even content meant to help with recovery. They will remove all discussion of self-harm, even content guiding people toward helpful resources.
It’s sweeping the real problems under the rug. Why? To make elites feel satisfied that they’re “dealing with the problem of kids on social media.” But it won’t actually help those kids. It just takes away the resources that would help many of them.
If anyone ever recommends a “duty of care,” ask them to explain how that actually works. They can’t answer. Because it doesn’t work. It’s a magic potion pushed by those who don’t understand how things work, telling tech companies “magically fix societal issues around mental health that we’ve never done shit to actually deal with, or we’ll fine you.”
It’s not a solution. It’s a way to make the elite New Yorker reader feel better about sweeping real problems under the rug.
A serious analysis by people who understand this shit would grapple with those problems.
Solomon and the New Yorker did the opposite. They took the easy way out. The way that leads to more harm and less understanding.
But I’m sure we’ll be seeing this article cited, repeatedly, as evidence for why these ignorant policies must be put in place.