Has Wired Given Up On Fact Checking? Publishes Facts-Optional Screed Against Section 230 That Gets Almost Everything Wrong


from the that's-not-how-any-of-this-works dept

What is going on at Wired Magazine? A few years ago, the magazine went on a bit of a binge with some articles that completely misrepresented Section 230. While I felt those articles were extraordinarily misleading, at least they seemed to mostly live in the world of facts.
Its latest piece goes so far beyond all facts that it’s on another plane of existence, where facts don’t exist and vibes rule the world. Wired has published an article that apparently was neither fact-checked nor edited (or, if it was, whoever did those jobs failed at them). The piece is by Jaron Lanier and Allison Stanger. Jaron Lanier being wrong about things online is nothing new. It’s kinda become his brand. Stanger, apparently, has a new book coming out entitled “Who Elected Big Tech?” and, based on the overview of a course she taught under the same name, it does not inspire confidence:
I mean, we’ve discussed all three of those things in great detail on Techdirt, and all of them require a lot more nuance than is presented here. The sites did not “silence” a President. They suspended his account for violating their rules (after years of bending over backwards to pretend he did not violate their rules). And it did not silence him in any way. He was still able to speak, and whenever he did, his words were immediately conveyed on those same platforms.
And the whole “knowingly causing harm to teenagers” thing is, yet again, ridiculously misleading. Meta did internal research to find out whether it was causing harm precisely so that it could stop causing harm. It looked at 24 categories; in 23 of them, the research suggested no significant harm, and only one raised concerns, which Facebook’s internal research highlighted so that the company could try to address and minimize that harm. And the government has been trying to penalize the company ever since, but has failed, because the “penalties” are unconstitutional.
Either way, let’s return to this article. The title is “The One Internet Hack That Could Save Everything.” With the provocative subhed: “It’s so simple: Axe 26 words from the Communications Decency Act. Welcome to a world without Section 230.”
Now, we’ve spent the better part of the last 25 years debunking nonsense about Section 230, but this may be the worst piece we’ve ever seen on this topic. It does not understand how Section 230 works. It does not understand how the First Amendment works. It’s not clear it understands how the internet works.
But also, it’s just not well written. I was completely confused about the point the article was trying to make, and only on a third reading did I finally understand the extraordinarily wrong claim at its heart: that if you got rid of Section 230, websites would have to moderate based on the First Amendment, and yet would somehow magically get rid of harassment and other bad content while being forced to leave up the good content. It’s magic fairytale thinking that has nothing to do with reality. There’s also some nonsense about privacy and copyright that has nothing to do with Section 230 at all, but the authors seem wholly unaware of that fairly basic fact.
I’m going to skip over the first section of the article, because it’s just confused babble, and move on to some really weird claims about Section 230. Specifically, the claim that it somehow created a business model:
First of all… what? Literally none of that makes sense, nor is any citation or explanation given for what is entirely a “vibes” based argument. Section 230 has nothing to do with the advertising market directly. Advertising existed prior to Section 230 and has been a way to subsidize content going back centuries. It’s unclear how the authors think Section 230 is somehow responsible for internet advertising as a business model, and the article does nothing to clarify why that would be the case. Because it’s just wrong. There is no way to support it.
Second, the claim that “algorithms optimize for engagement” is also simply false. Some algorithms definitely do optimize for engagement. Many do not. Neither the ones that do, nor the ones that don’t, have much (if anything) to do with Section 230. They kinda have to do with capitalism and the demands of investors for returns. That’s not a Section 230 issue at all.
Furthermore, as tons of research keeps showing, if you only optimize for engagement, it just leads to anger and nonsense to the point that it drives both advertisers and users away over time. And that’s why sites like Facebook and YouTube have both spent much of the past decade toning down those algorithms to be less about “engagement,” because they realized it was long-term counterproductive. The idea that algorithms are inherently about engagement is outdated thinking that is at least a decade obsolete.
The idea that algorithms were brought about because of Section 230 is easily debunked by the simple fact that the first company that really focused on algorithmically recommending content to people was not hosting user-generated content, but rather was Netflix, trying to better recommend movies to people (remember that?).
The reason we have algorithms is not Section 230, but because without algorithms there’s so much junk on the internet it’s hard to find what you want. Recommendation algorithms exist because they’re useful and because of the sheer amount of content online.
Taking away Section 230 wouldn’t change that one bit. Because recommendations are inherently First Amendment-protected speech. It’s an opinion.
The authors seem wholly confused about what Section 230 actually does. The following paragraph, for example, makes no sense at all. It’s in the “not even wrong” category: it defies explanation, so thoroughly is nearly every part of it wrong.
There is no “economic imbalance” among those who use 230. Section 230 protects any interactive computer service or user (everyone always forgets the users) for sharing third-party content. It has protected Techdirt in court, and under no standard anywhere would anyone ever argue that Techdirt has an “economic imbalance.” It has protected people for forwarding emails. It has protected people for retweeting content. It doesn’t just protect big companies.
The discussion about copyright and personal data is not just wrong, but simply, obviously, wholly unrelated to Section 230. Section 230 explicitly exempts intellectual property law. There is no issue whatsoever with copyright-covered content somehow being impacted by Section 230. That’s just not how it works.
The statement that “Section 230 often effectively places the onus on the violated party through the requirement of takedown notices” is even dumber because there are no takedown notices under 230. I’m guessing the authors of the piece probably mean DMCA 512, which is about copyright and does have takedown notices, but that has fuck all to do with Section 230. This is the sort of thing that a fact checker would normally catch. If Wired had one.
Similarly, data protection/privacy laws are unrelated to Section 230. The only time Section 230 comes up in relation to privacy laws is when state legislatures (hello California!) try to pass a law about speech that they pretend is a privacy law.
Literally nothing in this paragraph makes any sense at all. You have to deliberately work hard to misunderstand Section 230 this badly.
The authors of this piece basically misrepresent Section 230 at every opportunity. They don’t understand what it does and how it works. They blame it for things it has nothing to do with (advertising business models? algorithms?) and then associate it with things very clearly beyond its purview (copyright, takedown notices, and data protection).
Honestly, if Wired had any integrity at all, they’d pull this piece and admit it wasn’t even half-baked yet.
Then, finally, we get to… I guess you’d call it the point of the article? Apparently the authors don’t like content moderation and claim that content moderation is “beholden to the quest for attention and engagement” and I have no idea what that even means. If the concern before was that algorithms were driven by that quest for attention, why would moderation also be driven by that? Isn’t content moderation an attempt to push back on that trend by making sure that content follows rules? Not according to the authors of this article who seem to think the efforts of trust & safety teams to make sure users follow the rules is… somehow driven by attention and engagement?
It’s almost as if the authors have no experience in trust & safety and have never spoken to anyone in trust & safety, yet pretend to understand it. The claim that the rules are “arbitrary” or that enforcing rules has something to do with either “doxing practices” or “cancel culture” suggests people who have never, not once, been in a position where they had to moderate any online conversation.
From there, the piece goes even further off the rails, arguing (for no clear reason) that YouTube is in a post-230 world and that ExTwitter is being destroyed by Section 230. Why? They don’t explain.
The alternative business models that YouTube has created, yet again, have nothing to do with Section 230. It’s such a weird, nonsensical point, I’m honestly beginning to wonder if the piece was written about something else entirely, and at the last minute they tried to shove 230 in there. Whether or not you build alternative income streams to advertising is wholly unrelated to Section 230. Again, Techdirt, whose comments are protected by 230, does not make money from advertising (we make money from user support, thank you very much, please support us). Having user support doesn’t put us in a post-230 world, it shows why 230 is so important.
As for ExTwitter, the destruction of its value can easily be placed on one person, not Section 230.
And the line “relying on a 230-style business model” makes no sense. There is no 230-style business model.
All of this seems based on the blatantly incorrect belief that Section 230 encourages an advertising-based business model. But that’s never even been close to correct. I mean, the first big Section 230 case was Zeran v. AOL, in which AOL (which at the time made money more from subscription fees than from advertising) was found to be protected. And, I mean, Section 230 was written in response to lawsuits against CompuServe and Prodigy, two other subscription-based services.
The idea that 230 creates an advertising or data-based business model is not just ahistorical, it’s provably false.
The article then “returns” to the question of online speech and shows how incredibly confused its authors are:
Wait, what? Literally three paragraphs earlier you were complaining that content moderation is evil “censorship” driven by “engagement.” Now, you’re saying without Section 230, magically, websites would be “compelled to prevent” harassment.
This gets the law backwards. Under Section 230, websites have the freedom to respond quickly to harassment. That’s what content moderation is. Take away Section 230 and (as we know from pre-230 cases) you hinder sites’ ability to do exactly that.
Underpinning all of this — which the authors seem wholly ignorant of — is the way the First Amendment works. The First Amendment in a pre-230 world made it clear that a distributor could only be held liable for speech if (1) they knew about it and (2) the speech violated the law. That means without Section 230, most platforms’ best move would be to avoid knowledge. That means less moderation, not more. It means more harassment, not less.
Also, nearly all “harassment” outside the most extreme cases is protected by the First Amendment as well.
It’s almost as if the authors have no idea what they’re talking about.
Lol, what? “Viral harassment is tamped down but ideas are not?” What the fuck do these people think every trust & safety team in the world is doing right now? They’re trying to tamp down harassment, not ideas. And the reason they can do so cleanly, without having to involve lawyers at every move, is because Section 230 protects them in making those decisions.
And then… it gets dumber.
No one — and I do mean no one — wants a website where companies can only moderate based on the First Amendment. Such a site would almost immediately turn into harassment, abuse, and garbage central. Most speech is protected under the First Amendment. Very, very, very little speech is not protected. The very “harassment” that the authors complain about literally one paragraph above is almost entirely protected under the First Amendment.
Also, if you could only moderate based on the First Amendment, all online forums would be the same. The wonder of the internet right now is that every online forum gets to set its own rules and moderate accordingly. And that’s because Section 230 allows them to do so without fear of litigation over their choices.
Under this plan, you couldn’t (for example) have a knitting community with a “no politics” rule. You’d have to allow all legal speech. That’s… beyond stupid.
And, as if to underline that the authors, the fact checkers, and the editors have no idea how any of this works, they throw this in:
The first sentence is partially right. There is jurisprudence establishing exceptions to the First Amendment, though they’re very narrow and very clearly defined. Indeed, the inclusion of “fighting words” in the list of exceptions above shows that the authors are unaware that, over the past 50 years, the fighting words doctrine has been effectively deprecated as an exception.
It’s also just blatantly, factually incorrect that 230 has somehow “impeded” the development of First Amendment exceptions. It’s as if the authors are wholly unaware of the myriad attempts, in the decades since Section 230 went into effect, by people trying to convince courts to establish new exceptions. The most notable was US v. Stevens, in which the Supreme Court made it clear that it wasn’t really open to adding new exceptions to the First Amendment.
That was the case about “animal crush” videos showing cruelty to animals. The court ruled that making such videos illegal violated the First Amendment. And if the Supreme Court is saying that “animal crush” videos are protected by the First Amendment, it seems highly unlikely to carve out a rando exception for “people were mean to me online!” (I mean, Clarence Thomas might, but he’s not enough).
Again, this is the opposite of reality. The “sewer of least-common-denominator content” is what you get without 230, when you encourage websites to look the other way to avoid liability for any content. How could the authors not have done the most basic of research to understand this?
And if you take away 230 you get the brawl, because you limit the ability of websites to moderate.
Honestly, this entire article seems based on the wholly backwards belief that getting rid of 230 leads to better content moderation, even as the authors complain about content moderation. They don’t seem to understand any of it.
Section 230 does not treat websites as common carriers. It’s literally the opposite of that. It’s saying (correctly) that they’re not common carriers, and that they need the right to set rules and enforce them in order to enable “conversations without a mad race for total attention.”
The article then goes off on some (again, nonsensical) tangent about AI, and then once again shows that the authors know nothing about how the First Amendment works:
The law on this stuff is pretty clear.
The Supreme Court made it clear that broadcast TV and radio could be regulated only because they use public spectrum. The government cannot (and does not) regulate cable TV or the internet that way, because they don’t.
There’s even more but I need to end this piece before I bang my head on the desk one more time.
The authors do not understand Section 230, the First Amendment, or how content moderation works. Yet they position themselves as experts. They get the law backwards, upside down, and twisted inside out.
Normally, an editor or a fact-checker would maybe catch those things, but apparently Wired will let just anyone spew nonsense on its pages these days. And, yes, I get that these are complex topics. But they’re also topics on which there are dozens of actual experts who could take one look at the claims in this piece and point out just how wrong almost every confident claim is. I get that Lanier is “internet famous,” but that doesn’t make him worth publishing without someone who actually knows what they’re talking about reviewing his work to call out the myriad factual errors.