Ex-Congressmen Pen The Most Ignorant, Incorrect, Confused, And Dangerous Attack On Section 230 I’ve Ever Seen

from the this-is-not-how-anything-works dept

In my time covering internet speech issues, I’ve seen some truly ridiculous arguments regarding Section 230. I even created my ever-handy “Hello! You’ve Been Referred Here Because You’re Wrong About Section 230 Of The Communications Decency Act” article four years ago, which still gets a ton of traffic to this day.
But I’m not sure I’ve come across a worse criticism of Section 230 than the one recently published in Democracy Journal by former House Majority Leader Dick Gephardt and former Rep. Zach Wamp, under the title “The Urgent Task of Reforming Section 230.”
There are lots of problems with the article, which we’ll get into. But first, I want to focus on the biggest, most brain-numbingly obvious problem: the authors admit outright that they don’t care what the reform actually is, only that something passes.
I have pointed out over and over again through the years that I am open to proposals on Section 230 reform, but the specifics are all that matter, because nearly every proposal to date to “reform Section 230” reflects a misunderstanding of Section 230 or (more importantly) of how it interacts with the First Amendment.
So saying “well, any reform is what matters” isn’t just flabbergasting. It’s a sign of people who have never bothered to seriously sit with the challenges, trade-offs, and nuances of changing Section 230. The reality (as we’ve explained many times) is that changing Section 230 will almost certainly massively benefit some and massively harm others. Saying “meh, doesn’t matter, as long as we do it” suggests a near-total disregard for the harm that any particular solution might do, and to whom.
Even worse, it disregards how nearly every solution proposed will actually cause real and significant harm to the people reformers insist they’re trying to protect. And that’s because they don’t care or don’t want to understand how these things actually work.
The rest of the piece only further cements the fact that Gephardt and Wamp have no experience with this issue and seem to simply think in extremely simplistic terms. They think that (1) “social media is kinda bad these days” (2) “Section 230 allows social media to be bad” and thus (3) “reforming Section 230 will make social media better.” All three of these statements are wrong.
Hilariously, the article starts off by name-checking Prof. Jeff Kosseff’s book about Section 230. It quickly becomes clear, however, that neither former congressman read the book, because it would have corrected many of the errors in the piece. They then point out that both of them voted for CDA 230, calling it their “most regrettable” vote.
While “the twenty-six words that created the internet” is the title of Jeff’s book, he didn’t coin that phrase, so it’s even more evidence that they didn’t read it. Also, is that really such a “regrettable” vote? I see both of them voted for the Patriot Act. Wouldn’t that, maybe, be a bit more regrettable? Gephardt voted for the Crime Bill of 1994. I mean, come on.
Section 230 has enabled the internet to thrive, helped build out a strong US innovation industry online, and paved the way for more speech online. How is that worth “regretting”?
These two former politicians then have to resort to rewriting history.
When 230 was passed, it was in response to lawsuits involving two internet giants of the day (CompuServe, owned by accounting giant H&R Block at the time, and Prodigy, owned by IBM and Sears at the time), not some tiny startups. And yes, those companies also had advertisements and “put a premium on growth.” So it’s not clear why the authors of this piece think otherwise.
The claim that “the most engaging content is often the most harmful” rests on an implicit (and obsolete) assumption: that the companies Gephardt and Wamp are upset about optimize for “engagement.” While that may have been true over a decade ago, when companies first began experimenting with algorithmic recommendations, most of them quickly realized that optimizing for engagement alone was actually bad for business.
It frustrates users over time, drives away advertisers, and does not make for a successful long-term strategy. That’s why every major platform has moved away from algorithms focused solely on engagement. Yet Gephardt and Wamp are living in the past, convinced that algorithms still optimize only for engagement. They don’t, because the market has made clear that’s a bad idea.
Where to begin on this nonsense? No, social media is not “addictive” like tobacco. Tobacco contains nicotine, a physical substance that enters your body and creates a chemical dependency. Some speech online… is not that.
And, no, the internet is not “fueling a national epidemic of loneliness, depression, and anxiety among teenagers.” This has been debunked repeatedly. The studies do not support it. As for the stat that “three out of five teenage girls say they have felt persistently sad or hopeless,” well… maybe there are some other reasons for that which are not social media? Maybe we’re living through a time of upheaval and nonsense where things like climate change are a major concern? And our leaders in Congress (like the authors of the piece I’m writing about) are doing fuck all to deal with it?
Maybe?
But, no, it couldn’t be that our elected officials dicked around and did nothing useful for decades and fucked the planet.
Must be social media!
Also, they’re flat-out lying about what whistleblower Frances Haugen found. She found that the company was studying those issues to figure out how to fix them. The whole point of the study everyone keeps pointing to was that a team at Facebook was trying to figure out whether the site was leading to bad outcomes among kids, in order to fix it.
Almost everything written by Gephardt and Wamp in this piece is active misinformation.
Blaming all of the above on Section 230 is literal disinformation. The claim that what’s described here is somehow 230’s fault is so disconnected from reality that it raises serious questions about the authors’ ability to do basic reasoning.
First, nearly all disinformation is protected by the First Amendment, not Section 230. Are Gephardt and Wamp asking to repeal the First Amendment? Second, threats towards election officials are definitely not a Section 230 issue.
But, sure, okay, let’s take them at their word that they think Section 230 is the problem and “reform” is needed. I know they say they don’t care what the reform is, just that it happens, but let’s walk through some hypotheticals.
Let’s start with an outright repeal. Will that make the US less polarized and stop disinformation? Of course not. It would make it worse! Because Section 230 gives platforms the freedom to moderate their sites as they see fit, utilizing their own editorial discretion without fear of liability.
Remove that, and companies become less willing to remove disinformation, because every removal increases the risk of a legal fight. Any lawyer would tell company leadership to minimize its efforts to cut down on disinformation.
Okay, some people say, “maybe just change the law so that you’re now liable for anything on your site.” Well, okay, but now you have a very big First Amendment problem and, again, you get worse results. Existing First Amendment case law, from the Supreme Court on down, says that you can’t be held liable for distributing content if you don’t know it violates the law.
So, again, our hypothetical lawyers in this hypothetical world will say, “okay, do everything to avoid knowledge.” That will mean less reviewing of content, less moderation.
Or, alternatively, you get massive over-moderation to limit the risk of liability. Perhaps that’s what Gephardt and Wamp really want: no more freedom for the filthy public to ever speak. Maybe all speech should happen only on heavily limited TV. Maybe we go back to the days before civil rights were a thing, when it was just white men on TV telling us how everyone should live?
This is the problem. Gephardt and Wamp are upset about some vague things they claim are caused by social media, and enabled only by Section 230. They believe that some vague, amorphous reform will fix it.
Except all of that is wrong. The problems they’re discussing are broader, societal-level problems that these two former politicians failed to do anything about when they were in power. Now they are blaming people exercising their own free speech for these problems, and demanding that we change some unrelated law to… what…? Make themselves feel better?
This is not how you solve problems.
Again, airplanes are not speech. Just like tobacco is not speech. These guys are terrible at analogies. And yes, every other industry that involves speech does work like this. The First Amendment protects nearly all the speech these guys are complaining about.
Section 230 has never been a “get out of jail free” card. This is a lazy trope spread by people who have never bothered to understand Section 230. Section 230 simply says that the liability for violative content on an internet service rests with whoever created the content. That’s it. There’s no “get out of jail free.” Whoever creates the violative content can still go to jail (if that content really violates the law, which in most cases it does not).
If their concerns are about profits, well, did Gephardt and Wamp spend any time reforming how capitalism works when they were lawmakers? Did they seek to change things so that the fiduciary duty of company boards wasn’t to deliver increasing returns every three months? Did they do anything to push for companies to be able to take a longer-term view? Or to support stakeholders beyond investors?
No? Then, fellas, I think we found the problem. It’s you and other lawmakers who didn’t fix those problems, not Section 230.
If you remove Section 230, companies will have even less incentive to remove that content.
This is now reaching levels of active disinformation. Yes, companies do, in fact, seek to remove that content; it violates all sorts of policies. But (1) actually dealing with that content is far harder than people with no experience assume, because it’s much more difficult to identify than they think, and (2) studies have shown that removing that content often makes problems like eating disorders worse rather than better. It’s a demand-side problem: users looking for that content will keep looking for it and will find it in darker and darker places online, whereas when it’s on mainstream social media, those sites can provide better interventions and guide people to helpful resources.
If Gephardt and Wamp had spoken to literally any actual experts on this, they could have been informed about the realities, nuances, and trade-offs here. But they didn’t. They appear to have surrounded themselves with moral panic nonsense peddlers.
They’re former Congressmen who assume they must know the right answer, which is “let’s run with a false moral panic!”
Of course, you had to know that this ridiculous essay wouldn’t be complete without a “fire in a crowded theater” line, and of course it has one.
Yup. These two former lawmakers really went there, using the trope that immediately identifies you as ignorant of the First Amendment. There are a few limited classes of speech that are unprotected, but the Supreme Court has signaled loud and clear that it is not expanding the list. The “fire in a crowded theater” line was dicta in Schenck v. United States, a case about locking up someone for protesting the draft (do Gephardt and Wamp think we should lock up people for protesting the draft?!?), and it hasn’t been considered good law in more than five decades.
Yes, it literally is. I mean, there are no two ways about it. All that content, with very, very few possible exceptions, is protected under the First Amendment.
You absolute chuckleheads. The only reason sites can do “freedom of speech, but not freedom of reach” is because Section 230 allows them to moderate without fear of liability. If you remove that, you get less moderation.
First of all, that ruling is extremely unlikely to stand, because even many of Section 230’s vocal critics recognize that the reasoning there made no sense. But second, the court said that algorithmic recommendations are expressive. The end result is that while such a recommendation may not be immune under 230, it remains protected under the First Amendment, because the First Amendment protects expression.
This is why anyone who is going to criticize Section 230 absolutely has to understand how it intersects with the First Amendment. And anyone claiming that “you can’t shout fire in a crowded theater” is good law is so ignorant of the very basic concepts that it’s difficult to take them seriously.
I’m sorry, but are they claiming that “vitriol” is not protected under the First Amendment? Dick and Zach, buddies, pals, please have a seat. I have some unfortunate news for you that may make you sad.
But, don’t worry. Don’t blame me for it. It must be Section 230 making me make you sad when I tell you: vitriol is protected by the First Amendment.
The changes you suggest are not going to help advertisers come back to ExTwitter. Again, they will make things worse: Elon is not going to want to deal with liability, so he will do even less moderation, because the changes to Section 230 would increase liability for the moderation choices he makes.
How can you not understand this?
Which is protected by the First Amendment, and which won’t change if Section 230 is changed.
Which also has nothing to do with Section 230, and won’t change no matter what you do to Section 230.
Also, um, have you tried… parenting?
This may really be the worst piece on Section 230 I have ever read. And I’ve gone through both Ted Cruz and Josh Hawley’s Section 230 proposals.
This entire piece misunderstands the problems, misunderstands the law, misunderstands the Constitution, lies about the causes, blames the wrong things, offers no actual reform policy, and is completely ignorant of how the changes the authors seem to want would do more damage to the very things they claim need fixing.
It’s a stunning display of ignorant solutionism by ignorant fools. It’s the type of thing that could really only be pulled off by overconfident ex-Congresspeople with no actual understanding of the issues at play.

from the it's-your-computer dept

One of the reasons that today’s copyright is such a bad fit for the modern digital world is that its roots lie deep in 18th-century law and analogue objects like books. This fact has created a kind of legislative drag that means copyright is always decades behind the latest technological developments. A case in point is the phenomenon of “cheating” in video games. Despite the negative connotations of the name, “cheating” has a remarkably rich and interesting culture. It is about extending the capabilities of a computer game, often through add-on software. That, of course, raises the hackles of companies that sell computer game software; for them, complete control over what a player does is paramount. An important legal dispute in this area, discussed on the Lexology blog, involves Sony Computer Entertainment Europe and Datel Design and Development.
Since it was filed, Sony’s legal action has been bouncing around the German legal system. Sony won initially, but that decision was later overturned. The case then passed up to the German Federal Court of Justice. Recognizing that the dispute raised important questions about copyright protection, the federal court requested an interpretation from the EU’s top legal body, the Court of Justice of the European Union (CJEU). As is usual, a preliminary opinion has been offered by one of the CJEU’s Advocates General, in this case Maciej Szpunar. Such opinions are not binding, but often indicate what the court’s thinking might be. The Lexology blog reports Szpunar’s key observation: in essence, that the transient variable data a game generates in memory while it is played is produced by the user, and is not part of the copyright-protected program itself.
If the CJEU agrees with this line of thinking, it would lay down a new and extremely important aspect of copyright in the digital context. It would create a distinction between the software code, which is protected by its author’s copyright, and the temporary data produced by the user when running that code, which is not. As the Lexology post points out, that could have immediate ramifications for fields outside gaming. For example, it might confirm that ad-blocking plug-ins, against which one publisher has waged a fierce legal battle, as we reported two years ago, are perfectly legal.
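To make that code/data distinction concrete, here is a minimal, purely hypothetical sketch in Python (Datel’s actual product patched values in a console’s memory, not a Python object). The “game” below stands in for the copyrighted program; the cheat never copies or rewrites its code, it only changes a value the running program holds in memory.

```python
# Hypothetical illustration of the code/data distinction at issue in
# Sony v. Datel. The Game class stands in for the copyrighted program:
# its source code is fixed by its author, while the values its
# variables take on during play are transient data generated by the user.

class Game:
    def __init__(self):
        self.lives = 3  # runtime state: data, not program code

    def lose_life(self):
        self.lives -= 1


def infinite_lives_cheat(game: Game) -> None:
    """A Datel-style 'cheat': it never reads, copies, or modifies the
    program's code; it only overwrites a value held in memory at runtime."""
    game.lives = 99


game = Game()
game.lose_life()            # normal play: lives drops to 2
infinite_lives_cheat(game)
print(game.lives)           # 99, and the program's code is untouched
```

On Szpunar’s reasoning, only the program’s code would count as the protected “expression”; the shifting contents of a variable like lives would not, which is why altering them would not infringe.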
Implicitly, what that opinion is saying is that copyright, as it stands, is an obstacle to innovation and user customization in software. Let’s hope the CJEU agrees with its Advocate General’s opinion, and sets people’s creativity free in this area.
Follow me @glynmoody on Mastodon and on Bluesky. Originally posted to Walled Culture.

from the otoh,-no-one-stands-to-benefit-from-this dept

This is pretty harrowing all the way through. No one’s getting away with anything, so there’s that small comfort. But what motivated a well-liked sheriff of a small county in Kentucky to cross every possible line and kill a judge in his own chambers?
The facts, as they were first presented by numerous sources, did nothing but raise questions reporters were either incapable of or unwilling to answer.
Much of what was published came in the early stages of the investigation. What was known at that point was that Sheriff Stines was the most likely suspect in the shooting of Judge Kevin Mullins: he was arrested without incident and was the only other person in the room with the judge at the time of the shooting.
There’s no other record of the shooting itself, due to facts that would become highly relevant in subsequent reporting. But at this point, a shooting was confirmed, the judge was dead, and pretty much every county entity (including nearby schools) went into lockdown the moment shots were reported.
The weird thing is that both the victim and the (alleged) perpetrator were liked and respected. Judge Mullins handled criminal cases but also made active efforts to steer drug offenders to rehab and other social programs, rather than simply lock them up. Stines appeared to be a law enforcement official who actually believed in the power of community policing, spending a lot of his time interacting with locals and participating in lots of charitable and social functions.
But after a day or two, a possible motive emerged. It’s barely hinted at in this CNN report, and even that hint is clouded by language that undercuts the severity of the crime.
While the reporting does use the term “rape” (which it definitely was), it sells short the victims of Deputy Ben Fields’ criminal acts. Jennifer Hill didn’t simply “die.” She died of a drug overdose shortly after the civil lawsuit against Fields was filed. And her death allowed Fields to walk away from roughly half of the criminal charges filed against him. (The civil lawsuit, however, is being carried forward by her family.)
There’s also this detail from earlier reporting on the deputy’s criminal acts, which makes it clear the department Sheriff Stines ran was being subjected to greater judicial scrutiny. Ben Fields was still a deputy, even if his duties were limited to providing court security — a position he leveraged to deactivate court-ordered ankle monitors in exchange for coerced sexual acts. One of Fields’ preferred venues was none other than the (murdered) judge’s chambers because of its lack of CCTV cameras.
Fields would collect his payment (so to speak) but still wouldn’t hold up his end of this disgusting “bargain” when it was his own freedom on the line.
Shortly after the lawsuit was filed, Sheriff Stines (the accused murderer of the judge) did the right thing and fired Deputy Fields for “conduct unbecoming.” Maybe the Sheriff thought that would be the end of it. It obviously wasn’t. The civil lawsuit continues, with Stines and his office named as defendants.
While plenty of reporting suggests residents of the community are confused and concerned by this turn of events, very few reporters seem willing (at least at this point) to connect dots or dig deeper into the most immediate intimation of a motive: the deposition Stines gave only days before murdering the judge.
Speculation without facts in evidence is just as careless, of course. But the most obvious line of questioning centers on the event that immediately preceded the shooting: the deposition. Did it make apparent that Stines had a good chance of being held at least indirectly responsible for his deputy’s acts? Did the judge have his own recording devices in his chambers? Or was it something else entirely? Did the sheriff expect some sort of favor from the man he’d worked closely with over most of the last decade?
There aren’t a lot of solid answers at this point. But if there’s anything worth keeping an eye on, it’s the deposition and the subsequent shooting. The deputy was fired two years ago and sentenced this January. The lawsuit survives, along with its allegations that the sheriff ignored his deputy’s unlawful acts for years. Maybe that was enough to push the sheriff into this act. But trading a possible lawsuit dismissal for what looks to be a guaranteed murder charge doesn’t make any sense. Whatever went wrong with the Sheriff’s office would have to be far deeper and darker than what’s been uncovered so far for this killing to make any sense.
This isn’t the end of this story. This is only the beginning. The real surprise here is that everyone is too shocked to offer up the normal exonerative bullshit about what happened here. But, even with these unanswered questions, no one should feel comfortable shrugging off the fact that an elected sheriff walked into a room he knew didn’t contain any cameras and (allegedly) killed a judge in an act that elevated him to judge, as well as jury and executioner.

from the not-the-way-to-handle-this dept

Efforts in Congress to ban the use of AI in political ads have largely stalled. So, naturally, the White House is attempting an end-around through the Federal Communications Commission.
While not an outright ban, the FCC’s proposal to regulate the use of artificial intelligence in political ads is a prime example of Washington bureaucrats not understanding technology. The proposal would stifle both speech and innovation. And beyond being a bad idea, it tackles an issue outside the purview of the FCC.
The commission should abandon this misguided effort.
What the FCC is proposing would create an arbitrary distinction between political and issue ads using AI content and those using non-AI content.
That’s a double-header of bad policy.
First, it would discourage the use of a valuable technology for no good policy end. If a candidate or campaign wants to mislead the public via advertising, history clearly demonstrates they don’t need AI to do it.
Perhaps even worse, the proposed rule would itself mislead the public by creating the impression that AI-generated content is somehow inherently suspect.
AI is a tool. It can be used for good or ill, like any other tool.
Most modern voters have at least a basic understanding of what artificial intelligence is and what to expect from it. They also know instinctively to take anything in a political ad with a grain of salt.
But the FCC would define AI so broadly that it includes tools such as Photoshop that have been in use for decades.
This regulatory overreach would thus have the opposite effect from its alleged goal: instead of alerting viewers to “deep fakes,” it would label run-of-the-mill political ads as “AI-generated,” deceptively casting doubt in viewers’ minds as to their veracity.
The FCC justifies this overreach by citing the spread of misinformation.
But when it comes to political and issue ads, the Federal Election Commission already has authority to regulate such content. The FEC has chosen not to take up new rulemaking on this topic for this upcoming election.
Beyond that, tools already exist to combat disinformation, and they don’t require ignoring the law or suppressing speech.
Rapid response and voter common sense are better defenses than the government deciding who has a right to say what.
Instead of building confidence in the electoral process, the FCC’s proposed AI disclosure requirement is more likely to create an environment of suspicion and skepticism that undermines the integrity of our elections and fosters distrust in political messaging at a time when the process is already rife with distrust.
Allowing the FCC to stick its nose where it doesn’t belong could also set a dangerous precedent for federal agencies to justify further intrusions into the realm of free expression.
With this proposal, the FCC demonstrates it has no understanding of political ads, common political ad tools, artificial intelligence software, or First Amendment case law regarding disclaimers and political speech.
Maybe that’s why Congress has repeatedly declined to pass legislation authorizing the FCC to require disclosures in political ads.
The FCC doesn’t have the authority or expertise to deal with this issue. And even if it did, it would be a bad idea.
It’s a classic case of a solution in search of a problem. There’s no evidence AI is having any effect on political advertising, or that voters can’t discern for themselves what is and isn’t legitimate.
We all want fair and secure elections. But the FCC’s proposals would stifle the development of beneficial AI technologies, hinder U.S. leadership in this emerging field, and curtail individuals’ free speech rights, without making our elections one iota more fair or more secure.
We can figure out a way to preserve the integrity of our elections without discouraging the growth of new technologies. Instead of starving AI of oxygen, the federal government should foster a regulatory environment that breathes life into the technology’s potential.
And the FCC should leave the regulation of elections to the Federal Election Commission.
James Czerniawski is a senior policy analyst in technology and innovation at Americans for Prosperity.
Filed Under: ai, deepfakes, fcc, fec, political ads