from the speed-running-for-fun-and-for-profit dept
It’s kind of a rite of passage for any new social media network. They show up, insist that they’re the “platform for free speech” without quite understanding what that actually means, and then they quickly discover a whole bunch of fairly fundamental issues, institute a bunch of rapid (often sloppy) changes… and in the end, they basically all end up in the same general vicinity, with just a few small differences on the margin. Look, I went through it myself. In the early days I insisted that sites, including my own, shouldn’t do any moderation at all. But I learned. As did Parler, Gettr, Truth Social and lots of others.
Anyway, Elon’s in a bit of a different position, because rather than starting something new, he’s taken over a large platform. I recognize that he, his buddies, and a whole lot of other people think that Twitter is especially bad at this, and that he’s got some special ideas for “bringing free speech back,” but the reality is that Twitter was, by far, the most successful platform at taking a “we support free speech” stance for content, and learned over time the many nuances and tradeoffs involved.
And because I do hope that Musk succeeds and Twitter remains viable, I wanted to see if we might help him (and anyone else) speed run the basics of the content moderation learning curve that most newbies run into. The order of the levels and the seriousness of each can change over time, and how it all fits together may be somewhat different, but, in the end, basically every major social media platform ends up in this same place eventually (the place Twitter was already at when Musk insisted he needed to tear things down and start again).
Level One: “We’re the free speech platform! Anything goes!”
Cool. Cool. The bird is free! Everyone rejoice.
“Excuse me, boss, we’re getting reports that there are child sexual abuse material (CSAM) images and videos on the site.”
Oh shit. I guess we should take that down.
Level Two: “We’re the free speech platform! But no CSAM!”
Alright, comedy is now legal on the site. Everyone rejoice. Everyone love me.
“Um, boss. We have a huge stack of emails from Hollywood, saying something about DMCA takedowns?”
Oh right. Copyright infringement is bad. Get another intern and have them take that all down.
Level Three: “We’re the free speech platform! But no CSAM and no infringement!”
Power to the people. Freedom is great!
“Right, boss, apparently because you keep talking about freedom, a large group of people are taking it to mean they have ‘freedom’ to harass people with slurs and all sorts of abuse. People are leaving the site because of it, and advertisers are pulling ads.”
That seems bad. Quick, have someone write up some rules against hate speech.
Level Four: “We’re the free speech platform without CSAM, infringement or hate speech!”
Bringing freedom back is hard work, but this is all going great. Do the people love me yet?
“Hey, so, the FBI is here? Something about 18 USC 2258A and how we were supposed to report all of that CSAM to some operation called NCMEC?”
Ah, right. Grab an intern and make sure they pass along those images. We obey all the laws!
Level Five: “We’re the free speech platform without CSAM, infringement, or hate speech, and we follow all laws!”
These laws are good. We obey the laws. Social media is a snap.
“Hate to bother you, boss, but our users are mad again. It seems that people are posting memes using images from Hollywood movies, and then the studios sent DMCA notices, and as you ordered, the intern is taking those down. So people are getting mad at you for censoring the memes.”
That seems complicated. Can we have the intern send some of these to our outside counsel to review for fair use before pulling them down?
Level Six: “We’re the free speech platform, without CSAM or hate speech, and who will take down infringing content, but not fair use content, and we follow all laws!”
The memes must flow! Hooray fair use! Love me!
“Very good boss. Love the memes, but, um, there’s a NY Times reporter on the phone, and apparently, we’re not catching all the CSAM, and they’re going to run a story about how we’ve become the new hub for pedophiles.”
That can’t be good… There must be some sort of solution out there? Have someone call up Microsoft and get a license to their PhotoDNA. Surely that will solve this?
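(For the technically curious: Microsoft’s actual PhotoDNA algorithm is proprietary and far more robust than anything sketched here, but the general shape of hash-list matching is easy to illustrate. Here’s a minimal sketch using the open-source imagehash library; the hash list and distance threshold are hypothetical stand-ins for what Microsoft and NCMEC actually provide, under strict legal agreements.)

```python
# A minimal sketch of hash-list matching for known abusive images.
# This is NOT PhotoDNA (which is proprietary); it just shows the
# general idea: hash every upload, compare against a list of known
# hashes, and flag anything within a small Hamming distance so that
# re-encodes, resizes, and minor alterations still match.
import imagehash            # pip install imagehash pillow
from PIL import Image

# Hypothetical hash list; the real one comes from NCMEC/Microsoft.
KNOWN_BAD_HASHES = {imagehash.hex_to_hash("d1d1d1d1d1d1d1d1")}
MAX_DISTANCE = 6            # tolerance for small alterations

def is_known_match(path: str) -> bool:
    """Return True if an uploaded image matches a known hash."""
    upload_hash = imagehash.phash(Image.open(path))
    return any(upload_hash - known <= MAX_DISTANCE
               for known in KNOWN_BAD_HASHES)
```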
Level Seven: “We’re the free speech platform, doing our best to stop CSAM, hate speech, and infringement, and we follow all laws!”
Okay, now things are really coming together.
“Pardon me, boss, people are now complaining that they’re getting inundated with spam and it’s driving users away.”
Spam is bad! Everyone’s against spam! I already said we follow all laws, surely spam is illegal! Why aren’t we blocking it?
“Well, our lawyers say that most spam is actually legal.”
Okay, well ask one of our totally awesome engineers to code up a spam filter. He has one week or he’s fired.
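(And for what it’s worth, the one-week version of that assignment is not entirely a joke. Below is a toy sketch of a naive Bayes text classifier, the classic starting point for spam filtering; the training examples are made up, and a real platform’s defenses also lean heavily on behavioral signals like posting rate and account age, not just text.)

```python
# A toy naive Bayes spam filter: the classic one-week starting
# point. Training data here is made up; real systems train on
# millions of user reports and known spam campaigns.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

posts = [
    "buy cheap followers now",           # spam
    "free crypto giveaway click here",   # spam
    "great thread, thanks for sharing",  # not spam
    "see you at the meetup tonight",     # not spam
]
labels = [1, 1, 0, 0]

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(posts), labels)

def looks_like_spam(text: str) -> bool:
    return bool(model.predict(vectorizer.transform([text]))[0])

print(looks_like_spam("click here for free followers"))  # likely True on this toy data
```

Which sets up the very next problem, of course: tune a filter to catch more spam, and it will also catch more legitimate posts.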
Level Eight: “We’re the free speech platform, doing our best to stop CSAM, hate speech, infringement and spam, and we follow all laws!”
Now that the spam is getting blocked, the people will really love me!
“Good evening sir, sorry to bother you so late, but apparently the spam and CSAM filters are actually catching a lot of legitimate content, and it’s making people mad.”
Fire the engineers! Bring me new engineers who don’t suck. And, I guess, maybe hire someone to manage at least some of these things. We can call them… “director of trust.” That sounds good!
Level Nine: “We’re the trustworthy free speech platform, doing our best to stop CSAM, hate speech, infringement, and spam, and we follow all the laws!”
Trust. That’s a good word! Everyone trusts our platform now that we have a director of trust!
“Pardon me, boss, but it appears we have an urgent email from government officials in Malaysia saying that someone is posting a story to our site that violates their laws, though it’s really just calling out government corruption. You say we follow all laws, so do we follow this demand from the government of Malaysia?”
Yikes. People need to speak truth to power! Let’s leave that content up!
“Okay, sir, Malaysia has now blocked all access to our site.”
Level Ten: “We’re the trustworthy free speech platform, doing our best to stop CSAM, hate speech, infringement, and spam, and we follow laws of democratic countries.”
Malaysia can’t be that important. I’m standing up for free speech! Do people love me yet?
“Hi boss, if you have a second, we’re getting reports that people in Myanmar are using our service to encourage genocide. But we don’t have anyone who speaks the language to fully understand what’s going on.”
Can we hire moderators who understand every language?
“There are a lot of languages.”
Well, it’s either hire more people or block entire countries… I guess?
Level Eleven: “We’re the trustworthy mostly free speech platform, doing our best to stop CSAM, hate speech, infringement, spam, and genocide, and we’re working to hire more moderators to deal with foreign languages.”
It seems like global politics is complicated, but maybe I’ll present my suggestions on world peace, so that people will love me!
“Forgive the interruption, boss, but a lot of our most active users are getting angry at you specifically, because the hate speech, spam, and copyright filters are blocking their dank memes, and sometimes they’re getting removed from the platform for violating the rules too many times. They think you’re sitting here and removing their accounts personally.”
Why does everyone blame me?!? Okay, let’s set up a “trust council” that will handle all content moderation questions and appeals.
Level Twelve: “We’re the trustworthy social media site that supports open dialogue, while doing our best to stop CSAM, hate speech, infringement, spam, and genocide, and we’re working to hire more trust and safety professionals, along with setting up an outside council.”
Hmm. Our slogan is getting a bit long.
“Sorry to break in again, boss, but Germany, one of our largest EU markets, has a new law that requires us to take down any ‘hate speech’ within a short time, and if we miss anything, then they can fine us way more than we can afford.”
Hire more people in Germany to review reports as quickly as possible, but default to taking down reported content. We can’t afford those fines, even if we end up over-blocking.
Level Thirteen: “We’re the trustworthy social media network, that’s doing our best to balance laws and norms, and is really trying to be welcoming for speech, so please give us the benefit of the doubt.”
Why is everyone so mad at me all the time?
“Yeah, boss, I know you’re sick of hearing from me, but Hollywood is suing us. They’re saying that our fair use determinations are bullshit and we’re engaging in infringement.”
Hire more lawyers! Figure out how much this lawsuit will cost, or see if we can just pay some licensing fee! Have an engineer write up a filter that can determine fair use!
“I don’t think a computer can determine fair use yet, sir.”
Hire BETTER engineers! If a car can drive itself, surely a computer can understand fair use!
Level Fourteen: “We’re a social network that promotes trust, and seeks to comply with reasonable laws while finding a balance for speech, and please stop yelling at us.”
Why are people still so mad?
“Excuse me, boss, I know this is exhausting, but one of our most popular users has chained themselves to our front door, because we took down their account after they harassed someone trying to minimize the impact of climate change.”
Why are people so bad?
Level Fifteen: “We’re a social network that wants you to believe in trust, and we have a legal team to deal with laws, and a trust and safety team that’s, you know, working on things.”
This is exhausting.
“Hey, boss, sorry to interrupt, but this is kind of urgent. It seems that one of our users is livestreaming themselves as they shoot up a school, screaming about ‘freedom!’”
Oh no! Take down his account immediately.
Level Sixteen: “We’re a social network that is really trying to do our best, but humanity is messy.”
Why can’t people just be good? I gave them freedom and look what they’ve done with it!
“Boss, boss, another urgent one. It seems that one of our users is attempting suicide while live streaming on the platform and we can’t figure out where they are!”
Don’t we know our users? Figure out better ways to have this info and hire more people to work on trust and safety!
Level Seventeen: “We’re a social network that is trying our best. Please, be kind.”
I just wanted people to love me and be free to meme.
“Excuse me, boss, it appears that the EU has passed a new law that means we’ll be required to take down content they report, even if it’s legal elsewhere. They’re praising you for your promise from last year to obey all laws. Also, it requires that we have employees in every one of those countries who will be legally responsible if we fail.”
Why did I say that? Oh well, let’s hire more people to staff up, and do our best to obey those laws.
Level Eighteen: “We’re a social network doing our best to survive in a globally connected world.”
Mars is looking pretty sweet about now.
“Pardon me again, boss, but now that you’ve agreed to abide by the EU’s laws, I should note that India has passed laws that sound similar to the EU ones you agreed to abide by, and now they’re threatening to jail your local employees because you won’t take down content mocking the Prime Minister. They’re saying that since you abide by the EU’s laws, they expect you to abide by theirs as well.”
India is a massive market. We can’t survive without India. Can we, um, take down maybe some of the worst posts for violating our rules, and try to leave up the rest?
Level Nineteen: “We’re a social network doing our best in this crazy world.”
I just wanted everyone to love me?
“Boss, apologies, but our most famous and popular user, the President, is encouraging his vocal fans to burn down our offices because we put a fact check on his post urging people to strangle anyone with differing political views.”
Do you think someone will buy us?
Level Twenty: “Look, we’re just a freaking website. Can’t you people behave?”
Congratulations, you have completed the game… Just kidding! It never ends. It only gets worse, and you will make mistakes, and people will get mad and personally blame you and insist that you are deliberately trying to “censor” their brilliant ideas, and advertisers will get mad, and politicians will pressure you into doing their bidding, and the media will criticize every mistake. You own a social network. Isn’t it fun?
from the that-article-is-bullshit dept
Do not believe everything you read. Even if it comes from more “respectable” publications. The Intercept had a big story this week that is making the rounds, suggesting that “leaked” documents prove the DHS has been coordinating with tech companies to suppress information. The story was immediately picked up by the usual suspects, claiming it reveals the “smoking gun” of how the Biden administration was abusing government power to censor them on social media.
The only problem? It shows nothing of the sort.
The article is garbage. It not only misreads things, it is confused about what the documents the reporters obtained actually say, and it presents widely available, widely known things as if they were secret and hidden when they were not.
The entire article is a complete nothingburger, and is fueling a new round of lies and nonsense from people who find it useful to misrepresent reality. If the Intercept had any credibility at all it would retract the article and examine whatever processes failed in leading to the article getting published.
Let’s dig in. Back in 2018, then President Donald Trump signed the Cybersecurity and Infrastructure Security Agency Act into law, creating the Cybersecurity and Infrastructure Security Agency as a separate agency in the Department of Homeland Security. While there are always reasons to be concerned about government interference in various aspects of life, CISA was pretty uncontroversial (perhaps with the exception of when Trump freaked out and fired the first CISA director, Chris Krebs, for pointing out that the election was safe and there was no evidence of manipulation or foul play).
While CISA has a variety of things under its purview, one thing that it is focused on is general information sharing between the government and private entities. This has actually been really useful for everyone, even though the tech companies have been (quite reasonably!) cautious about how closely they’ll work with the government (because they’ve been burned before). Indeed, as you may recall, one of the big revelations from the Snowden documents was about the PRISM program, which turned out to be oversold by the media reporting on it, but was still problematic in many ways. Since then, the tech companies have been even more careful about working with government, knowing that too much government involvement will eventually come out and get everyone burned.
With that in mind, CISA’s role has been pretty widely respected by almost everyone I’ve spoken to, both in government and at various companies. It provides information regarding actual threats, which has been useful to companies, and they seem to appreciate it. Given their historical distrust of government intrusion and their understanding of the limits of government authority here, the companies have been pretty attuned to any attempt at coercion, and I’ve heard of no such complaints regarding CISA at all.
That’s why the story seemed like such a big deal when I first read the headline and some of the summaries. But then I read the article… and the supporting documents… and there’s no there there. There’s nothing. There’s… the information sharing that everyone already knew was happening and that has been widely discussed in the past.
Let’s go through the supposed “bombshells”:
This sounds all scary and stuff, but most of those “meeting minutes” are from the already very, very public Misinformation & Disinformation Subcommittee that was part of an effort to counter foreign influence campaigns. As is clear on their website, their focus is very much on information sharing, with an eye towards protecting privacy and civil liberties, not suppressing speech.
As Professor Kate Starbird notes, the Intercept article makes out like this was some nefarious secret meeting when it was actually a publicly announced meeting with public minutes, and part of the discussion was even on where the guardrails should be for the government so that it doesn’t go too far. Indeed, even though the public output of this meeting is available directly on the CISA website for anyone to download, The Intercept published a blurry draft version, making it seem more secret and nefarious. (Updated to note that not all of the meeting minutes published by The Intercept were public: they include a couple of extra subcommittee minutes that are not on the CISA website, but which have nothing particularly of substance, and certainly nothing that supports the claims in the article. And all of the claims here stand: the committee is public, their meeting minutes are public, including summaries of the subcommittee efforts, even if not all the full subcommittee meeting minutes are public).
And if you read the actual document it’s… all kinda reasonable? It does talk about responding to misinformation and disinformation threats, mainly around elections — not by suppressing speech, but by sharing information to help local election officials respond to it and provide correct information. From the actual, non-scary, very public report:
It includes four specific recommendations for how to deal with mis- and disinformation and none of them involve suppressing it. They all seem to be about responding to and countering such information by things like “broad public awareness campaigns,” “enhancing information literacy,” “providing informational resources,” “providing education frameworks,” “boosting authoritative sources,” and “rapid communication.” See a pattern? All of this is about providing information, which makes sense. Nothing about suppressing. The report even notes that there are conflicting studies on the usefulness of “prebunking/debunking” misinformation, and suggests that CISA pay attention to where that research goes before going too hard on any program.
There’s literally nothing nefarious at all.
The next paragraph in the Intercept piece then provides an email that kinda debunks the entire framing of the article:
Masterson had worked in DHS on these kinds of programs and then moved over to Microsoft. But here he’s literally pointing out that the companies remain hesitant to work too closely with government, which is exactly what we’ve been saying all along, and completely undermines the narrative people have taken out of this article that it proves that the government was too chummy with the companies.
(Also updated to note that the original Intercept story falsely claimed that Masterson was working for DHS at the time of the text, which makes it sound more nefarious. They later quietly changed it, and only added a correction days later when people called them out on it).
Also, this text message is completely out of context, but hold on for that, because it comes up again later in the article.
Next up, the article takes a single quote out of context from an FBI official.
First off, this is generally no different than the nonsense the FBI says publicly, and there’s nothing in the linked document that suggests the companies were in agreement that anyone should be “held accountable.” But even if we look at what Dehmlow actually said, in context, while she did talk about accountability, she mostly focused on education.
Read in context, it sure looks like what Dehmlow meant by the media being “held accountable” is accountable to an educated public. I mean, there’s some notable irony in all of this, where Dehmlow is talking about better educating people on critical thinking, and that’s been turned into pure nonsense and misinformation.
From there, the misleading article jumps randomly to Meta’s interface for the government to submit reports, again implying that this is somehow connected to everything above (it’s not, it’s something totally different):
Again, this is wholly unrelated to the paragraphs above it. The article is just randomly trying to tie this to it. Every company has systems for anyone to report information for the companies to review. But the big companies, for fairly obvious and sensible reasons, also set up specialized versions of that reporting system for government officials so that reports don’t get lost in the flow. Nothing in that system is about demanding or suppressing information, and it’s basically misinformation for the Intercept to imply otherwise. It’s just the standard reporting tool. The presentation that the Intercept links to is just about how government officials can log into the system because it has multiple layers of security to make sure that you’re actually a government official.
It remains difficult to see (1) how this is connected to the CISA discussion, and (2) how this is even remotely new, interesting or relevant. Indeed, you can find out more about this system on Facebook’s “information for law enforcement authorities” page, and the nefarious-sounding “Content Request System (CRS)” highlighted in the document the Intercept shows appears to just be the system for law enforcement agents to request information regarding an investigation. That is, a system for submitting a subpoena, court order, search warrant, or national security letter.
Update: Now there is also a part of the system that enables governments to report potential misinformation and disinformation, though again that appears to be the same kind of reporting that anyone can do, because such information breaks Facebook’s rules. The actual document this comes from again does not seem nefarious at all. It literally is just saying the government can alert Facebook to content that violates its existing rules.
So, it allows law enforcement to report the content, but it displays the relevant rules alongside the report. This is the same kind of reporting that any regular user can do; it’s just that law enforcement is viewed as a “trusted” flagger, so their flags get more attention. It does not mean that the government is censoring content, and Facebook’s ongoing transparency reports show that they often reject these requests.
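(To make the “trusted flagger” point concrete, here’s a minimal and entirely hypothetical sketch, not Facebook’s actual system, of what such a pipeline looks like: the same rules and the same review apply to every report; a trusted flagger’s report just surfaces earlier in the queue, and the reporter gets the resolution back either way.)

```python
# Hypothetical sketch of a "trusted flagger" intake queue. Same
# rules and same review as ordinary user reports; trusted reports
# just get looked at sooner, and the reviewer is free to decide
# "no action" (as transparency reports show often happens).
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Report:
    priority: int                        # 0 = trusted flagger, 1 = regular user
    post_id: str = field(compare=False)
    rule_cited: str = field(compare=False)
    reporter: str = field(compare=False)

def review_against_rules(report: Report) -> str:
    # Stand-in for human/automated review against the cited rule;
    # rejecting the report is a perfectly normal outcome.
    return "no action"

def notify(reporter: str, decision: str) -> None:
    print(f"to {reporter}: report resolved as {decision!r}")

queue: list[Report] = []
heapq.heappush(queue, Report(1, "post/123", "spam", "ordinary-user"))
heapq.heappush(queue, Report(0, "post/456", "impersonation", "state-official"))

while queue:
    report = heapq.heappop(queue)        # trusted report surfaces first
    notify(report.reporter, review_against_rules(report))
```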
After tossing in that misleading and unrelated point, the article takes another big shift, jumping to a separate DHS “Homeland Security Review” in which DHS warns about the problem of “inaccurate information” which, you know, is a legitimate thing for DHS to be concerned about, because it can impact security. It’s certainly quite reasonable to be worried about DHS overreach. We’ve screamed about DHS overreach for years.
But I keep reading through the article and the documents, and there’s nothing here.
The report notes that there’s a lot of misinformation, and there is, including on the withdrawal of US troops from Afghanistan. That’s true, and it seems like a reasonable concern for DHS… but the Intercept then throws in a random quote about how Republicans (who have been one source of misinformation about the withdrawal) are planning to investigate if they retake the House.
But how is that relevant to the rest of the article and what does it have to do with the government supposedly suppressing information or working with the companies? The answer is absolutely nothing at all, but I guess it’s the sort of bullshit you throw in to make things sound scary when your “secret” (not actually secret) documents don’t actually reveal anything.
There’s also a random non sequitur about DHS in 2004 ramping up the national threat level for terrorism. What’s that got to do with anything? ¯\_(ツ)_/¯
The article keeps pinballing around to random anecdotes like that, which are totally disconnected and have nothing to do with one another. For example:
I keep rereading that, and the paragraph before and after it, trying to figure out if they were working on a different article and accidentally slipped it into this one. It has nothing whatsoever to do with the rest of the article. And Ron DeSantis is not in “the U.S. government.” While he may want to be president, right now he’s governor of Florida, which is a state, not the federal government. It’s just… weird?
Then, finally, after these random tangents, with zero effort to thread them into any kind of coherent narrative, the article veers back to DHS and social media by saying it’s not actually clear if DHS is doing anything.
Again, this is extremely weak sauce. People “report” content that violates social media platform rules all the time. You and I can do it. The very fact that the article admits the companies only “took action” on 35% of reports (and, again, only a subset of those actions were removals) shows that this is not about the government demanding action and the companies complying.
In fact, if you actually read the Stanford report (which it’s unclear if these reporters did), the flagged items they’re talking about are ones that the Election Integrity Partnership flagged, not the government. And, even then, the 35% number is incredibly misleading. Here’s the paragraph from the report:
So the most active in removals was TikTok, which people already think is problematic, but the big American companies were even less involved. Second, only 13% of the reports resulted in removing the content, and the EIP report actually breaks down what kinds of content were removed vs. labeled, and it’s a bit eye-opening (and again destroys the Intercept’s narrative):
If you look, the only cases where the majority of content reported was removed rather than just “labeled” (i.e., providing more information) were phishing attempts and fake official accounts. Those seem like the sorts of things where it makes sense for the platforms to take down that content, and I’m curious if the reporters at the Intercept think we’d be better off if the platforms ignored phishing attempts.
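(To put numbers on why “took action on 35%” is not the same as “removed 35%,” here’s the arithmetic with round hypothetical figures, since the raw counts aren’t reproduced here:)

```python
# Round hypothetical numbers, purely to illustrate the arithmetic;
# these are not the EIP's actual counts.
reports = 1000
actioned = int(reports * 0.35)   # labeled, downranked, OR removed
removed = int(reports * 0.13)    # actually taken down
print(f"{actioned} actioned, {removed} removed, "
      f"{actioned - removed} labeled/other, "
      f"{reports - actioned} with no action at all")
# -> 350 actioned, 130 removed, 220 labeled/other, 650 with no action at all
```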
The article then pinballs back to talking about DHS and CISA, how it was set up, and concerns about elections. Again, none of that is weird or secret or problematic. Finally, it gets to another bit that, when read in the article, sounds questionable and certainly concerning:
Except if you look at the actual documents, again, they’re taking things incredibly out of context and turning nothing into something that sounds scary. The first link — supposedly the one that “outlines the process for such takedown events” — does no such thing. It’s literally CISA passing information on to Twitter from the Colorado government, highlighting accounts that they were worried were impersonating Colorado state official Twitter accounts.
The email flat out says that CISA “is not the originator of this information. CISA is forwarding this information, unedited, from its originating source.” And the “information” is literally accounts that Colorado officials are worried are pretending to be Colorado state official government accounts.
Now, it does look like at least some of those accounts may be parody accounts (at least one claims to be in its bio). But there’s no evidence that Twitter actually took them down. And nowhere in that document is there an outline of a process for a takedown.
The second document also does not seem to show what the Intercept claims. It shows some emails, where CISA was trying to set up a reporting portal to make all of this easier (state officials seeing something questionable and passing it on to the companies via CISA). What the email actually shows is that whoever is responding to CISA from Twitter has a whole bunch of questions about the portal before they’re willing to sign on to it. And those concerns include things like “how long will reported information be retained?” and “what is the criteria used to determine who has access to the portal?”
These are the questions you ask when you are making sure that this kind of thing is not government coercion, but is a limited purpose tool for a specific situation. The response from a CISA official does say that their hope is the social media companies will (as the Intercept notes) “process reports and provide timely responses, to include the removal of reported misinformation from the platform where possible.” But in context, again, that makes sense. This portal is for election officials to report problematic accounts, and part of the point of the portal is that if the platforms agree that the content or accounts break their rules they will report back to the election officials.
And, again, this is not all that different from how things work for everyday users. If I report a spam account on Twitter, Twitter later sends me a notification about the resolution of my report. This sounds like the same thing, but perhaps a slightly more rapid response so that election officials know what’s happening.
Again, I’m having difficulty finding anything nefarious here at all, and certainly no evidence of coercion or the companies agreeing to every government request. In fact, it’s quite the opposite.
Then the article pinballs again, back around to the (again, very public) MDM team. And, again, it tries to spin what is clearly reasonable information sharing into something more nefarious:
And, again, as the documents (but not the article!) demonstrate, the companies are often resistant to these government requests.
Then suddenly we come back around to the Easterly / Masterson text messages. The texts are informal, which is not a surprise. They work in similar circles, and both have been at CISA (though not at the same time). The Intercept presents this text exchange in a nefarious manner, even as Masterson is making it clear that the companies are resistant. But the Intercept reporters leave out exactly what Masterson is saying they’re resistant to. Here’s what the Intercept says:
Here’s the full exchange:
If you can’t read that, Easterly texts:
And Masterson replies:
This shows that the platforms are treading very carefully in working with government, even around this request which seems pretty innocuous. CISA is trying to help coordinate so that when local officials have issues they have a path to reach out to the platforms, rather than just reaching out willy-nilly.
We’re now deep, deep in this article, and despite all these hints of nefariousness, and people insisting that it shows how the government is collaborating with social media, all the underlying documents suggest the exact opposite.
Then the article pinballs back to the MDM meeting (whose recommendations are and have been publicly available on the CISA website), and notes that Twitter’s former head of legal, Vijaya Gadde, took part in one of the meetings. And, um, yeah? Again, the entire point of the MDM board is to figure out how to understand the information ecosystem and, as we noted up top, to do what they can to provide additional information, education and context.
There is literally nothing about suppression.
But the Intercept, apparently desperate to put in some shred that suggests this proves the government is looking to suppress information, slips in this paragraph:
Note the careful use of quotes. The problematic words and phrases like “closely monitor” and “take steps to halt” are not in the report at all. You can go read the damn thing. It does not say that it should “closely monitor” social media platforms of all sizes. It says that the misinformation/disinformation problem involves the “entire information ecosystem.” It’s saying that to understand the flow of this, you have to recognize that it flows all over the place. And that’s accurate. It says nothing about monitoring it, closely or otherwise.
As for “taking steps to halt the spread,” the report does not even remotely say that either. If you look for the word “spread,” it appears in the report seven times. Not once does it discuss anything about trying to halt the spread. It talks about teaching people how not to accidentally spread misinformation, about how the spread of misinformation can create a risk to critical functions like public health and financial services, how foreign adversaries abuse it, and how election officials lack the tools to identify it.
Honestly, the only point where “spread” appears in a proactive sense is where it says that they should measure “the spread” of CISA’s own information and messages.
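(This is the kind of claim anyone can check in a couple of minutes. A minimal sketch, assuming you’ve extracted the public report’s text to a local file; the filename is hypothetical:)

```python
# Count and display every use of "spread" in the public report.
# "mdm_report.txt" is a hypothetical filename for wherever you
# saved the extracted PDF text.
import re

text = open("mdm_report.txt", encoding="utf-8").read()
print(len(re.findall(r"\bspread\b", text, re.IGNORECASE)))  # seven, per the count above

# Show each occurrence in context, to judge how the word is used.
for match in re.finditer(r"\bspread\b", text, re.IGNORECASE):
    start, end = max(match.start() - 60, 0), match.end() + 60
    print("…", " ".join(text[start:end].split()), "…")
```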
The Intercept article is journalistic malpractice.
It then pinballs yet again, jumping to the whole DHS Disinformation Governance Board, which we criticized, mainly because of the near total lack of clarity around its rollout, and how the naming of it (idiotic) and the secrecy seemed primed to fuel conspiracy theories, as it did. But that’s unrelated to the CISA stuff. The conspiracy theories around the DGB (which was announced and disbanded within weeks) only help to fuel more nonsense in this article.
The article continues to pinball around, basically pulling random examples of questionable government behavior, but never tying it to anything related to the actual subject. I mean, yes, the FBI does bad stuff in spying on people. We know that. But that’s got fuck all to do with CISA, and yet the article spends paragraphs on it.
And then, I can’t even believe we need to go here, but it brings up the whole stupid nonsense about Twitter and the Hunter Biden laptop story. As we’ve explained at great length, Twitter blocked links to one article (not others) by the NY Post because they feared that the article included documents that violated its hacked materials policy, a policy that had been in place since 2019 and had been used before (equally questionably, but it gets no attention) on things like leaked documents of police chatter. We had called out that policy at the time, noting how it could potentially limit reporting, and right after there was the outcry about the NY Post story, Twitter changed the policy.
Yet this story remains the bogeyman for nonsense grifters who claim it’s proof that Twitter acted to swing the election. Leaving aside that (1) there’s nothing in that article that would swing the election, since Hunter Biden wasn’t running for president, and (2) the story got a ton of coverage elsewhere, and Twitter’s dumb policy enforcement actually ended up giving it more attention, this story is one about the trickiness in crafting reasonable trust & safety policies, not of any sort of nefariousness.
Yet the Intercept takes up the false narrative and somehow makes it even dumber:
The Zuckerberg/Rogan podcast thing has also been taken out of context by the same people. As Zuckerberg notes, the FBI gave a general warning to be on the lookout for false material, which was a perfectly reasonable thing for them to do. And, in response, Facebook did not actually block links to the article. It just limited how widely the algorithm would share it until the article had gone through a fact check process. This is a reasonable way to handle information when there are questions about its authenticity.
But neither Twitter nor Facebook suggests that the government told them to suppress the story, because it didn’t. It told them generally to be on the lookout, and both companies did what they do when faced with similar info.
From there, the Intercept turns to a nonsense frivolous lawsuit filed by Missouri’s Attorney General and takes a laughable claim at face value:
Now here, you can note that Dehmlow was the person mentioned way above who talked about platforms and responsibility, but as we noted, in context, she was talking about better education of the public. The section quoted in Missouri’s litigation is laughable. It’s telling a narrative for fan service to Trumpist voters. We already know that the FBI told Facebook to be on the lookout for fake information. The legal complaint just makes up the idea that Dehmlow tells them what to censor. That’s bullshit without evidence, and there’s nothing to back it up beyond a highly fanciful and politicized narrative.
But from there, the Intercept says this:
Except… it wasn’t. Literally nothing anywhere in this story shows law enforcement “pressuring technology firms” about the Hunter Biden laptop story.
The article then goes on at length about the silly politicized lawsuit, quoting two highly partisan commentators with axes to grind, before quoting former ACLU president Nadine Strossen claiming:
Because of the horrible way the article is written, it’s not even clear which “messages” she’s talking about, but I’ve gone through every underlying document in the entire article and none of them involve anything remotely close to censorship. Given the selective quoting and misrepresentation in the rest of the article, it makes me wonder what was actually shown to Strossen.
As far as I can tell, the emails they’re discussing (again, this is not at all clear from the article) are the ones discussed earlier in which Colorado officials (not DHS) were concerned that some new accounts were attempting to impersonate Colorado officials. They sent a note to CISA, which auto-forwarded it to the companies. Yes, some of the accounts may have been parodies, but there’s no evidence that Twitter actually took action on the accounts, and the fact is that the accounts did make some effort to at least partially appear as Colorado official state accounts. All the government officials did was flag it.
I think Strossen is a great defender of free speech, but I honestly can’t see how anyone thinks that’s “censorship.”
Anyway, that’s where the article ends. There’s no smoking gun. There’s nothing. There are a lot of random disconnected anecdotes, misreading and misrepresenting documents, and taking publicly available documents and pretending they’re secret.
If you look at the actual details it shows… some fairly basic and innocuous information sharing, with nothing even remotely looking like pressure on the companies to take down information. We also see pushback from the companies, which are being extremely careful not to get too close to the government and to keep it at arm’s length.
But, of course, a bunch of nonsense peddlers are turning the story into a big deal. And other media is picking up on it and turning it into nonsense.
None of those headlines are accurate if you actually look at the details. But all are getting tremendous play all over the place.
And, of course, the reporters on the story rushed to appear on Tucker Carlson:
Except that’s not at all what the “docs show.” At no point do they talk about “monitoring disinformation.” And there is nothing about them “working together” on this beyond basic information sharing.
In fact, just after this story came out, ProPublica released a much more interesting (and better reported) article that basically talks about how the Biden administration gave up on fighting disinformation because Republicans completely weaponized it by misrepresenting perfectly reasonable activity as nefarious.
Incredibly, that ProPublica piece quotes Colorado officials (you know, like the ones who emailed CISA their concern, which got forwarded to Twitter, about fake accounts) noting how they really could use some help from the government and they’re not getting it:
I had tremendous respect for The Intercept, which I think has done some great work in the past, but this article is so bad, so misleading, and just so full of shit that it should be retracted. A credible news organization would not put out this kind of pure bullshit.