The Motion Picture Association Doesn’t Get To Decide Who The First Amendment Protects


from the that's-not-how-any-of-this-works dept

Twelve years ago, internet users spoke up with one voice to reject a law that would build censorship into the internet at a fundamental level. This week, the Motion Picture Association (MPA), a group that represents six giant movie and TV studios, announced that it hoped we’d all forgotten how dangerous this idea was. The MPA is wrong. We remember, and the internet remembers.
What the MPA wants is the power to block entire websites, everywhere in the U.S., using the same tools as repressive regimes like China and Russia. To it, instances of possible copyright infringement should be played like a trump card to shut off our access to entire websites, regardless of the other legal speech hosted there. The MPA is not simply calling for the ability to take down instances of infringement—a power it already has, without even having to ask a judge—but for the keys to the internet. Building new architectures of censorship would hurt everyone, and it wouldn’t help artists.
The bills known as SOPA/PIPA would have created a new, rapid path for copyright holders like the major studios to use court orders against sites they accuse of infringing copyright. Internet service providers (ISPs) receiving one of those orders would have to block all of their customers from accessing the identified websites. The orders would also apply to domain name registries and registrars, and potentially other companies and organizations that make up the internet’s basic infrastructure. To comply, all of those would have to build new infrastructure dedicated to site-blocking, inviting over-blocking and all kinds of abuse that would censor lawful and important speech.
In other words, the right to choose what websites you visit would be taken away from you and given to giant media companies and ISPs. And the very shape of the internet would have to be changed to allow it.
In 2012, it seemed like SOPA/PIPA, backed by major corporations used to getting what they want from Congress, was on the fast track to becoming law. But a grassroots movement of diverse internet communities came together to fight it. Digital rights groups like EFF, Public Knowledge, and many more joined with editor communities from sites like Reddit and Wikipedia to speak up. Newly formed grassroots groups like Demand Progress and Fight for the Future added their voices to those calling out the dangers of this new form of censorship. In the final days of the campaign, giant tech companies like Google and Facebook (now Meta) joined in opposition as well.
What resulted was one of the biggest protests ever seen against a piece of legislation. Congress was flooded with calls and emails from ordinary people concerned about this steamroller of censorship. Members of Congress raced one another to withdraw their support for the bills. The bills died, and so did site-blocking legislation in the US. It was, all told, a success story for the public interest.
Even the MPA, one of the biggest forces behind SOPA/PIPA, claimed to have moved on. But we never believed it, and the MPA proved us right time and time again. It backed site-blocking laws in other countries. Rightsholders continued to ask US courts for site-blocking orders, often winning them without a new law. Even the lobbying of Congress for a new law never really went away. It’s just that today, with MPA president Charles Rivkin openly calling on Congress “to enact judicial site-blocking legislation here in the United States,” the MPA is taking its mask off.
Things have changed since 2012. Tech platforms that were once seen as innovators have become behemoths, part of the establishment rather than underdogs. The Silicon Valley-based video streamer Netflix illustrated this when it joined MPA in 2019. And the entertainment companies have also tried to pivot into being tech companies. Somehow, they are adopting each other’s worst aspects.
But it’s important not to let those changes hide the fact that those hurt by this proposal are not Big Tech but regular internet users. Internet platforms big and small are still where ordinary users and creators find their voice, connect with audiences, and participate in politics and culture, mostly in legal—and legally protected—ways. Filmmakers who can’t get a distribution deal from a giant movie house still reach audiences on YouTube. Culture critics still reach audiences through zines and newsletters. The typical users of these platforms don’t have the giant megaphones of major studios, record labels, or publishers. Site-blocking legislation, whether called SOPA/PIPA, “no fault injunctions,” or by any other name, still threatens the free expression of all of these citizens and creators.
No matter what the MPA wants to claim, this does not help artists. Artists want their work seen, not locked away for a tax write-off. They wanted a fair deal, not nearly five months of strikes. They want studios to make more small and midsize films and to take a chance on new voices. They have been incredibly clear about what they want, and this is not it.
Even if Rivkin’s claim of an “unflinching commitment to the First Amendment” were credible from a group that seems to think it has a monopoly on free expression—and which just tried to consign the future of its own artists to the gig economy—a site-blocking law would not be used only by Hollywood studios. Anyone with a copyright and the means to hire a lawyer could wield the hammer of site-blocking. And here’s the thing: we already know that copyright claims are used as tools of censorship.
The notice-and-takedown system created by the Digital Millennium Copyright Act, for example, is abused time and again by people who claim to be enforcing their copyrights, and also by folks who simply want to make speech they don’t like disappear from the internet. Even without a site-blocking law, major record labels and US Immigration and Customs Enforcement shut down a popular hip hop music blog and kept it off the internet for over a year without ever showing that it infringed copyright. And unscrupulous characters use accusations of infringement to extort money from website owners, or even force them into carrying spam links.
This censorious abuse, whether intentional or accidental, is far more damaging when it targets the internet’s infrastructure. Blocking entire websites or groups of websites is imprecise, inevitably bringing down lawful speech along with whatever was targeted. For example, suits by Microsoft intended to shut down malicious botnets caused thousands of legitimate users to lose access to the domain names they depended on. There is, in short, no effective safeguard on a new censorship power that would be the internet’s version of police seizing printing presses.
Even if this didn’t endanger free expression on its own, once new tools exist, they can be used for more than copyright. Just as malfunctioning copyright filters were adapted into the malfunctioning filters used for “adult content” on Tumblr, so too could site-blocking tools be repurposed. The major companies of a single industry should not get to dictate the future of free speech online.
Why the MPA is announcing this now is anyone’s guess. They might think no one cares anymore. They’re wrong. Internet users rejected site blocking in 2012 and they reject it today.

from the it-sucks,-but-it's-reality dept

Not every bad mistake is evil. Not every poor decision is deliberate. Especially in these more automated times. Sometimes, machines just make mistakes, and it’s about time we came to terms with that simple fact.
Last week, we wrote about how, while Meta may be a horrible awful company that you should not trust, there was no evidence suggesting that its blocking of the local news site, the Kansas Reflector, soon after it published a mildly negative article about Meta, was in any way deliberate.
As we pointed out, false positives happen all the time with automated blocking tools, where classifiers mistakenly decide a site or a page (or an email) is problematic. And that’s just how this works. If you want fewer false positives, then you end up with more false negatives, which would mean more actually dangerous or problematic content (phishing sites, malware, etc.) gets through. At some point, you simply have to decide what types of errors are more important to stop and tweak the systems accordingly.
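To make that tradeoff concrete, here is a minimal, purely hypothetical sketch. Nothing here reflects Meta’s (or anyone’s) actual system; the items, scores, and threshold values are invented solely to show how moving a single threshold converts one kind of error into the other.

```python
# Purely illustrative: a toy classifier that has already scored each item
# from 0 (looks benign) to 1 (looks malicious). The threshold is the only
# knob; all names and numbers are invented for this example.

items = [
    # (risk score from the classifier, whether the item is actually malicious)
    (0.92, True),   # obvious phishing page
    (0.61, True),   # borderline malware link
    (0.58, False),  # legitimate news site that happens to look "spammy"
    (0.12, False),  # ordinary blog post
]

def errors(threshold):
    false_positives = sum(1 for score, bad in items if score >= threshold and not bad)
    false_negatives = sum(1 for score, bad in items if score < threshold and bad)
    return false_positives, false_negatives

# A strict threshold blocks the legitimate site (a false positive);
# a lenient threshold lets the borderline malware through (a false negative).
print(errors(0.5))  # (1, 0): aggressive blocking, a lawful site gets caught
print(errors(0.7))  # (0, 1): cautious blocking, malware slips through
```

Moving the threshold in either direction just shifts errors from one column to the other, which is the whole tradeoff.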
In general, it’s probably better to get more false positives than false negatives. That’s ridiculously annoying and problematic for the victims of such mistakes, and it absolutely sucks for the sites caught in the crossfire, but you’d rather have fewer actual scams and malware getting through. Hell, last year, Microsoft Bing and DuckDuckGo banned all Techdirt links for a good five months or so. There was nothing I could do about it. At least I knew that it was likely just yet another false positive, because such false positives happen all the time.
I also knew that there would likely never be a good explanation for what happened (Microsoft and DuckDuckGo refused to comment), because the companies running these systems don’t have full visibility into what happened either. Some people think this is a condemnation of the system, but I don’t think it is. Classifier systems take in a very large number of signals and then decide whether that combination of signals suggests a problem site or an acceptable one. And the thresholds and signals can (and do) change all the time.
Still, people who got mad at what I said last week kept insisting that (1) it must be deliberate, and (2) that Meta had to give a full and clear explanation of how this happened. I found both such propositions dubious. The first one for all the reasons above, and the second one because I know that it’s often just not possible to tell. Hell, on a much smaller scale, this is how our own spam filter works in the comments here at Techdirt. It takes in a bunch of signals and decides whether or not something is spam. And sometimes it makes mistakes. Sometimes it flags content that isn’t spam. Sometimes it lets through content that is. In most cases, I have no idea why. It’s just that when all the signals are weighted, that’s what’s spit out.
And so, it’s of little surprise that the Kansas Reflector is now more or less admitting what I suggested was likely last week: it was just Meta’s automated detector (though they make it sound scarier by calling it “AI”) that made a bad call, and even Meta probably couldn’t explain why it happened:
Basically, exactly what I suggested was what happened (and it got a bunch of people mad at me). The Kansas Reflector story about it is a bit misleading, because it keeps referring to the automated systems as “AI” (which is a stretch) and also suggests that all this shows Meta is somehow not sophisticated here, quoting the ACLU’s Daniel Kahn Gillmor:
But, I think that’s basically wrong. Meta may not be particularly good at many things, and the company may have very screwed up incentives, but fundamentally, its basic trust & safety operation is absolutely one of the company’s core competencies. It’s bad, because every company is bad at this, but Meta’s content moderation tools are much more sophisticated than most others.
Part of the issue is simply that the scale of content it reviews is so large that even with a very, very small error rate, many, many sites will get misclassified (either as a false positive or a false negative). You can argue that the answer to this is less scale, but that raises other questions, especially in a world where it appears that people all over the world want to be able to connect with other people all over the world.
But, at the very least, it’s nice that the Kansas Reflector has published this article explaining that it’s unlikely that, even if it wanted to, Meta could explain what happened here.
It’s not even that it’s “not within the technical capability” to do it, because that implies that if they just programmed it differently, it could tell you. Rather, there are so many different signals being weighed that there’s no real way to explain what triggered things. It could be a combination of the number of links, the time it was posted, how it was shared, and a possible vulnerability on the site, each weighted differently. But when combined, they all worked to trip the wire saying “this site might be problematic.”
Any one of those things by themselves might not matter and might not trip things, but somehow the combination might. And that’s not at all easy to explain, especially when the signals, and the weights, and the thresholds are likely in constant flux.
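To illustrate that (and only to illustrate it: the signal names, weights, and threshold below are invented, not anything Meta has disclosed), here is a small sketch of how several weak signals, none of which would trip a filter on its own, can combine to cross a blocking threshold:

```python
# Hypothetical example: every signal on its own stays under the threshold,
# but the weighted combination trips the "this site might be problematic" wire.
# Signal names, weights, and the threshold are all made up for illustration.

signals = {
    "unusual_link_count":   0.4,
    "posting_time_anomaly": 0.3,
    "sharing_pattern":      0.5,
    "possible_site_vuln":   0.6,
}

weights = {
    "unusual_link_count":   0.30,
    "posting_time_anomaly": 0.20,
    "sharing_pattern":      0.25,
    "possible_site_vuln":   0.35,
}

THRESHOLD = 0.45

# No single weighted signal comes close to the threshold on its own...
for name, value in signals.items():
    print(name, round(value * weights[name], 3))  # each well under 0.45

# ...but the combined score crosses it, and "which signal caused the block?"
# has no clean answer, especially if the weights shift from week to week.
combined = sum(signals[s] * weights[s] for s in signals)
print("combined:", round(combined, 3), "flagged:", combined >= THRESHOLD)
```

In a real system there would be vastly more signals, constantly retuned, which is exactly why “explain the block” is not a simple lookup.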
Yes, this sucks for the Kansas Reflector, though it seems like the site got a lot more attention because of all of this. But this is the nature of content moderation these days, and it’s unlikely to change. Every site has to use some form of automation, and that’s always going to lead to mistakes of some sort or another. It’s fine to call out these mistakes and even to make fun of Meta, but it helps to be realistic about the cause. That way, people won’t overreact and suggest that this fairly typical automated mistake was actually a deliberate attempt to suppress speech critical of the company.