Death Of A Forum: How The UK’s Online Safety Act Is Killing Communities

We’ve been warning for years that the UK’s Online Safety Act would be a disaster for the open internet. Its supporters accused us of exaggerating, or “shilling” for Big Tech. But as we’ve long argued, while tech giants like Facebook and Google might be able to shoulder the law’s immense regulatory burdens, smaller sites would crumble.
Well, it’s already happening.
On Monday, the London Fixed Gear and Single-Speed (LFGSS) online forum announced that it will shut down the day before the Online Safety Act goes into effect, noting that it is effectively impossible to comply with the law. The announcement came in response to UK regulator Ofcom telling online businesses that they need to start complying.
This includes registering a “senior person” with Ofcom who will be held accountable should Ofcom decide your site isn’t safe enough. It also means that moderation teams need to be fully staffed with quick response times if bad (loosely defined) content is found on the site. On top of that, sites need to take proactive measures to protect children.
While all of this may make sense for larger sites, it’s impossible for a one-person passion project serving bikers in London. For a small, community-driven forum, these requirements aren’t just burdensome; they’re existential.
LFGSS points out that the rules are designed for big companies, not small forums, even though it’s likely covered by the law.
But it’s not just LFGSS that’s shutting down. Microcosm, the open source forum platform underlying LFGSS, is closing too. It was apparently created by the same individual and hosted similar local community forums for others beyond the London biking community.
Apparently, Microcosm hosts approximately 300 small communities, all of which will either shut down or have to migrate within three months. The developer behind all of this seems understandably devastated.
This is why we’ve spent years warning people. When you regulate the internet as if it’s all just Facebook, all that will be left is Facebook.
Policymakers have repeatedly brushed off warnings about these consequences, insisting that concerns are overblown or merely fear-mongering from big tech companies looking to avoid regulation. They’re not. And we’re seeing the impact already.
The promise of the internet was supposed to be that it allowed anyone to set up whatever they wanted online, whether it’s a blog or a small forum. The UK has decided that the only forums that should remain online are those run by the largest companies in the world.
Some might still argue that this law is “making the internet safer,” but it sure seems to be destroying smaller online communities that many people relied on.
It may be too late for the UK, but one would hope that other countries (and states) realize this and step back from the ledge of passing similar legislation.
Companies: microcosm

from the good-deals-on-cool-stuff dept

The Cybersecurity Projects Bundle offers a hands-on program featuring five real-world cybersecurity projects, totaling 35 tasks. Participants start with an introductory video for each project, detailing objectives and requirements, followed by task completion that mirrors real cybersecurity challenges. Support from industry professionals ensures personalized feedback and guidance. Upon completing the program, participants gain practical experience, a solid understanding of cybersecurity practices, and a certificate recognizing their achievements. Ideal for both beginners and experienced professionals. It’s on sale for $30.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Filed Under: daily deal

from the bring-on-the-robocops dept

It often seems that when people have no good ideas or, indeed, any ideas at all, the next thing out of their mouths is “maybe some AI?” It’s not that AI can’t be useful. It’s that so many use cases are less than ideal.
Enter Axon, formerly Taser, which has moved from selling modified cattle prods to cops to selling them body cameras. The shift makes sense. Policymakers want to believe body cameras will create more accountability in police forces that have long resisted it. Cops don’t mind this push because it’s far more likely body cam footage will deliver criminal convictions than force them to behave better when wielding the force of law.
Axon wants to keep cops hooked on body cams. It hands them out like desktop printers: cheap entry costs paired with far more expensive, long-term contractual obligations. Buy a body cam from Axon on the cheap and expect to pay fees for access and storage for years to come. Now, there’s another bit of digital witchery on top of the printer cartridge-esque access fees: AI assistance for police reports.
Theoretically, it’s a win. Cops will spend less time bogged down in paperwork and more time patrolling the streets. In reality, it’s something else entirely: the abdication of responsibility to algorithms and a little more space separating cops from accountability.
AI can’t be relied on to recap news items coherently. It’s already shown it’s capable of “hallucinating” narratives due to the data it relies on or has been trained on. There’s no reason to believe that, at this point, AI is capable of performing tasks cops have been doing for years: writing up arrest/interaction reports.
The problem here is that a bogus AI-generated report causes far more real-world pain than a news agency enduring momentary public shaming or a lawyer being chastised by a judge. People can lose their rights and their actual freedom if AI concocts a narrative that supports the actions taken by officers. Even at its best, AI should not be allowed to determine whether or not people have access to their rights or their literal freedom.
The ACLU, following up on an earlier report criticizing adoption of AI-assisted police paperwork, has released its own take on the tech proposed and pushed by companies like Axon. Unsurprisingly, it’s not in favor of abdicating human rights to AI armchair quarterbacking.
There’s more in this article from The Register than just a summary of the ACLU’s comprehensive report [PDF]. It also features input from people who’ve actually done this sort of work at the ground level and who align themselves with the ACLU’s criticism rather than with the government agencies they worked for. This is from Brandon Vigliarolo, who wrote the op-ed for El Reg:
The answer is we can’t. We can’t do it now. And there’s a solid chance we can’t do it ever.
Both Axon and the law enforcement agencies choosing to utilize this tech will claim human backstops will prevent AI from hallucinating someone into jail or manufacturing justification for civil rights violations. But that’s obviously not true. And that’s been confirmed by Axon itself, whose future business relies on uptake of its latest tech offering.
This leading indicator suggests cop shops are looking for a cheap way to relieve the paperwork burden on officers, presumably to free them up to do the more important work of law enforcement. The lower cost/burden seems to be the only focus, though. Even when given something as simple as a single-click option to ensure better human backstopping of AI-generated police reports, agencies are opting out because, apparently, it might mean some reports will be rejected and/or the thin veil of plausible deniability might be pierced.
That’s part of the bargain. If a robot writes a report, officers can plausibly claim discrepancies between reports and recordings aren’t their fault. But that’s not even the only problem. As the ACLU report notes, there’s a chance AI-generated reports will decide something “seen” or “heard” in recordings supports officers’ actions, even if human review of the same footage would reveal clear rights violations.
The other problem is inadvertent confirmation bias. In an ideal world, any arrest or interaction that resulted in questionable force deployment — especially when cops kill someone — would require officers to give statements before they’ve had a chance to review recordings. This would help eliminate post facto narratives, in which contradictory statements get stripped out and officers settle on an exonerative story. Allowing AI to craft reports from uploaded footage undercuts this necessary time-and-distance factor, giving cops’ cameras the chance to tell the story before the cops have even come up with their own.
Now, it might seem that would be better. But I can guarantee you that if the AI report doesn’t agree with the officer’s report in disputed situations, the AI-generated report will be kicked to the curb. And it works the other way, too.
Even the early adopters of body cams found a way to make this so-called “accountability” tech work for them. When the cameras weren’t being turned on or off to suit narrative needs, cops were attacking compliant arrestees while yelling things like “stop resisting” or claiming the suspect was trying to grab one of their weapons. The subjective angle, coupled with extremely subjective statements in the recordings, was leveraged to provide justification for any level of force deployed. AI is incapable of separating cop pantomime from what’s captured on tape, which means all cops have to do to talk a bot into backing their play is say a bunch of stuff that sounds like probable cause while recording an arrest or search.
We already know most law enforcement agencies rarely proactively review body cam footage. And they’re even less likely to review reports and question officers if things look a bit off. Most agencies don’t have the personnel to handle proactive reviews, even if they have the desire to engage in better oversight. And an even larger percentage lack the desire to police their police officers, which means there will never be enough people in place to check the work (and paperwork) of law enforcers.
Adding AI won’t change this equation. It will just make direct oversight that much simpler to abandon. Cops won’t be held accountable because they can always blame discrepancies on the algorithm. And the tech will encourage more rights violations because it adds another layer of deniability officers and their supervisors can deploy when making statements in state courts, federal courts, or the least-effective court of all, the court of public opinion.
These are all reasons accountability-focused legislators, activists, and citizens should oppose a shift to AI-enhanced police reports. And they’re the same reasons that will encourage rapid adoption of this tech by any law enforcement agency that can afford it.
Companies: axon