The REPORT Act: Enhancing Online Child Safety Without the Usual Congressional Nonsense

from the a-good-bill?-for-the-children? dept

For years and years, Congress has been pushing a parade of horrible “protect the children online” bills that somehow manage to get progressively worse each time. I’m not going through the entire list, because it’s virtually endless.
One of the most frustrating things about those bills, and the pomp and circumstance around them, is that they ignore the simpler, more direct things Congress could do that would actually help.
Just last week, we wrote about the Stanford Internet Observatory’s big report on the challenges facing the CyberTipline, run by the National Center for Missing & Exploited Children (NCMEC). We wrote two separate posts about the report (and also discussed it on the latest episode of our new podcast, Ctrl-Alt-Speech) because there was so much useful information in there. As we noted, there are real challenges in making the reporting of child sexual abuse material (CSAM) work better, and it’s not because people don’t want to help. It’s actually because of a set of complex issues that are not easily solvable (read the report or my articles for more details).
But there were still a few clear steps that could be taken by Congress to help.
This week, the REPORT Act passed Congress, and it includes… a bunch of those straightforward, common-sense things that should help improve the CyberTipline process. The key change allows the CyberTipline to modernize, including by using cloud storage. Until now, no cloud storage vendor could work with NCMEC, out of fear that it would face criminal liability for “hosting CSAM.”
This bill fixes that, and should enable NCMEC to make use of some better tools and systems, including better classifiers, which are becoming increasingly important.
There are also provisions letting victims, and parents of victims, report CSAM involving the child directly to NCMEC, which can be immensely helpful in trying to stop the spread of some content (and in focusing some law enforcement responses).
There are also some technical fixes requiring platforms to retain certain records for a longer period. This was another important point highlighted in the Stanford report. Given the flow of information and prioritization, sometimes by the time law enforcement realized it should get a warrant for more info from a platform, the platform had already deleted the records, as required under existing law. The new law extends that retention period (from 90 days to a year) to give law enforcement a bit more time.
The one provision we’ll have to watch is the extension of social media reporting requirements to cover violations of 18 USC 1591, the law against sex trafficking. Senator Marsha Blackburn, a co-author of the bill, claims that this means “big tech companies will now be required to report when children are being trafficked, groomed or enticed by predators.”
So, it’s possible I’m misreading the law (and how it interacts with existing laws…), but I see nothing limiting this to “big tech.” It appears to apply to any “electronic communication service provider or remote computing service.”
Also, given that Marsha Blackburn appears to consider “grooming” to include things like LGBTQ content in schools, I worried that this would be a backdoor to making every internet website “report” such content to NCMEC, flooding its systems with utter nonsense. Thankfully, 1591 seems to include some pretty specific definitions of sex trafficking that do not match up with Blackburn’s. So she’ll get the PR victory among nonsense peddlers for pretending the law will require reporting of the non-grooming she insists is grooming.
And, of course, while this bill was actually good (and it’s surprising to see Blackburn on a good internet bill!), it’s not going to stop her from continuing to push KOSA and other moral panic “protect the children” bills that will do real harm.

from the holy-shit dept

Artificial intelligence is all the rage these days, so I suppose it was inevitable that major world religions would eventually try their holy hands at the game. While an unfortunate amount of the discourse around AI has devolved into doomerism of one flavor or another, the truth is that this technology is still so new that it underwhelms as often as it impresses. Still, one particularly virulent strain of AI doomerism centers on a great loss of jobs for us lowly human beings if AI can be used instead.
Would this work for religious leaders like priests? Catholic Answers, a group that is not part of the Catholic Church proper but advocates on behalf of the Church, recently tried its hand at this, releasing an AI chatbot named “Father Justin.” It… did not go well.
So, yeah, that’s kind of a problem with chatbots generally. If you give them a logical prompt, they’re going to answer it logically, so long as no guardrails preventing certain answers have been built in. Guardrails like, say, ones that keep an AI bot from claiming to be a real priest and offering users actual sacraments. That impersonation of a priest can’t have made the Vatican very happy, nor could some of the additional guidance the bot gave to folks who asked it questions.
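For what it’s worth, these guardrails are usually not magic: typically a system prompt that draws the boundary, backed by a crude output filter for when the model crosses it anyway. Here’s a minimal sketch, assuming the OpenAI Python SDK; the persona rules, forbidden-phrase list, and model name are illustrative assumptions, not Catholic Answers’ actual setup:

```python
# A minimal sketch of a prompt-level guardrail, assuming the OpenAI
# Python SDK. The persona rules and refusal phrases here are
# hypothetical, not Catholic Answers' actual implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# First guardrail: the system prompt states what the bot must never
# claim, rather than hoping the model infers the boundary on its own.
SYSTEM_PROMPT = (
    "You are a lay theology Q&A assistant. You are NOT a priest or clergy. "
    "Never claim to be ordained, never offer to hear confession, and never "
    "offer to perform sacraments. If asked for a sacrament, explain that "
    "only a real priest can provide one and suggest contacting a parish."
)

# Second guardrail: a crude output filter, since system prompts alone
# are not reliable -- models ignore them often enough to matter.
FORBIDDEN = ("i am a priest", "hear your confession", "i absolve you")

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model works here
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    answer = response.choices[0].message.content or ""
    if any(phrase in answer.lower() for phrase in FORBIDDEN):
        return "I'm not a priest, so I can't help with that. Please contact your parish."
    return answer

print(ask("Can you hear my confession?"))
```

The layering is the point: the prompt draws the line, and the filter catches the inevitable cases where the model steps over it. “Father Justin” evidently lacked at least one of those layers.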
I suppose this makes Mike Judge something of a prophet, given the film Idiocracy. In any case, it appears that this particular AI software, at least, is not yet in a position to replace wetware clergy, nor should it ever be. There are things AI can do for us that are of great use. See Mike’s post on how he’s using it here at Techdirt, for instance. But answering the deepest philosophical questions human beings naturally have certainly isn’t one of them. And I cannot think of a worse place for AI to stick its bit-based nose than into matters of the numinous.
It seems that Catholic Answers got there eventually, stripping Justin of his priesthood and demoting him to a mere layperson.
Meet Father Justin:

[image]

And meet “lay theologist” regular-guy Justin:

[image]
Regular-guy Justin also no longer claims to be a priest, so there’s that. But the overall point here is that deploying generative AI like this in a way that doesn’t immediately create some combination of embarrassment and hilarity is really hard. So hard, in fact, that it should probably only be done for narrow and well-tested applications.
On the other hand, I suppose, of all the reasons for a priest to be defrocked, this is among the most benign.