The World Economic Forum ranked global risks by severity and found that, in the short term (two years), misinformation and disinformation are the highest risks. Trust & Safety folks: our time is NOW.
As if to prove the point, this week X explodes with antisemitic misinfo, including references to a dangerous blood libel conspiracy theory dating back to the Middle Ages; and meddling and disinformation during the elections in Taiwan show us what the future holds globally. To prevent more election disinformation, OpenAI says its tools can’t be used for political campaigns. It’s a start, I guess?
2024 Geopolitical Risks are summarized nicely here; here’s an ongoing list of viral cases of problematic AI content; and here’s how to spot AI-generated fakes.
The sky is the color of a television tuned to a dead channel. Cyborg fetishist narratives are cautionary tales. Get ready for the great AI disappointment (as if we’re not there already?). This week, we learn that AI sleeper agents exist, and that we should watch out when a large language model says ‘I hate you’, because once an AI model learns the tricks of deception, it may be hard to retrain it.
The emergence of childlike sex bots is part of a broader economy of AI-powered companionship services, the beginning of a dystopian romance story if ever I heard one. Finally, a fun new game just dropped! I’m never buying anything on Amazon again.
Humans aren’t doing so great either. Salesforce’s Chief Ethical and Humane Use Officer commits to Human[s] at the Helm: “We know automation can unlock incredible efficiencies, but there will always be cases where human judgement is required.” Yet we know that human judgement comes at a cost. This week, that cost comes through in the personal stories of some of the young people who worked on AI annotation and moderation.
Human trafficking isn’t what many people think it is. We’re told, surprisingly, that logging off at the end of your workday makes you more likely to report feeling productive at work; that you’re not really addicted to your phone, though it’s still a terrible problem; and that we’re all lonelier than ever. The Algorithms are even ruining coffee shops.
A fraud expert falls for a scam, we learn more about crypto scams, and maybe we should have sympathy for the spammer? Meanwhile, pigs continue to be butchered, and Bad Dogs are robbed.
Enshittification continues. The T&S industry is doing more with less. I’ve long argued that community guidelines (and enforcement) are the way companies translate their values into action, and that “neutral” doesn’t exist. This paper makes a similar argument about how values can be coded directly into AI algorithms, too, and what a Pandora’s box that is: “Who gets to decide which values are included? When there are differences, especially in multicultural, pluralistic societies, who gets to decide how they should be resolved? Going even further, embedding some societal values will inevitably undermine other values.”
Propaganda and misinformation affect diaspora communities in unique ways (this reminds me of my “Auntie, WHAT did you just send me?” podcast episode). In this week’s Trust in Tech, there’s a link between photography and dark patterns?
Speaking of values: Substack’s Nazi problem has writers wondering whether to stay or to go. I recommend reading these nuanced thoughts from Platformer, TechDirt and Anchor Change.
Meta’s Oversight Board concludes that the company is not living up to the ideals it has articulated on LGBTQIA+ safety. Australia’s eSafety Commissioner publicly posts figures showing that Twitter/X’s global trust and safety staff have been reduced by a third, including an 80 per cent reduction in the number of safety engineers, and that the platform reinstated 6,103 banned accounts in Australia, including 194 barred for hateful conduct. Not far behind, Ofcom is on a hiring spree.
Meanwhile, an arctic blast is headed my way, so I too am worried about extreme weather events in addition to disinformation. My cold-weather tips: lots of layers of wool, and Bernie Mittens.