This week in discussing (yet again) why moderation is important and why deplatforming extremists is a good idea:
- Social media companies’ moderation efforts lost steam in 2023 (Bloomberg). I’d say “lost steam” is generous. Here’s why that’s bad (Tech Policy Press): “If products generate enough bad experiences and harm people enough, people will seek less noxious alternatives.”
- An example of this rolling out in real time this week: Why Substack is at a crossroads (Platformer), followed by Substack will remove Nazi content (also Platformer), with a sprinkling of corporate comms advice from Dave Karpf (a good read for anyone involved in T&S comms strategy).
- Speaking of which, here’s some evidence that deplatforming works (Tech Policy Press): it reduces overall attention to online figures.
This week in extremists:
- Recognizing extremist misogyny outside inceldom (GNET).
- Here’s an incredibly thorough report on what the Proud Boys have been up to (Khalifa Ihler Institute).
This week in election integrity:
- This January 6th clearinghouse is a great information resource, which I’ve also added to.
- Defending the Year of Democracy: “This year must be remembered not only for the scale of its elections but also for the speed and scale of democracy’s defense.” (Foreign Affairs)
This week in scams, fraud, and deepfakes:
- GenAI could make KYC (know your customer) effectively useless (TechCrunch). Basically, many platforms rely on verification through 1) selfies (sometimes with the person holding a piece of paper with their email/username on it) and 2) IDs, cross-checking these against known information. All of this can now easily be generated by AI. An extra layer of security is a “liveness” check (moving your head back and forth) and/or asking for specific poses, but this can be faked, too. For a sense of how these checks fit together, see the sketch after this list.
- My favorite work advice column, Ask A Manager, tackles an employer’s response to an employee falling victim to a sextortion scam.
- “I conned the romance scammers with hilarious results.” (The Times, archived link to bypass paywall)
- “If you think only lonely middle-aged women fall for romance scams, you may be the perfect victim.” (The Guardian)
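Since the KYC item above walks through the verification flow step by step, here is a minimal sketch of that pipeline in Python. Everything in it is a hypothetical stand-in (no real vendor API): the stubs just mark where OCR, face matching, and liveness detection would sit. What it illustrates is that every input the pipeline consumes is attacker-supplied media, which is exactly what GenAI can now fabricate.

```python
from dataclasses import dataclass


@dataclass
class KycSubmission:
    id_image: bytes          # photo of a government ID (now forgeable with GenAI)
    selfie: bytes            # selfie, sometimes holding a paper with a username
    liveness_frames: list    # frames from a "turn your head" challenge


def extract_id_fields(id_image: bytes) -> dict:
    """Stand-in for OCR: pull name/DOB off the ID photo."""
    return {"name": "Jane Doe", "dob": "1990-01-01"}


def faces_match(selfie: bytes, id_image: bytes) -> bool:
    """Stand-in for biometric matching: selfie face vs. ID portrait."""
    return True


def passes_liveness(frames: list) -> bool:
    """Stand-in for motion analysis; deepfake video can spoof this too."""
    return len(frames) >= 2


def verify(sub: KycSubmission, record_on_file: dict) -> bool:
    fields = extract_id_fields(sub.id_image)
    # 1) Cross-check OCR'd ID fields against what the platform already knows.
    if any(record_on_file.get(k) != v for k, v in fields.items()):
        return False
    # 2) Cross-check the selfie against the ID portrait.
    if not faces_match(sub.selfie, sub.id_image):
        return False
    # 3) The "extra layer": a liveness challenge.
    return passes_liveness(sub.liveness_frames)


# With GenAI, an attacker controls all three inputs below.
sub = KycSubmission(id_image=b"...", selfie=b"...", liveness_frames=[b"f1", b"f2"])
print(verify(sub, {"name": "Jane Doe", "dob": "1990-01-01"}))  # True
```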
This week in AI and ML safety:
- “GenAI learned nothing from web 2.0” (Wired), and “Dark corners of the web offer a glimpse at AI’s nefarious future” (NYT).
- How to get into AI policy, by B Cavello: part 1 and part 2.
- Three recommended reads in AI policy by Nicklas Berild Lundblad on LinkedIn.
This week in tech policy and minor safety:
- “Virtual reality risks to children will only worsen without coordinated action” (WeProtect Global Alliance).
- Resource for parents and educators on AI safety (Cyberlite).
- Meta makes changes to teen safety regarding suicide and eating disorder content; read more on Platformer and The Verge.
- Twitch updates policy on nudity (yet again) (The Verge).
And, as if all that wasn’t enough, here are even more Trust & Safety links from this week:
- This one is extremely worrying: California Judge Says Because Snapchat Has Disappearing Messages, Section 230 Doesn’t Apply To Lawsuits Over Snapchat Content (TechDirt).
- Top Trust & Safety Experts to follow in 2024, from ActiveFence (full disclosure: I’m listed here).
- “Could a design code help social media serve society better?” (TechDirt podcast).
- Examining the use of the words “health” and “toxicity” in content moderation (Tech Policy Press).