Unfortunately, business investments in Trust & Safety often come only after significant pressure, whether from the public, the press, or the government. After industry-wide layoffs over the last six months, we're now seeing mounting pressure to reinvest in two key areas:
Child Safety:
- Tech CEOs will testify to the Senate Judiciary Committee about child safety.
- X announces it will hire a team of 100 to focus on CSAM.
- The Integrity Institute posts recommendations for best practices to ensure child safety online without requiring age verification, saying "rigorous age and identity verification infringe on privacy and put marginalized people at risk".
- A whistleblower says that Meta is not doing enough to safeguard children.
Deepfakes & Sexually Explicit Content:
- Sexually explicit deepfake images of Taylor Swift result in a call for legislation from the White House and predictions that this may be the catalyst the internet needs to regulate AI.
- In response, X removes the ability to search for Taylor Swift, and Microsoft makes changes to their AI tool. (Here's a report on how the images were created and spread, and which platforms were involved).
I'm glad these conversations are happening, and hope that they will result in increased (and sustained!) investment in T&S.
Cinder's Brian Fishman and Lucia Stacey Harris nicely sum up the issue of how to justify and measure T&S work, saying: "Leaders that are used to thinking about risk in terms of customer dangers or real-world harm are often asked to quantify outcomes in terms of dollars saved or produced." Perhaps one reason T&S leaders instinctively focus on real-world harm is that we are also exposed to those dangers ourselves, sometimes in extreme ways.
I've personally been sent (multiple) death threats over the years from disgruntled users who were angry about my decisions, and many colleagues I know in the industry have experienced the same. In last week's episode of the Trust in Tech podcast, Bella and I discuss this; check out the show notes for some resources on how to protect yourself. I was also recently sent a link to Operation Privacy, which I'm testing out - it's a free site with a self-managed dashboard that helps you remove your presence from the internet if you need to.
More of what I'm reading this week:
- GLAAD shares specific policy recommendations for prohibiting conversion therapy. Increased attention to policy to protect the LGBTQ+ community is more important than ever, because a recent study shows that 28% of Gen Z identifies as LGBTQ+.
- OpenAI's Sam Altman discusses an "uncomfortable" future where ChatGPT may give different answers to users based on their personal values and where they live.
- A thoughtful argument for federated moderation and a platform user's right to choose what they want to pay attention to. And a great panel discussion with Yoel Roth, Jay Graber, and Mike Masnick on "Protocols, not Platforms".
- A support specialist's view on Why AI customer support is mediocre, with some fun references to That T-Shirt on The Bear.
- The New York Times test: which faces were generated by AI? For a non-paywalled test, try whichfaceisreal.com.
- If you've been laid off, here's a handy Notion workbook.
Coming up:
- February 2nd: TrustCon proposals are due. I’m personally working on a couple, and know of quite a few exciting proposals in the works from friends of mine.
- February 6th: Safer Internet Day.
- February 6th: I'm participating in a discussion on moderating online content during the 2024 elections with Katie Harbath (Chief Global Affairs Officer of Duco Experts) and Nighat Dad (a board member of Meta's Oversight Board).
- February 7th: "Breaking the Silence: Marginalized Tech Workers’ Experiences and Community Solidarity" discussion with Anika Collier Navaroli and Nadah Feteih from Harvard's Berkman Klein Center (on Zoom for the public). The Trust & Safety community is stronger with diversity, but it's critical to ensure that we're not unfairly putting more of a burden on underrepresented and marginalized folks, so this is a discussion I'll be sure to tune into.