Major Platforms Fail LGBTQ Safety Tests, Says New GLAAD Report | TechPolicy.Press

Gabby Miller / May 21, 2024
Logos of the social media companies assessed by GLAAD for its 2024 Social Media Safety Index.
Major social media companies including Meta and YouTube are failing to protect LGBTQ safety, privacy, and expression on their platforms while profiting off of the rise in anti-LGBTQ hate, according to a new report by GLAAD. In the LGBTQ advocacy group’s fourth annual Social Media Safety Index (SMSI), published Tuesday, GLAAD doled out failing grades to nearly all of the platforms assessed: Meta’s Instagram, Facebook, and Threads; Google-owned YouTube; TikTok; and X (formerly Twitter). TikTok scored the highest this year with a D+, while the remaining platforms received F grades for the third consecutive year.
This year’s report highlights how platforms are “largely failing to successfully mitigate dangerous anti-LGBTQ hate and disinformation” due to weak content moderation policies that are not adequately enforced. A corollary problem is the “over-moderation of legitimate LGBTQ expression — including wrongful takedowns of LGBTQ accounts and creators, shadowbanning, and similar suppression of LGBTQ content,” according to a statement by GLAAD’s Senior Director of Social Media Safety, Jenni Olson. She added that Meta’s recent decision to limit so-called ‘political content,’ which Meta defines as ‘social topics that affect a group of people and/or society at large,’ is especially concerning. Content suppression can lead to demonetization and various forms of shadowbanning, according to GLAAD.
The SMSI “Scorecard” uses twelve LGBTQ-specific indicators for its rating system. These assess a range of company disclosures, such as whether a company offers users an option to add pronouns to their profiles or has made a policy commitment to protect LGBTQ users from harm and hate on the platform. The Scorecard does not rate major platforms on the enforcement of their policies, even though the report demonstrates that they repeatedly fail to enforce them.
X received the lowest SMSI score with a failing grade of 41 percent – up eight points from the year prior. While X is one of the only companies included in the study that prohibits both targeted misgendering and deadnaming, GLAAD says it falls short in that users must self-report these instances rather than the platform automatically detecting this type of content and behavior. X also lacks disclosures on whether it educates content moderators on the needs of LGBTQ users, and has not renewed its commitment to diversifying its workplace under owner Elon Musk.
The video-sharing app TikTok performed the best, receiving an SMSI score of 67 percent – a ten-point increase from the previous year. GLAAD applauded TikTok’s revised Anti-Discrimination Ad Policy, which now explicitly prohibits both targeted advertising based on gender identity or sexual orientation and targeted misgendering and deadnaming. TikTok is also one of the only major social media platforms that does not require user self-reporting for potential community guideline violations; it uses a combination of technology, human review, and third-party user reporting to detect violations. TikTok falls short on transparency, though. The report notes that TikTok provides limited information on the steps it takes to address wrongful demonetization and removal of LGBTQ creators and has failed to produce any data on its LGBTQ workforce despite its public commitment to diversifying.
New to the Index is Meta’s Threads, a text-based app akin to X that launched in 2023. Threads comes in at a 51 percent rating, with recommendations that Meta “provide all users with tools for self-expression” around gender pronouns and be transparent about content and account restrictions. Threads is covered by Instagram’s Community Guidelines, which include a comprehensive policy for protected groups that covers LGBTQ users and explicitly prohibits the targeted misgendering of transgender, nonbinary, and gender non-conforming users. However, this policy requires self-reporting and does not extend to public figures, despite an onslaught of anti-trans hate campaigns aimed at stars like TikToker Dylan Mulvaney and actor Elliot Page.
The “anti-LGBTQ hate industrial complex” is a “lucrative enterprise,” according to GLAAD. Targeting historically marginalized groups, including LGBTQ people, with “fear-mongering, lies, and bigotry” is “an intentional strategy of bad actors” seeking to consolidate political power, the report says. The high-follower hate accounts and right-wing figures driving these campaigns, as well as the tech companies hosting them, profit the most from this activity, GLAAD writes.
Given these companies’ annual ad revenues and the rising levels of hate speech on their platforms, GLAAD calls for greater investment in product safety.
“There is a direct relationship between online harms and the hundreds of anti-LGBTQ legislative attacks, rising rates of real-world anti-LGBTQ violence and threats of violence,” said GLAAD President and CEO, Sarah Kate Ellis, in a press release. She added that social media platforms are responsible for “failing to make safe products” and should act with urgency to address this.
The report does provide a series of platform-specific recommendations as well as legislative and regulatory guidance for platform accountability. As part of its core recommendations, GLAAD calls on social media companies to strengthen and enforce existing policies, protect LGBTQ users from surveillance and discrimination by respecting data privacy, and improve moderation. The report also says that broader regulatory solutions to preserve LGBTQ rights and safety should focus on tackling tech companies’ harmful business practices, like surveillance advertising and the over-collection of user data.