GLAAD gives failing grade to Facebook, Instagram, Twitter, TikTok for LGBTQ safety

LGBTQ media advocacy group GLAAD released a report Wednesday showing failing grades for all major social media platforms. Its second annual Social Media Safety Index addresses LGBTQ user safety across five major social media platforms: Facebook, Instagram, Twitter, YouTube, and TikTok.

The 2022 SMSI introduces a “Platform Scorecard” that uses 12 LGBTQ-specific indicators, including explicit protections from hate and harassment for LGBTQ users, gender pronoun options on profiles, and prohibitions on advertising that could be harmful and/or discriminatory to LGBTQ people. All platforms scored below 50 out of a possible 100:

●      Instagram: 48%
●      Facebook: 46%
●      Twitter: 45%
●      YouTube: 45%
●      TikTok: 43%

Detailed scores and a full list of Platform Scorecard indicators are available in the report. Indicators include:

●      The company should disclose a policy commitment to protect LGBTQ users from harm, discrimination, harassment, and hate on the platform.

●      The company should disclose an option for users to add pronouns to user profiles.

●      The company should disclose a policy that expressly prohibits targeted deadnaming and misgendering of other users.

●      The company should clearly disclose what options users have to control the company’s collection, inference, and use of information related to their sexual orientation and gender identity.

●      The company should disclose training for content moderators, including those employed by contractors, that trains them on the needs of vulnerable users, including LGBTQ users.

“Today’s political and cultural landscapes demonstrate the real-life harmful effects of anti-LGBTQ rhetoric and misinformation online,” said GLAAD President and CEO Sarah Kate Ellis. “The hate and harassment, as well as misinformation and flat-out lies about LGBTQ people, that go viral on social media are creating real-world dangers, from legislation that harms our community to the recent threats of violence at Pride gatherings. Social media platforms are active participants in the rise of anti-LGBTQ cultural climate and their only response should be to urgently create safer products and policies, and then enforce those policies.”

GLAAD also released new data from a May 2022 study conducted with Community Marketing & Insights, showing that 84% of LGBTQ adults agree there are not enough protections on social media to prevent discrimination, harassment, or disinformation. Forty percent of LGBTQ adults, and 49% of transgender and nonbinary people, do not feel welcome and safe on social media.

Additionally, the newly released 2022 Anti-Defamation League Online Hate and Harassment report found that two-thirds of LGBTQ users have experienced harassment online, with 54% of LGBTQ users reporting severe harassment, including sustained harassment, stalking, or doxxing.

The study also shows that anti-LGBTQ rhetoric on social media translates into real-life harm, with LGBTQ users reporting increased levels of severe harassment compared to 2021.

Anti-LGBTQ hate speech and misinformation also continue to be a public health and safety issue. Viral misinformation and inaccuracies have been cited as drivers of many of the nearly 250 anti-LGBTQ bills introduced in states around the country this year. Platforms are largely meeting this dangerous misinformation with inaction and often do not enforce their own policies regarding such content.

Issues like the promotion of so-called “conversion therapy,” targeted misgendering and deadnaming, and a lack of true transparency reporting remain prevalent on several platforms. Only some platforms prohibit actions like targeted misgendering and the promotion of conversion therapy; these need to be prohibited across the industry.

The group says that companies possess the tools they need to effectively curb anti-LGBTQ hate and rhetoric but instead are prioritizing profit over LGBTQ safety and lives.

GLAAD calls on the platforms to improve the design of algorithms that currently circulate and amplify harmful content, extremism, and hate; to train moderators to understand the needs of LGBTQ users; and to moderate across all languages, cultural contexts, and regions.

The group also seeks transparency in content moderation, in the implementation of community guidelines and terms of service, and in algorithm design.
