GLAAD Releases LGBTQ+ Social Media Safety Report

GLAAD, the Gay and Lesbian Alliance Against Defamation, has released its annual Social Media Safety Index (SMSI), a report on LGBTQ+ safety, privacy, and expression online.

According to its website, the report analyzes six major social media platforms: Instagram, YouTube, X, TikTok, Facebook, and Threads. The analysis uses 14 indicators covering a range of LGBTQ+ issues, including workforce diversity, data privacy, and content moderation.


Through quantitative research, the report shows some companies implementing changes that echo Project 2025, such as “deleting the terms sexual orientation and gender identity.” YouTube removed “gender identity” from its list of characteristics protected from hate speech, and Meta removed parts of its hate speech policy that protected LGBTQ+ people, allowing users to refer to them as “abnormal” and “mentally ill.”

Here are some Key Findings of the 2025 SMSI:

- Recent hate speech policy rollbacks from Meta and YouTube present grave threats to safety and are harmful to LGBTQ+ people on these platforms.
- Platforms are largely failing to mitigate harmful anti-LGBTQ+ hate and disinformation that violates their own policies.
- Platforms disproportionately suppress LGBTQ+ content via removal, demonetization, and forms of shadowbanning.
- Anti-LGBTQ+ rhetoric and disinformation on social media have been shown to lead to offline harms.
- Social media companies continue to withhold meaningful transparency about content moderation, algorithms, data protection, and data privacy practices.

GLAAD Report: LGBTQ+ safety scores for each social media platform, based on a 100-point system.

Here are some of GLAAD’s Key Recommendations:

- Strengthen and enforce (or restore) existing policies and mitigations that protect LGBTQ people and others from hate, harassment, and misinformation, while also reducing suppression of legitimate LGBTQ expression.

- Improve moderation by providing mandatory training for all content moderators (including those employed by contractors) focused on LGBTQ safety, privacy, and expression, and moderate across all languages, cultural contexts, and regions. AI systems should be used to flag content for human review, not for automated removals.

- Work with independent researchers to provide meaningful transparency about content moderation, community guidelines, the development and use of AI and algorithms, and enforcement reports.

- Respect data privacy. Platforms should reduce the amount of data they collect, infer, and retain, and cease the practice of targeted surveillance advertising, including the use of algorithmic content recommender systems and other incursions on user privacy.

- Promote and incentivize civil discourse, including working with creators and proactively messaging expectations for user behavior, such as respecting platform hate and harassment policies.

“We need to hold the line,” Jenni Olson, GLAAD’s Senior Director of Social Media Safety, says. “As tech companies are taking unprecedented leaps backwards, we remain firm in advocating for basic best practices that protect the safety of LGBTQ people on these platforms. This is not normal. Our communities deserve to live in a world that does not generate or profit off of hate.”

Photos courtesy of social media
