Meta Content Moderation

Meta’s New Hate Speech Policy Allows Attacks on LGBTQ+ Individuals

Meta has updated its content moderation policies, notably permitting allegations of mental illness against LGBTQ+ people based on their gender or sexual orientation, citing political and religious discourse as justification. The changes, part of a broader shift toward community-based moderation similar to X’s Community Notes, also remove prohibitions on insults targeting several other protected characteristics and end Meta’s third-party fact-checking program. The decision has drawn criticism from LGBTQ+ advocacy groups such as GLAAD, which argue that it normalizes hate speech and jeopardizes user safety. The timing coincides with Meta’s increased engagement with President-elect Trump, including a significant donation to his inaugural fund.


Meta Ditches Fact-Checkers, Embraces Community Notes: A Risky Gamble?

Meta is significantly altering its content moderation policies, ending its third-party fact-checking program in favor of a community-based system similar to X’s Community Notes. The shift, which affects Facebook, Instagram, and Threads, aims to reduce moderation errors and prioritize free expression while continuing to aggressively address high-severity violations such as terrorism and child exploitation. The changes also relax restrictions on certain politically contested topics and raise the threshold for content removal. They follow sustained criticism of Meta’s moderation practices and reflect a broader industry trend toward less stringent content control.


Meta Drops Fact-Checkers, Overhauls Moderation: The Disinformation Age Arrives

Meta is significantly altering its content moderation policies on Facebook and Instagram, eliminating third-party fact-checkers in favor of user-generated “community notes” modeled on X’s approach. The shift, announced by CEO Mark Zuckerberg, follows accusations of bias against conservative voices and aims to prioritize free expression, even as Zuckerberg acknowledges that more harmful content is likely to get through. The changes include retuning automated content-removal systems to focus on high-severity violations and relocating content moderation teams from California to Texas. This marks a major reversal of Meta’s previous commitment to independent fact-checking and more stringent content moderation.
