Content Moderation

TikTok Bans #SkinnyTok: A PR Stunt or Real Concern?

Following pressure from French Digital Minister Clara Chappaz and the EU, TikTok removed the pro-eating disorder hashtag #SkinnyTok and replaced it with a link to mental health resources. This action, part of an ongoing review of TikTok’s safety measures, follows concerns raised by regulators regarding the platform’s algorithm and its impact on children’s mental wellbeing. The EU’s investigation into TikTok’s algorithms under the Digital Services Act continues, highlighting the growing focus on online child safety. This incident underscores the ongoing debate surrounding social media’s effect on young users and the need for stricter content moderation. Prior efforts to mitigate these risks include TikTok’s 2024 suspension of its screen-time reward program.

Read More

X’s Community Notes Removal Sparks Outrage Amid EU Probe

X’s Community Notes fact-checking system, reliant on user input to flag misinformation, has inexplicably vanished from user feeds, raising concerns about the platform’s compliance with EU regulations regarding content moderation. While the system technically remains active, its absence leaves users more susceptible to false information. The outage, possibly linked to a recent data center fire and ongoing technical issues, coincides with an existing EU investigation into X’s content moderation practices. Experts note that while Community Notes had flaws, its disappearance exacerbates the spread of misinformation on the platform.

Read More

Reddit Bans Anti-Natalist Subreddit Following Palm Springs Bombing

Following a Palm Springs fertility clinic bombing linked to a suspect with anti-natalist beliefs, Reddit banned the r/Efilism subreddit due to violations of its rules against promoting self-harm and violence. The suspect, who identified as a “promortalist,” published a manifesto referencing Efilism and other online anti-natalist communities before the attack. While Reddit is removing related content, other anti-natalist subreddits remain active on the platform, with some moderators publicly denouncing the suspect’s actions. The platform emphasizes its commitment to preventing violence and the spread of harmful ideologies.

Read More

Musk’s Twitter UK Profits Plummet 74% After Takeover

Following Elon Musk’s acquisition, X’s UK revenue plummeted 66.3% to £69.1 million in 2023, resulting in a significant profit decrease. This downturn is attributed to reduced advertising spending due to brand safety and content moderation concerns. The company’s UK workforce also experienced substantial cuts, falling from 399 to 114 employees. Despite these challenges, X’s overall value has since recovered, and a new AI-focused subsidiary, X.AI London, was recently established.

Read More

Musk Pressured Reddit CEO on Content Moderation, Sparking Outrage

Elon Musk privately contacted Reddit CEO Steve Huffman after publicly criticizing Reddit’s content moderation. Reddit subsequently banned a subreddit hosting violent threats against DOGE employees, content Musk had highlighted. While the ban targeted violent content, it also swept up non-violent posts and prompted concerns among Reddit moderators about Musk’s undue influence. The incident follows a pattern of Musk blocking competitor links on X, raising questions about his methods and their impact on platform governance. Reddit maintains that it addresses policy violations regardless of who reports them.

Read More


Musk Deletes Post Excusing Dictators’ Genocide

Elon Musk shared, then swiftly deleted, a post that downplayed the atrocities of dictators responsible for genocide. The post’s brief appearance drew widespread condemnation, and its fleeting nature only amplified the controversy, leaving many to speculate about the reasons behind both its publication and its deletion. The episode underscores the power wielded by influential figures like Musk and the risk that such reach can be used, even briefly, to spread harmful ideologies.

Read More

Instagram Flooded with Graphic Violence: Users Report Algorithm Failure

Following user reports of violent and graphic content surfacing in their Instagram Reels feeds, Meta acknowledged a system error responsible for the inappropriate recommendations. The company issued an apology and said the error has been rectified. Users reported seeing this content even with sensitive content controls set to the strictest level. Meta relies on a large moderation team and AI technology to catch such material, but this incident highlights a lapse in its systems.

Read More

French Probe into X’s Algorithms Sparks Global Debate on Social Media Manipulation

A Paris cybercrime unit has opened an investigation into X’s algorithms, prompted by concerns over algorithm manipulation and potential distortion of its automated data processing system. The investigation follows reports alleging algorithm changes led to the over-representation of certain political content and preferential treatment of Elon Musk’s posts. This action utilizes a novel legal interpretation, applying existing hacking laws to algorithm manipulation on social media platforms. The investigation coincides with broader European scrutiny of X’s content moderation and algorithm practices.

Read More

EU Agency Ditches Musk’s X for Bluesky Amidst Controversy

The European Medicines Agency (EMA) has ceased using X, citing that the platform no longer meets its communication needs, and will now utilize Bluesky. This decision follows the European Commission’s investigation into X’s compliance with EU social media regulations, specifically regarding algorithms and content moderation. The EMA will maintain its X account to prevent impersonation and monitor public health discussions. The agency’s departure is one among many, with numerous organizations and universities also abandoning the platform due to concerns over its management.

Read More