Content Moderation

State Department to Deny Visas to Fact-Checkers, Critics, Citing Censorship Concerns

The State Department is instructing staff to deny visa applications from individuals involved in fact-checking, content moderation, and related activities, citing concerns about “censorship” of American speech. The directive targets H-1B visa applicants, particularly those in the tech sector, directing consular officers to scrutinize work histories for activities such as combating misinformation or managing online content. The policy stems from the Trump administration’s criticism of tech companies’ efforts to regulate online content, which the administration characterizes as censorship of Americans. First Amendment experts are criticizing the guidance as a potential violation of free speech rights.

Read More

France Sues Australian Platform Kick Over Livestream Death Allegations

France is taking legal action against the Australian streaming platform Kick following the death of a French user during a livestream. The 46-year-old user, known online as “Jean Pormanove,” died during a 12-day live streaming marathon, prompting scrutiny of the platform’s handling of dangerous content. French authorities are investigating Kick for potential violations of laws regulating online content and the EU’s Digital Services Act, with penalties including potential imprisonment and fines. The probe will examine whether Kick knowingly broadcast content that attacked the user’s personal integrity.

Read More

Child Sex Abuse Victim Asks Musk to Remove Images, Critics Cite Hypocrisy

A victim of child sexual abuse, identified as Zora, is pleading with Elon Musk to remove links to her abusive images on X. The BBC’s investigation uncovered the presence of these images within a global trade of child sex abuse material, with an X account offering them for sale and linking to a trader in Indonesia. Despite X’s claims of zero tolerance, Zora and other victims are still suffering, as images of their abuse circulate online. The investigation also revealed the difficulty in stopping the traders from creating new accounts to replace those that get taken down.

Read More

Musk’s X Sues New York Over Hate Speech Law

X, formerly Twitter, is suing New York State over the Stop Hiding Hate Act, arguing that the law’s required disclosure of content moderation policies violates the First Amendment by compelling speech about the platform’s constitutionally protected editorial choices. The act mandates that social media companies report on their efforts to combat hate speech and extremism. New York lawmakers defended the law, countering that social media platforms have become havens for hate and misinformation. X’s suit cites its earlier successful challenge to a similar California law and alleges the New York legislation is flawed in the same way.

Read More

TikTok Bans #SkinnyTok: A PR Stunt or Real Concern?

Following pressure from French Digital Minister Clara Chappaz and the EU, TikTok removed the pro-eating disorder hashtag #SkinnyTok and replaced it with a link to mental health resources. This action, part of an ongoing review of TikTok’s safety measures, follows concerns raised by regulators regarding the platform’s algorithm and its impact on children’s mental wellbeing. The EU’s investigation into TikTok’s algorithms under the Digital Services Act continues, highlighting the growing focus on online child safety. This incident underscores the ongoing debate surrounding social media’s effect on young users and the need for stricter content moderation. Prior efforts to mitigate these risks include TikTok’s 2024 suspension of its screen-time reward program.

Read More

X’s Community Notes Removal Sparks Outrage Amid EU Probe

X’s Community Notes fact-checking system, which relies on user input to flag misinformation, has abruptly vanished from user feeds without official explanation, raising concerns about the platform’s compliance with EU content moderation regulations. While the system technically remains active, its absence leaves users more susceptible to false information. The outage, possibly linked to a recent data center fire and ongoing technical issues, coincides with an existing EU investigation into X’s content moderation practices. Experts note that while Community Notes had flaws, its disappearance exacerbates the spread of misinformation on the platform.

Read More

Reddit Bans Anti-Natalist Subreddit Following Palm Springs Bombing

Following a Palm Springs fertility clinic bombing linked to a suspect with anti-natalist beliefs, Reddit banned the r/Efilism subreddit due to violations of its rules against promoting self-harm and violence. The suspect, who identified as a “promortalist,” published a manifesto referencing Efilism and other online anti-natalist communities before the attack. While Reddit is removing related content, other anti-natalist subreddits remain active on the platform, with some moderators publicly denouncing the suspect’s actions. The platform emphasizes its commitment to preventing violence and the spread of harmful ideologies.

Read More

Musk’s Twitter UK Profits Plummet 74% After Takeover

Following Elon Musk’s acquisition, X’s UK revenue plummeted 66.3% to £69.1 million in 2023, driving a roughly 74% drop in profits. The downturn is attributed to reduced advertising spending amid brand safety and content moderation concerns. The company’s UK workforce also suffered deep cuts, falling from 399 to 114 employees. Despite these setbacks, X’s overall valuation has since recovered, and a new AI-focused subsidiary, X.AI London, was recently established.

Read More

Musk Pressured Reddit CEO on Content Moderation, Sparking Outrage

Elon Musk privately contacted Reddit CEO Steve Huffman after publicly criticizing Reddit’s content moderation. Subsequently, Reddit banned a subreddit that hosted violent threats against DOGE employees, including a thread Musk had highlighted. While the ban ostensibly addressed violent content, it also swept up non-violent posts, prompting concerns among Reddit moderators about Musk’s undue influence. The incident follows a pattern of Musk blocking competitor links on X, raising questions about his methods and their impact on platform governance. Reddit maintains that it addresses policy violations regardless of who reports them.

Read More
