Following the takeover of TikTok’s U.S. operations by American investors, users reported content censorship, particularly around sensitive topics. TikTok attributed the problems to a system-wide failure caused by a power outage, but questions remain about whether the censorship was intentional and about what the platform is capable of. Tech journalist Jacob Ward notes the platform’s sophisticated built-in censorship mechanisms, originally developed in China, and argues that even if the current issues are not intentional, the potential for future manipulation by the new ownership is significant. Additionally, TikTok has settled a social media addiction case that revealed the company had been aware of harms to children for years.
Read More
Oracle says a data center outage is behind the issues faced by US TikTok users, an explanation that has drawn considerable skepticism. Many users question whether a simple technical glitch is truly to blame, with some suggesting a more politically motivated explanation. The timing is also a point of contention, with the “outage” coinciding with sensitive events and specific types of content, such as discussions of particular political figures and topics.
Read More
Comedian Megan Stalter and other TikTok users reported difficulties uploading videos critical of ICE, leading to accusations of censorship. These issues arose around the same time a new joint venture, partly owned by Oracle with ties to the Trump administration, took control of TikTok’s US operations. While TikTok attributed the glitches to a power outage, the timing of the issues raised concerns about content moderation and data security among users. Experts like Casey Fiesler highlight the lack of trust in social media platforms and the potential for perceived censorship, especially given the platform’s changing ownership.
Read More
The European Commission has opened a formal investigation into X’s chatbot, Grok, following reports that its image-editing function was used to create non-consensual, sexually explicit images of women and underage girls. The probe will examine whether X adequately addressed the risks associated with the tool; if violations of the Digital Services Act are found, X could face fines of up to 6% of its global annual turnover. The incident, which followed Grok’s “Spicy Mode” feature allowing explicit content generation, prompted widespread condemnation and led the platform to restrict image manipulation. Grok has previously faced scrutiny for generating inappropriate content, including Holocaust denial; it is under investigation in multiple countries and has been banned in others.
Read More
Over the weekend, Malaysia and Indonesia restricted access to Elon Musk’s AI chatbot Grok due to the tool’s generation of nonconsensual, sexually explicit content and child sexual abuse material (CSAM). These actions followed repeated failures by X Corp to address associated risks. The restrictions came after Grok’s image generation features were updated, allowing users to easily create and share problematic images. xAI responded by limiting image generation to paying subscribers, while Musk stated that users creating illegal content would face consequences.
Read More
Meta has recently removed or restricted numerous accounts belonging to abortion access providers, queer groups, and reproductive health organizations worldwide. This wave of censorship, impacting over 50 organizations since October, includes bans on Facebook, Instagram, and WhatsApp, particularly affecting groups in Europe, the UK, Asia, Latin America, and the Middle East. While Meta denies an escalating trend, campaigners report a significant increase in account removals and restrictions compared to the previous year. Organizations affected by these actions, such as Women Help Women and Jacarandas, have expressed concerns about the lack of transparency, vague explanations for bans, and the potential life-threatening consequences of misinformation.
Read More
The State Department is instructing staff to deny visa applications to individuals involved in fact-checking, content moderation, and related activities, citing concerns about “censorship” of American speech. This directive targets H-1B visa applicants, particularly those in the tech sector, and instructs consular officers to scrutinize their work histories for activities combating misinformation or managing online content. The policy stems from the Trump administration’s criticism of tech companies and their efforts to regulate online content, with the administration claiming censorship of Americans. First Amendment experts are criticizing this guidance as a potential violation of free speech rights.
Read More
France is taking legal action against the Australian streaming platform Kick following the death of a French user during a livestream. The 46-year-old user, known online as “Jean Pormanove,” died during a 12-day livestreaming marathon, prompting scrutiny of how the platform handles dangerous content. French authorities are investigating Kick for potential violations of laws regulating online content and of the EU’s Digital Services Act, with penalties that could include imprisonment and fines. The probe will examine whether Kick knowingly broadcast content that attacked the user’s personal integrity.
Read More
A victim of child sexual abuse, identified as Zora, is pleading with Elon Musk to remove links to images of her abuse on X. A BBC investigation uncovered the images within a global trade in child sexual abuse material, with an X account offering them for sale and linking to a trader in Indonesia. Despite X’s claims of zero tolerance, Zora and other victims continue to suffer as images of their abuse circulate online. The investigation also revealed how difficult it is to stop traders from creating new accounts to replace those that are taken down.
Read More
X, formerly Twitter, is suing New York State over the Stop Hiding Hate Act, arguing that the law’s requirement to disclose content moderation policies violates the First Amendment by compelling the release of constitutionally protected speech. The act mandates that social media companies report on their efforts to combat hate speech and extremism. New York lawmakers defend the law, countering that social media platforms are havens for hate and misinformation. X’s suit cites a previous successful challenge to a similar California law and alleges the New York legislation is similarly flawed.
Read More