Following Elon Musk’s acquisition, X’s UK revenue plummeted 66.3% to £69.1 million in 2023, and profits fell sharply. The downturn is attributed to reduced advertising spending driven by brand safety and content moderation concerns. The company’s UK workforce was also cut deeply, from 399 to 114 employees. Despite these setbacks, X’s overall valuation has since recovered, and a new AI-focused subsidiary, X.AI London, was recently established.
Elon Musk privately contacted Reddit CEO Steve Huffman after publicly criticizing Reddit’s content moderation. Reddit subsequently banned a subreddit that hosted violent threats against DOGE employees, including a thread Musk had highlighted. While the ban addressed violent content, it also removed non-violent posts and prompted concern among Reddit moderators about Musk’s undue influence. The incident fits a pattern that includes Musk blocking competitor links on X, raising questions about his methods and their impact on platform governance. Reddit maintains that it addresses policy violations regardless of who reports them.
Users can report offensive comments by selecting a reason from a list that includes “foul language,” “slanderous,” and “inciting hatred against a certain community.” Submitting a report requires the user’s name and triggers review by moderators, who then assess the flagged comment and take action if warranted.
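For readers who want a concrete picture of such a workflow, here is a minimal sketch of how a comment report could be modelled and routed to moderators. It is illustrative only: the names (`CommentReport`, `submit_report`, `review_report`) and the in-memory queue are assumptions for the example, not the platform’s actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional


class ReportReason(Enum):
    # Reasons mirror the options described above.
    FOUL_LANGUAGE = "foul language"
    SLANDEROUS = "slanderous"
    INCITING_HATRED = "inciting hatred against a certain community"


@dataclass
class CommentReport:
    comment_id: str
    reporter_name: str          # the described process requires the reporter's name
    reason: ReportReason
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    resolved: bool = False
    moderator_note: Optional[str] = None


def submit_report(queue: list, comment_id: str,
                  reporter_name: str, reason: ReportReason) -> CommentReport:
    """Record a report and place it in the moderation queue for review."""
    if not reporter_name.strip():
        raise ValueError("A reporter name is required")
    report = CommentReport(comment_id=comment_id,
                           reporter_name=reporter_name,
                           reason=reason)
    queue.append(report)
    return report


def review_report(report: CommentReport, remove_comment: bool, note: str) -> None:
    """A moderator assesses the report and records the outcome."""
    report.resolved = True
    report.moderator_note = note
    if remove_comment:
        # Placeholder for the platform's own takedown call (hypothetical).
        print(f"Comment {report.comment_id} removed: {note}")


# Example: a user flags a comment, then a moderator acts on it.
queue: list = []
r = submit_report(queue, "c-123", "A. Reader", ReportReason.FOUL_LANGUAGE)
review_report(r, remove_comment=True, note="Violates community guidelines")
```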
Elon Musk shared, then quickly deleted, a post that downplayed the atrocities of dictators responsible for genocide. The post drew widespread condemnation and considerable online discussion, and its fleeting appearance only amplified the controversy, leaving many to speculate about why it was shared and why it was removed. Even a brief amplification of this kind underscores the power wielded by influential figures like Musk and the potential for that power to be misused to spread harmful ideologies.
Following user reports of a surge in violent and graphic content in their Instagram Reels feeds, Meta acknowledged that a system error was responsible for the inappropriate recommendations. The company apologized and said the error had been corrected. Users reported seeing this content even with sensitive content controls set to the highest level. Meta relies on a large moderation team and AI systems to prevent such issues, and this incident highlights a lapse in that system.
A Paris cybercrime unit has opened an investigation into X’s algorithms, prompted by concerns over algorithm manipulation and potential distortion of its automated data processing system. The investigation follows reports alleging algorithm changes led to the over-representation of certain political content and preferential treatment of Elon Musk’s posts. This action utilizes a novel legal interpretation, applying existing hacking laws to algorithm manipulation on social media platforms. The investigation coincides with broader European scrutiny of X’s content moderation and algorithm practices.
The European Medicines Agency (EMA) has stopped posting on X, saying the platform no longer meets its communication needs, and will instead use Bluesky. The decision follows the European Commission’s investigation into X’s compliance with EU social media regulations, specifically regarding its algorithms and content moderation. The EMA will keep its X account to prevent impersonation and to monitor public health discussions. Its departure is one of many: numerous organizations and universities have also abandoned the platform over concerns about its management.
Le Monde has stopped sharing its content on X (formerly Twitter), citing Elon Musk’s increasingly partisan use of the platform, which has made the newspaper’s presence there less effective and more exposed to harm. The decision follows the platform’s transformation into an extension of Musk’s political activity, blurring the line between business and ideology. Rising toxicity and reduced visibility led Le Monde to prioritize its content elsewhere and to recommend that its journalists do the same. Concerns about other platforms, particularly TikTok and Meta, are also prompting increased vigilance.
The European Commission has expanded its investigation into X’s recommendation algorithm, demanding internal documents detailing recent changes and future modifications. This follows complaints alleging the algorithm’s promotion of far-right content, particularly from Germany’s Alternative for Germany (AfD) party, which Elon Musk publicly supports. The investigation includes requests for information on content moderation and amplification practices. The Commission insists the probe is independent of political considerations, aiming to ensure compliance with EU legislation promoting a fair and democratic online environment. X has yet to comment.
Paris Mayor Anne Hidalgo deactivated her X account in late 2023, citing the platform’s role in spreading disinformation and hate speech as a threat to democracy. Hidalgo’s statement condemned X’s lack of content moderation and its contribution to societal polarization, characterizing it as a “weapon of mass destruction.” The city of Paris affirmed its commitment to factual information and peaceful discourse, highlighting the platform’s detrimental impact on objective communication. This decision follows Elon Musk’s 2022 acquisition of X (formerly Twitter), and reflects growing concerns about the platform’s impact on public discourse.