New research from Eko reveals that Meta and X approved numerous ads containing violent anti-Muslim and anti-Jewish hate speech before Germany’s federal election. X approved all ten test ads submitted, while Meta approved half, despite both platforms’ policies against such content. The ads, some featuring AI-generated imagery, used slurs, called for violence against minorities, and spread disinformation. Eko shared these findings with the European Commission, highlighting failures in both platforms’ ad review systems and raising concerns about compliance with the Digital Services Act.
Eko’s study revealed that Meta and X, formerly known as Twitter, approved ads containing violent anti-Muslim and antisemitic hate speech in the lead-up to Germany’s federal election. This is deeply concerning, raising serious questions about the platforms’ responsibility to protect their users and to prevent the spread of dangerous rhetoric.
The ads reportedly included hateful imagery and language, dehumanizing Muslim refugees and calling for violence against them. Terms like “virus,” “vermin,” and “rodents” were used to describe entire groups of people, alongside calls for sterilization, burning, and gassing. This is not merely offensive speech; it is incitement to violence and the promotion of hatred against a vulnerable population.
Equally disturbing were ads targeting the Jewish community. One ad explicitly called for the torching of synagogues, linking Jews to a supposed “globalist rat agenda.” This kind of antisemitic rhetoric is dangerously reminiscent of Nazi-era propaganda and has no place in a democratic society. That such messages were not merely posted but submitted as paid ads and approved for distribution is extremely troubling.
The timing of these ads, just before a major election, is particularly egregious. Social media platforms already play a significant role in shaping public opinion, and had inflammatory content like this run at scale, it could have influenced the election results. That it cleared review at all suggests a disturbing lack of oversight and a failure to prioritize the integrity of the electoral process.
Many are calling for stronger regulation, and even temporary bans on these platforms during election periods. The argument is that the platforms have become too powerful, and that their lack of accountability allows dangerous misinformation and hate speech to thrive. The potential for foreign interference in elections through such platforms is also a major concern.
This incident highlights a larger issue regarding the spread of online hate speech. While free speech is a fundamental right, it doesn’t extend to inciting violence or spreading harmful stereotypes. The platforms have a responsibility to implement more robust content moderation policies and actively combat hate speech, rather than simply allowing it to proliferate.
The sheer scale of the problem is also evident. Numerous user reports describe other forms of harmful content slipping through review, including pornographic spam and ads promoting illegal activities. This casts doubt on the effectiveness of current moderation systems and points to a systemic failure within these platforms.
Some argue that the platforms are actively complicit, whether through negligence or deliberate indifference. In this case the test ads were written by the researchers themselves, so the failure lies squarely in the platforms’ approval systems; the suspicion that engagement-driven incentives make those systems deliberately permissive cannot be dismissed. This is an especially concerning possibility, and further investigation is needed.
The financial incentives behind this issue are also crucial. The profitability of allowing such content to circulate, even if it risks alienating some users, demonstrates a dangerous prioritization of profit over social responsibility. The immense financial resources of these companies allow them to exert undue influence on political processes and public discourse.
The response from governments and international organizations will be critical in determining how effectively these platforms can be regulated. In the EU, Eko has already shared its findings with the European Commission, which enforces the Digital Services Act against very large platforms. Past inaction has emboldened these companies, leading to a dangerous escalation of hateful rhetoric and misinformation. Stronger enforcement is needed to hold them accountable for the content they host and to prevent future incidents of this nature.
The situation in Germany mirrors concerns raised in other countries. The spread of disinformation and hate speech on social media is a global problem, threatening democratic processes and fostering social division. A coordinated international response is necessary to address this increasingly serious issue; the silence of regulatory bodies and the lack of effective enforcement only fuel the fire.
In conclusion, the approval of violent anti-Muslim and antisemitic ads on Meta and X ahead of the German election is deeply troubling and underscores the urgent need for significant changes in how these powerful platforms operate. The lack of accountability and the prioritization of profit over social responsibility are unacceptable, and a concerted effort from governments, regulatory bodies, and civil society is required to prevent this from happening again.