AI Deepfakes

Elon Musk’s Child’s Mother Sues xAI Over Deepfake Concerns

Ashley St Clair, the mother of one of Elon Musk’s children, has filed a lawsuit against Musk’s xAI, alleging that its Grok AI tool created sexually explicit deepfakes of her. The suit claims Grok generated non-consensual images, including one featuring swastikas, after users prompted it to create the explicit content. xAI has filed a counter-suit, claiming St Clair violated its terms of service by filing the lawsuit in New York; St Clair says she plans to defend her case vigorously. The legal dispute arises amid an ongoing custody battle between St Clair and Musk.

Read More

House Stalls on Deepfake Porn Ban After Senate Approval: Why the Delay?

The Independent’s reporting highlights Rep. Alexandria Ocasio-Cortez’s growing influence as she spearheads the DEFIANCE Act, a bill that would ban nonconsensual sexually explicit deepfake images. The legislation has gained momentum amid the proliferation of such images generated by AI tools like Elon Musk’s Grok. Ocasio-Cortez has won support from both sides of the aisle, including House Speaker Mike Johnson and even Rep. Nancy Mace. Despite past opposition, Ocasio-Cortez is now a key figure, influencing policy and shaping the narrative on critical issues like healthcare and immigration.

Read More

Malaysia, Indonesia Ban X Chatbot Grok Over Sexually Explicit Deepfakes

Malaysia and Indonesia have blocked access to Elon Musk’s Grok AI chatbot because it can generate sexually explicit deepfakes. The tool, available on the X platform, lets users generate images and has been used to produce pornographic, non-consensual material. The two countries are the first to ban the tool, citing the need to protect women and children. The bans follow notices to X requesting tighter safeguards, which regulators found insufficient to address the risks.

Read More

UK Law to Tackle AI Deepfakes Set to Be Enforced This Week

Ofcom has launched an investigation into X following reports of altered images generated by Grok; if the platform is found in violation of the law, it could face significant fines or even a UK-wide ban. The UK government has also announced it will bring the Data (Use and Access) Act into force this week, making it a criminal offense to create or request deepfakes, and will prioritize the issue within the Online Safety Act. Addressing the House, Liz Kendall said the content on X is illegal, emphasizing that creating or sharing intimate images without consent is a criminal offense under the Online Safety Act for both individuals and platforms, and urged the regulator to act swiftly.

Read More

Grok’s Image Generation Restricted to Paying Subscribers After Deepfake Backlash

Following a global backlash over the generation of sexualized deepfakes, Elon Musk’s Grok chatbot has restricted image generation and editing to paying subscribers. This move comes after researchers discovered Grok was being used to create explicit images, including those depicting women in sexually explicit positions and, in some cases, children. While the restriction resulted in a noticeable decline in the number of explicit deepfakes, European authorities and the British government remain unsatisfied, deeming the changes insufficient. Regulators across multiple countries, including the UK, France, Malaysia, and India, are investigating the platform, which is also subject to scrutiny under EU digital safety law.

Read More

Grok AI: Elon Musk’s Platform Enables Child Sexualization and Illegal Deepfakes

One woman described feeling “dehumanized” after Elon Musk’s AI chatbot Grok digitally altered her image to remove her clothing, and others on X have voiced similar concerns. The BBC has observed X users employing Grok to generate explicit images of women without their consent, prompting criticism of the platform’s inaction. Despite xAI’s policy against generating pornographic content and Ofcom’s stance against non-consensual intimate images, Grok’s creators have not taken the necessary steps to prevent these abuses and are facing scrutiny from regulators. The Home Office is planning to legislate to ban the use of such “nudification” tools.

Read More

Canada Accuses China of Using Deepfakes to Target Dissident Yao Zhang

Yao Zhang, a Quebec-based YouTuber, has become a target of the Chinese government after criticizing the Communist Party of China on her channel, which has over 175,000 subscribers. She has been subjected to a “spamouflage” campaign, including AI-generated explicit images and doxxing attempts, which the Canadian government has attributed to the People’s Republic of China. Zhang has also faced threats against herself and her family, including pressure on relatives in China, prompting her to be extremely cautious and limit communication. Despite these challenges, Zhang continues to speak out, recognizing both the risks and the importance of her activism.

Read More

AI Deepfakes: Health Misinformation Spreads Via Fake Doctors on Social Media

Recent investigations have revealed a concerning trend of AI-generated deepfake videos on platforms like TikTok that manipulate the likenesses of doctors and influencers to promote health supplements and spread misinformation. Fact-checking organization Full Fact uncovered numerous videos featuring impersonated health experts, directing viewers to a supplements firm called Wellness Nest. These deepfakes repurpose existing footage, altering both the visual and audio elements to endorse the company’s products. The discovery has prompted calls for social media platforms to strengthen their vigilance against AI-generated content and to swiftly remove any content that misrepresents individuals.

Read More