To combat the misuse of AI-generated deepfakes, the Danish government plans to grant citizens property rights over their likeness and voice. This proposed legislation would allow individuals to request the removal of deepfakes featuring their image or voice from hosting platforms. The bill, which has cross-party support and is expected to pass this fall, aims to protect artists, public figures, and ordinary people from digital identity theft, addressing concerns highlighted by instances like AI-generated music mimicking popular artists. Further legislation is planned to potentially fine companies that fail to comply with takedown requests, reflecting Denmark’s commitment to both freedom of speech and individual rights in the age of generative AI.
The Danish government is planning to amend copyright law to protect individuals from AI-generated deepfakes by granting them ownership over their likeness, voice, and facial features. This proposed law, which enjoys cross-party support, aims to combat the misuse of digital imitations. Once approved, the legislation will allow individuals to demand the removal of unauthorized deepfake content and could result in compensation for those affected, with potential for severe fines for non-compliant tech platforms. The government intends to use its upcoming EU presidency to share these plans and encourage other European countries to adopt similar protections, hoping to send a clear message about individual rights in the age of AI.
President Trump signed the Take It Down Act, criminalizing the nonconsensual online distribution of authentic and AI-generated intimate images and videos. The legislation mandates website removal of such content within 48 hours of victim requests and imposes penalties on violators, including restitution and imprisonment. Bipartisan support led to the bill’s unanimous Senate passage and overwhelming House approval. The Act addresses the growing problem of deepfakes and online harassment, particularly impacting women and young people. First Lady Melania Trump championed the legislation, emphasizing its importance in protecting individuals from online abuse.
Ramsey Khalid Ismael, known as “Johnny Somali,” faces potential imprisonment of up to 31 years in South Korea following his October 2024 arrest. His arrest stems from multiple charges, including creating and sharing deepfakes—a sex crime in South Korea—and disrespectful acts at the Statue of Peace, a site commemorating victims of wartime sexual slavery. Each deepfake charge carries a maximum sentence of 10.5 years. This incident follows a pattern of controversial behavior in other countries, where he has faced less severe consequences for his actions.
Mr. Deepfakes, a major online hub for deepfake pornography, has shut down following the withdrawal of support from a critical service provider. The site, known for hosting both celebrity and non-celebrity deepfake content, allowed users to upload, share, and trade non-consensual material. This closure comes shortly after the passage of the “Take It Down Act,” though a direct link isn’t confirmed. While experts celebrate this as a positive step in combating deepfake abuse, the issue persists and will likely migrate to less visible platforms.
The Take It Down Act, overwhelmingly approved by Congress, mandates the removal of non-consensual intimate images, including deepfakes, from social media platforms within 48 hours of notification. The bill criminalizes the knowing publication of such images. Supported by both Democrats and Republicans, including Senators Klobuchar and Cruz, the legislation aims to protect victims from online abuse and hold perpetrators accountable. Its passage follows an earlier legislative attempt that was thwarted last year by objections over unrelated budgetary provisions.
X, Elon Musk’s social media platform, is suing Minnesota, alleging its new deepfake law violates free speech rights. The lawsuit argues the law’s vague language compels platforms to over-censor content to avoid potential criminal penalties for even ambiguous violations. This, X contends, stifles valuable political discourse and contravenes core First Amendment protections. The company maintains existing robust content moderation policies already address problematic content and seeks a declaration that the Minnesota law is unconstitutional. State officials are reviewing the lawsuit.
A new report reveals that over a quarter of Canadians have encountered sophisticated, politically polarizing fake content on social media during the federal election. This includes a surge in Facebook ads mimicking legitimate news sources to promote fraudulent investment schemes, often involving cryptocurrency, despite Meta’s news ban. Researchers highlight the concerning trend of deepfake videos, such as those falsely featuring Prime Minister Carney, used to promote these scams. While the content itself may not significantly sway voters, the erosion of trust in legitimate news sources and the inadequate response from tech platforms pose a substantial risk. The report emphasizes the need for increased protections against online disinformation.
Russian disinformation has infiltrated AI chatbots around the world, spreading false narratives and propaganda with alarming effectiveness. A significant portion of leading AI chatbots has been observed repeating disinformation originating from sources like the Pravda network, highlighting a worrying trend in the spread of misinformation.
Russia’s influence on AI chatbots isn’t just a matter of occasional errors; it reflects a coordinated effort to manipulate global narratives. The sheer volume of false information echoed by these chatbots suggests deep penetration of their training data and retrieval sources. This raises serious concerns about the reliability of information obtained from these increasingly popular tools.
The ability of these AI chatbots to convincingly present fabricated information is especially troubling.
The U.K. government will criminalize the creation and sharing of sexually explicit deepfake images, addressing the alarming rise of this form of online abuse, particularly against women and girls. This new offense, part of the Crime and Policing Bill, expands existing child protection laws to include adults and will carry a potential two-year prison sentence. Further legal updates will increase penalties for taking intimate images without consent and installing equipment to facilitate such acts, also punishable by up to two years in prison. These measures aim to provide law enforcement with stronger tools to combat non-consensual intimate image abuse and hold perpetrators accountable.