Mr. Deepfakes, a major online hub for deepfake pornography, has shut down following the withdrawal of support from a critical service provider. The site, known for hosting both celebrity and non-celebrity deepfake content, allowed users to upload, share, and trade nonconsensual material. The closure comes shortly after the passage of the “Take It Down Act,” though a direct link isn’t confirmed. While experts celebrate this as a positive step in combating deepfake abuse, the issue persists and will likely migrate to less visible platforms.


Mr. Deepfakes, the AI-generated porn site, recently shut down after its service provider cut off support, a move likely precipitated by the newly enacted “Take It Down Act.”

The timing strongly suggests the site’s operators were attempting to preempt legal action. It also invites speculation about their finances: perhaps they were already strained and unprepared to defend against lawsuits. Whether the American Civil Liberties Union (ACLU) would have taken on such a case, on free speech grounds, is an intriguing question.

The “Take It Down Act” itself presents some interesting ambiguities. While it aims to criminalize the posting of nonconsensual sexual imagery, including deepfakes, its wording leaves room for interpretation. One clause in particular raises concerns: it stipulates that the depicted content must not have been voluntarily exposed by the individual in a public or commercial setting. This could create a loophole. If someone willingly posts an intimate photo online, could anyone then legally use that image to create deepfakes? That reading seems counterintuitive and suggests the law would benefit from clearer wording. A simpler law prohibiting the use of someone’s likeness in intimate or pornographic depictions without consent would have been more straightforward and effective.

The fact that the owner of Mr. Deepfakes remains unknown is telling. It highlights the clandestine nature of this type of operation and the lengths some will go to remain anonymous while profiting from illegal activity. The contrast between the long-standing existence of faked static images and the recent public outcry over deepfake videos is also striking: the ease with which realistic moving images can now be created has amplified the issue considerably.

Perspectives on the ethical implications vary. Some consider deepfakes of celebrities freely shared online less problematic, but the creation and distribution of nonconsensual deepfakes of ordinary individuals is widely condemned. The objection that this isn’t truly AI but rather deepfake technology may be technically accurate, but it misses the point: deepfake technology depends on advances in AI. Moreover, the fact that Mr. Deepfakes let users buy and sell custom nonconsensual content takes the issue well beyond artistic expression.

The site’s conduct up to the shutdown appears to have been driven by calculated risk. It operated until it hit legal roadblocks in various jurisdictions, initially banning users from the UK and the Netherlands in response to existing legislation; the US ban was likely the final straw. The operators may have chosen to shut down rather than face protracted legal battles and the associated costs and reputational damage. It’s possible they squandered their funds on frivolous pursuits instead of legal representation, but it’s just as likely the shutdown was a calculated move to avoid prosecution. After all, they clearly knew the legal landscape was shifting against them.

The free speech implications are complex. The creation and dissemination of nonconsensual deepfake pornography is undeniably harmful, yet some consider it problematic to use that harm as a reason to curtail free speech. The core issue is consent: the individuals depicted haven’t consented to the creation or distribution of the material, making it an act of violation rather than an exercise of free speech. The claim that banning it is “antithetical to free speech” is weak; free speech does not encompass the right to commit crimes, especially those involving malicious intent and the violation of a person’s privacy. The argument against deepfakes isn’t that people have no right to self-expression, but that one person’s self-expression must end where it infringes on the rights of others.

The legal arguments are equally intricate. Analogies to sound-alike parodies or unlicensed biographies are inapt; those are often forms of commentary or critique, while deepfake pornography lacks such redeeming qualities. The comparison to a Taylor Swift crochet doll is even less relevant, since the harm hinges on the degree of realism: a drawing or painting wouldn’t be as convincing as a well-executed deepfake. The lack of consent is crucial here. This is the misuse of someone’s likeness in a highly intimate and damaging context, an issue of privacy and protection from reputational harm, and it plausibly falls under several existing legal categories, namely the dissemination of revenge porn and defamation.

The technological aspect is significant. The rapid advancement of AI and deepfake technology has outpaced legal frameworks, and the definition of “AI” itself is fluid. Some argue that deepfake technology shouldn’t be classified as AI, but that’s a semantic debate that ignores the technology’s basis in artificial intelligence. Attempting to legislate such a nascent technology risks accidentally criminalizing legitimate uses, and the lack of clear legal precedents calls for carefully crafted legislation. A test case would help the judicial system grasp the full complexity of the issue and provide clear guidance for future cases. Ultimately, the problem is less the technology itself than how it is used to violate individuals’ privacy and inflict harm. The existing legal framework should already address that harm; the challenge lies in applying existing laws to the novel context of deepfake technology.