Ofcom has launched an investigation into X following reports of altered images generated by Grok, which could result in significant fines or even a UK-wide ban if the platform is found to be in violation of the law. The UK government has also announced it will bring the relevant provisions of the Data (Use and Access) Act into force this week, making it a criminal offense to create or request non-consensual intimate deepfakes, and will prioritize the issue under the Online Safety Act. Liz Kendall, the Technology Secretary, told the House that the content on X is illegal, emphasizing that creating or sharing intimate images without consent is a criminal offense under the Online Safety Act, for individuals and platforms alike. She urged the regulator to act swiftly.
The UK is bringing into force a law this week to tackle Grok AI deepfakes. It’s about time, really. The potential for misuse of AI, especially in creating deepfakes, is a serious concern, and this new law is a step in the right direction. It focuses on the source, targeting the companies that supply the tools used to generate non-consensual intimate images. That’s a sensible approach: going after the infrastructure that enables the problem. Hopefully, other countries will follow suit quickly, because this isn’t just a UK problem.
Deepfakes are undeniably terrible. They can be used to create false narratives, spread misinformation, and inflict real harm on individuals and society. The law aims to prevent the creation and distribution of these harmful images, which is critical. Now, here’s hoping this law doesn’t get bogged down by political wrangling or the threat of trade tariffs. And it’s a relief to see parliament getting ahead of the curve for once, although it took a tool like Grok AI to highlight the issue.
The intention is clear: prevent the tools that make these images from being easily available. It’s also crucial that companies found to be violating the law face real criminal liability, not just token fines; otherwise, fines become a cost of doing business and the problem persists. It’s a tricky balance, but the goal is to make the creation and distribution of these deepfakes unprofitable, rather than handing out a slap on the wrist.
Of course, the debate inevitably shifts to platforms like X, and whether a blanket ban is the solution. Some advocate for one, citing the platform’s role in facilitating the spread of harmful content, while others worry about the chilling effect on free speech. The law itself, though, is aimed not at banning X but at the tools used to create the content, which is a key distinction.
The effectiveness of this law will depend on its implementation and enforcement. How well will it target the tools and the companies that supply them? What penalties will be severe enough to deter this behavior? These questions remain open. With good enforcement, though, the law could prove effective in curbing the creation and distribution of harmful deepfakes and help protect people from becoming victims.
The law’s focus on “non-consensual intimate images” is a welcome move. People’s privacy is under attack, and this law is a start toward defending it. AI-generated images of people in states of undress without their consent are not harmless; they are weapons of abuse. This should be viewed as a positive development, regardless of political affiliation. Hopefully other nations will follow the UK’s example and put similar laws into effect.
There is a concern that this legislation could be overly broad and catch more tools and technologies than necessary. Any rushed legislation that tries to deal with tech can have unforeseen consequences. It’s also understood that, in the wrong hands, powerful PCs can be used to generate deepfakes locally, making enforcement a challenge. However, the law aims to tackle the problem at its source, which is the right approach.
The debate also inevitably turns to X itself and the responsibility it holds for the circulation of deepfakes. How the platform responds to these images will go a long way toward determining whether the law succeeds in practice. The general feeling is that the platform should be held accountable for distributing these images, so its actions here matter.
While the primary intent is to limit platforms’ ability to host AI deepfakes, there is a risk that the law will not address the root causes of the problem. Still, the UK is taking concrete steps to protect the public. The issue is clear, and the actions being taken could have a real, positive impact.
