Recent investigations have revealed a concerning trend of AI-generated deepfake videos on platforms like TikTok that manipulate the likenesses of doctors and influencers to promote health supplements and spread misinformation. The fact-checking organization Full Fact uncovered numerous videos featuring impersonated health experts, directing viewers to a supplements firm called Wellness Nest. These deepfakes repurpose existing footage, altering both the visual and audio elements so that the people shown appear to endorse the company's products. The discovery has ignited calls for social media platforms to strengthen their vigilance against AI-generated content and to swiftly remove anything that misrepresents real individuals.
AI deepfakes of real doctors spreading health misinformation on social media are a truly concerning development, and one that appears to be growing more prevalent. The ability to convincingly replicate a person's likeness and voice, then use it to disseminate false or misleading health advice, is a recipe for disaster. Think about it: a doctor you trust, whose opinions you value, seemingly endorsing a dubious supplement or promoting a dangerous treatment on your social media feed. It's easy to see how people could be swayed by such deceptive content.
AI deepfakes of real doctors, in my assessment, represent a significant threat to public health. It's not just about the potential to mislead individuals; it's about eroding trust in the medical profession as a whole. People are already bombarded with information, and distinguishing credible sources from misinformation is hard enough. When AI blurs the lines even further, the resulting confusion can ultimately lead people to make harmful healthcare decisions.
The problem, in my view, is exacerbated by the sheer reach of social media platforms. These platforms are designed to amplify content, often prioritizing engagement over accuracy. Deepfakes, by their nature, are attention-grabbing. They can spread rapidly, reaching a vast audience before anyone can effectively debunk them. And, let’s be honest, the incentives often aren’t aligned with the public good. There’s a lot of money to be made in the wellness industry, and that financial incentive often drives the spread of misinformation.
Frankly, it’s not all that surprising. We’ve seen actors pretending to be doctors in advertisements for years. The new twist is generating an AI replica of a real person and having it deliver the pitch, but it’s the same old deception dressed up in a new technique. The potential for damage is undeniable.
One of the most immediate consequences is that these videos can encourage people to reject sound medical advice in favor of potentially dangerous alternative treatments. If a deepfake of a trusted doctor is promoting a product or procedure that lacks scientific backing, some people will follow that advice. This can lead to serious health complications and, in some cases, even death. It’s also worth considering the long-term impact: it can create a general sense of distrust in doctors and the medical establishment, making it harder for people to seek and receive the care they need.
So what can be done? It’s easy to get frustrated and feel like this is a problem that’s impossible to solve, but there are definitely steps that could be taken. The most obvious would be to regulate the use of AI. There should be laws requiring clear labeling of AI-generated content on social media, which would help users quickly identify deepfakes and assess their credibility. More than that, there should be rules against using someone’s likeness without their permission. That should be a given.
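To make the labeling idea concrete, here is a minimal sketch of how a platform might enforce a disclosure rule at upload time. It is purely illustrative: the field names and the enforcement flow are my own assumptions, not any real platform's API.

```python
# Illustrative sketch only: enforcing an "AI-generated" disclosure rule at
# upload time. All field names (e.g. "ai_generated", "label") are
# hypothetical, not taken from any real platform.

def needs_ai_label(upload_metadata: dict) -> bool:
    """Return True if the upload is declared as synthetic media
    but carries no visible AI-generated label."""
    declared_synthetic = upload_metadata.get("ai_generated", False)
    has_label = upload_metadata.get("label") == "AI-generated"
    return declared_synthetic and not has_label

# Example: a video declared as synthetic but uploaded without the label
upload = {"ai_generated": True, "label": None, "uploader": "wellness_ads_42"}
if needs_ai_label(upload):
    print("Reject or auto-label: synthetic media must carry an AI-generated tag.")
```

A rule like this only catches honest disclosures, of course; it would need to sit alongside detection of undeclared synthetic media.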
There are also technological solutions that could be employed. AI could itself be used to detect deepfakes, and social media platforms could build that detection into their moderation pipelines so misleading content is identified and removed. Furthermore, doctors themselves can actively fight back: they can inform their patients about the threat of deepfakes and alert the public whenever their likeness is used without consent.
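As a rough illustration of what platform-side screening might look like, the sketch below samples frames from an uploaded video and passes them to a scoring function. The scoring function is a placeholder standing in for a trained deepfake-detection model; the threshold, sampling rate, and review workflow are all assumptions made for the example.

```python
# Illustrative sketch only: frame-level screening of an uploaded video for
# possible manipulation. The classifier is a stand-in; a real pipeline would
# load a trained deepfake-detection model in its place.

import cv2  # OpenCV, used here to read frames out of a video file


def looks_manipulated(frame) -> float:
    """Placeholder score. A real system would run a trained detection
    model here and return a manipulation probability for the frame."""
    return 0.0  # stand-in value; no actual detection happens in this sketch


def screen_video(path: str, threshold: float = 0.8, sample_every: int = 30) -> bool:
    """Return True if any sampled frame scores above the threshold."""
    capture = cv2.VideoCapture(path)
    flagged = False
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % sample_every == 0 and looks_manipulated(frame) > threshold:
            flagged = True
            break
        index += 1
    capture.release()
    return flagged
```

In practice, anything flagged this way would likely go to human review rather than being removed automatically, since detectors produce false positives.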
But the issue isn’t really the AI itself; the focus should be on the people using the tool. AI is just a tool, like a hammer: it can be used to build something useful, or it can be used to cause damage. It all depends on the user’s intent. The bad actors, the “AI terrorists”, are the ones who need to be dealt with.
It’s clear that this is a rapidly evolving issue, and we’re at a point where society is vulnerable. It demands attention, collaboration, and a proactive response to protect public health and maintain trust in credible sources. As the technology continues to advance, so must our efforts to safeguard against its misuse.
