China’s Cyberspace Administration (CAC) mandates that all AI-generated content, including text, images, and audio, be explicitly labeled, with the rule taking effect September 1, 2025. The regulation requires service providers to add both labels visible to users and labels embedded in metadata, with exceptions for specific social or industrial needs that require unlabeled content. The CAC prohibits altering or removing these labels, aiming to combat disinformation and improve transparency around AI-generated content. Failure to comply could result in legal action by the Chinese government.
Senate Bill 1120, the “Physicians Make Decisions Act,” prohibits California health insurers from denying claims based solely on AI algorithms. Driven by the state’s high rate of claim denials (approximately 26%) and concerns about AI misuse, the law ensures human oversight in coverage decisions for medically necessary care. SB 1120 does not ban AI outright; it mandates that human judgment remain central, safeguarding patient access to quality care. The Department of Managed Health Care will enforce the law, auditing denial rates and imposing deadlines for authorizations, with potential fines for violations. The law is drawing national attention, with other states and Congress considering similar legislation.
Criminals are increasingly exploiting AI’s accessibility for malicious purposes, including sophisticated fraud schemes such as deepfake heists costing millions. A significant portion of this criminal AI activity involves the creation and distribution of child sexual abuse material, with thousands of such images identified. AI also facilitates sextortion and enhances hacking by helping attackers identify software vulnerabilities. Law enforcement agencies must urgently adapt to these evolving threats to prevent a dramatic rise in AI-enabled crime in the coming years.