Following Elon Musk’s condemnation of the GOP spending bill, several House Republicans, including Representatives Marjorie Taylor Greene, Scott Perry, and Mike Flood, claimed ignorance of specific provisions within the legislation. These representatives asserted they would have voted against the bill had they been aware of these measures, which relate to AI regulation, contempt of court, and states’ rights. Their statements raise questions about their due diligence in reviewing the bill before voting. The timing of these admissions coincides with growing public disapproval and potential financial ramifications for some of those involved.
Representative Marjorie Taylor Greene (R-GA) recently admitted to voting for the “big, beautiful bill” without reading it, specifically citing a provision on pages 278-279 that prevents states from regulating AI for ten years. She now opposes this section, calling it a violation of states’ rights, and demands its removal. Greene’s admission sparked widespread online criticism over her failure to thoroughly review the legislation before voting. The incident follows a similar admission by Representative Mike Flood (R-NE), who also voted for a bill without fully understanding its contents.
Register of Copyrights Shira Perlmutter was reportedly fired from her position following the release of a report on the fair use of copyrighted data for AI training. The report concluded that while some AI uses, like research, might qualify as fair use, commercial applications that compete with existing markets likely do not. This firing has been criticized as an “unprecedented power grab” possibly linked to the report’s unfavorable implications for AI companies. Simultaneously, Librarian of Congress Carla Hayden was also dismissed, although the White House cited unrelated reasons.
China’s Cyberspace Administration (CAC) mandates that all AI-generated content, encompassing text, images, and audio, be explicitly labeled by September 1, 2025. The regulation requires service providers to add labels both visible to users and embedded in metadata, with exceptions for specific social or industrial needs requiring unlabeled content. The CAC prohibits the alteration or removal of these labels, aiming to combat disinformation and enhance transparency around AI-generated content. Failure to comply could result in legal action from the Chinese government.
Senate Bill 1120, the “Physicians Make Decisions Act,” prohibits California health insurers from denying claims based solely on AI algorithms. Driven by a high rate of claim denials (approximately 26% in California) and concerns about AI misuse, the law ensures human oversight in coverage decisions for medically necessary care. While not banning AI entirely, SB 1120 mandates that human judgment remains central, safeguarding patient access to quality care. The Department of Managed Health Care will enforce the law, auditing denial rates and imposing deadlines for authorizations, with potential fines for violations. This California law is garnering national attention, with other states and Congress considering similar legislation.
Criminals are increasingly leveraging AI’s accessibility for malicious purposes, including sophisticated fraud schemes such as deepfake heists costing millions. A significant share of this criminal activity involves the creation and distribution of AI-generated child sexual abuse material, numbering in the thousands of images. AI also facilitates sextortion and enhances hacking capabilities by helping identify software vulnerabilities. Law enforcement agencies must urgently adapt to these evolving threats to prevent a dramatic rise in AI-enabled crime in the coming years.