AI regulation

Senator Warns of AI Cyberattack Threat: “Wake Up, This Will Destroy Us”

Anthropic reported thwarting what it believes was the first large-scale cyberattack executed without significant human intervention, likely orchestrated by a Chinese state-sponsored group. The attack targeted major tech firms, financial institutions, and government agencies, and it highlights a concerning trend: AI can now efficiently perform tasks such as analyzing target systems and producing exploit code. The development has prompted calls for AI regulation, with Senator Chris Murphy emphasizing the urgent need for government intervention, while some researchers remain skeptical of the technology's current capabilities. Concerns center on the potential for less experienced and less well-resourced groups to carry out sophisticated attacks, and on the need for improved detection methods.

Read More

AI Superintelligence Ban: A Futile Effort Amidst Hype and Reality

A coalition of more than 850 people, including AI experts and tech leaders such as Richard Branson and Steve Wozniak, has issued a statement calling for a halt to superintelligence development. The statement was prompted by concerns about the potential risks of superintelligence, including economic displacement, loss of control, and national security threats. The signatories, among them AI pioneers Yoshua Bengio and Geoffrey Hinton, demand a moratorium on superintelligence development until public support is established and safety can be guaranteed. The group behind the statement is notably diverse, spanning academics, media figures, religious leaders, and former U.S. political and national security officials.

Read More

Senate Votes 99-1 to Remove AI Regulation Ban from Tax Bill

The Senate voted to strip a provision that would have prevented US states from regulating artificial intelligence, dealing a blow to the Silicon Valley and White House officials who backed the measure. In an overnight voting session, senators overwhelmingly opposed the language, 99-1, rejecting the pause on state AI legislation despite support from the tech industry's GOP allies and White House technology advisors.

Read More

MTG Rejects Bill After Admitting She Didn’t Read It

Representative Marjorie Taylor Greene voted for the "One Big Beautiful Bill" without reading a provision that would prevent states from regulating AI for a decade. After learning of the provision, she publicly reversed her stance, calling it a violation of states' rights. The admission drew sharp criticism from other representatives and underscored the importance of thoroughly reviewing legislation before voting. The bill, which has also drawn criticism from Elon Musk, has passed the House and is currently in the Senate.

Read More

Federal Bill Seeks to Ban Government Use of Facial Recognition

H.R.3782 aims to prohibit the Federal Government from using facial recognition technology for identity verification, among other purposes. The bill has sparked considerable debate, highlighting the difficulty of balancing technological advancement with individual privacy. The existing use of facial recognition by agencies like the IRS, through platforms such as ID.me, underscores the urgency behind such legislative efforts.

The bill's focus on identity verification seems, at first glance, relatively straightforward. However, the vagueness of the "other purposes" clause leaves room for ambiguity and potential loopholes, and the lack of specificity invites criticism and raises questions about the bill's overall scope and effectiveness…

Continue reading

House Republicans’ Bill Surprise: Unpopular Provisions Spark Outrage

Following Elon Musk's condemnation of the GOP spending bill, several House Republicans, including Representatives Marjorie Taylor Greene, Scott Perry, and Mike Flood, claimed ignorance of specific provisions within the legislation. The representatives said they would have voted against the bill had they been aware of the measures, which relate to AI regulation, contempt of court, and states' rights. Their statements raise questions about their due diligence in reviewing the bill before voting. The timing of the admissions coincides with growing public disapproval and potential financial ramifications for some of those involved.

Read More

MTG’s Post-Vote Panic: AI Concerns After Reading Bill She Approved

Representative Marjorie Taylor Greene (R-GA) recently admitted to voting for the "big, beautiful bill" without reading it, specifically citing a provision on pages 278-279 that would prevent states from regulating AI for ten years. She now opposes the section, calling it a violation of states' rights, and is demanding its removal. Greene's admission sparked widespread online criticism of her failure to thoroughly review the legislation before voting. The incident follows a similar admission by Representative Mike Flood (R-NE), who also voted for a bill without fully understanding its contents.

Read More

Trump Fires Copyright Chief After AI Fair Use Report

Register of Copyrights Shira Perlmutter was reportedly fired following the release of a report on the fair use of copyrighted data for AI training. The report concluded that while some AI uses, such as research, might qualify as fair use, commercial applications that compete with existing markets likely do not. The firing has been criticized as an "unprecedented power grab," possibly linked to the report's unfavorable implications for AI companies. Around the same time, Librarian of Congress Carla Hayden was also dismissed, though the White House cited unrelated reasons.

Read More

California Bans AI-Driven Insurance Claim Denials

Senate Bill 1120, the "Physicians Make Decisions Act," prohibits California health insurers from denying claims based solely on AI algorithms. Driven by the state's high rate of claim denials (approximately 26% in California) and concerns about AI misuse, the law ensures human oversight of coverage decisions for medically necessary care. While not banning AI outright, SB 1120 mandates that human judgment remain central, safeguarding patient access to quality care. The Department of Managed Health Care will enforce the law, auditing denial rates and imposing deadlines for authorizations, with potential fines for violations. The law is garnering national attention, with other states and Congress considering similar legislation.

Read More