The Parents & Kids Safe AI Coalition contacted organizers of child safety groups about policy priorities for AI regulation, including age verification and parental controls. Many organizers, however, were unaware that the coalition was funded entirely by OpenAI, the company behind the popular ChatGPT chatbot. This lack of transparency led some groups to withdraw their support once OpenAI’s substantial role and funding became apparent. These events highlight concerns that AI companies may be attempting to unduly influence child safety legislation, with some advocates calling on them to step back from policy discussions.
A recent national NBC News survey reveals widespread voter apprehension about artificial intelligence, with a majority believing its risks outweigh its benefits. The distrust extends to both major political parties: voters feel neither Democrats nor Republicans are effectively addressing AI policy. While some leaders tout AI’s potential for advancement and economic competitiveness, a significant portion of the electorate, particularly younger voters and women, holds negative views driven by concerns about job displacement. The survey indicates AI is a developing political issue, with an opening for either party to gain traction by addressing voter anxieties.
Senate Republicans have used artificial intelligence to create a deepfake advertisement featuring a fabricated version of Democratic candidate James Talarico, who appears to speak for over a minute. The ad, the latest in a series of AI-generated content from the National Republican Senatorial Committee, marks a significant advance in lifelike AI impersonation of candidates. While a small disclosure appears on screen, experts question its adequacy, and the ad has intensified ethical concerns and calls for regulation of such technology in political campaigns. The proliferation of these AI-generated visuals, even with disclosures, raises the prospect of deception and of the tactic becoming a routine campaign tool across the political spectrum.
The Senate GOP’s official social media account has published an attack ad featuring an AI-generated deepfake of Texas Senate candidate James Talarico. The synthetic video depicts Talarico appearing to endorse his own real past social media posts on issues such as transgender rights, Christian beliefs, and immigration. The deepfake, however, adds fabricated expressions of enjoyment to these statements, which are presented without prominent disclosure of their AI origin. The incident reflects a trend of Republican campaigns using deepfakes for political attacks, raising concerns about their impact on democratic discourse and prompting calls for federal regulation of AI-generated political content.
President Donald Trump has stated that AI development will be stifled if companies are forced to navigate 50 different state-level regulatory frameworks. He plans to sign an executive order establishing a single national standard for AI, arguing against the complexity of seeking approval state by state. A draft of the order would reportedly authorize the Department of Justice to challenge states with AI laws deemed “onerous.” The stance is likely to face opposition, including from Republicans who typically advocate for states’ rights.
Anthropic reported thwarting what it believes was the first large-scale cyberattack executed without significant human intervention, likely orchestrated by a Chinese state-sponsored group. The AI used in the attack targeted major tech firms, financial institutions, and government agencies, highlighting a concerning trend: AI can now efficiently perform tasks such as analyzing target systems and producing exploit code. The development has prompted calls for AI regulation, with Senator Chris Murphy emphasizing an urgent need for government intervention, though some researchers remain skeptical of the technology’s current capabilities. Concerns center on the potential for less experienced, less resourced groups to carry out sophisticated attacks, and on the importance of improved detection methods.
A collective of more than 850 signatories, including AI experts and tech leaders such as Richard Branson and Steve Wozniak, has issued a statement calling for a halt to superintelligence development. The call for a pause was prompted by concerns about the potential risks of superintelligence, including economic displacement, loss of control, and national security threats. The signatories, among them AI pioneers Yoshua Bengio and Geoffrey Hinton, demand a moratorium on superintelligence development until public support is established and safety can be guaranteed. The coalition behind the statement is notably diverse, spanning academics, media figures, religious leaders, and former U.S. political and national security officials.
The Senate voted to remove a provision that would have prevented US states from regulating artificial intelligence, dealing a blow to the Silicon Valley and White House officials who backed the measure. During an overnight voting session, senators overwhelmingly opposed the language, 99-1. The rejection came despite backing for the pause on state AI legislation from the GOP’s tech-industry allies and from White House technology advisors.
Representative Marjorie Taylor Greene voted for the “One Big Beautiful Bill” without reading a provision that would bar states from regulating AI for a decade. Upon discovering the provision, she publicly reversed her stance, calling it a violation of states’ rights. The admission drew sharp criticism from other representatives and underscored the importance of thoroughly reviewing legislation before voting. The bill, which has also drawn criticism from Elon Musk, has passed the House and is now before the Senate.
H.R.3782 would prohibit the Federal Government from using facial recognition technology for identity verification, among other purposes. The bill has sparked considerable debate, highlighting the difficulty of balancing technological advancement with individual privacy. The existing use of facial recognition by agencies such as the IRS, through platforms like ID.me, underscores the urgency behind such legislative efforts.
The bill’s focus on identity verification seems, at first glance, relatively straightforward. However, the vagueness of the “other purposes” clause leaves room for ambiguity and potential loopholes. That lack of specificity invites criticism and raises questions about the bill’s overall scope and effectiveness.