Anthropic Claude

Judge Rules Against Pentagon’s “Woke” Attack on Anthropic

The Pentagon initiated a supply chain risk designation for Anthropic after the company, citing concerns about potential misuse of its AI technology, refused to agree to new contract terms; the Pentagon cast that refusal as a threat to national security. The department argued the designation was necessary to mitigate the risks of government and military reliance on Anthropic’s widely used AI systems.

Read More

Altman Admits OpenAI Can’t Control Pentagon AI Use

OpenAI CEO Sam Altman stated that the company does not control the Pentagon’s operational decisions regarding its AI products, even as the military reportedly uses AI in operations such as the seizure of Nicolás Maduro and targeting in the conflict with Iran. The statement comes amid employee and public concern that OpenAI has crossed ethical lines its rival Anthropic refused to, particularly after the Pentagon declared Anthropic a “supply-chain risk” for refusing a deal. Despite Altman’s assurances of lawful use and efforts at damage control, Anthropic’s CEO accused OpenAI of “safety theater” and of political motivations behind its Pentagon agreement.

Read More

Military Uses Claude for Iran Strikes Despite Ban

Despite President Trump’s directive to sever ties with Anthropic, the US military reportedly utilized Claude AI for intelligence gathering, target selection, and battlefield simulations during the joint bombardment of Iran. The incident highlights how deeply AI is integrated into military operations and how difficult rapid disengagement can be. The controversy stemmed from Claude’s prior use in a Venezuelan raid, which Anthropic objected to as a violation of its terms of service prohibiting violent applications. While the defense secretary criticized Anthropic’s stance, he acknowledged the need for a transition period, allowing continued service for up to six months to enable an orderly withdrawal.

Read More

Claude Surges to App Store Top Spot Amidst User Exodus from ChatGPT Over Pentagon Stance

Anthropic’s Claude has seen a surge of users migrating from ChatGPT, particularly following OpenAI’s announcement of an agreement to deploy its AI models within the Department of Defense’s classified network. The development has unsettled some ChatGPT users, sparking online discussions about the ethical implications and prompting a notable shift toward Claude. As a result, Claude has climbed to the top position among productivity apps on the Apple App Store, with numerous users publicly documenting their switch on social media platforms like X and Reddit.

Read More

Defense Secretary Designates AI Company Anthropic a Supply Chain Risk

The Defense Secretary’s announcement deems Anthropic’s actions a betrayal of its business relationship with the United States Government, particularly regarding the Department of War’s need for unrestricted access to its models. The company, through its CEO, is accused of attempting to dictate military operational decisions under the guise of “effective altruism,” prioritizing Silicon Valley ideology over national security. Consequently, Anthropic has been designated a Supply-Chain Risk to National Security, ending all business between the company and the United States military. The decision permanently alters Anthropic’s relationship with the Armed Forces and Federal Government, with a six-month transition period for existing services.

Read More

OpenAI Secures Pentagon Deal Amidst Anthropic Ban Over AI Ethics

Following a directive to cease federal use of its AI tools, Anthropic faces a “supply chain risk” designation from the Pentagon. In contrast, OpenAI has secured a Pentagon deal to deploy its AI tools within classified systems, contingent on safety restrictions similar to those Anthropic sought. These restrictions reportedly include prohibitions on domestic mass surveillance and a requirement that humans remain responsible for any use of force, specifically with respect to autonomous weapon systems. OpenAI will embed engineers to ensure model safety and is advocating that the same terms be offered to all AI companies, encouraging the government to de-escalate from punitive actions toward mutually agreed-upon terms.

Read More

Anthropic Resists Government Pressure on Autonomous Weapons and Surveillance

Secretary of War Pete Hegseth has directed the Department of War to designate Anthropic a supply chain risk after negotiations reached an impasse over the exceptions Anthropic insists on to otherwise lawful uses of its AI model, Claude. Those exceptions concern mass domestic surveillance of Americans and fully autonomous weapons: Anthropic maintains that its models are not reliable enough for fully autonomous weapons and that mass surveillance of Americans would violate their rights. Anthropic asserts that this unprecedented designation, if formally adopted, would not legally affect individual or commercial customers, nor would it restrict Department of War contractors’ use of Claude for non-contractual purposes. The company intends to challenge any such designation in court and reaffirms its commitment to supporting American warfighters within its principled boundaries.

Read More

Anthropic Challenges Pentagon Supply Chain Risk Designation in Court

Anthropic is preparing to contest a significant Pentagon designation in court, and the move is sparking considerable discussion. The Pentagon has labeled the prominent AI company a supply chain risk, and Anthropic intends to fight that label. To some observers, the situation feels like a modern-day echo of past instances in which novel technologies were met with unwarranted suspicion, much as rock music once was. The company’s decision to take a stand against the government on this matter is a notable first, and many are rooting for it to succeed in its legal challenge.

Read More

Anthropic Declines Pentagon Request, Praised for Ethical Stance

Despite the Pentagon’s offer to modify the contract, Anthropic has refused to alter its terms, citing ongoing concerns that its AI system, Claude, could be weaponized for mass surveillance or autonomous warfare. Defense Secretary Pete Hegseth threatened to cancel Anthropic’s $200 million contract and label the company a “supply chain risk” unless its AI model is permitted for “all lawful purposes.” Anthropic maintains that while it supports AI’s role in national defense, certain applications, such as mass surveillance and fully autonomous weapons, fall outside the bounds of safe and ethical technological use. The company stated that the Pentagon’s revised language, though presented as a compromise, contained loopholes allowing its safeguards to be overridden, solidifying its refusal to comply with the request.

Read More

Anthropic Ditches Safety Promises Amid Pentagon AI Deal

Anthropic, an AI company initially founded by former OpenAI employees with a strong focus on safety, is now adopting a more flexible approach to its self-imposed AI development guardrails. Citing shortcomings in its previous Responsible Scaling Policy and the rapid pace of the AI market, the company has moved to a nonbinding safety framework. This change, detailed in a recent blog post, allows for dynamic adjustments to its safety guidelines, separating internal plans from broader industry recommendations. The announcement follows increasing pressure and competition, including potential repercussions from the Pentagon over AI red lines.

Read More