OpenAI Signs Pentagon Deal Amid Anthropic Ban Over AI Ethics

Following a directive to cease federal use of its AI tools, Anthropic faces a “supply chain risk” designation from the Pentagon. In contrast, OpenAI has secured a Pentagon deal to deploy its AI tools within classified systems, contingent on safety restrictions. These reportedly include prohibitions on domestic mass surveillance and a requirement of human responsibility in the use of force, specifically for autonomous weapon systems. OpenAI will embed engineers to ensure model safety, and it is advocating that the same terms be offered to all AI companies, encouraging a de-escalation from unilateral government action toward mutually agreed-upon terms.

Read the original article here

It’s fascinating, and frankly a little unsettling, to observe the rapid dance between cutting-edge AI development and the complex demands of national security. The news that OpenAI has struck a deal with the Pentagon, mere hours after Anthropic reportedly declined a similar arrangement, paints a striking picture of shifting priorities and perhaps, differing interpretations of ethical boundaries. The core of the issue seems to revolve around the concept of human responsibility, particularly when it comes to the potential use of autonomous weapon systems.

When we talk about “human responsibility” in this context, the devil truly lies in the details. It’s a phrase that can be interpreted in many ways, and it’s easy to see how it might not equate to a robust, ironclad commitment to keeping a human firmly “in the loop” for every critical decision, especially concerning lethal force. One can envision a scenario where oversight is more about reviewing logs or recordings after the fact, a passive acknowledgment rather than an active, real-time veto. This is a stark contrast to the idea of a human needing to explicitly confirm a lethal action *before* it is carried out.

This distinction appears to be the crux of the matter regarding Anthropic’s stance. It’s suggested that Anthropic stood its ground on this fundamental principle, refusing to budge on the need for stringent human control. OpenAI, on the other hand, seems to have found a way to navigate these concerns with what could be perceived as a more flexible, perhaps even a face-saving, clause. This allows the company to publicly appear to oppose autonomous killing machines while potentially not implementing the hard technical restrictions that would truly prevent such outcomes. The timing, coming immediately after the Pentagon’s well-known push for mass surveillance capabilities and robotic forces, makes this all the more pointed.

The narrative emerging is that Anthropic said a firm “no” to the more ethically fraught aspects, while OpenAI responded with an enthusiastic “yes.” This swift agreement, coming so soon after Anthropic’s refusal, certainly fuels the perception that OpenAI was more amenable to the Pentagon’s broader objectives. It’s almost as if the Pentagon, facing a potential roadblock with Anthropic, turned to OpenAI, which was apparently ready and willing to forge ahead.

It’s difficult not to draw parallels between this situation and broader discussions about the role and responsibility of AI developers. The comment comparing the energy required to train an AI to the energy required to raise a human (twenty years of life and sustenance) is a thought-provoking, albeit somewhat cynical, perspective on the immense investment in AI. Yet when AI is poised to make life-and-death decisions, the comparison falters. The idea that AI, which still struggles with basic navigation without assistance, could be trusted to make critical judgments on when to deploy force is deeply concerning.

Furthermore, the timing of this deal raises questions of its own. Some speculate it might serve as a distraction from other significant events, such as the release of sensitive files. That OpenAI’s leader has publicly downplayed the energy costs of AI training while simultaneously pursuing what many see as a dangerous military collaboration creates a significant disconnect. This also ties into anxieties about privacy, with concerns that such partnerships might lead to the sale of American citizens’ data to facilitate these technological advancements, perhaps to alleviate financial burdens.

The potential for this collaboration to escalate into a “doomsday scenario,” a concept frequently discussed in relation to advanced AI, is palpable. The phrase “humanity is on the way out” echoes a deep-seated fear that we are hurtling towards a future where autonomous systems, rather than humans, dictate critical outcomes. The notion that this deal could be a “bailout” for OpenAI, making them too important for the government to let fail financially, adds another layer of complexity to the motivations at play.

The contrasting positions of Anthropic and OpenAI highlight a critical ethical divergence in the AI landscape. While Anthropic’s decision to prioritize human oversight and potentially refuse to participate in projects involving autonomous killing or mass surveillance is seen as a principled and brave move, OpenAI’s agreement with the Pentagon suggests a different set of priorities. This has led many users to reconsider their subscriptions and explore alternatives like Claude, Anthropic’s AI, hoping for a more responsible approach to AI development and deployment. The very idea of AI operating military drones or being involved in widespread surveillance is terrifying to many, evoking dystopian visions of the future.

The speed at which this news broke, potentially producing less-than-accurate headlines, underscores the urgency and perhaps the hurried nature of such agreements. The fear of an impending “friendly fire” incident caused by AI error is a stark reminder of the potential consequences. The lack of clarity on how AI would be used for surveillance leaves many baffled and apprehensive. The concerns extend beyond immediate military applications, touching on the fundamental question of who controls these powerful technologies and what safeguards are truly in place.

The sentiment that Sam Altman is “lying” or misrepresenting OpenAI’s true commitment to ethical AI development is prevalent. The suggestion that he might be capitulating to “fascist warmongers” speaks to the deep mistrust and ethical reservations many harbor. The repeated assertion that OpenAI consistently makes the “wrong” decision when presented with an ethical choice further solidifies this perception. The looming threat of ads on ChatGPT, coupled with concerns about spying and operating potentially oppressive systems, paints a bleak picture for users.

In stark contrast, Anthropic’s stance, described as principled and brave, has significantly boosted its brand value. They are seen as having taken a stand, enduring potential consequences for their refusal to compromise on crucial ethical lines. The public’s reaction is a powerful indicator of the growing demand for AI that is developed and deployed responsibly, with transparency and a commitment to human well-being at its core. The idea of major corporations partnering with OpenAI, thus indirectly supporting their ventures, is also something users are urged to reconsider, suggesting a broader call for ethical consumerism in the tech space.

Ultimately, the situation underscores a critical juncture in our relationship with artificial intelligence. The deal between OpenAI and the Pentagon, set against the backdrop of Anthropic’s principled refusal, raises profound questions about the direction of AI development, the ethics of its application in warfare and surveillance, and the responsibility of the companies creating these powerful tools. The future hinges on our ability to navigate these complex issues with clarity, integrity, and a steadfast commitment to human values, ensuring that AI serves humanity rather than endangering it.