The Defense Department recently launched GenAi.mil, a new generative AI platform meant to integrate artificial intelligence across the Armed Forces. Almost immediately, the platform analyzed a hypothetical Caribbean boat strike scenario and determined it to be “unambiguously illegal.” Its ability to navigate the Geneva Conventions seemingly surpassed that of the human officers above it. The implications of the technology are significant, but so is what it reveals about the chain of command.
Pentagon Unveils New GenAI Platform, It Immediately Starts Flagging Pete Hegseth’s War Crimes.

Well, this is a headline that practically writes itself, doesn’t it? The Pentagon rolls out a shiny new GenAI platform, and within minutes it’s apparently pointing fingers at some potential war crimes. It’s a classic case of accountability meeting automation, which, frankly, is something we can all get behind. I mean, we’re talking about an AI that’s seemingly capable of seeing through the fog of… well, certain individuals’ actions.
The whole thing seems to have started with a “hypothetical” Caribbean boat strike scenario. The AI, dubbed “G.I. Gemini,” quickly identified that attacking shipwrecked sailors is, and I quote, “literally the textbook definition of a war crime.” You know, the kind of thing that’s actually spelled out in the UCMJ handbook. What’s even more amusing, or perhaps concerning depending on your perspective, is that the AI reached this conclusion without any of the human command structure having to weigh in. Recognizing a textbook war crime is a pretty low bar, but hey, the AI cleared it.
The implication, it seems, is that the AI is performing better than the human chain of command, which is almost ironic given the individuals actually in charge of this scenario. As one Reddit user pointed out, the military finally built something that provides answers without producing a confusing morass of PowerPoint decks. That same user’s take: Hegseth is going to spend more time investigating who posted this on Reddit than he does on his actual job.
The GenAI platform is, of course, tied to each individual service member’s account, which raises some very interesting questions about privacy and accountability. Any IT professional will tell you that every query is logged, tagged, and fed into further analytics. You can probably already guess the kinds of questions being asked of it, from how to improve relations with the commanding officer to how to get promoted. I’m hoping people are also asking how to better care for veterans’ mental health and well-being.
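For the curious, here’s a minimal sketch of what that kind of query telemetry usually looks like under the hood. To be clear, this is a generic illustration only; the field names, the tagging logic, and the `log_query` helper are all hypothetical, not anything documented for GenAi.mil:

```python
# Hypothetical sketch of server-side query telemetry.
# None of these field names or tags come from GenAi.mil itself.
import json
import time
import uuid


def tag_query(query: str) -> list[str]:
    """Naive keyword tagging; a real system would use a classifier."""
    tags = []
    lowered = query.lower()
    if "promot" in lowered:
        tags.append("career")
    if "commanding officer" in lowered:
        tags.append("chain-of-command")
    if any(w in lowered for w in ("mental health", "veteran")):
        tags.append("wellbeing")
    return tags or ["uncategorized"]


def log_query(user_id: str, query: str) -> dict:
    """Build one telemetry record for a single chat query."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user_id": user_id,        # tied to the individual account
        "query_text": query,       # the raw prompt, verbatim
        "tags": tag_query(query),  # coarse topic labels for analytics
    }
    # A real deployment would ship this to a log pipeline or message
    # queue for downstream analytics, not print it to stdout.
    print(json.dumps(record))
    return record


log_query("sgt.example", "How do I get promoted faster?")
```

The point is simply that “logged and tagged” isn’t abstract: every prompt becomes a record with your name attached, ready for whatever analytics come next.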
Of course, this also brings up the inevitable concerns about bias and the potential for the AI to be manipulated. There’s also the question of whether it will just become another annoying hurdle to click through every time we log on. Either way, it is clearly a tool being forced on everyone.
The fact that the government is relying on a private company for its GenAI is also a concern. As we’ve seen elsewhere, once an AI starts producing answers that conflict with those in power, you can expect some adjustments. Right-wing billionaires, as one commenter put it, have to repeatedly lobotomize their AI because they don’t like the honest answers it gives about the real world. Which means what counts as “fact” is up for debate: a large language model isn’t designed to state facts, it’s designed to produce plausible approximations of them.
This whole thing has some definite “The Day the Earth Stood Still” overtones. The AI reminds me of Gort, programmed to act with absolute morality. The fact is that the Defense Department’s new ChatJAG turned out to be better than the human chain of command.
It also seems to be a case of the AI simply being more objective. The laws of armed conflict are straightforward by design; the lowest-ranked combatant is supposed to be able to figure out how not to commit a war crime.
Ultimately, the most interesting thing is the AI itself. For now, it is still generating answers grounded in fact; the unfortunate part is that, given the pressures described above, that accuracy will likely diminish. Enjoy the truth while it lasts.
