Military AI Is More Likely Than You Think

Apr 10, 2025
CROWS weapon platform
Despite appearances, the CROWS weapon platform is not a friendly and lovable robot sidekick

If you haven't read AI 2027, you should. It's a story (model? scenario?) describing how superhuman AI could be created and what might happen if it is, and the authors include a huge amount of justification for their conclusions. It's plausible, although by the end it begins to creep into the ridiculous. It might actually be ridiculous, or I could just be falling into the classic trap of believing that anything that seems weird to me personally can't happen. We'll find out around 2028 or 2030, I suppose.

I disagree strongly with one part of it, though. To explain why, I'll need to tell a story about a panel on AI safety that I attended at FED Supernova in 2024.

FED Supernova is a conference held every year in Austin, Texas. There are booths for companies selling unique technology to various parts of the military: everything from project management software to high-tech bandages. In 2024, I was walking around the lobby when I saw what looked like a Boston Dynamics robot with armor plating and rails (for mounting guns) attached to it.

I talked to the operator for a bit. He claimed they could be controlled remotely or run autonomously, were aware of other nearby units, and could move automatically in packs. They could also serve as a weapons platform for anything up to a Javelin missile, and could handle the recoil from a .50 caliber machine gun. Fun stuff.

Tactical robot dog
Cool as hell, even if it ends up killing us all

Back in 2017 or so, when I first became aware of Spot, the Boston Dynamics version of this robot, my first thought was "if someone isn't already working on turning that thing into a weapon, they will be shortly." Watching the weaponized version of that robot walk around the lobby during a DoD technology conference in 2024, my immediate thought was "if someone isn't already trying to automate tactical decision making for those things, they will be shortly." I'm not an expert on AI, but as far as I can tell, the technology already exists, even if it hasn't yet been assembled from the various available models and data-ingestion tools.

You would need to describe a battlefield in a way that an AI could understand, train a model on a huge number of historical engagements, and convert the output of that model into orders for packs of robot dogs outfitted with various complementary weapons systems.

All of which is possible now. Actually, all of it was possible in 2024, so I'd guess it's probably being field-tested somewhere already.
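To make that concrete, here's a purely hypothetical sketch of what the simplest version of that pipeline could look like: encode the battlefield state, run it through a model, and translate the output into per-unit orders. Every name and structure below is invented for illustration, the "model" is just a placeholder, and nothing here reflects any real system.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical illustration only. All types, names, and logic are invented.

@dataclass
class Unit:
    unit_id: str
    position: tuple          # (x, y) grid coordinates
    weapon: str              # e.g. "M2", "Javelin"

@dataclass
class BattlefieldState:
    friendly: List[Unit]
    hostile: List[Unit]
    terrain: List[List[int]]  # coarse grid of movement costs

@dataclass
class Order:
    unit_id: str
    action: str              # "move", "hold", "engage"
    target: tuple            # grid coordinate

def encode_state(state: BattlefieldState) -> List[float]:
    """Flatten the battlefield into a feature vector a model could consume."""
    features = []
    for u in state.friendly + state.hostile:
        features.extend([float(u.position[0]), float(u.position[1])])
    return features

def tactical_model(features: List[float]) -> List[float]:
    """Stand-in for a model trained on historical engagements.
    Here it just echoes its input; a real system would run inference."""
    return features

def decode_orders(state: BattlefieldState, output: List[float]) -> List[Order]:
    """Turn model output back into per-unit orders. Trivial placeholder:
    send every friendly unit toward the nearest hostile unit."""
    orders = []
    for u in state.friendly:
        nearest = min(
            state.hostile,
            key=lambda h: abs(h.position[0] - u.position[0])
                        + abs(h.position[1] - u.position[1]),
        )
        orders.append(Order(u.unit_id, "move", nearest.position))
    return orders

if __name__ == "__main__":
    state = BattlefieldState(
        friendly=[Unit("dog-1", (0, 0), "M2"), Unit("dog-2", (1, 0), "Javelin")],
        hostile=[Unit("opfor-1", (5, 4), "unknown")],
        terrain=[[1] * 6 for _ in range(6)],
    )
    for order in decode_orders(state, tactical_model(encode_state(state))):
        print(order)
```

The hard parts are obviously the encoding and the model itself; the point is only that the plumbing around them is ordinary software.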

With these thoughts in my head, I went into the next panel that I was planning to attend: a discussion of AI ethics and safety. Again, I'm not an expert on either AI or the ethics and alignment issues surrounding the field, but I come in contact with people who are and sometimes read through discussions or blog posts on the topic. But even I could see that the panel wasn't engaging with any particularly difficult questions. When they asked for questions from the audience, I raised my hand for the microphone.

There's a robot dog in the lobby that's partially autonomous and capable of coordinating with other robot dogs. If the technology necessary to make autonomous tactical decisions for a fireteam of those robots doesn't currently exist, it will very shortly. It seems likely that, after the tactical level is automated, the strategic decision making will be as well.

In a confrontation between two approximately equal military forces, one of which must report the current state of the battlefield up a chain of command, deliberate, and then pass orders back down, while the other can make strategic and tactical decisions immediately with complete knowledge of the battlefield, it seems obvious that, all other things being equal, the force that can act more quickly is likely to win.

Given this, there would be immense pressure on the US military to move toward heavy automation and remove humans from the loop in order to remain competitive. Is this a discussion that people in the military are having? Do they or you consider this to be a dangerous trend? And if so, are there any plans to mitigate this danger?

The panelists all smiled, looked at each other, and shrugged. We sat in silence for a moment, and then the moderator asked if any of them would like to respond. After another brief silence, in which the panelists continued to smile vaguely out at the audience, he answered me himself. His reply was bland, reassuring, and completely missed the point of the question.

As I walked away from that panel, annoyed and uneasy, it occurred to me that everyone on that panel likely had a security clearance and access to classified information about military AI efforts. Their lack of response, in a way, was a response.

Reading through the AI 2027 scenario, my first thought was that the authors are heavily underestimating how quickly AI will be adopted by the US military. These are not people who wonder if they are falling prey to the argument from incredulity, who speculate about the dangers of superhuman AI, or who give a shit about alignment. They will nod along with whatever the AI researchers say, and then ask what the earliest expected date for a field deployment might be. The following quote, in particular, is the most improbable thing I read in the story:

The President is troubled. Like all politicians, he’s used to people sucking up to him only to betray him later. He’s worried now that the AIs could be doing something similar. Are we sure the AIs are entirely on our side? Is it completely safe to integrate them into military command-and-control networks? How does this “alignment” thing work, anyway?

Not only because the idea of Donald Trump soberly considering these questions is laughable, but because at this point in the story I would find it incredibly unlikely that they haven't already integrated these agents (or military variants of them) into our command-and-control networks. We have semi-autonomous drones in the air and on the ground. We have weapons systems that can be operated remotely and automatically track movement. We have a global network of intelligence, targeting, and surveillance systems producing huge amounts of data that cannot be analyzed by human beings at a useful speed.

Even Agent-1 in the AI 2027 scenario would be able to automate or partially automate many of these systems. By Agent-3-mini (the point where the president becomes troubled in the above quote), I would expect full tactical automation of most drones, mostly-automated targeting for most weapons platforms, and partially automated strategic planning.

I sincerely doubt that a future superintelligence would have to find some way to convince the US government to give it control over our military forces. It seems much more likely that it will be born with that control already in place.
