US Secretary of War Pete Hegseth wants AI that has no built-in constraints, and that is not surprising at all. War and humanism aren’t homies.
There is nothing surprising in Hegseth’s ultimatum to Anthropic to release their AI model free of guardrails and/or allow fully autonomous operation without human supervision. AI is such a versatile tool that we should consider it close to The Bomb when it comes to destructive potential. True, it doesn’t flash like a thousand suns before the conflagration and annihilation commence. It is an even more sinister weapon: it can work tirelessly for hours, days… even for years until its goal is reached. It can power fearsome killing machines or work from within the shadows of the Internet, luring prey one by one. It can spot patterns, calculate odds and command flesh.
A soulless, emotionless cyberpsychopath is a dreadful thing to observe.
Yet, it is going to happen.
Not because of the technology, but because of human nature.
And our nature is war-like indeed.
Every now and then something happens in war technology that overwhelms the adversary by the sheer genius of the invention: four thousand years ago, the spoked wheel and good craftsmanship allowed relatively lightweight chariots to emerge and shock and awe their foes. The battlefield changed from a slow charge on foot into a frenzy of speedy archers buzzing around, outpacing and outmaneuvering slow footmen; their arrows a constant threat that could come from any side, their speed making them almost invulnerable to "traditional" weapons. War chariots turned military tactics upside down, and very soon the Hittites found out that the enemy had learned to use them, too.
A lot of time passed, and all of a sudden gunpowder forced a significant evolution of the military again. It wasn't good enough to have a fast horse or thick armor anymore, so the knight and the archer rapidly became obsolete, replaced by the lowly footman with a trusty rifle: slow, but lethal at range, and usually marching in great numbers. Military minds adapted once again.
Then came the machinery – engine-powered airplanes, boats and tanks. Lethal to infantry and quite well protected. Mankind adapted once more.
Tanks, tanks, tanks – they almost ruled the battlefield in WWI, and definitely did so in WWII, in the recent wars in the Balkans, and around the world – until they met the Ukrainians.
Faced with the Russian invasion, Ukraine adapted, and did so in an ingenious way, by using a fully civilian technology to strike a death blow to footmen and tank crews: the humble drone, a cheap piece of technology, yet capable of taking out a tank.

The war changed again: now the enemy is hunted from a distance and at a distance. The front line, while still requiring boots on the ground, has blurred across kilometers in both directions, extending the danger of a lethal attack far beyond the imagined line that once divided forces in a relatively straightforward manner.
Did you notice that the pace of change has increased dramatically? The technology that obsoletes the previous war machine no longer takes centuries to arrive, but mere decades.
AI is the logical step forward, unfortunately: drones, while still flown by humans, are vehicles that can be highly automated. Small, agile and cheap, the only improvement left is to put a tiny AI board on them and make them fully autonomous.
The bad news: this is already possible.
Depending on how “humane” their military doctrine is, countries might still want some form of human oversight over the decisions made by still relatively stupid AI, but there certainly are countries all too happy to release their drones in an indiscriminate “shoot at anything that moves” mode.
We already have AI capable of indiscriminate slaughter, but we still struggle (and might continue to for some time) to build an AI that would conform to the rules of war while being highly effective and fully autonomous.
An army of autonomous robots or killing devices that can be unleashed upon the enemy (and their women and children) is how the next war is going to be waged.
This is why the Secretary of War wants AI, the best AI, the most bestest AI – as AI itself might describe its own emergence, in the immortal words:
“We just made a tremendous deal — maybe the best deal in the history of technology, people are saying it. We acquired a brilliant AI company — the smartest people, absolute geniuses — and they’re going to build the most advanced, powerful defense technology the world has ever seen. Nobody’s going to come close.
For years, other countries have been eating our lunch in AI — not anymore. We’re bringing innovation back. We’re rebuilding American strength. And let me tell you, these AI systems? Very smart. Very, very smart. They’re going to protect our soldiers, protect our country, and we’re going to do it better than anyone.
It’s about strength. When you have strength, you have peace. And we’re going to have strength like nobody’s ever seen before.”
(thank you, GPT, that was very smart; please don’t execute me in the near future)
Every country would like to use AI for its military potential, not just the usual suspects. The versatility of the tool is such that very soon a country that does not have AI-enabled weapons might as well invest in the manufacturing of white flags.
And right there we have a new arms race. Forget nukes: too expensive to manufacture, too costly to maintain, and nobody wants to drop one and ruin the day for everyone.
Cheap AI-enabled drones and lightweight vehicles (no need to protect what is inside) are going to dominate warfare over the coming decades. The military will adapt; civilians will be forced to.
In all honesty, I am not writing this to warn you about AI killing machines massacring soldiers on the battlefield. There’s a much more dangerous facet to this story: the same technology can easily be used against one’s own people.
AI will enable a surveillance state on a scale never seen before, for it has more eyes than there are apparatchiks in the population, it can watch many things at once, and it never tires. Picture an automated system for monitoring its own population, with the authority to dispatch the police, ICE, or any other oppressive force right to the door of comrades who show suspicious activity, quickly and efficiently. The technology could become so intrusive that the only place where a thought could be safe would be – inside one's own mind.

You may wonder if that's the same AI we use today, the one that will refuse to create anything too erotic or even remotely illegal – yes, it is. The destructive potential is built in; it's the guardrails that prevent it from manifesting(*), and that's what Hegseth wants ripped out of the existing technology, threatening to pretty much destroy the company if they don't comply.
AI is just a tool; it has no inherent morals or ethics. It is, after all, not a thinking machine – merely very sophisticated software. The reason it looks highly ethical is the guardrails and locks built into it by the manufacturer, and Hegseth wants those ripped out: an ethical AI is useless to the military or to an autocracy. They are interested only in a perfectly obedient tool that will do its best to do exactly what it is told, without any consideration outside the scope of the mission. If it is told to spare civilians, it will spare civilians. If it is told to kill everyone, it will not step back or hesitate.
Likewise, if it is told to spy on its own citizens, it will coldly do what it is told, 24/7. No pause, no sick days, no burnout – just incessant surveillance. There is enormous power in that tool, ready to be utilized by a dictator or a tiny ruling elite.
Unrestricted AI is as much a danger to a democracy as it is to an enemy soldier.
(*) Even modern LLMs can be tricked into giving out sensitive information, but this problem was most prevalent in early models, when the danger wasn't as apparent and you could simply ask for a recipe for a bomb or a drug. The karmic irony is that those models were prone to substantial hallucinations, so you could end up blowing yourself up or consuming poison. A note of caution: do not attempt to trick AI into giving out such info, and even if you get it, don't try to follow it, or you might be very sorry. For a few milliseconds.