I’m reading Patrik Svensson’s Den lodande människan: havet, djupet och nyfikenheten (roughly, “The Sounding Human: the sea, the depths and curiosity”). It beautifully chronicles how humanity’s ability to navigate the seas has evolved over time. Thousands of years ago, Polynesian seafarers could traverse vast distances using only their senses – a feat so astonishing that scientists long considered it impossible, until a modern-day expedition proved, beyond doubt, that it could be done.
From there, technology gradually made navigation easier. The Chinese invented the compass; European sailors later devised the knotted log line, which measured speed in “knots” and paved the way for more accurate dead reckoning. The British gave us the marine chronometer. Eventually, the Global Positioning System made the human navigator all but superfluous.
As I was thinking about this trajectory — and where it might end — I was reminded of a thesis defence I recently attended.
An acquaintance of mine was crossing the finishing line after a journey that, in its way, must have been just as daring as that of the old Polynesian explorers. After a long career as a special forces officer, he had chosen to pursue the scholarly track. His subject: how artificial intelligence can be deployed on the battlefield for faster and more accurate targeting.
The military — always fond of acronyms — refers to the steps of targeting as the OODA loop. As in: Observe, Orient, Decide, Act.
The OODA loop was conceptualised by the American fighter pilot John Boyd, known to his comrades as “Forty-Second Boyd”: he had a standing bet that, starting from a position of disadvantage, he could beat any opponent in simulated air combat in under forty seconds.
Boyd argued that speed and adaptability are key to “getting inside the opponent’s OODA loop,” after which the adversary is forced into a reactive stance, unable to shape the engagement themselves. Confusion sets in, and their ability to make effective decisions breaks down.
One of the questions raised during the defence — by a seriously sharp doyen of international war studies — concerned the ethical risks of abdicating too much control to AI, especially in light of the algorithmically fuelled war crimes currently being perpetrated in Gaza.
The answer was framed in terms of the OODA loop, and served as a compelling example of how theory can help discipline our thinking.
The candidate pointed out that technology — from satellite imagery to a host of high-tech sensors — is already doing much of the Observing. This has led to an exponential growth in incoming data, to the point where human cognition can no longer keep up. We’re effectively unable to stay Oriented without relying heavily on technology — including AI.
It’s true that the Decision step in the OODA loop is still largely under the control of a flesh-and-blood individual — this is literally what “human in the loop” means. But recent academic papers show that the average time a commanding officer in Ukraine has to act on incoming data is now down to just 2.7 seconds.
Meaning: we’re kidding ourselves if we think we have the option not to cede control to AI — or even that AI is what it’s really all about. Technology has been encroaching on our cognition ever since the Polynesians set sail. And although the current moment may feel profoundly disruptive, it’s really not.
As the oft-cited Clausewitzian dictum has it: the nature of war is constant, even if its character changes.
So where does that leave us on the question of ethics?
Again, let’s not kid ourselves. War has always brought out our basest instincts. It didn’t take autonomous murderbots to flatten Hiroshima or reduce Dresden to ash — along with much of their civilian populations. Weaponised AI is simply the continuation of that same trajectory.
At the end of the day, we should take a page from the old Polynesian navigators and find our way back to that internal compass. We need it now more than ever.