Non-Axiomatic Intelligence
Attend any of the many launch events where big tech unveils its latest breakthroughs and one word is bound to lodge in your mind: reasoning.
We’re told — or rather, sold — that AI has gone agentic, that it has slipped its bounds and burst its box, all thanks to its newfound ability to reason. The word has become so ubiquitous that I’d more or less stopped questioning it.
Which is why I sat up in my chair when I came across something Richard Sutton just said.
A brief reminder, if you need one: Sutton is one of the giants of artificial intelligence — the man behind reinforcement learning, which underpins much of today’s AI. He was awarded the Turing Award last year, computer science’s equivalent of the Nobel.
In the latest issue of MIT Technology Review, Sutton is quoted as saying that “no serious scholar of mind would use ‘reasoning’ to describe what’s going on in LLMs.”
He goes on to dismiss big tech’s talk of reasoning abilities as pure marketing.
Now, dismissing advances in AI has become something of a popular sport. Only the other day I heard an industry “expert” remark that ChatGPT hadn’t really evolved much at all in the past couple of years. When people say things like that, it usually sounds to me like an attempt to reassert a sense of control — a kind of professional self-soothing.
Sutton’s scepticism is of a different order. He isn’t belittling what AI can do; he’s pointing to a limit built into the current paradigm — a structural ceiling rather than a temporary glitch.
This caught my attention, because lately I’ve been reading up on a dark horse in the AI race known as NARS. That’s short for Non-Axiomatic Reasoning System, and its ambition is to go where no AI has yet trodden: to handle situations it hasn’t been trained for.
If it works, it would be about as big as it gets. Teaching AI how to improvise, which is what NARS is taking aim at, is the ultimate holy grail.
Now, the funny thing about NARS is that you’ve probably never heard of it, or *if* you have, people have told you that it doesn’t work. For decades that was true. The ideas date back to the late 1980s, long before Moore’s Law had delivered hardware powerful enough to make them feasible.
But there’s probably also another reason why NARS has been largely ignored: AI practitioners are flock animals, and NARS belongs to a very different paradigm from what’s currently in vogue. Like the last of the Mohicans, it’s a lonely survivor of the dying breed known as symbolic AI, meaning it relies on logic-like representations and explicit reasoning rather than the statistical learning that dominates contemporary AI.
Trends aside, here’s what a Non-Axiomatic Reasoning System is built to do. It can be thought of as an adaptive logic for reasoning under uncertainty and limited resources. It operates on explicit symbolic rules, not neural weights. It can reassess its beliefs, redirect attention, and adjust internal variables for satisfaction and desire — purely to balance its workload, not to feel.
Crucially, it never assumes complete information: its reasoning is always provisional, always open to revision — it’s ‘non-axiomatic’ by design.
Unlike today’s neural networks, which generalise by statistical association, NARS reasons by continuous self-correction. Each conclusion it reaches carries a built-in margin of doubt, and every new piece of evidence can tip the balance. In that sense it behaves less like a database retrieving answers and more like a mind revising beliefs.
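To make that a little more concrete, here’s a minimal sketch in Python of the kind of truth-value revision at the heart of Non-Axiomatic Logic. Every belief carries a frequency (how strongly the evidence supports it) and a confidence (how much evidence there is at all), and revision pools the evidence behind two beliefs about the same statement. The formulas follow the published NAL definitions with the usual evidential horizon k = 1; the function names and the example numbers are my own, and a real NARS implementation does far more bookkeeping than this.

```python
K = 1.0  # evidential horizon; NARS conventionally uses k = 1

def to_evidence(f, c):
    """Turn a (frequency, confidence) truth value into evidence counts."""
    w = K * c / (1.0 - c)   # total amount of evidence behind the belief
    return f * w, w         # positive evidence, total evidence

def from_evidence(w_plus, w):
    """Turn evidence counts back into a (frequency, confidence) pair."""
    return w_plus / w, w / (w + K)

def revise(t1, t2):
    """NAL revision: pool the evidence behind two beliefs about the same statement."""
    p1, w1 = to_evidence(*t1)
    p2, w2 = to_evidence(*t2)
    return from_evidence(p1 + p2, w1 + w2)

# A belief held with moderate confidence ...
belief = (0.9, 0.45)
# ... meets a new, weaker batch of mixed evidence.
observation = (0.6, 0.30)
print(revise(belief, observation))  # roughly (0.80, 0.55)
```

Run the two values above through it and the frequency drifts toward the new evidence while the confidence rises, which is exactly the “every new piece of evidence can tip the balance” behaviour just described; the conclusion never hardens into an axiom.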
The mind-like nature of NARS is hard to ignore. In one paper, the authors argue that a truly general intelligence needs a notion of self—a model of its own uncertainty that evolves with experience. In another, they describe how beliefs and goals themselves come with degrees of truth and desire, ending on a striking claim: that NARS’s inner workings amount to a kind of machine subjectivity—distinct from, but strangely reminiscent of, human awareness.
It’s easy to dismiss this as anthropomorphising — the favourite charge whenever a machine starts sounding a little too alive. But that word carries its own bias: it presumes we’re imagining minds where none exist. If machines ever start doing the things we reserve for minds, the fault won’t be in our imagination but in our definitions.
But that’s drifting into a philosophical tangent. Back to business.
Should NARS ever be made to work — it has historically failed to scale and to focus its reasoning activity — it would represent a very different beast from the LLMs we’re all used to. Whether it will ever make that leap from lab curiosity to working system is an open question. But its underlying logic — that intelligence is less about prediction than about revising belief — is starting to sound like common sense again. And the real crux of the difference would come down to the ability to reason.
LLMs can be adept at giving you the impression of being able to reason, but really they’re merely predicting the next most likely word based on billions of prior examples. NARS, by contrast, works from first principles: it maintains an internal web of beliefs and adjusts them as new information arrives. Where an LLM infers correlations, NARS produces justified conclusions — sometimes tentative, sometimes wrong, but always explainable.
That last bit happens to be what’s missing most sorely from today’s deep learning systems. Ask a neural network why it reached a particular conclusion and you’ll get a probability map. NARS, by contrast, can trace its own steps, showing which premises led to which conclusions and how its confidence in them has shifted over time.
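What that tracing might look like is easy to sketch. Below is a toy illustration in Python, not the actual NARS codebase: the Belief class and the explain function are my own, but the truth calculation follows the NAL deduction rule, where frequencies multiply and confidence is discounted, and each derived belief keeps a record of the premises that produced it, so the chain of reasoning can be replayed on demand.

```python
from dataclasses import dataclass

@dataclass
class Belief:
    statement: str            # a Narsese-style inheritance, e.g. "robin --> bird"
    frequency: float
    confidence: float
    premises: tuple = ()      # the beliefs this one was derived from

def deduce(b1: Belief, b2: Belief) -> Belief:
    """NAL deduction: from 'robin --> bird' and 'bird --> animal', derive 'robin --> animal'."""
    f = b1.frequency * b2.frequency
    c = f * b1.confidence * b2.confidence   # confidence shrinks with every inference step
    subject = b1.statement.split(" --> ")[0]
    predicate = b2.statement.split(" --> ")[1]
    return Belief(f"{subject} --> {predicate}", f, c, premises=(b1, b2))

def explain(b: Belief, depth: int = 0) -> None:
    """Walk back through the premises that justify a conclusion."""
    print("  " * depth + f"{b.statement}  <{b.frequency:.2f}, {b.confidence:.2f}>")
    for p in b.premises:
        explain(p, depth + 1)

explain(deduce(Belief("robin --> bird", 1.0, 0.9),
               Belief("bird --> animal", 1.0, 0.9)))
```

The point of the sketch is the premises field: every conclusion can answer the question “where did you come from?”, which is precisely what a probability map cannot do.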
It’s strikingly easy to fall for the impression that something like NARS will evolve to eventually surpass neural networks, but in this case I suspect the idea of new tech making old tech obsolete is a fallacy.
I say that because we’re not comparing the latest smartphone models here; we’re in an entirely different ballgame. There’s a blueprint for this ballgame, and it’s sitting between our ears. The dichotomy between NARS and deep learning is an almost too perfect mirror of the division of labour between the hemispheres of our brains. For cognition to work, we need access to both modes, which probably says something about what might be around the corner in the developmental arc of artificial intelligence.
If we ever get there, it won’t be because machines surpassed us, but because they started to resemble us in a deeper way — not in language or gesture, but in the quiet tension between knowing and not knowing, between rhyme and reason.
