A Conversation With Patrick Hammer
The Non-Axiomatic Reasoning System, NARS, may be the most intriguing development in AI you’ve never heard of. Patrick Hammer is an AI researcher and practitioner. We met to talk about the legacy of NARS, its creator Pei Wang, and the work of Hammer’s late collaborator Tony Lofthouse.
***
Let’s start with the basics. What is NARS?
I think you summed it up very well in your own post. NARS attempts to capture empirical reasoning under the working conditions the human mind has evolved to excel in: knowledge is often incomplete, fallible and subjective; memory capacity is limited; the world moves ahead while information gets processed; and many actions have irreversible consequences, so adapting quickly can be critical. In short: the Assumption of Insufficient Knowledge and Resources (AIKR).
Ok. Now can you help me trace its roots?
Herbert Simon introduced Bounded Rationality, the idea that humans make decisions with limited time, knowledge, and computational power. At first it seemed to me to be the same as AIKR. After all, both talk about limits of knowledge and resources, right?
However, AIKR hits even harder. Let me explain why:
Bounded rationality is like solving a big but fully specified puzzle with a small toolbox — you can’t search the whole space or find the optimal solution, so you rely on heuristics that are good enough to produce a workable solution.
AIKR, by contrast, is like facing a puzzle that never stops growing and changing — you never even see all the pieces, and no amount of knowledge or resources could ever make it complete. You can solve fragments, but never the whole thing.
In real life you never have “all the missing pieces”. You never know everyone who walked past you, what every building looked like, what every sign said, or how many details you ignored just to reach your destination. The world presents far more information than you can ever absorb, and while you learn a few pieces, millions more appear behind you: unseen, unordered, and never fully captured. Your knowledge is always fundamentally insufficient, because the world is open-ended and changing, and you only ever see a tiny slice of it.
So those are NARS’s working assumptions. What about its logic?
For this we need to look further back. In Prior Analytics (around 350 BC), Aristotle introduced term logic: a structured form of reasoning in which, once certain premises are accepted, a conclusion follows necessarily from their relationships. It was a breakthrough for its time, but it analysed validity, not cognition — it assumed fixed, true premises and, much like modern formal systems, did not address how minds reason under uncertainty or limited resources.
Two millennia later, first-order predicate logic was elevated to the foundation of modern formal reasoning, and Aristotle’s work declared obsolete. FOPL then became the basis of mathematics and, by historical misstep, also of GOFAI (“Good Old-Fashioned AI”), importing assumptions of perfect knowledge, unlimited memory, and static premises — assumptions fine for mathematical theorem proving but useless for empirical reasoning. Modal logic and non-monotonic logics tried to patch these flaws, but they amount to duct-tape fixes on a broken foundation rather than genuine models of cognition.
Logic could have taken a different path — toward cognitive logics with continuous, evidential uncertainty calculi and adaptive, hypothesis-driven machinery that allows for incremental belief revision as new information arrives.
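That is essentially the path NARS later took. As a rough illustration, here is a minimal sketch of an evidential truth-value calculus in the NARS style, assuming the usual frequency/confidence formulation with an evidential-horizon parameter k; the function names are illustrative, not any implementation’s API.

```python
# Minimal sketch of NARS-style evidential truth values (illustrative; not the
# API of any particular implementation). A belief's truth is a pair
# (frequency, confidence) derived from accumulated evidence: positive count w+
# and total count w, with f = w+/w and c = w/(w+k). Confidence approaches but
# never reaches 1, so beliefs stay revisable.

K = 1.0  # evidential horizon; 1 is the customary default

def truth(w_plus: float, w_total: float) -> tuple[float, float]:
    """Convert evidence counts into a (frequency, confidence) pair."""
    return w_plus / w_total, w_total / (w_total + K)

def evidence(f: float, c: float) -> tuple[float, float]:
    """Recover (w+, w) from (frequency, confidence); requires c < 1."""
    w_total = K * c / (1.0 - c)
    return f * w_total, w_total

def revise(t1, t2):
    """Incremental belief revision: pool the evidence from two sources for the
    same statement. More evidence means higher confidence, never certainty."""
    (wp1, w1), (wp2, w2) = evidence(*t1), evidence(*t2)
    return truth(wp1 + wp2, w1 + w2)

def deduce(t1, t2):
    """Syllogistic deduction, e.g. <robin --> bird>, <bird --> animal>
    |- <robin --> animal>: f = f1*f2, c = f1*f2*c1*c2."""
    (f1, c1), (f2, c2) = t1, t2
    return f1 * f2, f1 * f2 * c1 * c2

# Two independent observations supporting the same statement:
print(revise((1.0, 0.5), (1.0, 0.5)))  # -> (1.0, ~0.67): same frequency, more confidence
```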
Instead, we got brittle logic-based expert systems like Cyc and rule-based cognitive architectures such as Soar and ACT‑R, all reliant on hand‑crafted knowledge rather than learning.
When these models were proposed, early neural networks were also starting to gain traction, a movement sometimes referred to as connectionism. Pei Wang envisioned NARS as a way to bridge the rift between symbolic logic and adaptive learning. I entered the field much later, during my undergraduate studies at the Technical University of Graz. It was also then that I first met Tony Lofthouse, at an AGI conference following an insightful presentation by Jürgen Schmidhuber.
Tony went on to become your long-time collaborator and friend. He recently passed away after a long struggle with cancer. What can you tell me about him?
Tony was an extraordinary person—intense, energetic, and all-in with everything he did. When I visited him years ago, he was training for a marathon; despite being nearly twice my age, I struggled to keep up. He brought that same drive to research, leaving a successful corporate career at Microsoft in 2008 to work as an independent scientist on his own dime.
History’s rarest breakthroughs often come from fiercely independent minds working outside institutions, and Tony was one of them. He even oversaw the construction of his own boat and later sailed it across the Atlantic with his wife—a fitting metaphor for how he steered NARS into completely uncharted waters.
What was it, exactly, that Tony brought to NARS?
To see that, let’s recall where NARS struggled. Despite its philosophical clarity, practical implementations were bogged down by AIKR’s demands: the system had to constantly juggle limited resources across perception, inference, memory, and decision-making. This often led to a combinatorial explosion, with hundreds or thousands of inference tasks per cycle; without strong prioritisation, even simple problems became computationally overwhelming.
Tony recognised that this was fundamentally an attention-allocation problem. Previous implementations drowned in tasks because they couldn’t separate relevant events from internal noise. His Spiking Neural Network-inspired mechanism changed that, allowing NARS to focus its reasoning on what actually mattered.
Let’s pause there for a moment — what’s a spiking neural network, and why did the analogy matter?
SNNs communicate through discrete spikes rather than continuous activations, so computation happens only when something fires. This makes them sparse, event-driven, and energy-efficient. Tony saw that NARS could work the same way: instead of updating everything every cycle, the system should revisit only the beliefs that have changed or matter in the current context. It’s a very brain-like idea — process only when something catches your attention or primes related memories.
SNNs are only now gaining wider interest, especially for energy-efficient transformers, but Tony anticipated this shift years earlier.
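To make the contrast with updating everything every cycle concrete, here is a toy sketch of the event-driven idea (not Tony’s actual mechanism, just the general shape): an incoming event activates its concept, activation spills over to linked concepts as priming, and only concepts above a firing threshold take part in inference that cycle, while everything else quietly decays.

```python
# Toy sketch of event-driven, spiking-inspired attention (illustrative only;
# not the actual ALANN / OpenNARS-for-Applications control loop).
from collections import defaultdict

FIRE_THRESHOLD = 0.5   # only concepts at least this active are processed
PRIMING_FACTOR = 0.3   # how much activation spills over to related concepts
DECAY = 0.9            # per-cycle decay so attention fades without new input

activation = defaultdict(float)   # concept -> current activation
links = defaultdict(set)          # concept -> semantically related concepts

def on_event(concept: str, strength: float = 1.0):
    """A new event fully activates its concept and primes its neighbours."""
    activation[concept] = max(activation[concept], strength)
    for neighbour in links[concept]:
        activation[neighbour] = max(activation[neighbour],
                                    strength * PRIMING_FACTOR)

def cycle():
    """One reasoning cycle: fire only what is active enough, then decay."""
    fired = [c for c, a in activation.items() if a >= FIRE_THRESHOLD]
    for concept in fired:
        pass  # in a real system: run inference on this concept's tasks and beliefs
    for c in list(activation):
        activation[c] *= DECAY
    return fired

links["cat"] = {"animal", "pet"}
on_event("cat")   # "cat" fires; "animal" and "pet" are primed but stay below threshold
print(cycle())    # -> ['cat']
```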
How did you perceive these ideas back when you first encountered them?
I wasn’t convinced at first — nor was Pei Wang. He had a different vision for inference control, but it didn’t work well in practice. It took time before we realised Tony’s approach was practically superior. His 2017 presentation and live demo finally convinced us, and we incorporated his ideas into new NARS architectures, including OpenNARS for Applications, described in the paper OpenNARS for Applications: Architecture and Control. Looking back, it’s not my best paper, but it showed that Tony’s inference-control principles let NARS run continuously and answer queries responsively while processing streaming input.
That was important to us as the history of AI is full of fascinating ideas that never quite made it beyond constrained lab settings. That’s not the case with NARS — we already know it works, and that it’s broadly applicable. The question now is how far we can take it.
Is it even relevant to compare NARS to machine learning?
Large language models trained on internet-scale text extract patterns of human reasoning, allowing us to build AGI-leaning systems without understanding the underlying mechanisms of human thought. Crucially, without human-created data these models wouldn’t work at all — much like expert systems without their knowledge bases. The principles are in the data, and transformers are the first architectures capable of reliably pulling them out of the noise.
But LLMs are not “life-like”: they cannot truly adapt beyond their limited context window. They let us build impressive static systems that already approximate adult human reasoning in many areas — but researchers still don’t understand how reasoning itself works, nor how humans track, update, and organise their memories — domains traditionally explored in GOFAI, though with fundamentally misguided approaches.
They’re stochastic parrots…?
It depends on who you ask. I’m somewhere in the middle myself. Their capabilities are objectively beyond anything that came before, so it’s no surprise that adoption was a flash-flood and that much of the public already treats them as AGI-like. And of course, this whole wave has given the AGI field a level of credibility it never had before — even Altman seemed convinced that sheer scale, with enough data and compute, could eventually extract something like human intelligence from the training signal.
Looking back, the notion that we ‘couldn’t build AGI’, that we couldn’t reach human-level intelligence, feels like scientific robbery — a belief that discouraged real exploration until transformers forced everyone to reconsider their assumptions.
I also have deep respect for the researchers who dive into the depths of transformer models to build the next generation of large language models, even though it is completely separate from my own research.
When people look back at the history of machine learning, the 2017 paper Attention Is All You Need is often seen as a turning point for the entire field. Was there ever a paper that played a comparable role for NARS?
It’s difficult to say, given that there’s usually a lag. The significance of the paper you’re referring to was only recognised in hindsight. Even Google, where the authors worked at the time, didn’t realise what they had. As it turns out, Transformers reshaped the field by providing a single architecture that works almost everywhere, often outperforming what came before, as long as enough data and compute are available.
On our orthogonal journey toward empirical reasoning systems, we haven’t yet reached a “transformer moment”, but we did solve the inference-control problem that had long stalled progress in NARS. Tony’s 2019 paper, ALANN: An Event-Driven Control Mechanism for a Non-Axiomatic Reasoning System, was the turning point. Earlier NARS versions relied on the “bag model,” where tasks and concepts were sampled probabilistically in a sequential process with tricky dynamics we never quite got right. Tony replaced this with an event-driven, spiking-like mechanism that processes novel information once while priming related concepts to steer the flow of reasoning. This finally allowed the system to focus its effort where it mattered. Suddenly NARS wasn’t only philosophically elegant — it was efficient.
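For readers who haven’t met the “bag”: below is a rough sketch of the older sampling idea, in which items compete for selection in proportion to their priority. It is deliberately simplified; real OpenNARS bags add bucketed priority levels, forgetting, and eviction, which are omitted here.

```python
# Rough sketch of the classic NARS "bag": a bounded container whose items are
# sampled with probability proportional to priority (simplified illustration).
import random

class Bag:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items: dict[str, float] = {}   # item -> priority in (0, 1]

    def put(self, item: str, priority: float):
        self.items[item] = priority
        if len(self.items) > self.capacity:
            # evict the lowest-priority item to respect the resource bound
            self.items.pop(min(self.items, key=self.items.get))

    def sample(self) -> str:
        """Pick one item, favouring high priority; it still has to compete
        with everything else in the bag, relevant or not."""
        names = list(self.items)
        weights = [self.items[n] for n in names]
        return random.choices(names, weights=weights, k=1)[0]

bag = Bag(capacity=3)
for task, priority in [("ignore-noise", 0.1), ("old-belief", 0.2), ("urgent-event", 0.9)]:
    bag.put(task, priority)
print(bag.sample())   # usually "urgent-event", but low-priority noise can still win
```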
When you’re describing this history, I get the impression that the development of NARS seems to have happened pretty much in isolation from the rest of the AI community.
It’s true to an extent, but not entirely. Jeff Hawkins and Sandra Blakeslee’s On Intelligence came out in 2004, and their Hierarchical Temporal Memory (HTM) model explored some related ideas — hierarchy, prediction, attention, sparse activation, and especially continuous online adaptation. But HTM came from a very different philosophy, grounded in speculative neuroscience rather than logic, and it never approached the kind of uncertainty handling or resource-bounded reasoning that NARS is built around. So yes, there was overlap in topics, particularly in the focus on adaptation, but NARS still evolved on a largely orthogonal track.
Oh wow, that’s true, I remember reading that book when it came out and feeling: these people are either wrong or far ahead of their time!
Exactly. That’s what made On Intelligence so interesting — it sounded visionary, but it was also highly speculative. Many people dismissed it because it didn’t match the dominant methods, and in retrospect that was partly justified. Some of the themes — sparse activity, prediction, hierarchy — show up again today, but mostly in very different, far more effective forms. Deep learning ultimately delivered far more than HTM ever did, even though HTM carried the promise of real-time, adaptive, neuro-inspired AI. So they weren’t entirely wrong — just far more optimistic than the science could support at the time.
NARS has been evolving for several decades now. How close do you think it comes to fulfilling that original ambition — the old dream of artificial general intelligence?
In Tony’s view, scale mattered a lot. Pei Wang’s early implementations topped out at a few thousand concepts, while Tony pushed this into the range of ten million by improving multi-threading and data structures. That’s still nowhere near the scale or complexity of the brain, of course. We can’t reverse-engineer a cortical column, and we don’t really understand how memory formation works — damage the hippocampus and the ability to form new memories disappears, and we still can’t explain why. So any analogy to the brain has to be taken cautiously.
There’s also the OpenWorm project, where researchers tried to simulate the 302-neuron nervous system of C. elegans. Even with a complete connectome, the behaviour didn’t emerge. If we can’t fully reproduce even a worm’s nervous system, that shows, in my opinion, how far we likely are from “simulating brains.”
More near term though, what do you see as the next frontiers?
I think we’ll see a lot of progress in neuro-symbolic systems, where we try to combine the strengths of both worlds. My collaborator Peter Isaev and I recently published a paper exploring how NARS can be integrated with a generative pre-trained transformer. The idea is simple: use GPT as the natural-language interface and let NARS act as the reasoning engine underneath. GPT turns language into structured representations, NARS reasons on those representations in real time, and GPT can also inspect NARS’s memory for question answering. That gives you something useful: the flexibility of language from GPT, and the experiential learning and uncertainty handling from NARS.
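Schematically, the division of labour looks something like the sketch below. Every name in it is a placeholder for illustration, not the actual code from the paper: gpt_to_narsese stands for a language-model call that translates an utterance into Narsese statements, and nars_input / nars_query stand for whatever interface the reasoner exposes.

```python
# Schematic sketch of the GPT-as-interface / NARS-as-reasoner split described
# above. All names are placeholders; wire in your own LLM and NARS endpoints.

def gpt_to_narsese(utterance: str) -> list[str]:
    """Placeholder: prompt a language model to translate natural language into
    Narsese statements, e.g. "Garfield is a cat" -> "<{garfield} --> cat>."."""
    raise NotImplementedError("call your LLM of choice here")

def nars_input(statement: str) -> None:
    """Placeholder: feed one Narsese statement to the reasoner
    (e.g. write a line to a running NARS process)."""
    raise NotImplementedError

def nars_query(question: str) -> str:
    """Placeholder: ask the reasoner a Narsese question and return its best
    answer, which the language model can then verbalise for the user."""
    raise NotImplementedError

def tell(utterance: str) -> None:
    # Language in: GPT handles the messy surface form, NARS stores structured beliefs.
    for statement in gpt_to_narsese(utterance):
        nars_input(statement)

def ask(question: str) -> str:
    # Question answering: translate the question, then query NARS's
    # experience-grounded memory.
    narsese_question = gpt_to_narsese(question)[0]
    return nars_query(narsese_question)
```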
For me, the next frontier is robotics — bringing NARS out of simulation and into the physical world. Once you give a reasoning system a body and senses, the feedback changes completely; it starts to experience its surroundings directly and build hypotheses about how the environment works. That’s when things get interesting: adaptive, embodied AI through empirical reasoning, as we attempted in our recent work where NARS acts as the core decision-making unit of an autonomous mobile robot. It’s still early days, and I’ll admit it sometimes feels like working slightly off to the side of the mainstream. But that’s probably how Tony felt too. You stay on the margins if you believe the path you’re on leads somewhere worthwhile.
