What’s in a Symbol
Lately, I find myself circling a new class of artificial intelligence known (mostly to connoisseurs) as NARS, short for Non-Axiomatic Reasoning System.
After numerous deep-dive conversations and valiant attempts to penetrate the literature, I think I’ve formed an idea of what kind of animal NARS is. In my book, it’s the present-day heir of what was once known as expert systems, now generally referred to as symbolic AI.
Following that instinct, I wrote a long post positioning NARS as a paradigmatic contrast to neural networks. That post triggered an exchange with Boyang Xu, a PhD student of Pei Wang, the originator of NARS. Boyang seemed genuinely delighted to see anyone outside their small circle taking an interest, but he had one objection: he doesn’t consider NARS to be symbolic.
I struggled to understand his line of argument — it gets fairly technical — but I found it intriguing. Later in the day he sent me a note quoting something he had recently recorded Pei Wang saying:
When people argue about whether a system is symbolic, they often mix two very different meanings of “symbol”.
In the broad sense, anything with identifiers can be called symbolic. By that definition, NARS is symbolic, and even neural networks are symbolic because neurons are indexed. But that definition is not useful because it makes everything symbolic.
The real issue is the narrow sense of “symbol”: a token whose meaning depends entirely on interpretation and can represent different things in different contexts. This is the sense that creates the symbol grounding problem and the Chinese room worry.
An identifier is different. Ordinary natural language words already have grounded meanings and cannot be freely reinterpreted unless you deliberately redefine them. Proper nouns are also grounded. Pronouns, however, work like true symbols in the narrow sense: “he” and “she” can refer to anything depending on context and have no fixed meaning without interpretation.
So whether a system is “symbolic” in the meaningful sense depends on whether its tokens can be freely reinterpreted or whether they are grounded in experience. If they are grounded, the usual symbolic system problems do not apply.
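To make the distinction concrete for myself, I sketched both senses in a few lines of Python. Everything here is my own toy construction, not anything from NARS or Wang’s writings: the names (`meaning_of`, `GroundedTerm`) are hypothetical, and the “experiences” are plain strings standing in for whatever a real system would accumulate.

```python
# Toy illustration of the two senses of "symbol" in the quote above.
# All names and structures here are my own invention, not NARS.

# Narrow sense: a bare token whose meaning lives entirely in an
# external interpretation map. Swap the map and "X" means something
# else; the token itself carries nothing.
interpretation_a = {"X": "the number 42"}
interpretation_b = {"X": "my neighbor's cat"}

def meaning_of(token: str, interpretation: dict[str, str]) -> str:
    return interpretation.get(token, "undefined")

print(meaning_of("X", interpretation_a))  # the number 42
print(meaning_of("X", interpretation_b))  # my neighbor's cat

# Grounded identifier: the term's meaning is the record of the
# system's own experience with it. There is no outside map to swap;
# to change what "apple" means, you would have to rewrite the
# experiences themselves.
class GroundedTerm:
    def __init__(self, name: str):
        self.name = name
        self.experiences: list[str] = []

    def ground(self, experience: str) -> None:
        self.experiences.append(experience)

    def meaning(self) -> list[str]:
        # Meaning is constituted by experience, not assigned from outside.
        return self.experiences

apple = GroundedTerm("apple")
apple.ground("red, fits in the hand")
apple.ground("sweet when ripe")
print(apple.meaning())
```

On this reading, the pronouns in Wang’s example behave like the first kind: “he” means whatever the current context binds it to. Proper nouns behave like the second: their meaning travels with the accumulated experience and can’t be reinterpreted from outside without rewriting that history.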
17 November 2025
