Ever since AI has been recognised as a field, it’s been a matter of contention whether the ‘A’ stood for *artificial* or *augmented*.
Alan Turing, Marvin Minsky and John McCarthy were all intent on building non-human intelligence.
Meanwhile in the opposing camp, people like J. C. R. Licklider, Vannevar Bush and Alan Kay were striving for something else; they dreamt of human-computer symbiosis.
These competing schools of thought have kicked up some dust between themselves over the years. When Alan Kay arrived at the Stanford Artificial Intelligence Lab—SAIL—in 1969, his focus on developing personal computers that would augment human thinking was so at odds with John McCarthy’s ideas of replicating the human thought process in silico that Kay chose to leave.
The tension also manifested when the first hubs of what would become the Internet were being connected. Walter Isaacson chronicles in his book *The Innovators* how the people working on this initiative were all deeply influenced by the ideas of augmented intelligence, with its natural focus on building networks. They were met, however, with fierce opposition from John McCarthy at Stanford and Marvin Minsky at MIT, since neither was keen on sharing the precious compute resources of their mainframes with others. They eventually agreed to do so only after DARPA forced them to.
I guess sometimes we fight the wrong battles.
Speaking of battles, the artificial-vs.-augmented clash popped up again when chess grandmaster Garry Kasparov got beaten by IBM’s Deep Blue.
The event marked a long awaited milestone in AI research, but the engineers who built the winning machine didn’t even think it worthy of the label artificial intelligence. Here they are in a press release from 1997:
Does Deep Blue use artificial intelligence? The short answer is “no”. Earlier computer designs that tried to mimic human thinking weren’t very good at it. No formula exists for intuition… Deep Blue relies more on computational power and a simpler search and evaluation function.
The long answer is also “no”. “Artificial Intelligence” is more successful in science fiction than it is here on earth, and you don’t have to be Isaac Asimov to know why it’s hard to design a machine to mimic a process we don’t understand very well to begin with. How we think is a question without an answer. Deep Blue could never be a HAL-9000* if it tried. Nor would it occur to Deep Blue to “try”.
After suffering his defeat, Kasparov came to believe that a combination of human and machine intelligence would make for the strongest chess playing. Augmentation would cut both ways in his view: not only would human intelligence be boosted by machines; ‘artificial intelligence’ would also stand to gain from being elevated and extended by human intuition.
Kasparov dubbed this collaborative mode Centaur chess, and argues in his book *Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins* that this flavour of augmented intelligence will ultimately produce the best performance.
In fact, for the equation [human operator+machine] > [machine] to be true, it might not even be a requirement to find the best possible human operator. Here’s a somewhat condensed rendering of Kasparov’s argument, lifted from a 2017 interview in the podcast Conversations with Tyler:
I’m not a very good operator. I’m a very good chess player. A great operator does not have to be necessarily a very strong player. What you need is someone who can work out the most effective combination, bringing together human and machine skills. I reached the formulation that a weak human player plus machine plus a better process is superior, not only to a very powerful machine, but most remarkably, to a strong human player plus machine plus an inferior process.
At the end of the day, it’s about interface. Creating an interface that will help us to coach machines towards more useful intelligence will be the right step forward. I’m a great believer that, if we put together a good operator — still a decent chess player, not necessarily a very strong chess player — running two, three machines and finding the best way to translate this knowledge into quality moves against Rybka Cluster*, I would probably bet on the human plus machine.
The person Kasparov is talking to here is Tyler Cowen. Cowen was once a chess champion himself, but went on to make his name as a (libertarian) thinker and economist. He took a closer look at Centaur chess in his book *Average Is Over*, wherein he concludes that Centaurs will beat the best machines some 67% of the time.
Of course the world has changed since Deep Blue vs. Kasparov. AIs now master both Go and StarCraft II. Those games are considered to be orders of magnitude more complex than chess and—in contrast to Deep Blue—the algorithms which hold the grandmaster titles really *are* examples of artificial intelligence.
One thing remains more or less the same however: the way they’re trained. Sure, Deep Blue was a classic expert system, loaded with thousands of example matches, whereas AlphaGo and its descendants use reinforcement learning, but there’s still a common denominator in that they don’t require humans in the loop.
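To make “no humans in the loop” concrete, here’s a deliberately tiny sketch of self-play learning: one agent plays both sides of a miniature Nim game (take 1 or 2 stones; whoever takes the last stone wins) and learns move values purely from the outcomes of its own games. This is my toy illustration of the general idea, not how Deep Blue or AlphaGo actually work under the hood.

```python
import random
from collections import defaultdict

def selfplay_nim(episodes=20000, stones=5, eps=0.2, seed=0):
    """Monte Carlo self-play on toy Nim. The only training signal is
    who won each self-played game; no human data is involved."""
    rng = random.Random(seed)
    total = defaultdict(float)   # (stones_left, move) -> summed returns
    count = defaultdict(int)     # (stones_left, move) -> visit count

    def q(s, m):
        return total[(s, m)] / count[(s, m)] if count[(s, m)] else 0.0

    for _ in range(episodes):
        s, history = stones, []
        while s > 0:
            moves = [m for m in (1, 2) if m <= s]
            if rng.random() < eps:                      # explore
                m = rng.choice(moves)
            else:                                       # exploit estimates
                m = max(moves, key=lambda mv: q(s, mv))
            history.append((s, m))
            s -= m
        ret = 1.0  # the player who took the last stone wins
        for state, move in reversed(history):
            total[(state, move)] += ret
            count[(state, move)] += 1
            ret = -ret  # flip perspective: players alternate plies
    return q

q = selfplay_nim()
# Nim theory says: from 5 stones, take 2, leaving a multiple of 3.
print(max((1, 2), key=lambda mv: q(5, mv)))
```

After enough self-played games, the learned values recover the textbook strategy without anyone ever showing the agent a human game, which is exactly the property self-play buys you in zero-sum settings.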
CICERO is different in this regard.
CICERO comes out of Meta’s AI lab, and it’s designed to play a strategy board game called Diplomacy. If you’ve ever played Risk, think of Diplomacy as its sophisticated big brother. It was invented in the ’50s by Harvard historians intent on exploring whether World War I could hypothetically have been avoided with better diplomacy. Games can drag on for days, and largely revolve around human interaction.
Diplomacy has a reputation for bringing out the snake instinct in people, but three-time world champion Andrew Goff is anything but a backstabber. Instead, he wins because of how expertly he can empathise with his opponents. Losing to him reportedly feels like being ‘killed with kindness’. Or to quote a line from this excellent piece in Popular Mechanics:
This is the gaming world’s greatest contest of deception and duplicity, but so far as I can tell, Goff only carries two weapons: congeniality, and the truth.
Andrew Goff and his successful but unconventional playing style are of particular interest, since he was instrumental in helping Meta develop CICERO, the AI that would go on to beat him at his own game, all while emulating Goff’s trademark niceness.
To quote Mike Lewis from the Meta team:
Cicero uses dialog only to establish trust and coordinate actions with other players, never to troll, destabilize or vindictively betray. It was designed to never intentionally backstab.
For CICERO to learn the subtleties of engaging humans emotionally, it couldn’t train by “self-play” alone. It couldn’t be left in a corner, playing Diplomacy against itself, churning through an infinite number of games, assuming perfect rationality in all robot players and generating intellectual capital in the onanistic way a bitcoin miner generates currency. Self-play works well to learn a finite, two-person, zero-sum game like chess. But in a game that involves both competing and cooperating with fickle humans, a self-playing agent runs the risk of converging to “a policy that is incompatible with human norms and expectations,” as a paper about CICERO in Science puts it. It would alienate itself. In this way, too, CICERO is like a human. When it plays only with itself all day every day, it can become too weird to play with others.

“What If the Robots Were Very Nice While They Took Over the World?” | Wired, Oct ’23