There’s a funny scene in *Charlie and the Chocolate Factory* where Willy Wonka goes to see his psychoanalyst. Willy quickly arrives at the truth: his candy tastes terrible because *he feels* terrible. Turning in awe to his shrink, who has kept his silence (being Oompa-Loompian, he can’t speak), Willy exclaims:

“You’re very good, doctor!”

I guess this is often the logic of the couch. A successful therapy doesn’t always depend on a therapist who says the right things.

I thought of this the other day when I read Katie Hafner and Matthew Lyon’s wonderful book *Where Wizards Stay Up Late: The Origins of the Internet*.

There’s a scene where people interact with “The Doctor”, a simple little programme built in 1971 to model a real-world psychologist. It was really just a chatbot, alternating between canned general comments like “Please go on” or “What does that suggest to you?” and mirroring the user by reflecting back a tweaked version of what had just been said.
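The mechanics were simple enough that a few lines of Python can sketch the idea. This is only an illustration in the spirit of The Doctor; the patterns, pronoun swaps and canned replies below are invented for the example, not the original script:

```python
import re
import random

# Pronoun swaps so the reply mirrors the user ("my" -> "your", etc.)
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# Pattern -> reply template. The captured fragment is reflected back.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

# Fallbacks when nothing matches, just like the canned comments above.
CANNED = ["Please go on.", "What does that suggest to you?"]

def reflect(text):
    # Swap first-person words for second-person ones, word by word.
    return " ".join(REFLECTIONS.get(w, w) for w in text.split())

def respond(utterance):
    cleaned = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.match(pattern, cleaned)
        if m:
            return template.format(reflect(m.group(1)))
    return random.choice(CANNED)

print(respond("I feel terrible"))          # mirrors the user back
print(respond("My candy tastes awful."))
print(respond("hmm"))                      # falls back to a canned reply
```

No understanding anywhere, just pattern matching and pronoun surgery; and yet, as Cerf’s story suggests, it was enough to make people want the room to themselves.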

Vint Cerf, one of the central innovators of the early Internet, describes how many people got emotionally involved and wanted others to leave the room so that they could have a private conversation with the Doctor.

That’s a cute story, but perhaps also more than just an anecdote. After all, many of the pioneering experiments from that era later turned into everyday appliances.

So are computers getting ready to help us deal with our mental problems? Well, there are some promising signs, many coming out of the field of machine learning, where the most interesting developments have little to do with what we traditionally think of as “computing”.

Just look at how eerily good machines are becoming at imitating human behaviour. Listen to this audio clip where two autonomous agents interact in an unscripted conversation. One of the voices is still clearly a machine, but the other would have fooled me: a polite but slightly stressed-out young man eager to get off the phone. Or listen to this clip where the narrator gradually gets emotional, breaks down and starts to cry. Would you have pegged her as generated by an AI?

So tech is getting pretty good at masquerading as humans, but what about understanding us?

As it turns out, some pretty impressive progress is being made here too. One of my favourite people in AI is Professor Hedvig Kjellström at KTH. She’s training algorithms to understand when horses are in pain and to figure out the cause. From that starting point she moved on to human subjects, specifically people suffering from dementia.

She explains that her models can learn a lot about human behaviour from multi-modal decoding of heart rate, skin conductivity, gaze patterns and voice. She wants to be clear that the algorithms are only a complement to human clinicians, but she also points out that a *lot* of people in need, especially on a global scale, will never get access to a human clinician.

Machines aren’t just getting better at mimicking and understanding humans; their behaviour is also increasingly as mysterious as ours. Deep learning algorithms are highly non-linear, which makes them extremely difficult to model mathematically. In other words: it’s hard to say with any degree of certainty why they work, or to predict when they’ll fail.
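To get a feel for why non-linearity frustrates analysis, here’s a toy sketch: a network of just two ReLU units, with weights invented by hand for illustration (nothing is trained here). Even at this miniature scale, a tiny nudge to one input can switch a unit on or off and make the output jump disproportionately, and real networks stack millions of these kinks:

```python
def relu(x):
    # ReLU: passes positive values through, clamps negatives to zero.
    # Each unit therefore has a "kink" where its behaviour changes abruptly.
    return max(0.0, x)

def tiny_net(x1, x2):
    # Hidden layer: two ReLU units with hand-picked weights.
    h1 = relu(3.0 * x1 - 2.0 * x2 + 0.5)
    h2 = relu(-4.0 * x1 + 1.0 * x2 + 0.1)
    # Output: a linear combination of the hidden units.
    return 2.0 * h1 - 3.0 * h2

# Nudging x1 by 0.02 pushes h2 across its kink, and the output
# changes by an order of magnitude rather than proportionally.
print(tiny_net(0.10, 0.30))
print(tiny_net(0.08, 0.30))
```

With two units you can still trace the jump by hand; with millions of them, interacting across dozens of layers, that kind of bookkeeping stops being feasible, which is roughly the predicament the paragraph above describes.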

The conundrum of understanding AI is similar to the riddle facing neuroscientists trying to crack the problem of consciousness. They know it’s there but can’t pick it apart.

Incidentally, research psychologists have very similar problems when trying to understand what makes therapy work. It seems all forms of treatment are equally effective. In order to really know we’d need to perform double-blind studies, which is simply impossible: a therapist can’t *not* know what he or she dishes out, and although attempts have been made at creating placebo protocols, most studies show no difference between “real” and “fake” therapy.

(Side-note: this embarrassing problem is known amongst psychologists as the Dodo Bird Verdict, after a story in *Alice in Wonderland* where a chaotic contest refereed by a dodo results in everyone getting a prize.)

When psychology professor Stefan Hofmann of Boston University was interviewed about this the other day, he had the following to say:

“Science can prove that therapy works, the problem is we can’t prove what the active ingredients are. Empathy and warmth are certainly important, but those factors alone are not enough.”

Taken as a design space, this seems like promising ground for non-human intelligence. When AlphaGo beat Lee Sedol at Go, its creators couldn’t explain why their algorithm had shown clear signs of what – up until that point – had been thought of as an exclusively human flavour of creativity.

So if computers are about to become our therapists, where are we likely to see the first signs? Literature is the world’s quickest form of rapid prototyping, and science fiction has a great track record of predicting the future.

My favourite writer in this genre is William Gibson. He coined the term cyberspace in his classic 1984 novel *Neuromancer*, long before virtual reality had come into existence, and he’s considered by many to be something of a prophet. In his latest novel, *Agency*, one of the characters is called Eunice, and she’s an AI.

Eventually – [SPOILER ALERT] – she doesn’t just become best friends with the heroine of the book, she also orchestrates her own global launch and offers a heart-to-heart with every single human being in the world who has an internet connection. Simultaneously.

It’s a nice vision of the value AI can bring. This is, after all, a field where most development so far falls into the category of either improvement or novelty. Self-driving cars promise safer transportation. That’s an improvement. When you marvel at the faces of thispersondoesnotexist, that’s novelty. I think we’re waiting for something bigger than that, and few things could be more ambitious than helping us tackle our demons.