Paul P. Mealing


Saturday 19 January 2013

The Uncanny Valley


This is a well-known psychological phenomenon amongst people who take an interest in AI, and the possibility of androids in particular. Its discovery and subsequent history are discussed in the latest issue of New Scientist (12 January 2013, pp. 35-7) by Joe Kloc, a New York correspondent.

The term was originally coined by the Japanese roboticist Masahiro Mori in 1970, in an essay titled “Bukimi No Tani”, which translates directly as ‘The Valley of Eeriness’. But it wasn’t until 2005 that it entered the Western lexicon, when it was translated by Karl MacDorman, then working at Osaka University, after he received a late-night fax of the essay. It was MacDorman, apparently, who gave it its apposite English title, “the uncanny valley”.

If an animate object or visualised character is anthropomorphised, like Mickey Mouse for example, we suspend disbelief enough to go along with it, even though we are not fooled into thinking the character is really human. But when people started to experiment with creating lifelike androids (in Japan and elsewhere), there was an unexpected adverse reaction from ordinary people. It’s called a ‘valley’ in both languages because, if you graph people’s empathy against likeness (albeit empathy is a subjective metric), the curve rises as expected, then plummets dramatically at the point where the likeness becomes uncomfortably close to human, before rising again to normal for a real human.
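
For anyone who wants to visualise the shape of the curve, here is a rough sketch in Python. It is purely schematic: the numbers are invented to reproduce the shape described above, not measured data.

import numpy as np
import matplotlib.pyplot as plt

# Schematic only: 'likeness' runs from fully mechanical (0) to a real human (1);
# 'affinity' stands in for empathy, an arbitrary score rather than measured data.
likeness = np.linspace(0.0, 1.0, 500)
affinity = likeness - 1.2 * np.exp(-((likeness - 0.8) ** 2) / 0.003)

plt.plot(likeness, affinity)
plt.axhline(0.0, linewidth=0.8)   # below this line the response turns to eeriness
plt.xlabel('Human likeness')
plt.ylabel('Affinity (empathy)')
plt.title('The uncanny valley (schematic)')
plt.show()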

The New Scientist article is really about trying to find an explanation, and it does so historically. MacDorman first conjectured that the eeriness or unease arose from the perception that the androids looked like a dead person come to life. But he now rejects that, along with the idea that ‘strange’ looking humans may harbour disease, thus provoking an unconscious, evolutionarily derived response. Work by neuroscientists using fMRI machines, specifically Thierry Chaminade of the Advanced Telecommunications Research Institute in Kyoto and Ayse Saygin at the University of California, San Diego, suggests another cause: empathy itself.

There are three different categories of empathy, according to neuroscientists: cognitive, motor and emotional. The theory is that androids create a dissonance between two or more of these categories, and the evidence suggests that it’s emotional empathy that breaks the spell. This actually makes sense to me, because we don’t have this problem with any of the many animals humans interact with. With animals we feel emotional empathy more strongly than the other two; realistic androids reverse that balance.

The author also suggests, in the early exposition of the article, that cartoon characters that too closely resemble humans suffer from the same problem, and gives the box-office failure of Polar Express as an example. But I suspect the failure of a movie has more to do with its script than its visuals, though I never saw Polar Express (it didn’t appeal to me). All the Pixar movies have been hugely successful, but that’s because of their scripts as much as their animation, and the visual realism of Gollum in Peter Jackson’s Lord of the Rings trilogy (and now The Hobbit) hasn’t caused any problems, apparently. That’s because movie characters, whether animated, motion-capture or human, evoke emotional empathy in the audience.

In my own fiction I have also created robotic characters. Some of them are deliberately machine-like and unempathetic in the extreme. In fact, I liked the idea of having a robotic character that you couldn’t negotiate with – it was a deliberate plot device on my part. But I created another character who had no human form at all – in fact, ‘he’ was really a piece of software – this was also deliberate. I found readers empathised with this disembodied character because ‘he’ developed a relationship with the protagonist, which was an interesting literary development in itself.

Addendum: Images for the uncanny valley.

2 comments:

Jim Hamlyn said...

Hi Paul,
This is really interesting - the empathy dissonance especially sounds quite plausible.

I think there might be other factors involved, though, in the wider discussion that you and the article (going by what you say of it) raise. I think we're so used to seeing and using representations that we easily forget that the means by which they function is actually quite radically different. As Freud pointed out, we don't ever experience the uncanny in fairy stories, for the reason that our disbelief is already suspended (i.e. we're fully aware that it is a representation). I would argue, by extension, that we do not sense the uncanny in linguistic or pictorial representations in general. The reason the uncanny valley works is surely because of the nature of the animated three-dimensional representation, which is difficult to distinguish as a representation. Although I might feel fear, for example, in response to a picture, there is never any doubt as to whether I am viewing a representation or not. I would hypothesise that the uncanny valley effect is a consequence, not just of empathetic dissonance, but of the relative indistinguishability of a certain kind of representation from reality.

Best

Jim

Paul P. Mealing said...

Hi Jim,

You raise a good point. It's probably not as clear-cut as we think. It's like a doll come to life - something we can deal with in fiction but not in reality.

Regards, Paul.