Imagine that somewhere in the world there is a special library. This library has a huge vaulted ceiling, with shelves adorning the walls all the way up to the rafters. But instead of books, there are rows of labeled ring binders. Inside the ring binders are pointers to other binders down one side of the page, and Chinese characters down the other. The job of the sole librarian is to take Chinese messages that fall into a small mailbox in one corner of the library and, following the English instructions encoded in the intricate network of ring binders, spit out answers. The rules of the ring binder system are complex enough for the librarian to produce coherent replies to these messages.
To a person outside the library, it appears that the librarian speaks Chinese.
This is a version of John Searle’s “Chinese room” thought experiment. The argument runs that although the room carries out complex computations and can respond in coherent Chinese, no one in the room actually understands Chinese. With many psychologists subscribing to a computational view of mind, Searle’s challenge is to ask them whether their theories are enough. We might explain computation, he says, but we have yet to explain how we understand.
There are several objections to this argument, but let’s park them for a minute. Instead, I want to draw a parallel between the Chinese room and the neurons firing away in your head as you read these words.
In a series of elegant experiments, Mike Shadlen, Bill Newsome and colleagues identified a circuit for decision-making in the primate brain. We usually think of decisions as deliberative things, such as over which restaurant to visit of an evening. In fact, your brain is continuously making decisions, settling on one or other interpretation of your surroundings. The task developed to study these phenomena has now become a workhorse in psychology labs around the world. On a computer screen is a patch of random dots. On each trial, some fraction of the dots moves coherently to the left or to the right; the rest move randomly, like static on a poorly tuned TV. The subject’s job is to look in the direction of the coherent motion. By varying the fraction of coherently moving dots, the task can be made easier or harder to get right.
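For concreteness, here is a toy sketch of one frame of such a stimulus in Python. The `update_dots` helper, the screen coordinates and the step size are all invented for illustration; the only real ingredient is the coherence parameter, which sets what fraction of dots carries the motion signal.

```python
import numpy as np

def update_dots(positions, coherence, direction=1, step=1.0, rng=None):
    """One frame of a random-dot stimulus (illustrative sketch only).

    A fraction `coherence` of dots steps together in the signal
    direction (+1 = right, -1 = left); the remaining dots jump to
    random positions, like static on a poorly tuned TV.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(positions)
    coherent = rng.random(n) < coherence           # which dots carry the signal
    new_pos = positions.copy()
    new_pos[coherent] += direction * step          # signal dots drift together
    new_pos[~coherent] = rng.uniform(0, 100, (~coherent).sum())  # noise dots
    return new_pos
```

At coherence 1.0 every dot moves with the signal; at 0.0 the display is pure noise, and intermediate values trace out the difficulty manipulation used in the experiments.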
Shadlen and Newsome discovered that neurons in an area known as MT, towards the back of the brain, respond to the direction the dots are moving, and influence the eventual decision. Other regions appear to integrate evidence in support of one or other choice over time. A brain region known as the frontal eye field receives similar information, and is able to trigger eye movements in particular directions in order to make the response. Putting it all together leads to the following picture of the system (reproduced from Paul Glimcher’s excellent review):
In other words, our neural circuit for a perceptual decision is akin to the Chinese room. The random dot stimulus is fed in, and an eye movement results. In between, there may be complex computations, but no individual brain area “understands” the task it is doing.
While much evidence supports this model, its details are still being worked out, particularly with regard to implementation in realistic neural circuits. Still, these studies have rapidly become classics in the literature. Part of the reason is that they close a gap in our conception of the mind – the system becomes fully mechanistic, with neurons encoding the motion of the dots, integrating evidence supporting a choice, and finally triggering an eye movement. There is no magic “I” sitting in between input and output.
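The evidence-accumulation idea at the heart of this picture can be sketched as a one-dimensional drift-diffusion process. This is a toy Python version with made-up parameter values, not the fitted models from these studies: noisy momentary evidence, whose mean is proportional to motion coherence, is summed until it crosses one of two bounds, and the bound crossed gives the choice.

```python
import numpy as np

def diffusion_trial(coherence, drift_gain=0.1, bound=30.0, noise=1.0,
                    max_steps=10_000, rng=None):
    """Accumulate noisy evidence to a bound (drift-diffusion sketch).

    Each time step adds momentary motion evidence (mean proportional
    to coherence) to a running total; the first bound crossed gives
    the choice (+1 = right, -1 = left), and the step count stands in
    for a reaction time.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = 0.0
    for t in range(1, max_steps + 1):
        x += drift_gain * coherence + noise * rng.standard_normal()
        if x >= bound:
            return +1, t
        if x <= -bound:
            return -1, t
    return 0, max_steps  # no decision within the trial

# Accuracy grows with motion coherence, as in the dots task
rng = np.random.default_rng(1)
for coh in (0.05, 0.5):
    choices = [diffusion_trial(coh, rng=rng)[0] for _ in range(200)]
    print(coh, np.mean([c == +1 for c in choices]))
```

Run with a higher coherence, the accumulator drifts to the correct bound faster and more reliably, reproducing the basic speed and accuracy patterns that make these models such good fits to behaviour.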
Understanding thus seems to be a property of the system as a whole, and not its individual parts. That is not to say there is an absence of subjective experience while doing the dots task. As I know from doing the task myself, all manner of musings, zoning out, and otherwise difficult-to-model things are going on in my head. Nevertheless, subjects’ behaviour in the dots task is predicted remarkably well by computational models such as the one outlined here.
An intriguing idea is that this type of circuit forms a bread-and-butter decision system that may or may not be accompanied by conscious deliberation. Along these lines, Stanislas Dehaene and Antoine Del Cul have extended the evidence accumulation model to capture dissociations between subjective awareness and behaviour. They propose two “routes” through the brain: one (akin to Shadlen and Newsome’s model) that is accumulating evidence to decide which response to make on a given trial, and another that is accumulating evidence towards a threshold for subjective report. Usually these two accumulators are well-aligned, but occasionally they come apart, leading to interesting cases in which correct responses are made in the absence of conscious awareness.
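A toy version of this two-route idea can make the dissociation concrete. The thresholds and parameter values below are invented for illustration, not Dehaene and Del Cul’s actual model: both routes read the same noisy evidence, but the report threshold sits higher than the response threshold, so some trials end with a correct response that never reaches “awareness”.

```python
import numpy as np

def two_route_trial(coherence, resp_bound=15.0, aware_bound=45.0,
                    drift_gain=0.1, noise=1.0, steps=600, rng=None):
    """Two accumulators reading the same evidence (illustrative sketch).

    The response route commits as soon as |evidence| crosses
    resp_bound; the awareness route reports 'seen' only if the running
    total also crosses the higher aware_bound within the trial.
    """
    rng = np.random.default_rng() if rng is None else rng
    x, choice, seen = 0.0, 0, False
    for _ in range(steps):
        x += drift_gain * coherence + noise * rng.standard_normal()
        if choice == 0 and abs(x) >= resp_bound:
            choice = int(np.sign(x))   # motor decision commits here
        if abs(x) >= aware_bound:
            seen = True                # subjective report threshold
    return choice, seen

# Count trials where the response is correct yet never reaches awareness
rng = np.random.default_rng(2)
trials = [two_route_trial(0.2, rng=rng) for _ in range(500)]
blind_correct = sum(c == 1 and not s for c, s in trials)
```

With these (arbitrary) settings, a sizeable fraction of trials produce a correct response without the evidence ever reaching the report threshold, which is the flavour of dissociation the two-route proposal is designed to capture.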
One possibility, then, is that some computations are more aligned to our subjective experience of understanding than others. Why this should be the case is unclear at present. Like novice librarians in the Chinese room, we are surrounded by information, but are only just beginning to get to grips with its complexities.