The neural Chinese room

Imagine that somewhere in the world there is a special library. This library has a huge vaulted ceiling, with shelves adorning the walls all the way up to the rafters. But instead of books, there are rows of labeled ring binders. Inside the ring binders are pointers to other binders down one side of the page, and Chinese characters down the other. The job of the sole librarian is to take Chinese messages that fall into a small mailbox in one corner of the library, and, following the English instructions encoded in the intricate network of ring binders, spit out answers to these messages. The rules of the ring binder system are complex enough to provide the librarian with coherent answers to any message that arrives.

To a person outside the library, it appears that the librarian speaks Chinese.

This is a version of John Searle’s “Chinese room” thought experiment. The argument runs that although the room is carrying out complex computations, and can respond in coherent Chinese, no-one in the room actually understands Chinese. With many psychologists subscribing to a computational view of mind, Searle’s challenge is to ask them whether their theories are enough. We might explain computation, he says, but we have yet to explain how we understand.

There are several objections to this argument, but let’s park them for a minute. Instead, I want to draw a parallel between the Chinese room and the neurons firing away in your head as you read these words.

In a series of elegant experiments, Mike Shadlen, Bill Newsome and colleagues identified a circuit for decision-making in the primate brain. We usually think of decisions as deliberative things, such as over which restaurant to visit of an evening. In fact, your brain is continuously making decisions, settling on one or other interpretation of your surroundings. The task developed to study these phenomena has now become a workhorse in psychology labs around the world. On a computer screen is a patch of random dots. Some of the dots move coherently to the left or right; the rest move randomly, like static on a poorly tuned TV. The job of the subject is to look in the direction that the coherent dots are moving. Depending on how much randomness there is in the motion, this task can be made easier or harder to get right.

Shadlen and Newsome discovered that neurons in an area known as MT, towards the back of the brain, respond to the direction in which the dots are moving, and influence the eventual decision. Other regions, such as the lateral intraparietal area, appear to integrate evidence in support of one or other choice over time. A brain region known as the frontal eye field receives similar information, and is able to trigger eye movements in particular directions in order to make the response. Putting it all together gives a mechanistic picture of the whole system, laid out in Paul Glimcher’s excellent review.

In other words, our neural circuit for a perceptual decision is akin to the Chinese room. The random dot stimulus is fed in, and an eye movement results. In between, there may be complex computations, but no individual brain area “understands” the task it is doing.

While much evidence supports this model, its details are still being worked out, particularly with regard to implementation in realistic neural circuits. Still, these studies have rapidly become classics in the literature. Part of the reason is that they close a gap in our conception of the mind – the system becomes fully mechanistic, with neurons encoding the motion of the dots, integrating evidence supporting a choice, and finally triggering an eye movement. There is no magic “I” sitting in between input and output.

Understanding thus seems to be a property of the system as a whole, and not its individual parts. That is not to say there is an absence of subjective experience while doing the dots task. As I know from doing the task myself, all manner of musings, zoning out, and otherwise difficult-to-model things are going on in my head. Nevertheless, subjects’ behaviour in the dots task is predicted remarkably well by computational models such as the one outlined here.
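
To make the accumulation step concrete, here is a minimal sketch of the kind of drift-diffusion model that is typically fit to behaviour in the dots task. It is written in Python with invented parameter values (drift gain, bound height, noise level) rather than numbers estimated from any real dataset: momentary motion evidence is summed over time until it crosses a bound, at which point the corresponding eye movement would be triggered.

    import numpy as np

    def simulate_decision(coherence, threshold=1.0, drift_gain=2.0,
                          noise_sd=1.0, dt=0.001, max_t=3.0, rng=None):
        """Accumulate noisy motion evidence until a decision bound is crossed.

        Positive coherence favours a rightward choice, negative favours leftward.
        Returns (choice, decision_time_in_seconds). All parameters are illustrative.
        """
        rng = np.random.default_rng() if rng is None else rng
        evidence = 0.0
        for step in range(1, int(max_t / dt) + 1):
            # momentary evidence: signal proportional to coherence, plus sensory noise
            evidence += drift_gain * coherence * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
            if evidence >= threshold:
                return "right", step * dt
            if evidence <= -threshold:
                return "left", step * dt
        # no bound crossed in time: guess from the sign of the accumulated evidence
        return ("right" if evidence > 0 else "left"), max_t

    # Lower coherence (a noisier dot display) gives slower, less accurate decisions
    for coherence in (0.5, 0.1):
        trials = [simulate_decision(coherence) for _ in range(1000)]
        accuracy = np.mean([choice == "right" for choice, _ in trials])
        mean_rt = np.mean([rt for _, rt in trials])
        print(f"coherence {coherence:.1f}: accuracy {accuracy:.2f}, mean decision time {mean_rt:.2f}s")

Even this stripped-down accumulator reproduces the basic pattern in the data: when the motion is mostly coherent, decisions are fast and accurate; when it is mostly random, they are slow and error-prone.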

An intriguing idea is that this type of circuit forms a bread-and-butter decision system that may or may not be accompanied by conscious deliberation. Along these lines, Stanislas Dehaene and Antoine Del Cul have extended the evidence accumulation model to capture dissociations between subjective awareness and behaviour. They propose two “routes” through the brain: one (akin to Shadlen and Newsome’s model) that accumulates evidence to decide which response to make on a given trial, and another that accumulates evidence towards a threshold for subjective report. Usually these two accumulators are well aligned, but occasionally they come apart, leading to interesting cases in which correct responses are made in the absence of conscious awareness.
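
A toy version of this two-route idea shows how such dissociations can arise. In the sketch below (Python again, with made-up bounds and noise levels rather than anything from Dehaene and Del Cul’s work), both accumulators receive the same sensory drift but independent noise; the response route commits at a low bound, while subjective report requires a higher one.

    import numpy as np

    def two_route_trial(coherence=0.8, response_bound=0.8, report_bound=1.5,
                        noise_sd=0.8, dt=0.001, max_t=2.0, rng=None):
        """Toy two-accumulator model: one route commits to a response at a low
        bound, a second route must reach a higher bound for subjective report.
        The true direction is rightward; all parameter values are invented."""
        rng = np.random.default_rng() if rng is None else rng
        response_ev = report_ev = 0.0
        choice = None
        for _ in range(int(max_t / dt)):
            drift = coherence * dt
            response_ev += drift + noise_sd * np.sqrt(dt) * rng.standard_normal()
            report_ev += drift + noise_sd * np.sqrt(dt) * rng.standard_normal()
            if choice is None and abs(response_ev) >= response_bound:
                choice = "right" if response_ev > 0 else "left"
        if choice is None:  # no commitment in time: go with the current evidence
            choice = "right" if response_ev > 0 else "left"
        aware = abs(report_ev) >= report_bound
        return choice, aware

    rng = np.random.default_rng(1)
    trials = [two_route_trial(rng=rng) for _ in range(2000)]
    correct_unaware = np.mean([c == "right" and not a for c, a in trials])
    print(f"correct responses made without crossing the report bound: {correct_unaware:.2f}")

The proportion printed at the end corresponds to trials on which the response route, but not the report route, reached its threshold: a correct choice made without the model equivalent of awareness.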

One possibility, then, is that some computations are more aligned to our subjective experience of understanding than others. Why this should be the case is unclear at present. Like novice librarians in the Chinese room, we are surrounded by information, but are only just beginning to get to grips with its complexities.

Can the brain control itself?

Close your eyes, and try to imagine a purple polar bear.

If you were at all successful, you were witness to a miracle of brain function. You read the words “purple”, “polar” and “bear” embedded in the context of an everyday sentence, and up popped a vivid image of an animal that has never been seen in reality. If you were particularly good at what psychologists call imagery, it might have seemed so real that you could have reached out to touch it.

Imagery is one example of top-down control – there is no visual input from a purple polar bear, and yet you manage to recruit the same neural resources that would allow you to see a purple polar bear, should one exist. How is this trickery of the mind achieved?

In an experiment recently carried out by researchers at CalTech, Moran Cerf and his colleagues were able to record neural activity in real time during the process of imagery. Patients with severe epilepsy are sometimes treated through a surgical operation to remove the part of the brain causing the seizures. However, the surgeons do not always know which part should be removed. In order to find out, they first implant electrodes into the brain to monitor the patient and pinpoint the focus of the seizure when it occurs. For the most part, these electrodes are picking up otherwise normal brain activity. And it is during these periods of quiescence that Moran and his colleagues asked patients if they would like to take part in a ground-breaking experiment.

In a previous paper, the same team showed that individual neurons in the medial temporal lobe (a structure deep in the brain involved in storing memories) had surprisingly specific properties. One neuron, for instance, only fired to pictures of Halle Berry, but not other actresses. This cell also fired to Halle Berry’s name printed on the screen – in other words, the cell was concerned with the concept of Halle Berry, but not with how that concept was triggered. In their new study, they recorded similar neurons, found out which concept they liked, and then connected them up to a decoder.
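
As a rough illustration of what “found out which concept they liked” involves, the snippet below shows the simplest possible version of such a screening step: compare a neuron’s firing across candidate concepts and keep the one it responds to most. The spike counts and the crude selectivity ratio are invented for illustration; the published procedure screens many images per concept and uses proper statistical tests.

    import numpy as np

    def preferred_concept(spike_counts):
        """Given spike counts for one neuron across repeated presentations of each
        concept (a dict mapping concept name to a list of counts), return the
        concept with the highest mean response and a crude selectivity ratio."""
        means = {concept: float(np.mean(counts)) for concept, counts in spike_counts.items()}
        best = max(means, key=means.get)
        other_means = [m for concept, m in means.items() if concept != best]
        selectivity = means[best] / (np.mean(other_means) + 1e-9)
        return best, float(selectivity)

    # Hypothetical spike counts for one medial temporal lobe neuron
    counts = {
        "Halle Berry (photo)": [12, 15, 11, 14],
        "Halle Berry (name)": [10, 13, 12, 11],
        "other actress": [1, 2, 0, 1],
        "landmark": [0, 1, 1, 0],
    }
    print(preferred_concept(counts))  # the neuron "likes" Halle Berry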

The crucial twist was to use the decoded activity to control noisy pictures of the concepts favoured by different neurons, and then feed these pictures back to the patient on a computer monitor. The experimenters chose one of these concepts as a “target”, and one as a “distractor”. If the activity of the target concept increased, the target image was made more prominent in the display. If the activity of the distractor increased, its image was made clearer. The patient’s task was to control the activity of these single neurons in order to “fade in” the target image on the display.

Let’s pause for a moment to re-read the previous sentence. The patient’s task was to control the activity of single neurons. There are around a hundred billion neurons in the human brain. How can the patient begin to know which neuron needs to increase in activity to complete the task? The researchers left this part up to the patients, letting them explore strategies until, amazingly, they succeeded. On trials in which they had to bring Marilyn Monroe to the fore, they managed to drive up the activity of the Marilyn neuron so that the decoder gave them more of Marilyn on the screen: the pattern of activity in the volunteer’s medial temporal lobe became more Marilyn-like. When Josh Brolin was the target, the activity diverged in the opposite direction. By the power of thought alone, the patients were able to instruct their medial temporal lobes to bring forth a category at will, and produce an image on the screen.

Admittedly, the experiment was designed to make the patient’s job a bit easier. When the category Marilyn was activated, this produced a bit more Marilyn on the screen, which presumably led to more Marilyn category activation, thus more Marilyn on the screen, and so on. This positive feedback process might have made thought-control easier than if the same concepts were used to drive something more abstract, such as the position of a cursor. Still, the question needs to be asked – how were they doing it? The researchers suggest that because the medial temporal lobe has many neurons dedicated to a single category, “cognitive control strategies such as object-based selective attention permit subjects to voluntarily, rapidly and differentially up- and downregulate the firing activities of distinct groups of spatially interdigitated neurons”.
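
To see how these pieces fit together, here is a toy simulation of that closed loop, written in Python with entirely invented firing rates, gains and update rules (the actual decoder and display update were more sophisticated): a blend parameter controls how visible the target image is, the patient’s attention nudges the target neuron’s rate upward, and whatever is currently on screen feeds back into both rates.

    import numpy as np

    def run_fading_trial(attention_bias=5.0, feedback_gain=20.0, step=0.05,
                         n_steps=200, rng=None):
        """Toy sketch of the decoder-driven fading task. The blend between a target
        and a distractor image is updated from the relative firing of two simulated
        concept-selective neurons. attention_bias stands in for the patient's
        voluntary focus on the target concept; every number here is made up."""
        rng = np.random.default_rng() if rng is None else rng
        alpha = 0.5  # fraction of the target image shown (0.5 = equal mixture)
        for _ in range(n_steps):
            # Firing rates: a baseline, plus the patient's attentional push on the
            # target neuron, plus positive feedback from what is currently on screen.
            target_rate = rng.poisson(10 + attention_bias + feedback_gain * alpha)
            distractor_rate = rng.poisson(10 + feedback_gain * (1 - alpha))
            # Decoder: shift the blend towards whichever neuron fired more.
            alpha += step * np.sign(target_rate - distractor_rate)
            alpha = float(np.clip(alpha, 0.0, 1.0))
            if alpha in (0.0, 1.0):  # one image has fully faded in
                break
        return alpha

    print(run_fading_trial())  # usually 1.0: the target image has been "faded in"

Because the update is driven by the difference in firing, the patient’s attention and the positive feedback from the display push in the same direction, which is why driving something arbitrary, such as a cursor, might well have been harder.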

In other words, by focusing on the purple polar bear, you alter the activity of neurons representing particular visual properties, such as purple, bear, etc. But if one part of your brain is being focused, which is the part doing the focusing? Where is the “you” in control? Things get even more complicated when we learn that certain areas of prefrontal cortex (a region traditionally involved in self-control) can themselves be self-controlled.

An alternative view is that “exerting control” just is the dynamic interplay of ongoing brain activity, visual input, and the experimenter’s instructions. How these interactions play out in real time is currently beyond the grasp of brain science. By decoding the concept “Marilyn Monroe” from deep within a person’s brain, the CalTech team has brought us one step closer to finding out.

What’s the point of being conscious?

You stack the final few dishes on the sideboard, ready to be dried and put away. Your mind is elsewhere, perhaps remembering an email that still needs to be sent, or musing over plans for the weekend. At least, that is until your stray elbow silently tips a wine glass off the worktop. You watch in horror, helpless to do anything about it. Then, mere milliseconds into its descent, your hand shoots out of its own accord, saving what would surely have been a shattered mess. You only become aware of the catch when the glass is safely in your hand.

We surprise ourselves when events such as this occur. The miracle catch feels like a reflex, unbound from our inner sense of control. But if we can carry out automatic feats like this, what’s the point in being conscious in the first place? Recent experiments in neuroscience have been pushing the limits of the unconscious in order to find out. The idea is that if we find the unconscious can catch falling glasses but not understand Shakespeare, then perhaps we have found a fault line beyond which consciousness becomes useful.

Using a technique called visual masking, Hakwan Lau and Dick Passingham rendered a set of symbols invisible while leaving them capable of altering behaviour. The trick was to make the invisible symbol an instruction about what to do in an upcoming task. Responding to instructions is usually thought of as requiring conscious planning. However, not only did the symbols affect which task participants were preparing to carry out, they also modulated activity in the dorsolateral prefrontal cortex, a brain region traditionally associated with conscious control.

More recently, neuroscientist Simon van Gaal has been spearheading the work on the limits of the unconscious. In a recent paper reviewing his work, he outlines how visually masked stimuli can engage complex functions ranging from inhibiting one’s response to resolving conflict between competing actions. Surprisingly, he and his colleagues have yet to find anything the unconscious cannot do.

One striking thing about the miracle glass catch is that you can’t explain it to others. “I don’t know how I did that, it just happened…”. In these circumstances, we take no credit for our behaviour, and have no confidence in its causes. Chris Frith has proposed that conscious awareness gives us the ability to share our reasons for acting, and infer that others have similar reasons. Imagine a world in which every action was like the glass catch: we would be constantly surprised at ourselves, and social interactions would become meaningless. Distinguishing whether someone acted voluntarily is crucial for punishment and cooperation.

Perhaps, then, the point of being conscious is so that we can discuss, among other things, the point of being conscious!

The puzzle of self-reflection

After dipping my toe into the waters of the blogosphere a couple of years ago, I have decided to give it another, more focussed go.

The broad topic I will be considering is the self – the “I” that we carry around with us from one day to the next.

A fair few articles and books have been written recently on how the conscious self is only the tip of a neural iceberg, with computations going on behind the scenes that “we” have very little input into. For example, where the salad is placed in the canteen influences our decision to pick it up for lunch, without us being aware of this influence. Experimental results such as these are striking, and contribute to a view that the self is weak, and perhaps inconsequential to the real work being done on the shop floor.

But there is a paradox here. Humans (and perhaps other animal species) have the ability to self-reflect. Did I make the right decision? Am I really feeling sad, or is it just the weather? How am I doing in terms of being a good person?

So why do we engage in self-reflection at all? What if we were just highly complex automatons? Would it make any difference if there were no “tip” to the iceberg? Our society currently says that yes, it would make a big difference. Insight into our behaviour is taken as a signature of rational choice (think of a time when you excused your behaviour with “I just wasn’t thinking”). And the boundaries of self-reflection are therefore becoming central to how society ascribes blame and punishment, how we approach psychiatric disorders, and how we view human nature.

There has been recent scientific progress in understanding the brain mechanisms underlying subjective experience and self-reflection.  For example, we now have greater understanding of why certain sensory stimuli burst into awareness; how reported pain can be influenced by expectations; and why we are aware of making some errors but not others. In other words, modern cognitive neuroscience aims to understand not only the low-level machinery that underlies perception and action, but also our beliefs and sense of self that accompany it.

These are exciting times for a science of the self. However, the science is also challenging our concepts of responsibility, insight and self-control. The earlier we debate these discoveries as a society, the better prepared we will be to decide whether, and how, they should change the legal and healthcare institutions that rely on these concepts.

In the coming weeks and months I plan to use this space to explore questions such as these. I’ll aim for a new post every week, or perhaps every other week, my research and the distractions of New York permitting. If anyone has any suggestions for topics, I’d love to hear about them at fleming.sm@gmail.com.