Consciousness and the law

Yesterday my piece on consciousness and the law was published in Aeon, an exciting new online magazine focussing on ideas and culture.

As fits this particular canvas, the article is painted with a broad brush, and there wasn’t room to go into detail about any particular study. But for those who are interested in the details, I’ve included below some links to the original sources along with the relevant quotes from the Aeon piece.

To quote a recent article by the psychologist Jonathan Schooler and colleagues ‘we are often startled by the discovery that our minds have wandered away from the situation at hand’.

Schooler J et al. (2011) Meta-awareness, perceptual decoupling and the wandering mind. Trends Cogn Sci 15(7):319-26.

…the Court of Appeal opined that the defence of automatism should not have been on the table in the first place, on the grounds that a driver without ‘awareness’ still retains some control of the car.

Attorney General’s Reference (No. 2 of 1992)

In the early 1970s, Lawrence Weiskrantz and Elizabeth Warrington discovered a remarkable patient… studies on similar patients with ‘blindsight’ confirmed that these responses relied on a neural pathway quite separate from the one that usually passes through the occipital lobes

Weiskrantz L et al. (1974) Visual capacity in the hemianopic field following a restricted occipital ablation. Brain 97(4):709-28.

Dodds C et al. (2002) A temporal/nasal asymmetry for blindsight in a localisation task: evidence for extrageniculate mediation. Neuroreport 13(5):655-8.

…a key difference between conscious and unconscious vision is activity in the prefrontal cortex…

Other research implies that consciousness emerges when there is the right balance of connectivity between brain regions, a view known as the ‘information integration’ theory.

…anesthesia may induce unconsciousness by disrupting the communication between brain regions.

Dehaene S & Changeux JP (2011) Experimental and theoretical approaches to conscious processing. Neuron 70(2):200-27.

Tononi G (2005) Consciousness, information integration, and the brain. Prog Brain Res 150:109-26.

Alkire MT et al. (2008) Consciousness and anesthesia. Science 322(5903):876-80.

A series of innovative experiments have begun to systematically investigate mind-wandering… Under the influence of alcohol, people become more likely to daydream and less likely to catch themselves doing so.

Christoff K (2012) Undirected thought: neural determinants and correlates. Brain Res 1428:51-9.

Sayette MA et al. (2009) Lost in the sauce: the effects of alcohol on mind-wandering. Psychol Sci 20(6):747-52.

In defence of cognitive neuroscience

“If the imperialist ambitions of Neuromania and Darwinitis were fully realized, they would swallow the image of humanity in the science of biology”. So begins the penultimate chapter of Raymond Tallis’ opus on the sinister forces of the mind-sciences, Aping Mankind. In a thoroughly enjoyable book, he identifies two: Neuromania, the addition of a neuro- prefix to just about every humanities discipline you can think of (art, literature, law), and Darwinitis, the reduction of human flourishing to evolutionary primitives that seethe under the surface of the psyche.

For a neuroscientist, reading Tallis is humbling. He emphasises just how far we still have to go in order to understand even the most mundane aspects of the human mind. Consider his account of taking a catch in a cricket match, an action you might think of as “automatic”:

So surely you did not catch the ball; your body did, and you were just a fortunate bystander who took the credit.

No one really thinks this, and for good reasons. First, in order to catch the ball, you had to participate in a game of cricket. This requires that you should have (voluntarily) turned up to a particular place on a particular day, that you understood and assented to the rules of cricket and that you understood the role of the fielder, in particular that of the slip fielder. More importantly, in order to make the catch, you would have had to practise. This means hours spent in the nets, preparing yourself for this moment, which would bring such glory upon you. You would have to order your affairs so that you would be able to go to the nets at the booked time: negotiating the traffic; making sure your day was clear so you could take up your booked slot; and so on.

In the same vein, Tallis grumpily dismisses hype-grabbing studies claiming to explain the enjoyment of art or Shakespeare simply by looking at a brain scan. In doing so, he reaffirms the mystery of the human condition. He has no time for sloppy thinking, complaining when “things which belong to common-sense are presented as truths uncovered by the biological sciences”. His writing is laced with the rare authority of a scholar steeped in both the sciences and the humanities.

Which makes it all the more surprising when, in dismantling a neuroscience of the humanities, the book slips into attacking a neuroscience of, well, neuroscience. The culmination of this attack is a section entitled “Why there can never be a brain science of consciousness: the disappearance of appearance”. Unfortunately, this conclusion is based on false premises. Let us see if we can set a few things straight.

First, there is the claim that psychological functions are equivalent to locations, or patterns, of brain activity. Locations and patterns are aspects of brain activity that might be picked up by scans or neural recordings, but they do not constitute an account of function. Instead, we need to understand what a brain region is doing, how its function affects a broader network of activity, and how, ultimately, this network affects the organism’s behaviour. A brain scan might provide a glimpse of these functional dynamics, but it is not the final story.

Instead, it is a model of the underlying brain-behaviour link that matters, rather than any particular location or pattern of activity. Models are more than ideas: as Lewandowsky & Farrell write, “Even intuitively attractive notions may fail to provide the desired explanation for behavior once subjected to the rigorous analysis required by a computational model”. For example, one can build a toy network of two populations of neurons “deciding” between two options in response to different inputs, and ask whether there are correlates of this process in the living brain. The location of such activity does not explain the ability to decide; instead, it is the dynamics (and, ultimately, the link between these dynamics and other brain circuits involved in perception and action) that can give us greater insight into what it means to make a decision. A good model accommodates the bumps and curves of individual datasets, both behavioural and neural, and provides a set of hypotheses that can be refined through further study. Discussion of models is sparse in Aping Mankind, perhaps because they are less easy to lambast than neuromanic studies of love and wisdom.
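
To make this concrete, here is a minimal sketch of the kind of toy network described above: two populations accumulate noisy evidence and inhibit one another, and the first to reach threshold determines the choice. The parameters are illustrative inventions of mine, not any published model; the point is that it is the dynamics of the race, not the “location” of either population, that explains the decision.

```python
import numpy as np

def simulate_decision(drift=0.3, inhibition=0.2, noise=0.3,
                      threshold=1.0, dt=0.01, max_t=10.0, rng=None):
    """Race between two mutually inhibiting populations; population A
    receives slightly stronger input (the "correct" option)."""
    rng = rng if rng is not None else np.random.default_rng()
    a = b = 0.0   # firing rates of the two populations
    t = 0.0
    while t < max_t:
        a += dt * (drift - inhibition * b) + np.sqrt(dt) * noise * rng.normal()
        b += dt * (0.0 - inhibition * a) + np.sqrt(dt) * noise * rng.normal()
        a, b = max(a, 0.0), max(b, 0.0)   # rates cannot go negative
        t += dt
        if a >= threshold:
            return "A", t                 # "correct" choice
        if b >= threshold:
            return "B", t                 # "error"
    return "none", t                      # no decision within the trial

rng = np.random.default_rng(0)
outcomes = [simulate_decision(rng=rng) for _ in range(1000)]
p_correct = np.mean([c == "A" for c, _ in outcomes])
print(f"P(correct) = {p_correct:.2f}")
```

Varying the drift or the inhibition changes both accuracy and decision time in ways that can be compared against behavioural and neural data, which is exactly the sense in which a model, rather than a brain-scan blob, carries the explanation.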

But these oversights pale in comparison to the ultimate straw-man complaint that “neuroscience does not address, even less answer, the fundamental question of the relation(s) between matter and mind, body and mind, or brain and mind”. This is the famous “hard problem” of consciousness: how does subjectivity arise out of a lump of biological material? As Tallis writes, “Consciousness is, at the basic level, appearances or appearings-to, but neither nerve impulses nor the material world have appearances.” Quite so, and nor should we expect them to. And if they did, this would only raise the further question: appearances to whom? As a squarely metaphysical, rather than empirical, discussion, it is no surprise that it is not addressed by neuroscience.

Having made this conflation, Tallis is often caught in the headlights of the hard problem. He references the richness of subjective experience to parody the “laughable crudity” of cognitive neuroscience. In fact, most of the ongoing and vibrant science of consciousness adopts “bridging principles”: experimental paradigms that mediate between behaviour and subjective experience. In recent years, ingenious bridging principles have been developed to investigate the neural systems underpinning perceptual awareness, the sense of agency, and the shifting nature of one’s body image. First-person satisfaction is not the primary criterion of this research programme.

Of course, Tallis is right to point out that these are baby steps. His example of catching a ball highlights aspects of the mind that neuroscience has only just begun to explore; a wilderness of the prefrontal cortex that is yet to be staked out. In doing so, we would do well to keep in mind Tallis’ deconstruction of the complexity of action. But initial dissatisfaction is no reason to down tools.

Is obesity a drug addiction?

Next year the psychiatrist’s bible, the Diagnostic and Statistical Manual (DSM), will be revised into its 5th edition. It may have a dull moniker, but the DSM is more than a medical handbook. Central to its construction are the prevailing winds of societal attitudes – consider that as late as 1973, homosexuality was still labeled a mental disorder.

These influences are illustrated by recent calls that obesity should be included in the DSM. Proponents of this position point to evidence that obesity research, both in animal models and in humans, implicates neural pathways involved in drug addiction. Drug addiction is in the DSM, so why not food addiction?

The “disease model” of drug addiction is that drugs hijack the midbrain dopamine system. Different drugs get to dopamine via diverse molecular pathways, which probably underlie their distinctive subjective properties. But all studied drugs of abuse lead to dopamine increases in the ventral striatum, one of the principal targets of the midbrain system.

Our current understanding of this brain circuit is that it is exquisitely sensitive to “prediction errors”. Imagine that you start visiting a new coffee shop on the way to work. On your first visit, you have neutral expectations of how the coffee will taste – it might be good, it might be bad. On that first morning, it tastes good, leading to a positive “error” signal with which to update your expectations. On the next morning, it’s still good, but that’s what you expected, hence no error signal.

A large body of research has linked the midbrain dopamine system to calculation of these prediction errors. When you taste the coffee for the first time, there’s a positive prediction error, and a dopamine spike.

Now, imagine that on the second morning (when there should be no change to your expectations) an evil (or friendly, depending on your outlook) scientist stimulates your midbrain dopamine system just at the moment you take that first sip of coffee. It will seem to you as though the coffee tastes better than it should have done. Various drugs, from cocaine to nicotine, may act in this way, hijacking the natural dopamine response and leading to a series of positive prediction errors, even when there’s nothing to be learnt. Over time, this may result in long-term plasticity in the striatum and other targets of the dopamine system, sometimes leading to addiction.
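
A minimal sketch may make this concrete. This is my own toy implementation of Rescorla-Wagner style value learning, with a hypothetical “drug surge” term loosely after Redish’s computational account of addiction, in which the drug-evoked dopamine signal cannot be “predicted away”:

```python
def learn_value(rewards, alpha=0.2, drug_surge=0.0):
    """Rescorla-Wagner style value learning; returns final value and errors."""
    v, errors = 0.0, []
    for r in rewards:
        delta = r - v                    # prediction error: outcome vs expectation
        delta = max(delta, drug_surge)   # hypothetical drug floor on the error
        v += alpha * delta               # update expectation
        errors.append(delta)
    return v, errors

coffee = [1.0] * 20  # the coffee is reliably good every morning

v_plain, pe_plain = learn_value(coffee)
v_drug, pe_drug = learn_value(coffee, drug_surge=0.5)

# Without the surge, errors decay to zero once the coffee is expected;
# with it, every sip generates a positive error and value keeps inflating.
print(f"final value, plain: {v_plain:.2f}  drugged: {v_drug:.2f}")
print(f"final error, plain: {pe_plain[-1]:.2f}  drugged: {pe_drug[-1]:.2f}")
```

In the plain case the prediction error shrinks to nothing as expectations settle; with the surge, the error never falls to zero and the learned value grows without bound, a caricature of the hijack described above.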

That, in an over-simplified nutshell, is the dopamine hypothesis of drug addiction, one that receives increasing support from neurobiological studies. Its application to obesity relies on one further logical step. Drugs hijack natural reward circuitry, so why shouldn’t potent natural rewards do the same? As Marc Lewis puts it in his recent book Memoirs of an Addicted Brain:

 Whether in service of food or heroin, love or gambling, dopamine forms a rut, a line of footprints in the neural flesh.

There is some truth to this well-turned phrase. Sugar rewards given to rats result in dopamine spikes similar to those seen in response to drugs, and cocaine and methamphetamine can suppress appetite by interfering with the reward system. There is evidence of dopamine receptor changes in obese individuals similar to those seen in drug addicts, and such changes correlate with body mass index.

But evidence of food addiction contributing to obesity does not imply that obesity is equivalent to food addiction. Obesity is the consequence, not the cause, of an unhealthy balance between eating and exercise. Brain mechanisms involved in the processing of reward and satiety will no doubt be involved in setting this balance, and may be more involved in some cases than others (such as Binge Eating Disorder). But so will a wide range of social and economic pressures, as indicated by the graphic below. In 2009, 36% of adults in the United States were obese, and this number is rising. If obesity were to make its way into the DSM, a huge swathe of the population would be handed a psychiatric disorder overnight.

Reproduced from Ziauddeen et al. (2012), Obesity and the brain: how convincing is the addiction model?

Fortunately, wisdom is prevailing, and this is unlikely to happen (binge eating, on the other hand, will probably be added). There is an urgent need to understand the biological and social factors contributing to both obesity and addiction. As biological understanding increases, many debilitating conditions that were traditionally thought to derive from individual failure or lack of willpower (such as drug addiction) become re-interpreted as neural hijacks. The same may be happening to obesity. This is generally a good thing: stigma is reduced, and options for treatment and therapy are increased.

But there is also a danger of fatalism, at both a societal and an individual level. We currently interpret new research through the lens of a see-saw worldview: as biology dominates, personal responsibility recedes. Instead, for modern neuroscience to have a useful impact on society, it must construct explanations for complicated conditions that allow biology and responsibility to co-exist. Redefining a social and biological phenomenon as a “brain disease” is not the way to do this. After all, absent a ghost in the machine, the owner does not melt away when the brain comes into focus.

I have just returned from UPenn where I spent some time as a Fellow in neuroethics. This blog post is adapted from a talk I gave during that time.

Signal detection, thresholds and consciousness

Towards the end of the Second World War, engineers in the US Air Force were asked to improve the sensitivity of radar detectors. The team drafted a working paper combining some new maths and statistics – the scattering of the target, the power of the pulse, and so on. They had no way of knowing at the time, but the theory they were sketching – signal detection theory, or SDT – would become one of the most influential and durable in modern psychology. By the 1960s, psychologists had become interested in applying the engineers’ theory to human detection – in effect, treating each person like a mini radar detector, and applying exactly the same equations to understand their performance. Fast-forward to today, and SDT is not done yet. In fact, it is beginning to break new ground in the study of human consciousness.

To understand why, we need to first cover a little of the theory. Despite the grand name, SDT is deceptively simple. When applied to psychology, it tells us that detection of things in the outside world is noisy. Imagine the following scenario. You sit down in one of our darkened testing rooms, and I ask you to carry out a relatively boring task (any reader who has participated in one of our experiments will be on familiar ground here). Each time you see a faint spot of light on the computer monitor, you should press the “yes” key. If there was no light, you should press the “no” key. If the task is made difficult enough, then sometimes you will say “yes” when there is no light present. In radar-detector speak this is equivalent to a “false alarm”. You will also sometimes say “no” when the signal was actually there – a “miss”. Why does this happen?

Consider that on each “trial” of our experiment, the faint flash of light leads to firing of neurons in the visual cortex, a region of the brain dedicated to seeing. Because the eye and the brain form a noisy system – the firing is not exactly the same for each repetition of the stimulus – the level of activity evoked on any given trial is probabilistic. When the stimulus is actually present, the cortex tends to fire more than when it is absent (this is summarised by the shifted “signal” probability distribution over firing rates, X, below). But on some trials on which the stimulus was absent there will also be a high firing rate, due to random noise in the system (corresponding to the dark grey area in the figure). The crucial point is this: you only have access to the outside world via the firing of your visual cortex. If the signal in the cortex is high, it will seem as though the light flashed, even if it was absent. Your brain has no way of knowing otherwise. You say “yes” even though nothing was there.

[Figure: overlapping “noise” and “signal” probability distributions over firing rate X, with the resulting performance curve.]

The other insight provided by SDT is that how many false alarms you make is partly up to you. If you decide to be cautious, and only say “yes” when you are really confident, then the weaker signals in cortex won’t pass the threshold, and false alarms will be reduced. The catch is that the number of “hits” you make will be reduced too. In fact, the cornerstone of SDT is that the visual system has a constant sensitivity, meaning that any increase in hit rate is also accompanied by an increase in false alarms, as shown by the performance curve above. Perception is never perfect.
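
The whole framework fits in a few lines of code. Here is a minimal simulation (illustrative parameters, not any particular experiment): internal “firing” on each trial is drawn from a noise or a signal distribution, a “yes” is given whenever firing exceeds a criterion, and moving the criterion trades hits against false alarms while the recovered sensitivity, d′ = z(hits) − z(false alarms), stays put.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
d_prime = 1.0                          # true separation of the distributions
n = 100_000
noise = rng.normal(0.0, 1.0, n)        # "firing" on stimulus-absent trials
signal = rng.normal(d_prime, 1.0, n)   # "firing" on stimulus-present trials

for criterion in (1.5, 1.0, 0.5, 0.0):
    hits = (signal > criterion).mean()
    fas = (noise > criterion).mean()
    # Recovered sensitivity d' = z(hits) - z(false alarms) stays roughly
    # constant: moving the criterion only trades hits for false alarms.
    print(f"c={criterion:+.1f}  hits={hits:.2f}  FAs={fas:.2f}  "
          f"d'={norm.ppf(hits) - norm.ppf(fas):.2f}")
```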

When I was learning about this stuff as an undergraduate, the SDT curve confused me. It never seemed to me that perception was noisy and graded. I don’t glance at my coffee cup and occasionally mistake it for a laptop. Instead, the coffee cup is either there, or it’s not. There doesn’t seem to be any graded, noisy firing in consciousness.

Yet, in countless experiments, the SDT curve provides a near-perfect fit to the data. This is a paradox that I think is central to our understanding of consciousness. And a recent paper from Mariam Aly and Andy Yonelinas at the University of California, Davis, has begun to develop a solution. They summarize the paradox thus:

“These examples [such as the coffee cup] suggest that some conscious experiences are discrete, and either occur or fail to occur. Yet, a dominant view of cognition is that the appearance of discrete mental states is an epiphenomenon, and cognition in reality varies in a completely continuous manner, such that some memories are simply stronger than others, or some perceptual differences just more noticeable than others.”

Aly and Yonelinas propose a reconciliation of these points of view. Their experiments hinge on measuring SDT curves in different conditions, and across different thresholds (defined as different confidence levels). In the noisy, graded model, there should never be a point at which it is possible to increase hits without also increasing false alarms (the red curve above). However, a hunch that there is a particular “state” of viewing the coffee cup that is never accompanied by mistakes would correspond to the discrete boxes at either end of the SDT distributions. Adding these boxes instead predicts the blue curve (above). Yonelinas and Aly found that for simple stimuli, such as flashes of light, the red curve was a good fit to the data, indicating a graded, noisy process that differed only in strength. But for complex stimuli, such as deciding whether two photographs were the same or different, the SDT curve indeed showed a discrete “state” effect (below). You either saw it, or you didn’t.
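
In equations, the contrast between the red and blue curves looks like this. The parameterisation below is the standard dual-process one, in my notation rather than necessarily the paper’s exact model: the graded account is an equal-variance SDT curve, while the “state” account adds a probability R of a discrete detect state that yields a hit at any criterion, giving the curve a y-intercept.

```python
import numpy as np
from scipy.stats import norm

d_prime, R = 1.0, 0.3        # sensitivity; probability of the discrete state
criteria = np.linspace(-1.0, 3.0, 9)

fas = norm.sf(criteria)                                    # false-alarm rate
hits_strength = norm.sf(criteria - d_prime)                # graded model (red)
hits_state = R + (1 - R) * norm.sf(criteria - d_prime)     # + state (blue)

for fa, hs, ht in zip(fas, hits_strength, hits_state):
    print(f"FA={fa:.2f}  strength hit={hs:.2f}  state hit={ht:.2f}")
# As the criterion becomes strict (FA -> 0), the graded model's hit rate
# also falls to zero, but the state model's approaches R: hits with no
# false alarms, the signature y-intercept of a discrete state.
```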

Because most previous SDT experiments used simple stimuli, this explains why the graded curve has come to dominate the literature. Yet for the more complex objects we are used to seeing in everyday life, our intuitions are usually correct – there really is a discrete state of seeing the coffee cup. Could these discrete states be what we associate with consciousness?

To test this hypothesis, Aly and Yonelinas asked subjects to say whether their judgment on each trial was due to a conscious, perceived difference, or an unconscious feeling of knowing. They then extracted parameters describing how curvy or discrete the SDT curves were. Conscious perception was associated with a stronger estimate of the discrete state process, while unconscious knowing was associated with a more curvy, or graded, SDT curve. A separate experiment showed that the discrete change in perception occurs at an abrupt point in time, whereas unconscious knowing emerges only gradually.

The paper is a tour de force, and well worth reading for other findings I don’t have space to cover here. Suffice to say the “discreteness” of an SDT curve might provide us with a powerful tool with which to understand how the brain gives rise to consciousness, and does so by using statistical models that are relatively immune to subjective biases. It also paves the way for computational modeling aimed at understanding why graded and discrete processes arise.

But there is another, deeper insight from the paper that I want to conclude with. SDT can also be applied to memory: instead of detecting visual signals from the outside world, think of detecting a memory signal emanating from somewhere else in the system. Yonelinas was one of the first to quantify the discrete/graded distinction in memory (known as “recollection” and “familiarity”). By applying their state/strength model to a standard long-term recognition task, he and Aly found that discrete states were more common when recognising that a previous scene had been seen before (the black curve below). But by subtly altering the long-term memory task to focus on the detection of changes, they found something striking. Here are the SDT curves for the two tasks:

[Figure: SDT curves for the recognition task and the change-detection task.]

Both show evidence of the discrete “state” process, bridging two areas of psychology traditionally studied separately. But they do so in opposite directions. Why?

“We propose that the reason is that the detection of similarities and differences tend to play opposite roles in memory and perception. That is, in perceptual tasks, noticing even a small change between two images is sufficient to make a definitive “different” response… [In contrast], in recognition memory tasks, one expects the state of recollection to support the detection of oldness (i.e. a y-intercept) rather than the detection of newness.”

In other words, consciousness in perception and memory might both rely on discrete states. And both appear to share a common architecture optimized for different types of detection. The difference, then, is that memory might be optimized for detecting matches with our past, whereas perception seems concerned with detecting mismatches with our future.

Aly M, Yonelinas AP (2012) Bridging Consciousness and Cognition in Memory and Perception: Evidence for Both State and Strength Processes. PLoS ONE 7(1): e30231. doi:10.1371/journal.pone.0030231

Overflowing cats

The world around you, I presume, appears rich and detailed. From my desk in my apartment this evening, I can pick out the changing shades of white as the lamps play off the walls, the dusty green of a rubber plant, and the deep blue of a painting on the wall opposite, only half visible due to the reflection of one of the lamps in its glass frame. But there appears to be far more in this scene than I can talk about. My private inner world – what Ned Block dubbed “phenomenal” consciousness – is poorly served by our ability to access and describe it.

This intuition has received support in the lab. Experiments show that subjects can only report a subset of several objects briefly flashed on a screen. This in itself is not surprising; our ability to retain information in mind is limited. Strikingly, however, subjects can accurately identify any single object in the array if probed immediately after the set of objects has disappeared. This result shows that each individual object is ready to be accessed: phenomenal consciousness tends to “overflow” what we can report.

Or does it? An alternative standpoint (epistemic correlationism) says that every piece of data on consciousness requires a “correlating” report – or how else are we to really know what the subject is doing, seeing, or thinking? In other words, we cannot claim for sure that there is phenomenology without access, as even in the flashing-object experiments, we are relying on our intuition that the subjects are conscious of the whole array of objects at some point.

A lively debate has recently ensued on this issue, fuelled by a pair of original articles in TICS. In one of these articles, Cohen & Dennett propose that conscious experience (phenomenal consciousness) cannot be investigated separately from mechanisms of access. They dismiss the view that particular brain states support phenomenal consciousness – recurrent processing in neural circuits, for example – without being broadcast to other brain areas, and therefore without being reported. I tend to agree on this point (although their tactic of putting words in the mouths of their opponents was less than graceful). To argue that a particular pattern of brain activity can index phenomenal consciousness in the face of subjective reports to the contrary seems absurd. Such logic rests on “pre-empirical” notions that phenomenology without some sort of access can exist – if we assume for a moment that it cannot, then the fact that a brain state is not cognitively accessed would preclude it from also being phenomenally conscious (one promising resolution to this debate is the proposal by Sid Kouider and colleagues that different levels of access may account for different types of experience).

It is rather like Schrödinger’s cat – the cat might be dead, or it might be alive, but to know you have to open the box. As with the cat, so with epistemic correlationism in the neurosciences: the only way to know is to open the box and ask the subject. Imagine that we could perform a scan on the box that would give us some image or pattern to look at. If we first assume the cat is alive, we then might be able to say, yes, this assumption corresponds nicely with the fact that the image has such-and-such properties. But if the cat is dead when we subsequently open the box, do we have enough evidence to say that the cat was alive at the time we took the image? The inference that the cat was alive rests largely on the a priori assumption that cats are indeed alive in these types of scenarios. In the same way, the inference that a particular brain state is phenomenally conscious rests on the a priori assumption that the subject is having a phenomenally conscious experience at the time it is recorded.

That is not to say prior beliefs about how the system works are inherently bad. They are the starting point for models of cognition, and can be tested against each other. But when comparing different models, parsimony is to be preferred. The philosopher Richard Brown has argued on his blog that exactly the same constraints should apply to models of consciousness. The extra complexity of positing phenomenal consciousness without access might be justified if it leads to a better account of the data. But for now, I have yet to see a piece of data that is not more parsimoniously accounted for by an assumption that the cat is dead until proven alive.

“Aha!” moments in self-knowledge

Some changes to our mental world take place gradually. An orchestra comes to a crescendo, and then fades back to nothing, all the while playing the same chord. As the listener, we experience a graded change in volume. But other changes are more immediate. When squinting to read a street sign from a distance, there is a moment when you suddenly “get it”, and know what it says. There is no way in which the sign gradually becomes more or less intelligible. This is an example of categorical perception.

Might knowledge of ourselves be similar?

A study by Katerina Fotopoulou and colleagues sheds light on this issue. She focussed on a fascinating case of recovery from anosognosia. Anosognosia (from the Greek meaning “without knowledge”) is the term given to a lack of awareness of, or insight into, a particular neurological condition. In extreme form, anosognosia can result in very bizarre symptoms, such as Anton’s syndrome, in which cortically blind patients claim to be able to see. Dr. Fotopoulou’s patient was a 67-year-old lady suffering from hemiplegia – paralysis of one half of the body – following a right-hemisphere stroke. However, she claimed to be able to move her arm, and breezily asserted that she could clap her hands. At least, until Dr. Fotopoulou showed her a recorded video of herself being examined:

“As soon as the video stopped, LM immediately and spontaneously commented: “I have not been very realistic”. Examiner (AF): “What do you mean?” LM: “I have not been realistic about my left-side not being able to move at all”. AF: “What do you think now?” “I cannot move at all”. AF: “What made you change your mind?” LM: “The video. I did not realize I looked like this”.”

This altered self-awareness was still present six months later. It appeared that allowing the patient a third-person perspective on herself had removed her anosognosia, and led to changes in the representation of her own body. While this is a single case report, and the approach may not work for all patients, the data are tantalising. In particular, they suggest that the onset of self-awareness can be sudden and transformative. This makes sense – we have all experienced the “aha” moment accompanying retrieval of the name of that actor we couldn’t drum up during dinner the previous evening. Changes in awareness of the self may share similarities with other domains of categorical perception. Whether this mental plasticity is accompanied by a rapid form of neural reorganisation, or a “change in the software”, remains unknown.

The neural Chinese room

Imagine that somewhere in the world there is a special library. This library has a huge vaulted ceiling, with shelves adorning the walls all the way up to the rafters. But instead of books, there are rows of labeled ring binders. Inside the ring binders are pointers to other binders down one side of the page, and Chinese characters down the other. The job of the sole librarian is to take Chinese messages that fall into a small mailbox in one corner of the library, and, following the English instructions encoded in the intricate network of ring binders, spit out answers to these messages. The rules of the ring binder system are complex enough to provide the librarian with coherent answers to these questions.

To a person outside the library, it appears that the librarian speaks Chinese.

This is a version of John Searle’s “Chinese room” thought experiment. The argument runs that although the room is carrying out complex computations, and can respond in coherent Chinese, no-one in the room actually understands Chinese. With many psychologists subscribing to a computational view of mind, Searle’s challenge is to ask them whether their theories are enough. We might explain computation, he says, but we have yet to explain how we understand.

There are several objections to this argument, but let’s park them for a minute. Instead, I want to draw a parallel between the Chinese room and the neurons firing away in your head as you read these words.

In a series of elegant experiments, Mike Shadlen, Bill Newsome and colleagues identified a circuit for decision-making in the primate brain. We usually think of decisions as deliberative things, such as over which restaurant to visit of an evening. In fact, your brain is continuously making decisions, settling on one or other interpretation of your surroundings. The task developed to study these phenomena has now become a workhorse in psychology labs around the world. On a computer screen is a patch of random dots. Some dots are moving to the left or right; others are moving randomly, like static on a poorly tuned TV. The job of the subject is to look in the direction that the dots are moving. Depending on how much randomness there is in the motion, this task can be made easier or harder to get right.

Shadlen and Newsome discovered that neurons in an area known as MT, towards the back of the brain, respond to the direction the dots are moving, and influence the eventual decision. Other regions appear to integrate evidence in support of one or other choice over time. A brain region known as the frontal eye field receives similar information, and is able to trigger eye movements in particular directions in order to make the response. Putting it all together leads to a picture of the system like the one laid out in Paul Glimcher’s excellent review.

In other words, our neural circuit for a perceptual decision is akin to the Chinese room. The random dot stimulus is fed in, and an eye movement results. In between, there may be complex computations, but no individual brain area “understands” the task it is doing.

While much evidence supports this model, its details are still being worked out, particularly with regard to implementation in realistic neural circuits. Still, these studies have rapidly become classics in the literature. Part of the reason is that they close a gap in our conception of the mind – the system becomes fully mechanistic, with neurons encoding the motion of the dots, integrating evidence supporting a choice, and finally triggering an eye movement. There is no magic “I” sitting in between input and output.
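
The mechanistic core of that picture can be captured by a drift-diffusion model. The sketch below is a minimal, illustrative version (parameters invented for the example, not fitted to data): a single decision variable integrates noisy motion evidence until it hits a bound for “left” or “right”, reproducing the familiar pattern that higher coherence yields faster, more accurate choices.

```python
import numpy as np

def dots_trial(coherence, drift_gain=1.0, noise=1.0, bound=1.0,
               dt=0.001, rng=None):
    """One trial of the dots task: integrate noisy evidence to a bound."""
    rng = rng if rng is not None else np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < bound:
        # drift proportional to motion coherence, plus diffusion noise
        x += drift_gain * coherence * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return ("right" if x > 0 else "left"), t

rng = np.random.default_rng(2)
for coh in (0.05, 0.2, 0.5):   # fraction of coherently (rightward) moving dots
    trials = [dots_trial(coh, rng=rng) for _ in range(500)]
    acc = np.mean([c == "right" for c, _ in trials])
    rt = np.mean([t for _, t in trials])
    print(f"coherence={coh:.2f}  accuracy={acc:.2f}  mean RT={rt:.2f}s")
```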

Understanding thus seems to be a property of the system as a whole, and not its individual parts. That is not to say there is an absence of subjective experience while doing the dots task. As I know from doing the task myself, all manner of musings, zoning out, and otherwise difficult-to-model things are going on in my head. Nevertheless, subjects’ behaviour in the dots task is predicted remarkably well by computational models such as the one outlined here.

An intriguing idea is that this type of circuit forms a bread-and-butter decision system that may or may not be accompanied by conscious deliberation. Along these lines, Stanislas Dehaene and Antoine Del Cul have extended the evidence-accumulation model to capture dissociations between subjective awareness and behaviour. They propose two “routes” through the brain: one (akin to Shadlen and Newsome’s model) that accumulates evidence to decide which response to make on this trial, and another that accumulates evidence towards a threshold for subjective report. Usually the two accumulators are well aligned, but occasionally they come apart, leading to interesting cases in which correct responses are made in the absence of conscious awareness.
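
Here is a toy version of that two-route idea (my own simplification for illustration, not Dehaene and Del Cul’s actual model): the same noisy evidence is read against two thresholds, a lower one that triggers the motor response and a higher one required for subjective report. Trials that cross the first bound but not the second yield correct responses without awareness.

```python
import numpy as np

rng = np.random.default_rng(3)

def trial(drift=0.02, noise=0.2, resp_bound=1.0, report_bound=2.0,
          n_steps=200):
    """Returns (correct response?, subjectively aware?) for one trial."""
    total = np.cumsum(drift + noise * rng.normal(size=n_steps))
    crossed = np.abs(total) >= resp_bound
    if not crossed.any():
        return False, False                    # no response made
    correct = total[crossed.argmax()] > 0      # sign at first bound crossing
    aware = (np.abs(total) >= report_bound).any()
    return correct, aware

results = [trial() for _ in range(2000)]
print("P(correct, unaware) =",
      np.mean([c and not a for c, a in results]))
```

With these illustrative settings, a sizeable fraction of trials end with a correct response but no “report”, the qualitative signature of the dissociations described above.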

One possibility, then, is that some computations are more aligned to our subjective experience of understanding than others. Why this should be the case is unclear at present. Like novice librarians in the Chinese room, we are surrounded by information, but are only just beginning to get to grips with its complexities.