In defence of cognitive neuroscience

“If the imperialist ambitions of Neuromania and Darwinitis were fully realized, they would swallow the image of humanity in the science of biology”. So begins the penultimate chapter of Raymond Tallis’ opus on the sinister forces of the mind-sciences, Aping Mankind. In a thoroughly enjoyable book, he identifies two: Neuromania, the addition of a neuro- prefix to just about every humanities discipline you can think of (art, literature, law), and Darwinitis, the reduction of human flourishing to evolutionary primitives that seethe under the surface of the psyche.

For a neuroscientist, reading Tallis is humbling. He emphasises just how far we still have to go in order to understand even the most mundane aspects of the human mind. Consider his account of taking a catch in a cricket match, an action you might think of as “automatic”:

So surely you did not catch the ball; your body did, and you were just a fortunate bystander who took the credit.

No one really thinks this, and for good reasons. First, in order to catch the ball, you had to participate in a game of cricket. This requires that you should have (voluntarily) turned up to a particular place on a particular day, that you understood and assented to the rules of cricket and that you understood the role of the fielder, in particular that of the slip fielder. More importantly, in order to make the catch, you would have had to practise. This means hours spent in the nets, preparing yourself for this moment, which would bring such glory upon you. You would have to order your affairs so that you would be able to go to the nets at the booked time: negotiating the traffic; making sure your day was clear so you could take up your booked slot; and so on.

In the same vein, Tallis grumpily dismisses headline-grabbing studies claiming to explain the enjoyment of art or Shakespeare simply by looking at a brain scan. In doing so, he reaffirms the mystery of the human condition. He has no time for sloppy thinking, complaining when “things which belong to common-sense are presented as truths uncovered by the biological sciences”. His writing is laced with the rare authority of a scholar steeped in both the sciences and the humanities.

Which makes it all the more surprising when, in dismantling a neuroscience of the humanities, the book slips into attacking the neuroscience of, well, the mind. The culmination of this attack is a section entitled “Why there can never be a brain science of consciousness: the disappearance of appearance”. Unfortunately, this conclusion rests on false premises. Let us see if we can set a few things straight.

First, there is Tallis’ claim that neuroscientists take psychological functions to be equivalent to locations, or patterns, of brain activity. Locations and patterns are aspects of brain activity that might be picked up by scans or neural recordings, but they do not constitute an account of function. Instead, we need to understand what a brain region is doing, how its function affects a broader network of activity, and how, ultimately, this network affects the organism’s behaviour. A brain scan might provide a glimpse of these functional dynamics, but it is not the final story.

Instead, it is a model of the underlying brain-behaviour link that matters, rather than any particular location or pattern of activity. Models are more than ideas: as Lewandowsky & Farrell write, “Even intuitively attractive notions may fail to provide the desired explanation for behavior once subjected to the rigorous analysis required by a computational model”. For example, one can build a toy network of two populations of neurons “deciding” between two options in response to different inputs, and ask whether there are correlates of this process in the living brain. The location of such activity does not explain the ability to decide; instead, it is the dynamics (and, ultimately, the link between these dynamics and other brain circuits involved in perception and action) that can give us greater insight into what it means to make a decision. A good model accommodates the bumps and curves of individual datasets, both behavioural and neural, and provides a set of hypotheses that can be refined through further study. Discussion of models is sparse in Aping Mankind, perhaps because they are less easy to lambast than neuromanic studies of love and wisdom.
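To make this concrete, here is a minimal sketch of the kind of toy network I have in mind: two populations that accumulate evidence for rival options while inhibiting one another, with a decision declared when either crosses a threshold. The dynamics and parameter values are illustrative choices of mine, not a model from Aping Mankind or any particular paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def decide(input_a, input_b, leak=0.2, inhibition=0.3,
           noise=0.1, threshold=1.0, dt=0.01, max_t=10.0):
    """Race between two noisy, leaky, mutually inhibiting populations.
    Returns the winning option and the time taken to decide."""
    a = b = 0.0
    for step in range(int(max_t / dt)):
        da = (input_a - leak * a - inhibition * b) * dt \
             + noise * np.sqrt(dt) * rng.standard_normal()
        db = (input_b - leak * b - inhibition * a) * dt \
             + noise * np.sqrt(dt) * rng.standard_normal()
        a, b = max(a + da, 0.0), max(b + db, 0.0)  # firing rates stay non-negative
        if a > threshold or b > threshold:
            return ("A" if a > b else "B"), (step + 1) * dt
    return None, max_t  # no commitment within the time limit

# Stronger input to A should usually, but not always, produce choice "A":
# the noise means the same stimulus need not yield the same decision every time.
choices = [decide(0.9, 0.7)[0] for _ in range(200)]
print("P(choose A) ≈", choices.count("A") / len(choices))
```

Note that the explanatory work here is done by exactly what the text emphasises: the dynamics – leak, inhibition, noise, threshold – and not by where the two populations happen to sit in the brain.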

But these oversights pale in comparison to the ultimate straw-man complaint that “neuroscience does not address, even less answer, the fundamental question of the relation(s) between matter and mind, body and mind, or brain and mind”. This is the famous “hard problem” of consciousness: how does subjectivity arise out of a lump of biological material? As Tallis writes, “Consciousness is, at the basic level, appearances or appearings-to, but neither nerve impulses nor the material world have appearances.” Quite so, and nor should we expect them to. And if they did, this would only raise the question: appearances to whom? As a squarely metaphysical, rather than empirical, discussion, it is no surprise that the hard problem is not addressed by neuroscience.

Having made this conflation, Tallis is often caught in the headlights of the hard problem. He references the richness of subjective experience to parody the “laughable crudity” of cognitive neuroscience. In fact, most of the ongoing and vibrant science of consciousness adopts “bridging principles”, experimental paradigms that mediate between behaviour and subjective experience. In recent years, ingenious bridging principles have been developed to investigate the neural systems underpinning perceptual awareness, the sense of agency, and the shifting nature of one’s body image. Satisfying first-person intuitions is not the primary criterion of this research programme.

Of course, Tallis is right to point out that these are baby steps. His example of catching a ball highlights aspects of the mind that neuroscience has only just begun to explore: a wilderness of the prefrontal cortex that is yet to be staked out. As we explore it, we would do well to keep in mind Tallis’ deconstruction of the complexity of action. But initial dissatisfaction is no reason to down tools.

Signal detection, thresholds and consciousness

Towards the end of the Second World War, engineers in the US Air Force were asked to improve the sensitivity of radar detectors. The team drafted a working paper combining new mathematics and statistics – the scattering of the target, the power of the pulse, and so on. They had no way of knowing it at the time, but the theory they were sketching – signal detection theory, or SDT – would become one of the most influential and durable in modern psychology. By the 1960s, psychologists had become interested in applying the engineers’ theory to human detection – in effect, treating each person like a miniature radar detector, and applying exactly the same equations to understand their performance. Fast-forward to today, and SDT is not done yet. In fact, it is beginning to break new ground in the study of human consciousness.

To understand why, we first need to cover a little of the theory. Despite the grand name, SDT is surprisingly simple. Applied to psychology, it tells us that detection of things in the outside world is noisy. Imagine the following scenario. You sit down in one of our darkened testing rooms, and I ask you to carry out a relatively boring task (any reader who has participated in one of our experiments will be on familiar ground here). Each time you see a faint spot of light on the computer monitor, you press the “yes” key; if you see no light, you press the “no” key. If the task is made difficult enough, then sometimes you will say “yes” when there is no light present. In radar-detector speak this is a “false alarm”. You will also sometimes say “no” when the signal was actually there – a “miss”. Why does this happen?

Consider that on each “trial” of our experiment, the faint flash of light leads to firing of neurons in the visual cortex, a region of the brain dedicated to seeing. Because the eye and the brain form a noisy system, the firing is not exactly the same for each repetition of the stimulus: the level of activity on any given trial is drawn from a probability distribution. When the stimulus is actually present, the cortex tends to fire more than when it is absent (this is summarised by the shifted “signal” probability distribution over firing rates, X, in the figure below). But on some trials on which the stimulus was absent there will also be a high firing rate, due to random noise in the system (corresponding to the dark grey area in the figure). The crucial point is this: you only have access to the outside world via the firing of your visual cortex. If the signal in the cortex is high, it will seem as though the light flashed, even if it was absent. Your brain has no way of knowing otherwise. You say “yes” even though nothing was there.

[Figure: overlapping “noise” and “signal” probability distributions over firing rate (X), separated by a decision criterion, with false alarms shaded in dark grey; alongside the resulting performance (ROC) curves – a graded red curve and a discrete-state blue curve.]
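For readers who like nuts and bolts, here is a small simulation of this picture. It assumes the standard equal-variance Gaussian form of SDT; the sensitivity (d′) and criterion values are arbitrary choices for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
d_prime, criterion, n_trials = 1.0, 0.5, 100_000

# Each trial: the internal "firing" is a noisy sample; respond "yes" if it
# exceeds the criterion. Noise trials ~ N(0, 1); signal trials ~ N(d', 1).
stimulus = rng.integers(0, 2, n_trials).astype(bool)         # light on half the trials
firing = rng.standard_normal(n_trials) + d_prime * stimulus  # noisy internal evidence
say_yes = firing > criterion

hit_rate = say_yes[stimulus].mean()           # "yes" when the light was present
false_alarm_rate = say_yes[~stimulus].mean()  # "yes" when it was absent
print(f"hits: {hit_rate:.3f} (theory {norm.sf(criterion - d_prime):.3f})")
print(f"false alarms: {false_alarm_rate:.3f} (theory {norm.sf(criterion):.3f})")
```

Even though the light genuinely appeared on only half the trials, the overlap of the two distributions guarantees a steady trickle of false alarms and misses.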

The other insight provided by SDT is that how many false alarms you make is partly up to you. If you decide to be cautious, and only say “yes” when you are really confident, then the weaker signals in cortex won’t pass the threshold, and false alarms will be reduced. The catch is that the number of “hits” you make will be reduced too. In fact, the cornerstone of SDT is that the visual system has a constant sensitivity (d′, in SDT jargon): for a fixed sensitivity, any increase in hit rate is accompanied by an increase in false alarms, tracing out the performance curve above. Perception is never perfect.
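The trade-off is easy to see by sweeping the criterion while holding sensitivity fixed – each criterion picks out one point on the performance curve. A sketch, under the same equal-variance Gaussian assumptions as above:

```python
from scipy.stats import norm

d_prime = 1.0  # fixed sensitivity: the curve itself never changes

# From liberal to conservative: raising the criterion trims false alarms,
# but only at the cost of hits. Each line is one point on the ROC curve.
for criterion in (-1.0, 0.0, 0.5, 1.0, 2.0):
    false_alarms = norm.sf(criterion)    # P("yes" | no light)
    hits = norm.sf(criterion - d_prime)  # P("yes" | light)
    print(f"criterion {criterion:+.1f}: hits {hits:.2f}, false alarms {false_alarms:.2f}")
```

No setting of the criterion buys you extra hits for free; you simply slide along the curve.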

When I was learning about this stuff as an undergraduate, the SDT curve confused me. It never seemed to me that perception was noisy and graded. I don’t glance at my coffee cup and occasionally mistake it for a laptop. Instead, the coffee cup is either there, or it’s not. There doesn’t seem to be any graded, noisy firing in consciousness.

Yet, in countless experiments, the SDT curve provides a near-perfect fit to the data. This is a paradox that I think is central to our understanding of consciousness. And a recent paper from Mariam Aly and Andy Yonelinas at the University of California, Davis, has begun to develop a solution. They summarize the paradox thus:

“These examples [such as the coffee cup] suggest that some conscious experiences are discrete, and either occur or fail to occur. Yet, a dominant view of cognition is that the appearance of discrete mental states is an epiphenomenon, and cognition in reality varies in a completely continuous manner, such that some memories are simply stronger than others, or some perceptual differences just more noticeable than others.”

Aly and Yonelinas propose a reconciliation of these points of view. Their experiments hinge on measuring SDT curves in different conditions, and across different thresholds (defined as different confidence levels). In the noisy, graded model, there should never be a point at which it is possible to increase hits without also increasing false alarms (the red curve above). However, a hunch that there is a particular “state” of viewing the coffee cup that is never accompanied by mistakes would correspond to discrete boxes at either end of the SDT distributions. Adding these boxes instead predicts the blue curve (above). Aly and Yonelinas found that for simple stimuli, such as flashes of light, the red curve was a good fit to the data, indicating a graded, noisy process that differed only in strength. But for complex stimuli, such as deciding whether two photographs were the same or different, the SDT curve indeed showed a discrete “state” effect (below). You either saw it, or you didn’t.
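One common way to write this down – a sketch of the state-plus-strength idea rather than the paper’s exact parameterisation, and with a discrete state at only one end for simplicity – is to say that with some probability R the observer lands in a definite “seen” state, and otherwise falls back on the graded Gaussian process. The state shows up as a y-intercept in the curve:

```python
import numpy as np
from scipy.stats import norm

d_prime, R = 1.0, 0.3  # illustrative values: R = P(entering the discrete state)

fa = np.linspace(0.001, 0.999, 9)  # false alarm rates along the curve

graded = norm.cdf(d_prime + norm.ppf(fa))   # red curve: strength only, smoothly bowed
state_plus_strength = R + (1 - R) * graded  # blue curve: jumps to R at FA = 0

for f, red, blue in zip(fa, graded, state_plus_strength):
    print(f"FA {f:.2f} -> graded hit {red:.2f}, state+strength hit {blue:.2f}")
```

The giveaway is at the left-hand end: the graded model is forced through the origin, whereas the state model predicts a chunk of hits that are never accompanied by false alarms.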

This may explain why the graded curve has come to dominate the literature: most previous SDT experiments used simple stimuli. Yet for the more complex objects we are used to seeing in everyday life, our intuitions are usually correct – there really is a discrete state of seeing the coffee cup. Could these discrete states be what we associate with consciousness?

To test this hypothesis, Aly and Yonelinas asked subjects to say whether their judgment on each trial was due to a conscious, perceived difference, or an unconscious feeling of knowing. They then extracted parameters describing how curvy or discrete the SDT curves were. Conscious perception was associated with a stronger estimate of the discrete state process, while unconscious knowing was associated with a more curvy, or graded, SDT curve. A separate experiment showed that the discrete change in perception occurs at an abrupt point in time, whereas unconscious knowing emerges only gradually.
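In practice, “how curvy or discrete” comes down to estimating the parameters of such a model from the confidence-based (false alarm, hit) points. Here is a toy version of that step; the data points are made up purely for illustration, and this is a generic least-squares fit rather than the estimation procedure used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def state_strength_roc(fa, d_prime, R):
    """Predicted hit rate at a given false alarm rate."""
    return R + (1 - R) * norm.cdf(d_prime + norm.ppf(fa))

# Hypothetical ROC points, one per confidence threshold.
fa_obs = np.array([0.05, 0.15, 0.30, 0.50, 0.75])
hit_obs = np.array([0.42, 0.55, 0.68, 0.80, 0.91])

(d_hat, R_hat), _ = curve_fit(state_strength_roc, fa_obs, hit_obs,
                              p0=[1.0, 0.2], bounds=([0.0, 0.0], [5.0, 1.0]))
print(f"estimated d' = {d_hat:.2f}, state parameter R = {R_hat:.2f}")
```

A large estimate of R relative to d′ says the data look discrete; an R near zero says they look graded.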

The paper is a tour de force, and well worth reading for other findings I don’t have space to cover here. Suffice it to say that the “discreteness” of an SDT curve might provide us with a powerful tool for understanding how the brain gives rise to consciousness, one that relies on statistical models that are relatively immune to subjective biases. It also paves the way for computational modelling aimed at understanding why graded and discrete processes arise.

But there is another, deeper insight from the paper that I want to conclude with. SDT can also be applied to memory: instead of detecting visual signals from the outside world, think of detecting a memory signal emanating from somewhere else in the system. Yonelinas was one of the first to quantify the discrete/graded distinction in memory (known as “recollection” and “familiarity”). By applying their state/strength model to a standard long-term recognition task, he and Aly found that discrete states were more common when recognising that a previous scene had been seen before (the black curve below). But by subtly altering the long-term memory task to focus on the detection of changes, they found something striking. Here are the SDT curves for the two tasks:

[Figure: SDT (ROC) curves for the recognition task and the change-detection task, with discrete-state intercepts on opposite axes.]
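As a rough illustration of the asymmetry – my own mirrored variant of the model sketched above, not the paper’s exact formulation – a state that delivers definitive “old” detections adds a y-intercept, whereas a state that delivers definitive “same” judgments caps the false alarm rate instead, so the discrete component appears at the opposite end of the curve:

```python
import numpy as np
from scipy.stats import norm

d_prime, R = 1.0, 0.3  # illustrative values, as before
z = norm.ppf(np.linspace(0.001, 0.999, 9))  # sweep of underlying criteria

# Recognition task: recollection supports detecting OLDness, adding a
# definite chunk of hits -> a y-intercept at (0, R).
hit_recognition = R + (1 - R) * norm.cdf(z + d_prime)
fa_recognition = norm.cdf(z)

# Change-detection task ("did the scene change?"): recollection instead
# supports definitive "same" responses, capping false alarms at 1 - R ->
# the discrete component sits at the opposite end of the curve.
hit_change = norm.cdf(z + d_prime)
fa_change = (1 - R) * norm.cdf(z)

for i in range(len(z)):
    print(f"recognition ({fa_recognition[i]:.2f}, {hit_recognition[i]:.2f})   "
          f"change ({fa_change[i]:.2f}, {hit_change[i]:.2f})")
```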

Both show evidence of the discrete “state” process, bridging two areas of psychology traditionally studied separately. But they do so in opposite directions. Why?

“We propose that the reason is that the detection of similarities and differences tend to play opposite roles in memory and perception. That is, in perceptual tasks, noticing even a small change between two images is sufficient to make a definitive “different” response… [In contrast], in recognition memory tasks, one expects the state of recollection to support the detection of oldness (i.e. a y-intercept) rather than the detection of newness.”

In other words, consciousness in perception and memory might both rely on discrete states. And both appear to share a common architecture optimized for different types of detection. The difference, then, is that memory might be optimized for detecting matches with our past, whereas perception seems concerned with detecting mismatches with our future.

Aly M, Yonelinas AP (2012) Bridging Consciousness and Cognition in Memory and Perception: Evidence for Both State and Strength Processes. PLoS ONE 7(1): e30231. doi:10.1371/journal.pone.0030231

Overflowing cats

The world around you, I presume, appears rich and detailed. From my desk in my apartment this evening, I can pick out the changing shades of white as the lamps play off the walls, the dusty green of a rubber plant, and the deep blue of a painting on the wall opposite, only half visible due to the reflection of one of the lamps in its glass frame. But there appears to be far more in this scene than I can talk about. My private inner world – what Ned Block dubbed “phenomenal” consciousness – is poorly served by our ability to access and describe it.

This intuition has received support in the lab. Experiments show that subjects can report only a subset of several objects briefly flashed on a screen. This in itself is not surprising; our ability to retain information in mind is limited. Strikingly, however, subjects can accurately identify any single object in the array if probed immediately after the set of objects has disappeared. This result shows that each individual object is ready to be accessed: phenomenal consciousness tends to “overflow” what we can report.

Or does it? An alternative standpoint (epistemic correlationism) says that every piece of data on consciousness requires a “correlating” report – or how else are we to really know what the subject is doing, seeing, or thinking? In other words, we cannot claim for sure that there is phenomenology without access, as even in the flashing-object experiments, we are relying on our intuition that the subjects are conscious of the whole array of objects at some point.

A lively debate has recently ensued on this issue, fuelled by a pair of original articles in Trends in Cognitive Sciences (TICS). In one of these articles, Cohen & Dennett propose that conscious experience (phenomenal consciousness) cannot be investigated separately from mechanisms of access. They dismiss the view that particular brain states can support phenomenal consciousness – recurrency in neural circuits, for example – without being broadcast to other brain areas, and therefore without being reported. I tend to agree on this point (although their tactic of putting words in the mouths of their opponents was less than graceful). To argue that a particular pattern of brain activity can index phenomenal consciousness in the face of subjective reports to the contrary seems absurd. Such logic rests on the “pre-empirical” notion that phenomenology can exist without some sort of access – if we assume for a moment that it cannot, then the fact that a brain state is not cognitively accessed would preclude it from also being phenomenally conscious. (One promising resolution to this debate is the proposal by Sid Kouider and colleagues that different levels of access may account for different types of experience.)

It is rather like Schrödinger’s cat – the cat might be dead, or it might be alive, but to know you have to open the box. As with the cat, so with epistemic correlationism in the neurosciences – the only way to know is to open the box and ask the subject. Imagine that we could perform a scan on the box that would give us some image or pattern to look at. If we first assume the cat is alive, we might be able to say, yes, this assumption corresponds nicely with the fact that the image has such-and-such properties. But if the cat is dead when we subsequently open the box, do we have enough evidence to say that the cat was alive at the time we took the image? The inference that the cat was alive rests largely on the a priori assumption that cats are indeed alive in these types of scenario. In the same way, the inference that a particular brain state is phenomenally conscious rests on the a priori assumption that the subject is having a phenomenally conscious experience at the time it is recorded.

That is not to say that prior beliefs about how the system works are inherently bad. They are the starting point for models of cognition, and can be tested against each other. But when comparing different models, parsimony is to be preferred. The philosopher Richard Brown has argued on his blog that exactly the same constraints should apply to models of consciousness. The extra complexity of positing phenomenal consciousness without access might be justified if it led to a better account of the data. But for now, I have yet to see a piece of data that is not more parsimoniously accounted for by the assumption that the cat is dead until proven alive.

What’s the point of being conscious?

You stack the final few dishes on the sideboard, ready to be dried and put away. Your mind is elsewhere, perhaps remembering an email that still needs to be sent, or musing over plans for the weekend. At least, that is until your stray elbow silently tips a wine glass off the worktop. You watch in horror, helpless to do anything about it. Then, barely milliseconds into its descent, your hand shoots out of its own accord, saving what would have surely been a shattered mess. You only become aware of the catch when the glass is safely in your hand.

We surprise ourselves when events such as this occur. The miracle catch feels like a reflex, unbound from our inner sense of control. But if we can carry out automatic feats like this, what’s the point in being conscious in the first place? Recent experiments in neuroscience have been pushing the limits of the unconscious in order to find out. The idea is that if we find the unconscious can catch falling glasses but not understand Shakespeare, then perhaps we have found a fault line beyond which consciousness becomes useful.

Using a technique called visual masking, Hakwan Lau and Dick Passingham rendered a set of symbols invisible while leaving them capable of altering behaviour. The trick was to make the invisible symbol an instruction about what to do in an upcoming task. Responding to instructions is usually thought of as requiring conscious planning. However, not only did the symbols affect which task participants were preparing to carry out, they also modulated activity in the dorsolateral prefrontal cortex, a brain region traditionally associated with conscious control.

More recently, neuroscientist Simon van Gaal has been spearheading the work on the limits of the unconscious. In a recent paper reviewing his work, he outlines how visually masked stimuli can engage complex functions ranging from inhibiting one’s response to resolving conflict between competing actions. Surprisingly, he and his colleagues have yet to find anything the unconscious cannot do.

One striking thing about the miracle glass catch is that you can’t explain it to others. “I don’t know how I did that, it just happened…”. In these circumstances, we take no credit for our behaviour, and have no confidence in its causes. Chris Frith has proposed that conscious awareness gives us the ability to share our reasons for acting, and infer that others have similar reasons. Imagine a world in which every action was like the glass catch: we would be constantly surprised at ourselves, and social interactions would become meaningless. Distinguishing whether someone acted voluntarily is crucial for punishment and cooperation.

Perhaps, then, the point of being conscious is so that we can discuss, among other things, the point of being conscious!