I have a new piece in Aeon magazine outlining recent research on indecision and changes of mind. These findings challenge the conventional wisdom that acting on an initial “gut feeling” leads to better decisions.
The human brain is an incredibly complex object. With billions of cells each with thousands of connections, it is difficult to know where to begin. Neuroscientists can probe the brain with electrodes, see inside it with scanners, and observe what happens to people when bits of it are damaged in accidents and disease. But putting all this information together is rather like reconstructing a puzzle without the picture on the box for guidance.
We could take inspiration from the Human Genome Project. The genome is also extremely complex, with billions of building blocks. Despite these challenges, the genome was successfully sequenced by 2003, at a cost of around $3.8 billion. The knowledge generated by the Human Genome Project is estimated to have returned $141 to the economy for every $1 spent on research.
Now the Obama administration plans to do the same for the human brain, on a similarly ambitious scale ($3 billion over ten years). The goal of the “Brain Activity Map” (BAM) is to map the activity of every neuron and connection in the living brain. Because the brain’s activity determines our mental lives, the hope is that a comprehensive roadmap will help us understand how memories are formed, how particular drugs might alleviate psychiatric disorders, and even how the brain generates consciousness. The relevant technologies (multi-electrode recording, optogenetics) are advancing rapidly, and large-scale studies are already providing new insights into how networks of cells interact with each other. A successful Brain Activity Map is well within our grasp.
But what will success look like? Will a map of the human brain be useful in the same way that a map of the human genome is useful? In genetics, success allows us to understand and control physical characteristics. In neuroscience, success should lead to an equivalent understanding of the mind. We would be able to use the map to help reduce aberrant emotions in post-traumatic stress disorder, to lift mood in depression, and to reverse the decline of Alzheimer’s disease. Yet all these applications rely on a thorough understanding of the mind as well as the brain.
The computer scientist David Marr noted that the mind can only be fully understood by linking three levels: the function of the system, the computations the system carries out, and how these computations are implemented in the brain. Recording brain cells firing away on their own, even thousands of them, will only get us so far. Imagine being able to visualize the electronics of your computer while tapping away at an email. The patterns you see might tell you broadly how things are working, but you could not divine that you had a web browser open, and certainly not that you were writing to an old friend. Instead, to gain a full understanding of the computer, you would need to understand the software itself, as well as how it is implemented in hardware. In an article in the journal Neuron, the scientists behind the BAM proposal remind us that brain function emerges “from complex interactions among constituents”. They seem to agree with Marr. But while we don’t know the full details of the proposal, in its current form the majority of BAM funding will be thrown at understanding only one of his three levels: implementation.
Studying one level without the others is rather like building the Large Hadron Collider without also investing in theoretical physics. Psychologists and cognitive scientists are experts at bridging the gap between the workings of the mind and brain. For example, by carefully designing behavioral tests that can probe mental dysfunction, they are beginning to delve beneath the traditional classifications of mental disorders to understand how particular components of the mind go awry. These individuals need to walk hand in hand with the technologists on the frontline of brain science. The new technologies championed by the BAM scientists will produce a rich harvest of data about the brain, and they are a crucial part of a long-term investment in the brain sciences. But without similar investment in the mind sciences we will be left puzzling over how the pieces fit into our everyday lives. Only by considering the mind when tackling the brain will we get more BAM for our buck.
A shorter form of this review might be appearing in The Psychologist at some point, but I thought I’d post the whole thing here so that books on consciousness can fill some stockings this Christmas…
At the beginning of The Ravenous Brain, Daniel Bor reminds us “There is nothing more important to us than our own awareness”. Western society’s focus on brain, rather than cardiac, death as the natural endpoint to a meaningful life is testament to this assertion.
But only 20 years ago, consciousness science was regarded as a fringe endeavour. Now, particularly in the UK, consciousness is going mainstream, spearheaded by the Sackler Centre for Consciousness Science at the University of Sussex, where Bor is based. Of course, in varying degrees, all psychologists study consciousness: attention and working memory are core components of high-level conscious function. But only recently has a deeper question been tackled: how might these functions come together to underpin awareness? Why are humans blessed with a rich, private consciousness that might not be present in other animals? And how should we tackle the all-too-frequent disorders and distortions of consciousness in neuropsychiatric disorders?
With infectious enthusiasm, Bor takes us on a tour of the latest research into how the brain generates consciousness. His scope is broad, ranging from experiments on anaesthesia and subliminal priming, to our sense of self and progress on communicating with patients in a vegetative state. One of the most difficult questions in the field is addressing what consciousness is for. Circular answers often result: if language is usually associated with consciousness, for instance, then maybe consciousness is for producing language. Bor’s answer is that consciousness is for innovation, and dealing with novelty. Again, I am not convinced that this proposal completely slips the bonds of circularity – is innovation possible without awareness? – but it opens up new avenues for future research.
This is an accessible, engaging account from a practitioner who is well aware of the messy reality of science. Bor is that rare combination of working scientist, story-teller and lucid explainer. The Ravenous Brain reads as a dispatch from a foreign country engaged in a revolution – one that is far from over.
Awareness is a private affair. For instance, I can’t say for sure whether the other customers in the coffee shop where I’m sitting are conscious in the same way that I am. Perhaps instead they are zombies, doing a good impression of acting like self-aware human beings.
By talking to each other, we can quickly disregard this possibility. When it comes to animals, however, the jury is out. Is a chimpanzee self-aware? How about a cow? An insect?
This is not idle speculation. These questions matter. Our moral intuitions are based on the assumption that the person we are interacting with is consciously aware. And our legal system is imbued with the notion that consciousness matters. If we were to find that another animal species had a consciousness very similar to that of humans, then it may be remiss of us not to extend the same rights and protections to that species.
Recently, a prominent group of neuroscientists signed a declaration stating that several non-human animal species are conscious. They reasoned that many mammals share brain structures – the thalamus, neocortex – that are involved in consciousness in humans, and display similar behavioral repertoires, such as attentiveness, sleep and capacity for decision-making. Therefore it is more likely than not that they have a similar consciousness.
While this seems intuitive, we need to stop and examine their reasoning. It all comes down to the kind of consciousness we are talking about. No one doubts, for example, that animals have periods of both sleep and wakefulness. What is at issue is whether they are aware in the same way that you and I are aware when we are awake.
Imagine you are in the cinema, engrossed in the latest blockbuster. There’s a good chance (especially if the film is any good) that while you are experiencing the film, you are not aware that you are experiencing the film. “Meta”-awareness is absent. Now imagine that you are condemned to spend the rest of your life without meta-awareness, continuously engrossed in the film of your own life. I’d wager it wouldn’t be much of an existence; as Socrates suggested, the unexamined life is not worth living.
Whether or not animals have this capacity for meta-awareness is unclear. Without the ability to report mental states, it is notoriously difficult to assess. But one particularly promising test involves judgments of control, or “agency”. Consider playing an arcade game after the money has run out – at some point, you realize that rather than steering your digital car through the streets of Monte Carlo, your efforts at the wheel are having no effect whatsoever. This realization – that you are no longer in control – is known as a judgment of agency, and may be intimately linked to meta-awareness.
In a recent study conducted in Kyoto, Japan, researchers asked whether chimpanzees could make judgments of agency. The task was to move a computer cursor to bump into another target displayed on the screen. The twist was that another decoy cursor was also present on the screen, whose movements were replayed from a previous trial. Thus the chimpanzee had control of one of the cursors, but not the other, even though visually they were identical. After the trial ended, the animals were trained to indicate the cursor that they had been controlling. All three chimpanzees correctly indicated this “self” cursor around 75% of the time. As the experimenters note, “Because both the self- and distractor cursor movements were produced by the same individual, the movements were presumably indistinguishable to a third person (and to the experimenters), who passively observed the display.” In other words, the only way to do the task is to monitor internal states, which is a pre-requisite for meta-awareness.
Judgments about another species’ consciousness should not be taken lightly. In particular, we should be careful about what kind of consciousness we are talking about. The kind that matters most from a moral and legal perspective is the capacity to be aware of our actions and intentions. Initial evidence suggests that some animals, particularly the great apes, may have this higher-order reflective capacity. This should give us greater pause for thought than the presence of primary or phenomenal consciousness in lower animals.
*This post was cross-posted from Psychology Today
More than 100 years ago, the great neuroscientist Ramon y Cajal identified a clutch of “diseases of the will” that could derail a young scientist’s career. In today’s computerised world, dominated by smartphones and the Internet, this list deserves an update. In particular, a constant connection to the web has given rise to the Compulsive Emailer*.
The compulsive emailer has a particular rhythm and pattern to his day. Upon waking, he reaches for his smartphone, and, bleary-eyed, checks to see whether anything of import occurred during the course of the night. Science being a rather solitary and gradual endeavour, this is unlikely. Instead, a tinge of disappointment greets the usual automated adverts from journals, and the deluge of mail from obscure technical discussion groups.
Arriving at work, the compulsive emailer checks his smartphone in the elevator, and, after making coffee, sits down at his desk to deal promptly with any urgent missives received during the intervening five minutes. For the rest of the day, metronomic checking is never more than a couple of clicks away. The worst cases may even dedicate a separate screen to their inbox, allowing checking to be done with no more than a glance of the eyes.
Sometimes, in moments of reflection, the compulsive emailer will become frustrated with his lot, and yearn for a job in which email is center-stage, such as a political aide, or a journalist. At least then, his affliction would become useful, rather than be wasted on the continual archiving of dubious invitations to attend far-away conferences.
The curse reaches fever pitch, of course, a few weeks after submission of a paper. Convinced that the letter deciding the paper’s fate will arrive at any second, he hits the refresh button with renewed intensity. Fortunately such occasions are relatively rare, as time spent on any real work is dwarfed in comparison to that spent toiling at the inbox.
The compulsive emailer would do well to restrict e-communications to a particular time of day, perhaps the late afternoon, after time has been given for things “important and unread” to accumulate. He will be pleasantly surprised how quickly email can be dealt with, and dismissed for another day, while relishing the expanses of time that will open up for doing real science.
Retiring for the evening, he makes a few final checks of the smartphone, explaining to any company present that he is expecting to receive some new data from a research assistant. What he is going to do with those files at midnight on a Sunday is anyone’s guess, but the implication is that they are really rather important.
*The author, being a Compulsive Emailer, is well-qualified to describe this condition.
As fits this particular canvas, the article is painted with a broad brush, and there wasn’t room to go into detail about any particular study. But for those who are interested in the details, I’ve included below some links to the original sources along with the relevant quotes from the Aeon piece.
To quote a recent article by the psychologist Jonathan Schooler and colleagues ‘we are often startled by the discovery that our minds have wandered away from the situation at hand’.
…the Court of Appeal opined that the defence of automatism should not have been on the table in the first place, due to a driver without ‘awareness’ retaining some control of the car.
In the early 1970s, Lawrence Weiskrantz and Elizabeth Warrington discovered a remarkable patient… studies on similar patients with ‘blindsight’ confirmed that these responses relied on a neural pathway quite separate from the one that usually passes through the occipital lobes
…a key difference between conscious and unconscious vision is activity in the prefrontal cortex…
Other research implies that consciousness emerges when there is the right balance of connectivity between brain regions – an idea known as the ‘information integration’ theory.
…anesthesia may induce unconsciousness by disrupting the communication between brain regions.
A series of innovative experiments have begun to systematically investigate mind-wandering… Under the influence of alcohol, people become more likely to daydream and less likely to catch themselves doing so.
“If the imperialist ambitions of Neuromania and Darwinitis were fully realized, they would swallow the image of humanity in the science of biology”. So begins the penultimate chapter of Raymond Tallis’ opus on the sinister forces of the mind-sciences, Aping Mankind. In a thoroughly enjoyable book, he identifies two: Neuromania, the addition of a neuro- prefix to just about every humanities discipline you can think of (art, literature, law), and Darwinitis, the reduction of human flourishing to evolutionary primitives that seethe under the surface of the psyche.
For a neuroscientist, reading Tallis is humbling. He emphasises just how far we still have to go in order to understand even the most mundane aspects of the human mind. Consider his account of taking a catch in a cricket match, an action you might think of as “automatic”:
So surely you did not catch the ball; your body did, and you were just a fortunate bystander who took the credit.
No one really thinks this, and for good reasons. First, in order to catch the ball, you had to participate in a game of cricket. This requires that you should have (voluntarily) turned up to a particular place on a particular day, that you understood and assented to the rules of cricket and that you understood the role of the fielder, in particular that of the slip fielder. More importantly, in order to make the catch, you would have had to practise. This means hours spent in the nets, preparing yourself for this moment, which would bring such glory upon you. You would have to order your affairs so that you would be able to go to the nets at the booked time: negotiating the traffic; making sure your day was clear so you could take up your booked slot; and so on.
In the same vein, Tallis grumpily dismisses hype-grabbing studies claiming to explain the enjoyment of art or Shakespeare simply by looking at a brain scan. In doing so, he reaffirms the mystery of the human condition. He has no time for sloppy thinking, complaining when “things which belong to common-sense are presented as truths uncovered by the biological sciences”. His writing is laced with the rare authority of a scholar steeped in both the sciences and the humanities.
Which makes it all the more surprising when, in dismantling a neuroscience of the humanities, the book slips into attacking a neuroscience of, well, neuroscience. The culmination of this attack is a section entitled “Why there can never be a brain science of consciousness: the disappearance of appearance”. Unfortunately, this conclusion is based on false premises. Let us see if we can set a few things straight.
First, there is the claim that psychological functions are equivalent to locations, or patterns, of brain activity. Locations and patterns are aspects of brain activity that might be picked up by scans or neural recordings, but they do not constitute an account of function. Instead, we need to understand what a brain region is doing, how its function affects a broader network of activity, and how, ultimately, this network affects the organism’s behaviour. A brain scan might provide a glimpse of these functional dynamics, but it is not the final story.
Instead, it is a model of the underlying brain-behaviour link that matters, rather than any particular location or pattern of activity. Models are more than ideas: as Lewandowsky & Farrell write, “Even intuitively attractive notions may fail to provide the desired explanation for behavior once subjected to the rigorous analysis required by a computational model”. For example, one can build a toy network of two populations of neurons “deciding” between two options in response to different inputs, and ask whether there are correlates of this process in the living brain. The location of such activity does not explain the ability to decide; instead, it is the dynamics (and, ultimately, the link between these dynamics and other brain circuits involved in perception and action) that can give us greater insight into what it means to make a decision. A good model accommodates the bumps and curves of individual datasets, both behavioural and neural, and provides a set of hypotheses that can be refined through further study. Discussion of models is sparse in Aping Mankind, perhaps because they are less easy to lambast than neuromanic studies of love and wisdom.
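The kind of toy network described above can be sketched in a few lines: two mutually inhibiting populations “deciding” between options by accumulating noisy evidence to a threshold. This is a minimal illustration, not a model from any particular study; all parameters (inputs, threshold, inhibition strength, noise level) are made up for the example.

```python
import random

def decide(input_a, input_b, threshold=50.0, inhibition=0.2, noise=1.0, seed=0):
    """Race between two noisy, mutually inhibiting accumulators.
    Returns the winning option and the number of steps to decision."""
    rng = random.Random(seed)
    a = b = 0.0
    for step in range(10_000):
        # Each population integrates its input, is suppressed by its rival,
        # and receives Gaussian noise; firing rates cannot go below zero
        a_new = max(0.0, a + input_a - inhibition * b + rng.gauss(0, noise))
        b_new = max(0.0, b + input_b - inhibition * a + rng.gauss(0, noise))
        a, b = a_new, b_new
        if a >= threshold:
            return "A", step
        if b >= threshold:
            return "B", step
    return "undecided", 10_000

# Population A receives the stronger input, so it usually wins the race
choice, steps = decide(input_a=1.0, input_b=0.5)
```

The point is the one made in the text: what explains the decision is the dynamics – the race between accumulators and its link to inputs and outputs – not where such activity happens to be located.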
But these oversights pale in comparison to the ultimate straw-man complaint that: “neuroscience does not address, even less answer, the fundamental question of the relation(s) between matter and mind, body and mind, or brain and mind”. This is the famous “hard problem” of consciousness: how does subjectivity arise out of a lump of biological material? As Tallis writes, “Consciousness is, at the basic level, appearances or appearings-to, but neither nerve impulses nor the material world have appearances.” Quite so, and nor should we expect them to. And if they did, this would only beg the question: appearances to whom? As a squarely metaphysical, rather than empirical, discussion, it is no surprise that it is not addressed by neuroscience.
Having made this conflation, Tallis is often caught in the headlights of the hard problem. He references the richness of subjective experience to parody the “laughable crudity” of cognitive neuroscience. In fact, most of the ongoing and vibrant science of consciousness adopts “bridging principles”, experimental paradigms that mediate between behaviour and subjective experience. In recent years there have been ingenious bridging principles developed to investigate neural systems underpinning perceptual awareness, the sense of agency, and the shifting nature of one’s body image. First-person satisfaction is not the primary criterion of this research program.
Of course, Tallis is right to point out that these are baby steps. His example of catching a ball highlights aspects of the mind that neuroscience has only just begun to explore; a wilderness of the prefrontal cortex that is yet to be staked out. In doing so, we would do well to keep in mind Tallis’ deconstruction of the complexity of action. But initial dissatisfaction is no reason to down tools.
Next year the psychiatrist’s bible, the Diagnostic and Statistical Manual (DSM), will be revised into its 5th edition. It may have a dull moniker, but the DSM is more than a medical handbook. Central to its construction are the prevailing winds of societal attitudes – consider that as late as 1973, homosexuality was still labeled a mental disorder.
These influences are illustrated by recent calls that obesity should be included in the DSM. Proponents of this position point to evidence that obesity research, both in animal models and in humans, implicates neural pathways involved in drug addiction. Drug addiction is in the DSM, so why not food addiction?
The “disease model” of drug addiction is that drugs hijack the midbrain dopamine system. Different drugs get to dopamine via diverse molecular pathways, which probably underlie their distinctive subjective properties. But all studied drugs of abuse lead to dopamine increases in the ventral striatum, one of the principal targets of the midbrain system.
Our current understanding of this brain circuit is that it is exquisitely sensitive to “prediction errors”. Imagine that you start visiting a new coffee shop on the way to work. On your first visit, you have neutral expectations of how the coffee will taste – it might be good, it might be bad. On that first morning, it tastes good, leading to a positive “error” signal with which to update your expectations. On the next morning, it’s still good, but that’s what you expected, hence no error signal.
A large body of research has linked the midbrain dopamine system to calculation of these prediction errors. When you taste the coffee for the first time, there’s a positive prediction error, and a dopamine spike.
Now, imagine that on the second morning (when there should be no change to your expectations) an evil (or friendly, depending on your outlook) scientist stimulates your midbrain dopamine system just at the moment you take that first sip of coffee. It will seem to you as though the coffee tastes better than it should have done. Various drugs, from cocaine to nicotine, may act in this way, hijacking the natural dopamine response and leading to a series of positive prediction errors, even when there’s nothing to be learnt. Over time, this may result in long-term plasticity in the striatum and other targets of the dopamine system, sometimes leading to addiction.
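The learning story above can be sketched as a simple prediction-error update rule (of the kind formalised in Rescorla-Wagner and temporal-difference models); the numbers here are purely illustrative.

```python
def update_expectation(expected, received, learning_rate=0.5):
    """Nudge an expectation towards the outcome by a fraction of the
    prediction error (the dopamine-like teaching signal)."""
    prediction_error = received - expected
    return expected + learning_rate * prediction_error

# First visit: neutral expectation (0.0), the coffee tastes good (1.0)
expectation = update_expectation(0.0, 1.0)          # big positive error
expectation = update_expectation(expectation, 1.0)  # smaller error: learning slows
# A drug-like hijack injects extra "reward" signal even though nothing new
# happened, producing a spurious positive error and an inflated expectation
hijacked = update_expectation(expectation, 1.0 + 0.5)
```

As expectations catch up with outcomes, the error shrinks towards zero – unless something keeps injecting spurious positive errors into the system.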
That, in an over-simplified nutshell, is the dopamine hypothesis of drug addiction, one that receives increasing support from neurobiological studies. Its application to obesity relies on one further logical step. Drugs hijack natural reward circuitry, so why shouldn’t potent natural rewards do the same? As Marc Lewis puts it in his recent book Memoirs of an Addicted Brain:
Whether in service of food or heroin, love or gambling, dopamine forms a rut, a line of footprints in the neural flesh.
There is some truth to this well-turned phrase. Sugar rewards given to rats result in dopamine spikes similar to those seen in response to drugs, and cocaine and methamphetamine can suppress appetite by interfering with the reward system. There is evidence of dopamine receptor changes in obese individuals similar to those seen in drug addicts, and such changes correlate with body mass index.
But evidence of food addiction contributing to obesity does not imply that obesity is equivalent to food addiction. Obesity is the consequence, not the cause, of an unhealthy balance between eating and exercise. Brain mechanisms involved in the processing of reward and satiety will no doubt be involved in setting this balance, and may be more involved in some cases than others (such as Binge Eating Disorder). But so will a wide range of social and economic pressures, as indicated by the graphic below. In 2009, 36% of adults in the United States were obese, and this number is rising. If obesity were to make its way into the DSM, a huge swathe of the population would be handed a psychiatric disorder overnight.
Fortunately, wisdom is prevailing, and this is unlikely to happen (binge eating, on the other hand, will probably be added). There is an urgent need to understand the biological and social factors contributing to both obesity and addiction. As biological understanding increases, many debilitating conditions that were traditionally thought to derive from individual failure or lack of willpower (such as drug addiction) become re-interpreted as neural hijacks. The same may be happening to obesity. This is generally a good thing: stigma is reduced, and options for treatment and therapy are increased.
But there is also a danger of fatalism, both on a societal and individual level: we currently interpret new research through the lens of a see-saw worldview: as biology dominates, personal responsibility recedes. Instead, for modern neuroscience to have a useful impact on society, it must construct explanations for complicated conditions that allow for biology and responsibility to co-exist. Redefining a social and biological phenomenon as a “brain disease” is not the way to do this. After all, absent a ghost in the machine, the owner does not melt away when the brain comes into focus.
I have just returned from UPenn where I spent some time as a Fellow in neuroethics. This blog post is adapted from a talk I gave during that time.
Towards the end of the 2nd World War, engineers in the US Air Force were asked to improve the sensitivity of radar detectors. The team drafted a working paper combining some new math and statistics – the scattering of the target, the power of the pulse, etc. They had no way of knowing at the time, but the theory they were sketching – signal detection theory, or SDT – would become one of the most influential and durable theories in modern psychology. By the 1960s, psychologists had become interested in applying the engineers’ theory to understand human detection – in effect, treating each person like a mini radar detector, and applying exactly the same equations to understand their performance. Fast-forward to today, and SDT is not done yet. In fact, it is beginning to break new ground in the study of human consciousness.
To understand why, we need to first cover a little of the theory. Despite the grand name, SDT is surprisingly simple. When applied to psychology, it tells us that detection of things in the outside world is noisy. Imagine the following scenario. You sit down in one of our darkened testing rooms, and I ask you to carry out a relatively boring task (any reader who has participated in one of our experiments will be on familiar ground here). Each time you see a faint spot of light on the computer monitor, you should press the “yes” key. If there is no light, you should press the “no” key. If the task is made difficult enough, then sometimes you will say “yes” when there is no light present. In radar-detector speak this is equivalent to a “false alarm”. You will also sometimes say “no” when the signal was actually there – a “miss”. Why does this happen?
Consider that on each “trial” of our experiment, the faint flash of light leads to firing of neurons in the visual cortex, a region of the brain dedicated to seeing. Because the eye and the brain form a noisy system – the firing is not exactly the same for each repetition of the stimulus – the level of activity evoked on any given trial is probabilistic. When the stimulus is actually present, the cortex tends to fire more than when it is actually absent (this is summarised by the shifted “signal” probability distribution over firing rates, X, below). But on some trials on which the stimulus was absent there will also be a high firing rate, due to random noise in the system (corresponding to the dark grey area in the figure). The crucial point is this: you only have access to the outside world via the firing of your visual cortex. If the signal in the cortex is high, it will seem as though the light flashed, even if it was absent. Your brain has no way of knowing otherwise. You say “yes” even though nothing was there.
The other insight provided by SDT is that how many false alarms you make is partly up to you. If you decide to be cautious, and only say “yes” when you are really confident, then the weaker signals in cortex won’t pass the threshold, and false alarms will be reduced. The catch is that the number of “hits” you make will be reduced too. In fact, the cornerstone of SDT is that the visual system has a constant sensitivity, meaning that any increase in hit rate is also accompanied by an increase in false alarms, as shown by the performance curve above. Perception is never perfect.
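These two ideas – a noisy internal signal and an adjustable decision criterion – can be captured in a few lines. The sketch below uses the standard equal-variance Gaussian form of SDT; the sensitivity (d′) and criterion values are illustrative.

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def hit_and_false_alarm(d_prime, criterion):
    """Equal-variance Gaussian SDT: noise ~ N(0, 1), signal ~ N(d', 1).
    The observer says "yes" whenever the internal signal exceeds the criterion."""
    hit_rate = 1.0 - phi(criterion - d_prime)
    false_alarm_rate = 1.0 - phi(criterion)
    return hit_rate, false_alarm_rate

lax_hit, lax_fa = hit_and_false_alarm(d_prime=1.0, criterion=0.0)
strict_hit, strict_fa = hit_and_false_alarm(d_prime=1.0, criterion=1.0)
```

Raising the criterion cuts false alarms, but the hit rate falls with them: sensitivity stays fixed while the observer merely slides along the performance curve.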
When I was learning about this stuff as an undergraduate, the SDT curve confused me. It never seemed to me that perception was noisy and graded. I don’t glance at my coffee cup and occasionally mistake it for a laptop. Instead, the coffee cup is either there, or it’s not. There doesn’t seem to be any graded, noisy firing in consciousness.
Yet, in countless experiments, the SDT curve provides a near-perfect fit to the data. This is a paradox that I think is central to our understanding of consciousness. And a recent paper from Mariam Aly and Andy Yonelinas at the University of California, Davis, has begun to develop a solution. They summarize the paradox thus:
“These examples [such as the coffee cup] suggest that some conscious experiences are discrete, and either occur or fail to occur. Yet, a dominant view of cognition is that the appearance of discrete mental states is an epiphenomenon, and cognition in reality varies in a completely continuous manner, such that some memories are simply stronger than others, or some perceptual differences just more noticeable than others.”
Aly and Yonelinas propose a reconciliation of these points of view. Their experiments hinge on measuring SDT curves in different conditions, and across different thresholds (defined as different confidence levels). In the noisy, graded model, there should never be a point at which it is possible to increase hits without also increasing false alarms (the red curve above). However, a hunch that there is a particular “state” of viewing the coffee cup that is never accompanied by mistakes would correspond to the discrete boxes at either end of the SDT distributions. Adding these boxes instead predicts the blue curve (above). Yonelinas and Aly found that for simple stimuli, such as flashes of light, the red curve was a good fit to the data, indicating a graded, noisy process that differed only in strength. But for complex stimuli, such as deciding whether two photographs were the same or different, the SDT curve indeed showed a discrete “state” effect (below). You either saw it, or you didn’t.
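The contrast between the two kinds of curve can be illustrated with a simplified dual-process sketch: a graded Gaussian strength signal, optionally mixed with a discrete “detect” state. This is a textbook-style simplification, not the authors’ exact model, and the parameter values are made up for the example.

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def roc_point(criterion, d_prime=1.0, state_prob=0.0):
    """One hit/false-alarm pair. With state_prob = 0 this is the purely
    graded (curved) model; state_prob > 0 mixes in a discrete "detect"
    state that always yields a correct "yes" on signal trials."""
    hit = state_prob + (1.0 - state_prob) * (1.0 - phi(criterion - d_prime))
    false_alarm = 1.0 - phi(criterion)
    return hit, false_alarm

criteria = (-1.0, 0.0, 1.0)
graded = [roc_point(c) for c in criteria]
discrete = [roc_point(c, state_prob=0.5) for c in criteria]
```

Even at a very strict criterion, the discrete model’s hit rate never falls below the state probability – the signature of “you either saw it, or you didn’t” – whereas the graded model’s hits slide smoothly towards zero.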
Most previous SDT experiments used simple stimuli, which explains why the graded curve has come to dominate the literature. Yet for the more complex objects we are used to seeing in everyday life, our intuitions are usually correct – there really is a discrete state of seeing the coffee cup. Could these discrete states be what we associate with consciousness?
To test this hypothesis, Aly and Yonelinas asked subjects to say whether their judgment on each trial was due to a conscious, perceived difference, or an unconscious feeling of knowing. They then extracted parameters describing how curvy or discrete the SDT curves were. Conscious perception was associated with a stronger estimate of the discrete state process, while unconscious knowing was associated with a more curvy, or graded, SDT curve. A separate experiment showed that the discrete change in perception occurs at an abrupt point in time, whereas unconscious knowing emerges only gradually.
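The idea of “extracting parameters describing how curvy or discrete the SDT curves were” can be made concrete with a toy fit. The data and grid-search procedure below are hypothetical, purely for illustration: generate ROC points from a known discrete-state strength R, then recover (d′, R) by minimizing squared error over a parameter grid.

```python
# Toy parameter recovery for a state/strength ROC (hypothetical data,
# not taken from the paper). A large fitted R indicates a discrete
# process; R near 0 indicates a purely graded (curvy) ROC.

from math import erf, sqrt

def cdf(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

def predict(d_prime, r_state, criteria):
    """(hit, false-alarm) pairs across confidence criteria."""
    return [(r_state + (1 - r_state) * (1 - cdf(c - d_prime)), 1 - cdf(c))
            for c in criteria]

criteria = [1.5, 1.0, 0.5, 0.0, -0.5]
observed = predict(1.2, 0.25, criteria)   # simulated "conscious" condition

# Grid search over d' in [0, 3] and R in [0, 1]:
best = min(
    ((d / 10, r / 20) for d in range(31) for r in range(21)),
    key=lambda p: sum((h - ho) ** 2 + (f - fo) ** 2
                      for (h, f), (ho, fo) in zip(predict(*p, criteria),
                                                  observed)),
)
print("recovered (d', R):", best)   # recovers (1.2, 0.25)
```

In the actual study the fitting was done on subjects’ confidence-rating data rather than noise-free simulated points, but the logic is the same: the estimated R separates discrete, conscious perception from graded, unconscious knowing.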
The paper is a tour de force, and well worth reading for other findings I don’t have space to cover here. Suffice it to say that the “discreteness” of an SDT curve might provide us with a powerful tool for understanding how the brain gives rise to consciousness, one built on statistical models that are relatively immune to subjective biases. It also paves the way for computational modeling aimed at understanding why graded and discrete processes arise.
But there is another, deeper insight from the paper that I want to conclude with. SDT can also be applied to memory: instead of detecting visual signals from the outside world, think of detecting a memory signal emanating from somewhere else in the system. Yonelinas was one of the first to quantify the discrete/graded distinction in memory (known as “recollection” and “familiarity”). By applying the state/strength model to a standard long-term recognition task, he and Aly found that discrete states were more common when recognizing that a scene had been seen before (the black curve below). But by subtly altering the long-term memory task to focus on the detection of changes, they found something striking. Here are the SDT curves for the two tasks:
Both show evidence of the discrete “state” process, bridging two areas of psychology traditionally studied separately. But they do so in opposite directions. Why?
“We propose that the reason is that the detection of similarities and differences tend to play opposite roles in memory and perception. That is, in perceptual tasks, noticing even a small change between two images is sufficient to make a definitive “different” response… [In contrast], in recognition memory tasks, one expects the state of recollection to support the detection of oldness (i.e. a y-intercept) rather than the detection of newness.”
In other words, consciousness in perception and memory might both rely on discrete states, and both appear to share a common architecture optimized for different types of detection. The difference, then, is that memory might be optimized for detecting matches with our past, whereas perception seems concerned with detecting mismatches in our present.
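The “opposite directions” can be sketched in the same framework. This is my own hypothetical illustration, not the paper’s model: in memory, recollection guarantees a correct “yes (old)” response, so hits stay above zero even when false alarms reach zero (a y-intercept); in the perception task, a consciously perceived change guarantees a correct “no (different)” response, so false alarms are capped below 1 even at the most liberal criterion (an intercept at the other end of the ROC).

```python
# Hypothetical sketch of where a discrete state bends the ROC.
# Memory: the state adds to hits (recollection -> definite "old").
# Perception: the state subtracts from false alarms (a perceived
# change -> definite "different", so "same" errors are capped).

from math import erf, sqrt

def cdf(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

def memory_roc(d_prime, r_state, c):
    hits = r_state + (1 - r_state) * cdf(d_prime - c)  # state boosts hits
    fas = cdf(-c)
    return hits, fas

def perception_roc(d_prime, r_state, c):
    hits = cdf(d_prime - c)
    fas = (1 - r_state) * cdf(-c)                      # state caps false alarms
    return hits, fas

strict, liberal = 6.0, -6.0
print("memory, strictest criterion:    ", memory_roc(1.0, 0.3, strict))
print("perception, most liberal crit.: ", perception_roc(1.0, 0.3, liberal))
```

The same discrete-state ingredient appears in both curves, just attached to opposite response classes – one architecture, two detection problems.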
Aly M, Yonelinas AP (2012) Bridging Consciousness and Cognition in Memory and Perception: Evidence for Both State and Strength Processes. PLoS ONE 7(1): e30231. doi:10.1371/journal.pone.0030231
The world around you, I presume, appears rich and detailed. From my desk in my apartment this evening, I can pick out the changing shades of white as the lamps play off the walls, the dusty green of a rubber plant, and the deep blue of a painting on the wall opposite, only half visible due to the reflection of one of the lamps in its glass frame. But there appears to be far more in this scene than I can talk about. My private inner world – what Ned Block dubbed “phenomenal” consciousness – is poorly served by our ability to access and describe it.
This intuition has received support in the lab. Experiments show that subjects can report only a subset of several objects briefly flashed on a screen. This in itself is not surprising; our ability to retain information in mind is limited. Strikingly, however, subjects can accurately identify any single object in the array if probed immediately after the set of objects has disappeared. This result shows that each individual object is ready to be accessed: phenomenal consciousness tends to “overflow” what we can report.
Or does it? An alternative standpoint (epistemic correlationism) holds that every piece of data on consciousness requires a “correlating” report – for how else are we really to know what the subject is doing, seeing, or thinking? In other words, we cannot claim for sure that there is phenomenology without access: even in the flashing-object experiments, we are relying on our intuition that the subjects are conscious of the whole array of objects at some point.
A lively debate has recently ensued on this issue, fuelled by a pair of original articles in TICS. In one of these articles, Cohen & Dennett propose that conscious experience (phenomenal consciousness) cannot be investigated separately from mechanisms of access. They dismiss the view that particular brain states – recurrent activity in neural circuits, for example – can support phenomenal consciousness without being broadcast to other brain areas, and therefore without being reportable. I tend to agree on this point (although their tactic of putting words in the mouths of their opponents was less than graceful). To argue that a particular pattern of brain activity can index phenomenal consciousness in the face of subjective reports to the contrary seems absurd. Such logic rests on “pre-empirical” notions that phenomenology without some sort of access can exist – if we assume for a moment that it cannot, then the fact that a brain state is not cognitively accessed would preclude it from also being phenomenally conscious. (One promising resolution to this debate is the proposal by Sid Kouider and colleagues that different levels of access may account for different types of experience.)
It is rather like Schrödinger’s cat – the cat might be dead, or it might be alive, but to know you have to open the box. As with the cat, so with epistemic correlationism in the neurosciences – the only way to know is to open the box and ask the subject. Imagine that we could perform a scan on the box that would give us some image or pattern to look at. If we first assume the cat is alive, we might then be able to say, yes, this assumption corresponds nicely with the fact that the image has such-and-such properties. But if the cat is dead when we subsequently open the box, do we have enough evidence to say that the cat was alive at the time we took the image? The inference that the cat was alive rests largely on the a priori assumption that cats are indeed alive in these types of scenarios. In the same way, the inference that a particular brain state is phenomenally conscious rests on the a priori assumption that the subject is having a phenomenally conscious experience at the time it is recorded.
That is not to say that prior beliefs about how the system works are inherently bad. They are the starting point for models of cognition, and can be tested against each other. But when comparing different models, parsimony is to be preferred. The philosopher Richard Brown has argued on his blog that exactly the same constraints should apply to models of consciousness. The extra complexity of positing phenomenal consciousness without access might be justified if it led to a better account of the data. But for now, I have yet to see a piece of data that is not more parsimoniously accounted for by assuming the cat is dead until proven alive.