A theory of consciousness worth attending to

There are multiple theories of how the brain produces conscious awareness. We have moved beyond the stage of intuitions and armchair ideas: current debates focus on hard empirical evidence to adjudicate between different models of consciousness. But the science is still very young, and there is a sense that more ideas are needed. At the recent Association for the Scientific Study of Consciousness* meeting in Brisbane my friend and colleague Aaron Schurger told me about a new theory from Princeton neuroscientist Michael Graziano, outlined in his book Consciousness and the Social Brain. Aaron had recently reviewed Graziano’s book for Science, and was enthusiastic about it being a truly different theory – consciousness really explained.

I have just finished reading the book, and agree that it is a novel and insightful theory. As with all good theories, it has a “why didn’t I think of that before!” quality to it. It is a plausible sketch, rather than a detailed model. But it is a testable theory and one that may turn out to be broadly correct.

When constructing a theory of consciousness we can start from different premises. “Information integration” theory begins with axioms of what consciousness is like (private, rich) in order to build up the theory from the inside. In contrast, “global workspace” theory starts with the behavioural data – the “reportability” of conscious experience – and attempts to explain the presence or absence of reports of awareness. Each theory has different starting points but ultimately aims to explain the same underlying phenomenon (similar to physicists starting either with the very large – planets – or the very small – atoms, and yet ultimately aiming for a unified model of matter).

Dennett’s 1991 book Consciousness Explained took the reportability approach to its logical conclusion. Dennett proposed that once we account for the various behaviours associated with consciousness – the subjective reports – there is nothing left to explain. There is nothing “extra” that underpins first-person subjective experience (contrast this with the “hard problem” view: there is something to be explained that cannot be solved within the standard cognitive model, which is exactly why it’s a hard problem). I read Dennett’s book as an undergraduate and was captivated that there might be a theory that explains subjective reports from the ground up, reliant only on the nuts and bolts of cognitive psychology. Here was a potential roadmap for understanding consciousness: if we could show how A connects to B, B connects to C, and C connects to the verbalization “I am conscious of the green of the grass” then we have done our job as scientists. But there was a nagging doubt: does this really explain our inner, subjective experience? Sure, it might explain the report, but it seems to be throwing out the conscious baby with the bathwater. In playful mood, some philosophers have suggested that Dennett himself might be a zombie because he thinks the only relevant data on consciousness are the reports of others!

But the problem is that subjective reports are one of the few observable features we have to work with as scientists of consciousness. In Graziano’s theory, the report forms the starting point. He then goes deeper to propose a mechanism underpinning this report that explains conscious experience.

To ensure we’re on the same page, let’s start by defining the thing we are trying to explain. Consciousness is a confusing term – some people mean level of consciousness (e.g. coma vs. sleep vs. being awake), others mean self-consciousness, others mean the contents of awareness that we have when we’re awake – an awareness that contains some things, such as the green of an apple, but not others, such as the feeling of the clothes against my skin or my heartbeat. Graziano’s theory is about the latter: “The purpose of this book is to present a theory of awareness. How can we become aware of any information at all? What is added to produce awareness?” (p. 14).

What is added to produce awareness? Cognitive psychology and neuroscience assume that the brain processes information. We don’t yet understand the details of much of this processing, but the roadmap is there. Consider a decision about whether you just saw a faint flash of light, such as a shooting star. Under the informational view, the flash causes changes to proteins in the retina, which lead to neural firing, information encoding in visual cortex and so on through a chain of synapses to the verbalization “I just saw a shooting star over there”. There is, in principle, nothing mysterious about this utterance. But why is it accompanied by awareness?
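The chain described above – stimulus, transduction, encoding, decision, report – can be sketched as a toy pipeline. This is only an illustration of the informational view, not a model from the book; all the function names and numbers are invented for the example.

```python
def transduce(photons):
    """Retina: convert light into a neural signal (toy gain factor)."""
    return photons * 0.8

def encode(signal, noise=0.0):
    """Visual cortex: (possibly noisy) evidence that a flash occurred."""
    return signal + noise

def decide(evidence, threshold=1.0):
    """Downstream circuits: compare the evidence to a criterion."""
    return evidence > threshold

def verbalize(saw_it):
    """Speech circuitry: the observable report at the end of the chain."""
    return ("I just saw a shooting star over there"
            if saw_it else "I didn't see anything")

# A bright flash makes it through the chain to a verbal report:
print(verbalize(decide(encode(transduce(2.0)))))
# A faint flash falls below threshold and produces no report:
print(verbalize(decide(encode(transduce(0.5)))))
```

Nothing in this pipeline is mysterious, which is precisely the puzzle: where, in such a chain, does awareness come in?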

Scientists working on consciousness often begin with the input to the system. We say (perhaps to ourselves) “neural firing propagating across visual cortex doesn’t seem to be enough, so let’s look for something extra”. There have been various proposals for this “something extra”: oscillations, synchrony, recurrent activity. But these proposals shift the goalposts – neural oscillations may be associated with awareness, but why should these changes in brain state cause consciousness? Graziano takes the opposite tack, and works from the motor output, the report of consciousness, inwards (it is perhaps no coincidence that he has spent much of his career studying the motor system). Awareness does not emanate from additional processes that are laid on top of vanilla information processing. Instead, he argues, the only thing we can be sure of about consciousness is that it is information. We say “I am conscious of X”, and therefore consciousness causes – in a very mainstream, neuroscientific way – a behavioural report. Rather like finding the source of a river, he suggests that we should start with these reports and work backwards up the river until we find something that resembles its source. It’s a supercharged version of Dennett: the report is not the end-game; instead, the report is our objective starting point.

I recently heard a psychiatrist colleague describe a patient who believed that a beer can inside his head was receiving radio signals that were controlling his thoughts. There was little that could be done to shake the delusion – he admitted it was unusual, but he genuinely believed that the beer can was lodged in his skull. As scientist observers we know this can’t be true: we can even place the man inside a CT scanner and show him the absence of a beer can.

But – and this is the crucial move – the beer can does exist for the patient. The beer can is encoded as an internal brain state, and this information leads to the utterance “I have a beer can in my head”. Graziano proposes that consciousness is exactly like the beer can. Consciousness is real, in the sense that it is an informational state that leads us to report “I am aware of X”. But there are no additional properties in the brain that make something conscious, beyond the informational state encoding the belief that the person is conscious. Consciousness is a collective delusion – if only one of us were constantly saying “I am conscious”, we might be as skeptical as we are in the case of the beer can, and scan his brain saying “But look! You don’t actually have anything that resembles consciousness in there”.

Hmm, I hear you say, this still sounds rather Dennettian. You’ve replaced consciousness with an informational state that leads to report. Surely there is more to it than that? In Graziano’s theory, the “something extra” is a model of attention, called the attention schema. The attention schema supplies the richness behind the report. Attention is the brain’s way of enhancing some signals but not others. If we’re driving along in the country and a sign appears warning of deer crossing the road, we might focus our attention on the grass verges. But attention is a process of enhancement or suppression. The state of attention is not represented anywhere in the system [1]. Instead, awareness is the brain’s way of representing what attention is doing. This makes the state of attention explicit. By being aware of looking at my laptop while writing these words, the informational content of awareness is “My attention is pointed at my computer screen”.
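The distinction between attention (a process) and the attention schema (an explicit description of that process) can be made concrete with a toy sketch. This is my own illustration, not Graziano’s model: attention is implemented here as a simple softmax enhancement of some signals over others, and the “schema” is nothing more than an explicit, reportable summary of where that enhancement is pointed.

```python
import math

def attend(salience):
    """Attention as a process: softmax-enhance the most salient signals.
    Nothing here explicitly represents *that* attention is happening."""
    exps = {item: math.exp(s) for item, s in salience.items()}
    total = sum(exps.values())
    return {item: v / total for item, v in exps.items()}

def attention_schema(weights):
    """A (grossly simplified) schema: an explicit, reportable description
    of what the attentional process is currently doing."""
    target = max(weights, key=weights.get)
    return f"My attention is pointed at the {target}"

salience = {"laptop": 3.0, "coffee": 1.0, "street noise": 0.5}
weights = attend(salience)          # the process: implicit enhancement
report = attention_schema(weights)  # the model of the process: explicit
print(report)
```

The point of the sketch is the asymmetry: `attend` does attention without representing it, while `attention_schema` adds the second-order description that, on Graziano’s account, is what we report as awareness.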

Graziano suggests that the same process of modeling our own attentional state is applied to (and possibly evolved from) the ability to model the attentional focus of others [2]. And, because consciousness is a model, rather than a reality that either exists or does not, it has an appealing duality to its existence. We can attribute awareness to ourselves. But we can also attribute awareness to something else, such as a friend, our pet dog, or the computer program in the movie “Her”. Crucially, this attribution is independent of whether they each also attribute awareness to themselves.

The attention schema theory is a sketch for a testable theory of consciousness grounded in the one thing we can measure: subjective report. It provides a framework for new experiments on consciousness and attention, consciousness and social cognition, and so on. On occasion I suspect it over-generalizes. For instance, free will is introduced as just another element of conscious experience. I found myself wondering how a model of attention could explain our experience of causing our actions, as required to account for the sense of agency. Instead, perhaps we could think of the attention schema as a prototype model for different elements of subjective report. For example, a sense of agency could arise from a model of the decision-making process that allows us to say “I caused that to happen” – a decision schema, rather than an attention schema.

Like all good theories, it raises concrete questions. How does it account for unconscious perception? Does it predict when attention should dissociate from awareness? What would a mechanism for the attention schema look like? How is the modeling done? We may not yet have all the answers, but Graziano’s theory is an important contribution to framing the question.

[1] This importance of “representational redescription” of implicitly embedded knowledge was anticipated by Clark & Karmiloff-Smith (1992): “What seems certain is that a genuine cognizer must somehow manage a symbiosis of different modes of representation – the first-order connectionist and the multiple levels of more structured kinds” (p. 515). Importantly, representational redescription is not necessary to complete a particular task, but it is necessary to represent how the task is being completed. As Graziano says: “There is no reason for the brain to have any explicit knowledge about the process or dynamics of attention. Water boils but has no knowledge of how it does it. A car can move but has no knowledge of how it does it. I am suggesting, however, that in addition to doing attention, the brain also constructs a description of attention… and awareness is that description” (p. 25). And: “For a brain to be able to report on something, the relevant item can’t merely be present in the brain but must be encoded as information in the form of neural signals that can ultimately inform the speech circuitry.” (p. 147).

[2] Graziano suggests his theory shouldn’t be considered a metacognitive theory of consciousness because it accounts both for the abstract knowledge that we are aware and the inherent property of being aware. But this view seems to equate metacognition with abstract knowledge. Instead I suggest that a model of another cognitive process, such as the attention schema as a model of attention, is inherently metacognitive. Currently there is little work on metacognition of attention, but such experiments may provide crucial data for testing the theory.

*I am currently Executive Director of the ASSC. The views in this post are my own and should not be interpreted as representing those of the ASSC.

When tackling the brain, don’t forget the mind

The human brain is an incredibly complex object. With billions of cells each with thousands of connections, it is difficult to know where to begin. Neuroscientists can probe the brain with electrodes, see inside it with scanners, and observe what happens to people when bits of it are damaged in accidents and disease. But putting all this information together is rather like reconstructing a puzzle without the picture on the box for guidance.


We could take inspiration from the Human Genome Project. The genome is also extremely complex, with billions of building blocks. Despite these challenges, the genome was successfully unraveled at a cost of around $3.8 billion in 2003. The knowledge generated by the Human Genome Project is estimated to have produced $141 in the economy for every $1 spent on research.
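Taken at face value, those two figures imply a striking total return. A quick back-of-the-envelope calculation (a toy check on the numbers as quoted, not an official estimate):

```python
# Figures as quoted above; both are approximate.
cost = 3.8e9            # approximate cost of the Human Genome Project
return_per_dollar = 141  # estimated economic output per research dollar

total_return = cost * return_per_dollar
print(f"Implied total economic output: ${total_return / 1e9:.0f} billion")
# -> roughly $536 billion
```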

Now the Obama administration plans to do the same for the human brain, on a similarly ambitious scale ($3 billion over ten years). The goal of the “Brain Activity Map” (BAM) is to map the activity of every neuron and connection in the living brain. Because the activity of the brain determines our mental lives, the hope is that a comprehensive roadmap will help us understand how memories are formed, how particular drugs might alleviate psychiatric disorders, and even how the brain generates consciousness. The relevant technologies (multi-electrode recording, optogenetics) are advancing rapidly, and large-scale studies are already providing new insights into how networks of cells interact with each other. A successful Brain Activity Map is well within our grasp.

But what will success look like? Will a map of the human brain be useful in the same way that a map of the human genome is useful? In genetics, success allows us to understand and control physical characteristics. In neuroscience, success should lead to an equivalent understanding of the mind. We would be able to use the map to help reduce aberrant emotions in post-traumatic stress disorder, to lift mood in depression, and to reverse the decline of Alzheimer’s. Yet all these applications rely on a thorough understanding of the mind as well as the brain.

The computer scientist David Marr noted that the mind can only be fully understood by linking three levels: the function of the system, the computations the system carries out, and how these computations are implemented in the brain. Recording brain cells firing away on their own, even thousands of them, will only get us so far. Imagine being able to visualize the electronics of your computer while tapping away at an email. The patterns you see might tell you broadly how things are working, but you could not divine that you had a web browser open, and certainly not that you were writing to an old friend. Instead, to gain a full understanding of the computer, you would need to understand the software itself, as well as how it is implemented in hardware. In an article in the journal Neuron, the scientists behind the BAM proposal remind us that brain function emerges “from complex interactions among constituents”. They seem to agree with Marr. But while we don’t know the full details of the proposal, in its current form the majority of BAM funding will be thrown at understanding only one of his three levels: implementation.

Studying one level without the others is rather like building the Large Hadron Collider without also investing in theoretical physics. Psychologists and cognitive scientists are experts at bridging the gap between the workings of the mind and brain. For example, by carefully designing behavioral tests that can probe mental dysfunction, they are beginning to delve beneath the traditional classifications of mental disorders to understand how particular components of the mind go awry. These individuals need to walk hand in hand with the technologists on the frontline of brain science. The new technologies championed by the BAM scientists will produce a rich harvest of data about the brain, and they are a crucial part of a long-term investment in the brain sciences. But without similar investment in the mind sciences we will be left puzzling over how the pieces fit into our everyday lives. Only by considering the mind when tackling the brain will we get more BAM for our buck.

Reviewing “The Ravenous Brain”

A shorter form of this review might be appearing in The Psychologist at some point, but I thought I’d post the whole thing here so that books on consciousness can fill some stockings this Christmas…

At the beginning of The Ravenous Brain, Daniel Bor reminds us “There is nothing more important to us than our own awareness”. Western society’s focus on brain, rather than cardiac, death as the natural endpoint to a meaningful life is testament to this assertion.

But only 20 years ago, consciousness science was regarded as a fringe endeavour. Now, particularly in the UK, consciousness is going mainstream, spearheaded by the Sackler Centre for Consciousness Science at the University of Sussex, where Bor is based. Of course, in varying degrees, all psychologists study consciousness: attention and working memory are core components of high-level conscious function. But only recently has a deeper question been tackled: how might these functions come together to underpin awareness? Why are humans blessed with a rich, private consciousness that might not be present in other animals? And how should we tackle the all-too-frequent disorders and distortions of consciousness in neuropsychiatric disorders?

With infectious enthusiasm, Bor takes us on a tour of the latest research into how the brain generates consciousness. His scope is broad, ranging from experiments on anaesthesia and subliminal priming, to our sense of self and progress on communicating with patients in a vegetative state. One of the most difficult questions in the field is what consciousness is for. Circular answers often result: if language is usually associated with consciousness, for instance, then maybe consciousness is for producing language. Bor’s answer is that consciousness is for innovation, and dealing with novelty. Again, I am not convinced that this proposal completely slips the bonds of circularity – is innovation possible without awareness? – but it opens up new avenues for future research.

This is an accessible, engaging account from a practitioner who is well aware of the messy reality of science. Bor is that rare combination of working scientist, story-teller and lucid explainer. The Ravenous Brain reads as a dispatch from a foreign country engaged in a revolution – one that is far from over.

 

Are chimpanzees self-aware?

Awareness is a private affair. For instance, I can’t say for sure whether the other customers in the coffee shop where I’m sitting are conscious in the same way that I am.  Perhaps instead they are zombies, doing a good impression of acting like self-aware human beings.

By talking to each other, we can quickly disregard this possibility. When it comes to animals, however, the jury is out. Is a chimpanzee self-aware? How about a cow? An insect?

This is not idle speculation. These questions matter. Our moral intuitions are based on the assumption that the person we are interacting with is consciously aware. And our legal system is imbued with the notion that consciousness matters. If we were to find that another animal species had a consciousness very similar to that of humans, then it may be remiss of us not to extend the same rights and protections to that species.

Recently, a prominent group of neuroscientists signed a declaration stating that several non-human animal species are conscious. They reasoned that many mammals share brain structures – the thalamus, neocortex – that are involved in consciousness in humans, and display similar behavioral repertoires, such as attentiveness, sleep and capacity for decision-making. Therefore it is more likely than not that they have a similar consciousness.

While this seems intuitive, we need to stop and examine their reasoning. It all comes down to the kind of consciousness we are talking about. No one doubts, for example, that animals have periods of both sleep and wakefulness. What is at issue is whether they are aware in the same way that you and I are aware when we are awake.

Imagine you are in the cinema, engrossed in the latest blockbuster. There’s a good chance (especially if the film is any good) that while you are experiencing the film, you are not aware that you are experiencing the film. “Meta”-awareness is absent. Now imagine that you are condemned to spend the rest of your life without meta-awareness, continuously engrossed in the film of your own life. I’d wager it wouldn’t be much of an existence; as Socrates suggested, the unexamined life is not worth living.

Whether or not animals have this capacity for meta-awareness is unclear. Without the ability to report mental states, it is notoriously difficult to assess. But one particularly promising test involves judgments of control, or “agency”. Consider playing an arcade game after the money has run out – at some point, you realize that rather than steering your digital car through the streets of Monte Carlo, your efforts at the wheel are having no effect whatsoever. This realization – that you are no longer in control – is known as a judgment of agency, and may be intimately linked to meta-awareness.

In a recent study conducted in Kyoto, Japan, researchers asked whether chimpanzees could make judgments of agency. The task was to move a computer cursor to bump into another target displayed on the screen. The twist was that another decoy cursor was also present on the screen, whose movements were replayed from a previous trial. Thus the chimpanzee had control of one of the cursors, but not the other, even though visually they were identical. After the trial ended, the animals were trained to indicate the cursor that they had been controlling. All three chimpanzees correctly indicated this “self” cursor around 75% of the time. As the experimenters note, “Because both the self- and distractor cursor movements were produced by the same individual, the movements were presumably indistinguishable to a third person (and to the experimenters), who passively observed the display.” In other words, the only way to do the task is to monitor internal states, which is a prerequisite for meta-awareness.
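How impressive is 75% correct on a two-alternative task? A quick binomial calculation shows it is far above what guessing would produce. Note that the trial count below is an assumption for illustration – the post quotes the accuracy but not the number of trials in the study.

```python
from math import comb

def binom_p_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of scoring
    this well or better by guessing between the two cursors."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# Illustrative only: n = 100 trials is an assumed figure, not from the study.
n, accuracy = 100, 0.75
k = round(n * accuracy)
p_value = binom_p_at_least(k, n)
print(f"P(>= {k}/{n} correct by chance) = {p_value:.2e}")
```

Under these assumptions the probability of reaching 75/100 by chance is well below one in a thousand, whereas scoring at the 50% chance level is, of course, entirely unremarkable.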

Judgments about another species’ consciousness should not be taken lightly. In particular, we should be careful about what kind of consciousness we are talking about. The kind that matters most from a moral and legal perspective is the capacity to be aware of our actions and intentions. Initial evidence suggests that some animals, particularly the great apes, may have this higher-order reflective capacity. This should give us greater pause for thought than the presence of primary or phenomenal consciousness in lower animals.

*This post was cross-posted from Psychology Today

The Compulsive Emailer

More than 100 years ago, the great neuroscientist Ramon y Cajal identified a clutch of “diseases of the will” that could derail a young scientist’s career. In today’s computerised world, dominated by smartphones and the Internet, this list deserves an update. In particular, a constant connection to the web has given rise to the Compulsive Emailer*.

The compulsive emailer has a particular rhythm and pattern to his day. Upon waking, he reaches for his smartphone, and, bleary-eyed, checks to see whether anything of import occurred during the course of the night. Science being a rather solitary and gradual endeavour, this is unlikely. Instead, a tinge of disappointment greets the usual automated adverts from journals, and the deluge of mail from obscure technical discussion groups.

Arriving at work, the compulsive emailer checks his smartphone in the elevator, and, after making coffee, sits down at his desk to deal promptly with any urgent missives received during the intervening five minutes. For the rest of the day, metronomic checking is never more than a couple of clicks away. The worst cases may even dedicate a separate screen to their inbox, allowing checking to be done with no more than a glance of the eyes.

Sometimes, in moments of reflection, the compulsive emailer will become frustrated with his lot, and yearn for a job in which email is center-stage, such as a political aide, or a journalist. At least then, his affliction would become useful, rather than be wasted on the continual archiving of dubious invitations to attend far-away conferences.

The curse reaches fever pitch, of course, a few weeks after submission of a paper. Convinced that the letter deciding the paper’s fate will arrive at any second, he hits the refresh button with renewed intensity. Fortunately such occasions are relatively rare, as time spent on any real work is dwarfed in comparison to that spent toiling at the inbox.

The compulsive emailer would do well to restrict e-communications to a particular time of day, perhaps the late afternoon, after time has been given for things “important and unread” to accumulate. He will be pleasantly surprised how quickly email can be dealt with, and dismissed for another day, while relishing the expanses of time that will open up for doing real science.

Retiring for the evening, he makes a few final checks of the smartphone, explaining to any company present that he is expecting to receive some new data from a research assistant.  What he is going to do with those files at midnight on a Sunday is anyone’s guess, but the implication is that they are really rather important.

*The author, being a Compulsive Emailer, is well-qualified to describe this condition.

Consciousness and the law

Yesterday my piece on consciousness and the law was published in Aeon, an exciting new online magazine focussing on ideas and culture.

As fits this particular canvas, the article is painted with a broad brush, and there wasn’t room to go into detail about any particular study. But for those who are interested in the details, I’ve included below some links to the original sources along with the relevant quotes from the Aeon piece.

To quote a recent article by the psychologist Jonathan Schooler and colleagues ‘we are often startled by the discovery that our minds have wandered away from the situation at hand’.

Schooler J et al. (2011) Meta-awareness, perceptual decoupling and the wandering mind. Trends Cog Sci 15(7):319-26

…the Court of Appeal opined that the defence of automatism should not have been on the table in the first place, due to a driver without ‘awareness’ retaining some control of the car.

Attorney General’s Reference (No. 2 of 1992)

In the early 1970s, Lawrence Weiskrantz and Elizabeth Warrington discovered a remarkable patient… studies on similar patients with ‘blindsight’ confirmed that these responses relied on a neural pathway quite separate from the one that usually passes through the occipital lobes

Weiskrantz L et al. (1974) Visual capacity in the hemianopic field following a restricted occipital ablation. Brain 97(4):709-28.

Dodds C et al. (2002) A temporal/nasal asymmetry for blindsight in a localisation task: evidence for extrageniculate mediation. Neuroreport 13(5):655-8.

…a key difference between conscious and unconscious vision is activity in the prefrontal cortex…

Other research implies that consciousness emerges when there is the right balance between connectivity between brain regions, known as the ‘information integration’ theory.

…anesthesia may induce unconsciousness by disrupting the communication between brain regions.

Dehaene S & Changeux JP (2011) Experimental and theoretical approaches to conscious processing. Neuron 70(2):200-27.

Tononi G (2005) Consciousness, information integration, and the brain. Prog Brain Res 150:109-26.

Alkire MT et al (2008) Consciousness and anesthesia. Science 322(5903):876-80.

A series of innovative experiments have begun to systematically investigate mind-wandering… Under the influence of alcohol, people become more likely to daydream and less likely to catch themselves doing so.

Christoff K (2012) Undirected thought: neural determinants and correlates. Brain Res 1428:51-9.

Sayette MA et al. (2009) Lost in the sauce: the effects of alcohol on mind-wandering. Psychol Sci 20(6):747-52.