A theory of consciousness worth attending to

There are multiple theories of how the brain produces conscious awareness. We have moved beyond the stage of intuitions and armchair ideas: current debates focus on hard empirical evidence to adjudicate between different models of consciousness. But the science is still very young, and there is a sense that more ideas are needed. At the recent Association for the Scientific Study of Consciousness* meeting in Brisbane my friend and colleague Aaron Schurger told me about a new theory from Princeton neuroscientist Michael Graziano, outlined in his book Consciousness and the Social Brain. Aaron had recently reviewed Graziano’s book for Science, and was enthusiastic about it being a truly different theory – consciousness really explained.

I have just finished reading the book, and agree that it is a novel and insightful theory. As with all good theories, it has a “why didn’t I think of that before!” quality to it. It is a plausible sketch, rather than a detailed model. But it is a testable theory and one that may turn out to be broadly correct.

When constructing a theory of consciousness we can start from different premises. “Information integration” theory begins with axioms of what consciousness is like (private, rich) in order to build up the theory from the inside. In contrast, “global workspace” theory starts with the behavioural data – the “reportability” of conscious experience – and attempts to explain the presence or absence of reports of awareness. Each theory has different starting points but ultimately aims to explain the same underlying phenomenon (similar to physicists starting either with the very large – planets – or the very small – atoms, and yet ultimately aiming for a unified model of matter).

Dennett’s 1991 book Consciousness Explained took the reportability approach to its logical conclusion. Dennett proposed that once we account for the various behaviours associated with consciousness – the subjective reports – there is nothing left to explain. There is nothing “extra” that underpins first-person subjective experience (contrast this with the “hard problem” view: there is something to be explained that cannot be solved within the standard cognitive model, which is exactly why it’s a hard problem). I read Dennett’s book as an undergraduate and was captivated by the idea that there might be a theory explaining subjective reports from the ground up, reliant only on the nuts and bolts of cognitive psychology. Here was a potential roadmap for understanding consciousness: if we could show how A connects to B, B connects to C, and C connects to the verbalization “I am conscious of the green of the grass”, then we would have done our job as scientists. But there was a nagging doubt: does this really explain our inner, subjective experience? Sure, it might explain the report, but it seems to be throwing out the conscious baby with the bathwater. In a playful mood, some philosophers have suggested that Dennett himself might be a zombie because he thinks the only relevant data on consciousness are the reports of others!

But the problem is that subjective reports are one of the few observable features we have to work with as scientists of consciousness. In Graziano’s theory, the report forms the starting point. He then goes deeper to propose a mechanism underpinning this report that explains conscious experience.

To ensure we’re on the same page, let’s start by defining the thing we are trying to explain. Consciousness is a confusing term – some people mean level of consciousness (e.g. coma vs. sleep vs. being awake), others mean self-consciousness, and others mean the contents of awareness that we have when we’re awake – an awareness that contains some things, such as the green of an apple, but not others, such as the feeling of the clothes against my skin or my heartbeat. Graziano’s theory is about the latter: “The purpose of this book is to present a theory of awareness. How can we become aware of any information at all? What is added to produce awareness?” (p. 14).

What is added to produce awareness? Cognitive psychology and neuroscience assume that the brain processes information. We don’t yet understand the details of much of this processing, but the roadmap is there. Consider a decision about whether you just saw a faint flash of light, such as a shooting star. Under the informational view, the flash causes changes to proteins in the retina, which lead to neural firing, information encoding in visual cortex, and so on through a chain of synapses to the verbalization “I just saw a shooting star over there”. There is, in principle, nothing mysterious about this utterance. But why is it accompanied by awareness?

Scientists working on consciousness often begin with the input to the system. We say (perhaps to ourselves) “neural firing propagating across visual cortex doesn’t seem to be enough, so let’s look for something extra”. There have been various proposals for this “something extra”: oscillations, synchrony, recurrent activity. But these proposals shift the goalposts – neural oscillations may be associated with awareness, but why should these changes in brain state cause consciousness? Graziano takes the opposite tack, and works from the motor output, the report of consciousness, inwards (it is perhaps no coincidence that he has spent much of his career studying the motor system). Awareness does not emanate from additional processes that are laid on top of vanilla information processing. Instead, he argues, the only thing we can be sure of about consciousness is that it is information. We say “I am conscious of X”, and therefore consciousness causes – in a very mainstream, neuroscientific way – a behavioural report. Rather like finding the source of a river, he suggests that we should start with these reports and work backwards up the river until we find something that resembles its source. It’s a supercharged version of Dennett: the report is not the end-game; instead, the report is our objective starting point.

I recently heard a psychiatrist colleague describe a patient who believed that a beer can inside his head was receiving radio signals that were controlling his thoughts. There was little that could be done to shake the delusion – he admitted it was unusual, but he genuinely believed that the beer can was lodged in his skull. As scientist observers we know this can’t be true: we can even place the man inside a CT scanner and show him the absence of a beer can.

But – and this is the crucial move – the beer can does exist for the patient. The beer can is encoded as an internal brain state, and this information leads to the utterance “I have a beer can in my head”. Graziano proposes that consciousness is exactly like the beer can. Consciousness is real, in the sense it is an informational state that leads us to report “I am aware of X”. But there are no additional properties in the brain that make something conscious, beyond the informational state encoding the belief that the person is conscious. Consciousness is a collective delusion – if only one of us was constantly saying, “I am conscious” we might be as skeptical as we are in the case of the beer can, and scan his brain saying “But look! You don’t actually have anything that resembles consciousness in there”.

Hmm, I hear you say, this still sounds rather Dennettian. You’ve replaced consciousness with an informational state that leads to report. Surely there is more to it than that? In Graziano’s theory, the “something extra” is a model of attention, called the attention schema. The attention schema supplies the richness behind the report. Attention is the brain’s way of enhancing some signals but not others. If we’re driving along in the country and a sign appears warning of deer crossing the road, we might focus our attention on the grass verges. But attention is a process of enhancement or suppression. The state of attention is not represented anywhere in the system [1]. Instead, awareness is the brain’s way of representing what attention is doing. This makes the state of attention explicit. By being aware of looking at my laptop while writing these words, the informational content of awareness is “My attention is pointed at my computer screen”.

Graziano suggests that the same process of modeling our own attentional state is applied to (and possibly evolved from) the ability to model the attentional focus of others [2]. And, because consciousness is a model, rather than a reality that either exists or does not, it has an appealing duality to its existence. We can attribute awareness to ourselves. But we can also attribute awareness to something else, such as a friend, our pet dog, or the computer program in the movie “Her”. Crucially, this attribution is independent of whether they each also attribute awareness to themselves.

The attention schema theory is a sketch for a testable theory of consciousness grounded in the one thing we can measure: subjective report. It provides a framework for new experiments on consciousness and attention, consciousness and social cognition, and so on.  On occasion I suspect it over-generalizes. For instance, free will is introduced as just another element of conscious experience. I found myself wondering how a model of attention could explain our experience of causing our actions, as required to account for the sense of agency. Instead, perhaps we could think of the attention schema as a prototype model for different elements of subjective report. For instance, a sense of agency could arise from a model of the decision-making process that allows us to say “I caused that to happen” – a decision schema, rather than an attention schema.

Like all good theories, it raises concrete questions. How does it account for unconscious perception? Does it predict when attention should dissociate from awareness? What would a mechanism for the attention schema look like? How is the modeling done? We may not yet have all the answers, but Graziano’s theory is an important contribution to framing the question.

[1] The importance of “representational redescription” of implicitly embedded knowledge was anticipated by Clark & Karmiloff-Smith (1992): “What seems certain is that a genuine cognizer must somehow manage a symbiosis of different modes of representation – the first-order connectionist and the multiple levels of more structured kinds” (p. 515). Importantly, representational redescription is not necessary to complete a particular task, but it is necessary to represent how the task is being completed. As Graziano says: “There is no reason for the brain to have any explicit knowledge about the process or dynamics of attention. Water boils but has no knowledge of how it does it. A car can move but has no knowledge of how it does it. I am suggesting, however, that in addition to doing attention, the brain also constructs a description of attention… and awareness is that description” (p. 25). And: “For a brain to be able to report on something, the relevant item can’t merely be present in the brain but must be encoded as information in the form of neural signals that can ultimately inform the speech circuitry.” (p. 147).

[2] Graziano suggests his theory shouldn’t be considered a metacognitive theory of consciousness because it accounts both for the abstract knowledge that we are aware and the inherent property of being aware. But this view seems to equate metacognition with abstract knowledge. Instead I suggest that a model of another cognitive process, such as the attention schema as a model of attention, is inherently metacognitive. Currently there is little work on metacognition of attention, but such experiments may provide crucial data for testing the theory.

*I am currently Executive Director of the ASSC. The views in this post are my own and should not be interpreted as representing those of the ASSC.


10 thoughts on “A theory of consciousness worth attending to”

  1. Great review. I thoroughly agree with your second note. He inherits an even more severe version of Dennett’s ‘attribution problem,’ however, because he really doesn’t distinguish between the intentionality and the phenomenality of consciousness – I’m guessing this is what you mean when you mention the problem of over-generalizing. Taking the ‘intentional stance’ to others and oneself is one thing, but saying as much about the phenomenality of consciousness is the reason why he’s so quickly dismissed in philosophy of mind circles. The report may provide the data, but a theory of reports isn’t ultimately what we’re interested in, so much as what is being reported. Certainly information is driving the reports, and using the ‘attention schema’ the way he does allows him to connect with the information that is generally reported, but ‘becoming available for report’ in no way explains what is reported. An explicitly metacognitive approach, I think, both captures Graziano’s (Dennett’s – or James’s, really!) interpretivist insight and provides a way to tackle the phenomenality problem head on.

    As a long-time advocate of this general approach, the biggest problem I had was his insistence that he had discovered an entirely new, radical angle on consciousness. His recent NYT piece goes some way to redress this, however.

    I am a huge fan of your work, Steve!

    • Many thanks for your kind words, Scott!

      I think intentionality here is key, and I rather glossed over this issue. You’re right, that’s what I have in mind with the “over-generalisation” comment. A key part of consciousness is awareness of not only sensation but also of being an active agent in the world. One could think of the difference as being awareness of input vs. awareness of output. The attention schema seems to point us in the right direction in terms of an explanation for awareness of input (notwithstanding the ill-defined aspect of content, as you point out), but it seems silent on agency.

      It’s interesting to note that there was once a heated debate between attention researchers about whether parietal neurons were more properly classed as coding for the animal’s attention rather than intention, one that ultimately came down to semantics. One could imagine a parallel debate about the attention schema, which might go some way to solving this issue.

  2. Steve,

    very interesting post.
    You speak of “information” a great deal (as many people tackling mind-related issues do). But what is this “information” you are referring to? Sometimes I have a feeling that one mystery is replaced by another. For example, instead of talking about “some consciousness” we talk about “some information”. It would be awesome if you could elaborate on this topic.

    Graziano’s theory is interesting. However I have my own objections to it:
    1. What about attention to stimuli that we are not conscious of?
    2. Very often we are conscious of things that are not present in the environment (mind wandering, dreams, etc.). What exactly do we attend to in these situations?
    3. In the case of feelings, moods, emotions there doesn’t seem to be much “informational content” to be discerned. If I’m feeling sleepy or excited or interested, am I really processing large bundles of information (when these feelings are moderately long lasting)?
    4. Consciousness has this property of differentiating (contents, feelings, qualia?). When I see something and when I hear it I have completely different experiences, yet they refer to roughly the same things (they should represent something similar). Why are they (sounds, visuals) so different?

    What are your thoughts on that, Steve?

    Btw. I really like your blog 🙂

  3. Thank you!

    I agree information is an over-used term, and has shades of meaning from the colloquial to the formal to everything in between. I mean it in terms of representations (as I think does Graziano) – e.g. if the firing of neurons in V1 rises and falls in tandem with the presentation of a stimulus, we can say that they represent/carry information about the stimulus.
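    To make that notion concrete, here is a toy sketch (my own illustration, not Graziano’s, and no real neural data): a simulated “neuron” whose spike count covaries with a binary stimulus carries measurable information about it in the Shannon sense, which we can quantify as mutual information.

```python
import random
import math

random.seed(1)

def simulate(n_trials=10000):
    """Simulate a toy neuron whose spike count depends on a binary stimulus."""
    trials = []
    for _ in range(n_trials):
        stim = random.random() < 0.5           # stimulus present on half the trials
        p_spike = 0.4 if stim else 0.1         # per-bin spike probability tracks the stimulus
        spikes = sum(random.random() < p_spike for _ in range(20))  # crude binomial spike count
        trials.append((stim, spikes))
    return trials

def mutual_information(trials):
    """Plug-in estimate of I(S; R) = sum p(s,r) log2[ p(s,r) / (p(s) p(r)) ]."""
    n = len(trials)
    joint, p_s, p_r = {}, {}, {}
    for s, r in trials:
        joint[(s, r)] = joint.get((s, r), 0) + 1
        p_s[s] = p_s.get(s, 0) + 1
        p_r[r] = p_r.get(r, 0) + 1
    mi = 0.0
    for (s, r), count in joint.items():
        p_sr = count / n
        mi += p_sr * math.log2(p_sr / ((p_s[s] / n) * (p_r[r] / n)))
    return mi

mi = mutual_information(simulate())
print(f"I(stimulus; spikes) ≈ {mi:.2f} bits")  # well above zero: the spike count carries stimulus information
```

    If the spike count were unrelated to the stimulus, the estimate would sit near zero; covariation is what licenses the shorthand “these neurons carry information about the stimulus”.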

    So, if we find neural activity that represents/carries information about the subject’s reports of awareness, then in principle we can determine the antecedents of that neural activity, the other representations it draws upon, etc. Beyond that, however, the mechanics of the attention schema theory are not (yet) well-defined.

    1. What about attention to stimuli that we are not conscious of?

    I think this objection conflates the process and model of attention. We may attend to e.g. our heartbeat yet not consciously perceive it. But the attention schema account would say “you’re accurately modeling the focus of your attention, and that model encodes the awareness of attending to your heartbeat without perceiving any sensation”. So there’s no contradiction.

    2. Very often we are conscious of things that are not present in the environment (mind wandering, dreams, etc.). What exactly do we attend to in these situations?

    Good question. It’s a lot easier to think about attention in terms of enhancing/suppressing sensory input. But in principle there’s no reason why the same mechanisms can’t be at play when information is arising spontaneously… e.g. activity in early visual cortex can be attended regardless of whether that activity is caused by external stimuli or the subject’s imagination.

    3. In the case of feelings, moods, emotions there doesn’t seem to be much “informational content” to be discerned. If I’m feeling sleepy or excited or interested, am I really processing large bundles of information (when these feelings are moderately long lasting)?

    Yes, I think so 🙂 I agree it’s less easy to pin down by introspection, but there is good evidence from animal models that e.g. fear relies on a particular set of subcortical circuits, which suggests a degree of specificity to emotion similar to how sensory stimuli are processed.

    4. Consciousness has this property of differentiating (contents, feelings, qualia?). When I see something and when I hear it I have completely different experiences, yet they refer to roughly the same things (they should represent something similar). Why are they (sounds, visuals) so different?

    This echoes Scott’s comment above, that the attention schema theory seems to gloss over content and qualia. There are various computational approaches to qualia, for instance information integration theory would say that qualia are part and parcel of discriminating in a vast space of possible experiences. Personally I’m sympathetic to the sensorimotor view, that sensations have the qualities they do due to the sensorimotor contingencies they set up. However I think it’s a very difficult problem to get experimental traction on, whereas reports of (un)awareness appear much more straightforward.

    • Steve,

      “I mean it in terms of representations (as I think does Graziano) – e.g. if the firing of neurons in V1 rises and falls in tandem with the presentation of a stimulus, we can say that they represent/carry information about the stimulus.”

      Sure, that is a common way of understanding information in the neurosciences. However, when one looks more closely at the various phenomena that the brain exhibits, such a representationalist account of information seems less obvious.
      If information ~ representation, then what about:
      – information processing — what exactly does it mean to process representations? This question is not exactly a hard one, but it allows for multiple (mis)interpretations.
      – moods, emotions (yes, again) — when I’m sleepy my brain works in a specific way. Does this sleepiness represent something?…
      – spontaneous activity — not correlated with any stimulus (in the present or remembered). Such activity does not seem to represent anything, yet most likely has some bearing on the brain activity at large.

      Now, I don’t want to be misunderstood. I’m not saying that ideas such as “information” or computational perspective on the mind are totally misguided. I would just like to point to some vagueness around them, and to maybe propose that there is more to the brain than processing of information.

      I’ve started to work on the critique of computationalism on my own blog, if you’re interested. For example: http://observingideas.wordpress.com/2014/09/08/mind-is-not-an-information-processor/

      “We may attend to e.g. our heartbeat yet not consciously perceive it. But the attention schema account would say ‘you’re accurately modeling the focus of your attention, and that model encodes the awareness of attending to your heartbeat without perceiving any sensation’. So there’s no contradiction.”

      Yes, Steve. There is no contradiction. We can attend to stimuli that we are not conscious of. However, if one proposes to base a theory of consciousness on attention/awareness, then this becomes an issue.

      “(…) in principle there’s no reason why the same mechanisms can’t be at play when information is arising spontaneously… e.g. activity in early visual cortex can be attended regardless of whether that activity is caused by external stimuli or the subject’s imagination.”

      I’m thinking along the same lines, even though I don’t like this idea. The concept of attending to activity in the brain is very fuzzy, at least for me. What exactly does this attending involve?

      “(…) there is good evidence from animal models that e.g. fear relies on a particular set of subcortical circuits. Which suggests a degree of specificity to emotion similar to how sensory stimuli are processed.”

      I don’t see how the finding of (sub)cortical neuronal pathways relates to the issue of information and representation. No one doubts that emotions and feelings arise when specific dynamics arise in the nervous system. The thing is: how do the ideas of information/representation and computationalism help in describing and explaining these phenomena (moods, emotions, and the like)?

  4. Nice, Steve. For a theory that resonates with Graziano’s, please check out

    Shadlen, MN and Kiani, R (2011). “Consciousness as a decision to engage.” In: Characterizing Consciousness: From Cognition to the Clinic? Research and Perspectives in Neurosciences (Dehaene S, Christen Y, eds), pp. 27–46. Berlin, Heidelberg: Springer-Verlag. doi:10.1007/978-3-642-18015-6_2 https://www.shadlenlab.columbia.edu/publications/publications/mike/Shadlen2011.pdf

    And one day I’ll entertain you with the extension of this idea that I call my qualia-lite argument. It’s also relevant to the piece that brought me to this site today—your musings on the Kording BioRxiv piece. I found your comments refreshing.

    • Hi Mike – thanks for this, glad you liked the blog 🙂
      I look forward to reading this paper. Would be great to discuss your new ideas, I’m intrigued – we need more decision-making angles on the consciousness question.
