Overflowing cats

The world around you, I presume, appears rich and detailed. From my desk in my apartment this evening, I can pick out the changing shades of white as the lamps play off the walls, the dusty green of a rubber plant, and the deep blue of a painting on the wall opposite, only half visible due to the reflection of one of the lamps in its glass frame. But there appears to be far more in this scene than I can talk about. My private inner world – what Ned Block dubbed “phenomenal” consciousness – is poorly served by our ability to access and describe it.

This intuition has received support in the lab. Experiments show that subjects can report only a subset of several objects briefly flashed on a screen. This in itself is not surprising; our ability to retain information in mind is limited. Strikingly, however, subjects can accurately identify any single object in the array if probed immediately after the objects have disappeared. This result suggests that each individual object is available to be accessed: phenomenal consciousness tends to “overflow” what we can report.

Or does it? An alternative standpoint (epistemic correlationism) says that every piece of data on consciousness requires a “correlating” report – or how else are we to really know what the subject is doing, seeing, or thinking? In other words, we cannot claim for sure that there is phenomenology without access, as even in the flashing-object experiments, we are relying on our intuition that the subjects are conscious of the whole array of objects at some point.

A lively debate has recently ensued on this issue, fuelled by a pair of original articles in TICS. In one of these articles, Cohen & Dennett propose that conscious experience (phenomenal consciousness) cannot be investigated separately from mechanisms of access. They dismiss the view that particular brain states – recurrent activity in neural circuits, for example – can support phenomenal consciousness without being broadcast to other brain areas, and therefore without being reported. I tend to agree on this point (although their tactic of putting words in the mouths of their opponents was less than graceful). To argue that a particular pattern of brain activity can index phenomenal consciousness in the face of subjective reports to the contrary seems absurd. Such logic rests on the “pre-empirical” notion that phenomenology can exist without some sort of access – if we assume for a moment that it cannot, then the fact that a brain state is not cognitively accessed would preclude it from also being phenomenally conscious. (One promising resolution to this debate is the proposal by Sid Kouider and colleagues that different levels of access may account for different types of experience.)

It is rather like Schrödinger’s cat – the cat might be dead, or it might be alive, but to know you have to open the box. As with the cat, so with epistemic correlationism in the neurosciences: the only way to know is to open the box and ask the subject. Imagine that we could perform a scan on the box that would give us some image or pattern to look at. If we first assume the cat is alive, we might then be able to say, yes, this assumption corresponds nicely with the fact that the image has such-and-such properties. But if the cat is dead when we subsequently open the box, do we have enough evidence to say that the cat was alive at the time we took the image? The inference that the cat was alive rests largely on the a priori assumption that cats are indeed alive in these types of scenarios. In the same way, the inference that a particular brain state is phenomenally conscious rests on the a priori assumption that the subject is having a phenomenally conscious experience at the time the brain state is recorded.

That is not to say prior beliefs about how the system works are inherently bad. They are the starting point for models of cognition, and can be tested against each other. But when comparing different models, parsimony is to be preferred. The philosopher Richard Brown has argued on his blog that exactly the same constraints should apply to models of consciousness. The extra complexity of positing phenomenal consciousness without access might be justified if it leads to a better account of the data. But for now, I have yet to see a piece of data that is not more parsimoniously accounted for by the assumption that the cat is dead until proven alive.
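To make the parsimony argument concrete, here is a toy sketch – my own illustration, not anything from the TICS exchange or Brown’s post – of how formal model comparison penalises extra machinery. A criterion such as the BIC favours a more complex model (say, one positing a phenomenal store beyond access) only if its improved fit to the data outweighs its additional parameters. All of the numbers below are invented purely for illustration.

import math

def bic(log_likelihood: float, n_params: int, n_obs: int) -> float:
    """Bayesian Information Criterion (lower is better).
    Extra parameters are penalised unless they buy enough fit."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

# Hypothetical fits of two models to the same set of subjective reports:
# a simpler "access-only" model, and an "access + overflow" model with
# two extra parameters for phenomenology beyond access. Numbers made up.
n_obs = 200  # number of trials (invented)
models = {
    "access-only":       {"log_lik": -410.0, "k": 3},
    "access + overflow": {"log_lik": -407.5, "k": 5},  # slightly better fit
}

for name, m in models.items():
    print(f"{name:18s} BIC = {bic(m['log_lik'], m['k'], n_obs):.1f}")

# Here the overflow model's modest gain in fit does not offset its extra
# parameters, so the simpler access-only model wins on BIC. Only a
# substantially better account of the data would justify the added machinery.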

“Aha!” moments in self-knowledge

Some changes to our mental world take place gradually. An orchestra comes to a crescendo, and then fades back to nothing, all the while playing the same chord. As listeners, we experience a graded change in volume. But other changes are more immediate. When squinting to read a street sign from a distance, there is a moment when you suddenly “get it”, and know what it says. The sign does not become gradually more or less intelligible; understanding arrives all at once. This is an example of categorical perception.

Might knowledge of ourselves be similar?

A study by Katerina Fotopoulou and colleagues sheds light on this issue, focusing on a fascinating case of recovery from anosognosia. Anosognosia (from the Greek for “without knowledge of disease”) is the term given to a lack of awareness of, or insight into, a particular neurological condition. In its extreme form, anosognosia can result in very bizarre symptoms, such as Anton’s syndrome, in which cortically blind patients claim to be able to see. Dr. Fotopoulou’s patient, LM, was a 67-year-old lady who suffered from hemiplegia – paralysis of one side of the body – following a right-hemisphere stroke. However, she claimed to be able to move her arm, and breezily asserted that she could clap her hands – at least until Dr. Fotopoulou showed her a video recording of herself being examined:

“As soon as the video stopped, LM immediately and spontaneously commented: ‘I have not been very realistic’. Examiner (AF): ‘What do you mean?’ LM: ‘I have not been realistic about my left side not being able to move at all’. AF: ‘What do you think now?’ LM: ‘I cannot move at all’. AF: ‘What made you change your mind?’ LM: ‘The video. I did not realize I looked like this’.”

This altered self-awareness was still present six months later. It appeared that allowing the patient a third-person perspective on herself had removed her anosognosia, and led to changes in the representation of her own body. While this is a single case report, and the intervention may not work for all patients, the data are tantalising. In particular, they suggest that the onset of self-awareness can be sudden and transformative. This makes sense – we have all experienced the “aha” moment that accompanies finally retrieving the name of the actor we couldn’t drum up at dinner the previous evening. Changes in awareness of the self may share similarities with other domains of categorical perception. Whether this mental plasticity is accompanied by a rapid form of neural reorganisation, or a “change in the software”, remains unknown.