Overflowing cats

The world around you, I presume, appears rich and detailed. From my desk in my apartment this evening, I can pick out the changing shades of white as the lamps play off the walls, the dusty green of a rubber plant, and the deep blue of a painting on the wall opposite, only half visible due to the reflection of one of the lamps in its glass frame. But there appears to be far more in this scene than I can talk about. My private inner world – what Ned Block dubbed “phenomenal” consciousness – is poorly served by our ability to access and describe it.

This intuition has received support in the lab. Experiments show that subjects can report only a subset of several objects briefly flashed on a screen. This in itself is not surprising; our ability to retain information in mind is limited. Strikingly, however, subjects can accurately identify any single object in the array if probed immediately after the set of objects has disappeared. This result seems to show that each individual object is ready to be accessed: phenomenal consciousness tends to “overflow” what we can report.
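To make the contrast concrete, here is a minimal simulation of that kind of partial-report paradigm. The numbers are illustrative choices of mine, not figures from the studies: a 12-item array, a report bottleneck of about 4 items, and an iconic store that briefly holds every item until a cue arrives.

```python
import random

# Illustrative simulation of a partial-report experiment.
# Assumed parameters (mine, not from the experiments discussed):
ARRAY_SIZE = 12       # items briefly flashed on the screen
REPORT_CAPACITY = 4   # items that survive transfer into working memory
N_TRIALS = 1000

def whole_report_trial():
    """Subject tries to report the entire array; only a few items make it
    through the report bottleneck."""
    reported = random.sample(range(ARRAY_SIZE), REPORT_CAPACITY)
    return len(reported) / ARRAY_SIZE  # fraction of the array reported

def partial_report_trial():
    """A cue immediately after offset probes one random item. Because the
    whole array is (by assumption) still in the iconic store at that moment,
    the probed item can be reported. Delaying the cue would let the store
    decay, and performance would fall back toward the whole-report level."""
    probed = random.randrange(ARRAY_SIZE)
    iconic_store = set(range(ARRAY_SIZE))  # every item briefly available
    return 1.0 if probed in iconic_store else 0.0

whole = sum(whole_report_trial() for _ in range(N_TRIALS)) / N_TRIALS
partial = sum(partial_report_trial() for _ in range(N_TRIALS)) / N_TRIALS
print(f"whole report:   {whole:.2f} of array")    # ~0.33 (4 of 12 items)
print(f"partial report: {partial:.2f} correct")   # ~1.00 for any probed item
```

The gap between the two numbers is the “overflow”: any single item can be retrieved on demand, yet the full set can never be reported at once.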

Or does it? An alternative standpoint (epistemic correlationism) says that every piece of data on consciousness requires a “correlating” report – or how else are we to really know what the subject is doing, seeing, or thinking? In other words, we cannot claim for sure that there is phenomenology without access, as even in the flashing-object experiments, we are relying on our intuition that the subjects are conscious of the whole array of objects at some point.

A lively debate has recently ensued on this issue, fuelled by a pair of original articles in TICS. In one of these articles, Cohen & Dennett propose that conscious experience (phenomenal consciousness) cannot be investigated separately from mechanisms of access. They dismiss the view that particular brain states support phenomenal consciousness – recurrent processing in neural circuits, for example – without being broadcast to other brain areas, and therefore without being reported. I tend to agree on this point (although their tactic of putting words in the mouths of their opponents was less than graceful). To argue that a particular pattern of brain activity can index phenomenal consciousness in the face of subjective reports to the contrary seems absurd. Such logic rests on the “pre-empirical” notion that phenomenology without some sort of access can exist – if we assume for a moment that it cannot, then the fact that a brain state is not cognitively accessed would preclude it from also being phenomenally conscious. (One promising resolution to this debate is the proposal by Sid Kouider and colleagues that different levels of access may account for different types of experience.)

It is rather like Schrödinger’s cat – the cat might be dead, or it might be alive, but to know you have to open the box. As with the cat, so with epistemic correlationism in the neurosciences – the only way to know is to open the box and ask the subject. Imagine that we could perform a scan on the box that would give us some image or pattern to look at. If we first assume the cat is alive, we might then be able to say, yes, this assumption corresponds nicely with the fact that the image has such-and-such properties. But if the cat is dead when we subsequently open the box, do we have enough evidence to say that the cat was alive at the time we took the image? The inference that the cat was alive rests largely on the a priori assumption that cats are indeed alive in these types of scenarios. In the same way, the inference that a particular brain state is phenomenally conscious rests on the a priori assumption that the subject is having a phenomenally conscious experience at the time it is recorded.
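The same point can be put as a toy Bayesian calculation (my illustration, not an argument from the articles). If the scan barely discriminates live from dead cats, the posterior we announce is largely the prior we walked in with; all likelihood values below are made up.

```python
# Toy Bayesian reading of the cat analogy, with hypothetical numbers.

def posterior_alive(prior_alive, p_image_given_alive, p_image_given_dead):
    """Bayes' rule: P(alive | image)."""
    evidence = (p_image_given_alive * prior_alive
                + p_image_given_dead * (1 - prior_alive))
    return p_image_given_alive * prior_alive / evidence

# Suppose the scan is nearly uninformative: live and dead cats produce
# similar-looking images, so the two likelihoods sit close together.
for prior in (0.1, 0.5, 0.9):
    post = posterior_alive(prior,
                           p_image_given_alive=0.6,
                           p_image_given_dead=0.5)
    print(f"prior P(alive) = {prior:.1f}  ->  posterior = {post:.2f}")
# prior 0.1 -> 0.12, prior 0.5 -> 0.55, prior 0.9 -> 0.92:
# the posterior tracks the prior. The conclusion that the cat was alive
# (or that a brain state was phenomenally conscious) is mostly the
# starting assumption, restated.
```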

That is not to say prior beliefs about how the system works are inherently bad. They are the starting point for models of cognition, and can be tested against each other. But when comparing different models, parsimony is to be preferred: all else being equal, the model with fewer assumptions should win. The philosopher Richard Brown has argued on his blog that exactly the same constraints should apply to models of consciousness. The extra complexity of positing phenomenal consciousness without access might be justified if it leads to a better account of the data. But for now, I have yet to see a piece of data that is not more parsimoniously accounted for by an assumption that the cat is dead until proven alive.
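As a sketch of what that comparison might look like in practice, here is a toy calculation using the Bayesian information criterion, BIC = k·ln(n) − 2·ln(L), where k is the number of parameters, n the number of observations, and L the model likelihood. The model names and all numbers are hypothetical; the point is only that an extra parameter has to buy enough extra fit to beat its complexity penalty.

```python
import math

# Hypothetical comparison: an "access-only" model versus an
# "access + phenomenal overflow" model with one extra parameter.
# Lower BIC is better.

def bic(k_params, n_obs, log_likelihood):
    return k_params * math.log(n_obs) - 2 * log_likelihood

n = 200  # number of observations (made up)

# If the richer model fits the data no better, its extra parameter is cost:
print(bic(k_params=3, n_obs=n, log_likelihood=-120.0))  # access-only: ~255.9
print(bic(k_params=4, n_obs=n, log_likelihood=-120.0))  # overflow:    ~261.2

# The overflow model earns its keep only if the extra parameter buys a big
# enough gain in fit, here roughly ln(200)/2 ≈ 2.65 log-likelihood units:
print(bic(k_params=4, n_obs=n, log_likelihood=-115.0))  # ~251.2, now it wins
```

On that accounting, the burden sits squarely with the overflow model: until the extra machinery demonstrably improves the fit, the dead cat is the better bet.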