Humans are bad at forecasting exponentials – and we’re unaware of it

The transmission of coronavirus (SARS-CoV-2) is, like many infectious diseases, exponential. An exponential process is one that doubles once a particular time period, t, has elapsed. For instance, if t = 1 week and 100 people are infected today, then 200 people will be infected in 1 week, 400 in 2 weeks, 800 in 3 weeks, 1600 in 4 weeks, and so on. The time period t is known as the “doubling time”. A study from the early phase of the Chinese outbreak estimated a doubling time of around 6 days.

It’s not quite as simple as a pure exponential in reality, especially when lots of people get infected, but it’s a good model of the early stages.

The problem is that the early stages of an exponential also don’t seem that bad. 100, 200, 400, 800 cases… that all sounds manageable. But exponentials, by definition, run away with you. That same process, after 16 weeks, gets to ~6.5 million!
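That figure is easy to verify in a few lines of Python (a toy model of pure doubling, not an epidemiological forecast; the function name is my own):

```python
# Pure exponential growth: after k doubling periods, N(k) = N0 * 2**k.
def infections(n0, weeks, doubling_time_weeks=1):
    """Projected infections, doubling every `doubling_time_weeks`."""
    return n0 * 2 ** (weeks / doubling_time_weeks)

for w in (1, 2, 3, 4, 16):
    print(w, int(infections(100, w)))
# Week 16: 100 * 2**16 = 6,553,600 -- roughly 6.5 million.
```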

Humans are remarkably bad at intuiting such exponential growth. This was allegedly used to great effect by the inventor of chess in ancient India. He asked his king for some modest compensation for inventing the game: 1 grain of rice on the first square of the chessboard, 2 grains on the second square, 4 grains on the third square… and so on, all the way out to the 64th square. The king laughed at what seemed like such a small reward, and agreed – until his advisers figured out that the exponential growth led to a total of 18,446,744,073,709,551,615 grains of rice (the sum 2^0 + 2^1 + … + 2^63, which equals 2^64 − 1), an impossible number for the kingdom to provide.
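The king’s predicament checks out in one line (square i of the board holds 2^(i−1) grains):

```python
# Total grains over 64 squares: 2**0 + 2**1 + ... + 2**63 = 2**64 - 1.
total = sum(2 ** i for i in range(64))
print(total)  # 18446744073709551615
assert total == 2 ** 64 - 1
```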

Not only are we bad at forecasting exponential growth, we also don’t know that we’re bad. In other words, we have poor self-awareness about our forecasting abilities. In one experiment, people were asked to forecast an exponential process and were paid based on the accuracy of their answers. Most were unwilling to pay even a small amount to be given the correct answer, even though their own forecasts were wrong. In other words, they were overconfident in their forecasts.

In the case of coronavirus, this is potentially devastating when, as in London today, things seem under control, and people may not understand why urgent advice is being given to stay home. If we’re unable to forecast what the situation is going to look like in a week or two weeks’ time, then we are also unlikely to take individual action to slow the spread of the virus today. This is especially dangerous given that the median incubation time of coronavirus before symptoms appear is ~5 days. This is exactly the time window when taking steps to curb exponential growth is critical. Most of the transmission in China appears to be by people who didn’t know they had the virus.

These trends also imply that, once this crisis is over, psychologists should look for ways of improving people’s self-awareness about these kinds of forecasts. We may not be able to change our ability to imagine the curve. But if we can encourage people to know when they don’t know what will happen, it might make them less likely to rely on intuition, and increase their willingness to listen to advice.

By practising physical distancing, avoiding pubs, avoiding going to the gym, basically avoiding going out, we can increase the doubling time, and slow the exponential growth. This prevents everyone getting sick at once, giving healthcare systems the capacity to respond, and science the time to find treatments. We need to be doing this even if things look ok today. Even in the last few squares of the chess board, things look manageable. Until they are not.
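The payoff of a longer doubling time is easy to see numerically (illustrative numbers only, not a forecast; the 6-day figure is the estimate quoted above):

```python
# Cases after `days` of growth, doubling every `doubling_days`:
def cases(n0, days, doubling_days):
    return n0 * 2 ** (days / doubling_days)

# Six weeks (42 days) from 100 cases:
print(round(cases(100, 42, 6)))   # 12800 with a 6-day doubling time
print(round(cases(100, 42, 12)))  # 1131 if distancing stretches it to 12 days
```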

Is metacognition related to radicalism, extremism or both?

Post last updated 4th Jan 2019

We recently published a paper in Current Biology entitled “Metacognitive Failure as a Feature of Those Holding Radical Beliefs”, which identified a link between radical world views and metacognition. Unsurprisingly this got picked up in a few media outlets. In a recent post on Twitter, Prof. Ken Miller at Columbia University queried the interpretation of the findings:


We think that these comments stem from a misunderstanding about the relationship between radicalism and extremism, which we’re glad of the opportunity to clarify.

Ken says: “The claim is that radicals, which you would think would mean the extremes of political left and right…”

This is incorrect, and the source of the subsequent confusion. The paper did not set out to examine extremism, and we specifically avoided using this term in both the paper and the UCL press release. Instead our focus was on an index of radical views closely modelled on previous measures in the literature. As we say in the paper:

These questionnaires were selected based on prior models of political radicalism as stemming from a combination of intolerance to others’ viewpoints, dogmatic and rigid beliefs, and authoritarianism, which represents adherence to in-group authorities and conventions, and aggression in relation to deviance from these norms [23–25].

Some (but not all) of the press coverage (mainly the headlines) extrapolated from our findings on radicalism to write about political extremism. But we don’t think our measure is specific to politics:

…we stress that radicalism is likely to reflect a general cognitive style that transcends the political domain—as exemplified by links between religious fundamentalism and increased dogmatism and authoritarianism [22, 26]—and instead refers to how one’s beliefs are held and acted upon [27].


Similarly, while our measures of radicalism were derived from questionnaires tapping into political attitudes, it is possible that impairments in metacognition may constitute a general feature of radicalism about political, religious, and scientific issues.

Our key result is that a radicalism index (formed from the combined measures of dogmatism and authoritarianism) is negatively related to metacognitive sensitivity (the ability to distinguish correct from incorrect decisions in the perceptual task), but not task performance. These effect sizes are indeed small, but robust and replicable in a second independent dataset. Given that the tasks are far removed from real-world issues, we think it’s striking that basic differences in metacognition predict answers to questions indicative of radical beliefs:

Despite relatively small effect sizes, our findings linking radicalism to changes in metacognition are robust and replicable across two independent samples. However, we note that other, domain-specific facets of metacognition (e.g., insight into the validity of higher-level reasoning or certainty about value-based choices [39]) are arguably closer to the drivers of radicalization of political and religious beliefs, suggesting that the current results represent a lower bound for the strength of a relationship between metacognitive abilities and radicalism.
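For readers unfamiliar with the measure: the paper estimates metacognitive sensitivity with a model-based index, but the underlying idea can be illustrated with a simpler nonparametric statistic, the type 2 AUROC – the probability that a randomly chosen correct trial attracts higher confidence than a randomly chosen incorrect one. A minimal sketch (my own illustrative code, not the paper’s analysis pipeline):

```python
import numpy as np

def type2_auroc(confidence, correct):
    """P(confidence on a correct trial > confidence on an incorrect trial),
    counting ties as 0.5. 0.5 = no metacognitive sensitivity, 1 = perfect."""
    confidence = np.asarray(confidence, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    c_hit, c_miss = confidence[correct], confidence[~correct]
    if len(c_hit) == 0 or len(c_miss) == 0:
        return np.nan  # undefined without both correct and incorrect trials
    greater = (c_hit[:, None] > c_miss[None, :]).mean()
    ties = (c_hit[:, None] == c_miss[None, :]).mean()
    return greater + 0.5 * ties

# Confidence tracks accuracy perfectly -> AUROC of 1.0:
print(type2_auroc([4, 3, 4, 1, 2, 1], [1, 1, 1, 0, 0, 0]))  # 1.0
```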

While our main focus was on radicalism, we also examined relationships with political orientation. First, we found that conservatism was related to overall confidence, but not metacognitive sensitivity (see Figure 3C in the paper). Again, this result shows that radicalism is not the same as being on the left or right of the political spectrum; instead, these two aspects of the questionnaire data relate to different facets of task performance.

In an earlier version of the paper, we also examined whether these facets of radicalism mediated a relationship between metacognition and the extremity (absolute value) of political orientation. We’re glad of the opportunity to revisit it here. A multiple mediation model relating metacognition, facets of radicalism, and political extremity is shown below (data are pooled across Studies 2 and 3):


Model estimated with lavaan in R; all parameters are standardized; *P<0.05, **P<0.005, ***P<0.001

Importantly, political extremity in isolation is not linked to metacognitive sensitivity; however, the relationship between metacognition and political extremity is mediated by metacognition’s impact on dogmatism and authoritarianism. This is consistent with the findings of a recent paper by Leor Zmigrod and colleagues at Cambridge, who showed that the impact of psychological flexibility on political attitudes (towards Brexit) was mediated by its effect on ideology.
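The logic of such a mediation model can be sketched with the classic product-of-coefficients approach on synthetic data (an illustration with made-up effect sizes, not the lavaan model shown in the figure):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical structure: metacognition lowers dogmatism, and dogmatism
# raises political extremity; no direct metacognition-extremity path.
metacog = rng.normal(size=n)
dogmatism = -0.5 * metacog + rng.normal(size=n)
extremity = 0.6 * dogmatism + rng.normal(size=n)

def slope(x, y):
    # OLS slope of y on x (single predictor, with an intercept).
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

a = slope(metacog, dogmatism)                # "a" path, ~ -0.5
X = np.column_stack([np.ones(n), metacog, dogmatism])
_, direct, b = np.linalg.lstsq(X, extremity, rcond=None)[0]
indirect = a * b                             # mediated effect, ~ -0.3
# `direct` is ~0 here: the metacognition-extremity link runs via dogmatism.
```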

So, to summarise, we find that radicalism but not extremism is linked to metacognitive failure. There is a complex relationship between these constructs (as Ken points out in his post), and causality remains to be disentangled. But we think it’s plausible that the differences in metacognitive function we find in relation to radical views (the focus of the paper) may in turn predict the extremity of political orientation.

Thoughts on starting a lab

It’s nearly three years since I moved to UCL to set up our Metacognition Group, or “MetaLab” for short. This has been a steep learning curve, and it now feels as good a time as any to reflect on what I’ve learnt and write down a few tips that might be useful for others going through the same thing. These are inevitably skewed from the perspective of cognitive neuroscience and different things may apply in other fields. And, especially as we study metacognition, I’m obliged to include the disclaimer that some or all of this may well be wrong…

  • First of all, if you’re still finishing your PhD, or partway through your postdoc, consider whether you want to go for a faculty position in the first place. Being an academic isn’t the only end goal, and it’s certainly not “better” than other paths. There are loads of amazing careers out there for qualified researchers, from tech to policy to industry (and we should make sure, as PIs, that our students are aware of these opportunities). Academia is just one option, so think carefully about what you want to do before jumping into the job market. Even within academia there are lots of different routes – lectureships vs. research fellowships, for instance, and each has its pros and cons.


  • There are two ways of interacting with your peers and colleagues. One is to look to them for useful advice and support; the other is to worry and despair that they’re all doing well and you’re not. Try to do more of the former than the latter. There are several excellent resources out there – for instance, check out other blogs from Becky Lawson, Tim Behrens, Duncan Astle and Sue Fletcher-Watson, and Aidan Horner.


  • Teach. If you’re employed on a lectureship this will be part of your contract. But even if you’re mainly employed to do research (e.g. as a postdoc, on a fellowship, or at a research institute), volunteer to teach (and take the teaching courses offered by your institution). This year I have taught Masters courses at UCL and Aix-Marseille, and next year we’re planning a third-year undergraduate course in UCL psychology. It’s a great way to work out what you really know and what you don’t, it contributes to the core mission of your university, and your colleagues will appreciate it.


  • Remember that, especially early on, you are your own most experienced postdoc. Don’t think that, just because you’re now a PI, you should sit back and wait for your students to make things happen and the paper drafts to roll in. The best way to maintain your productivity early on is to keep on thinking of yourself as a postdoc. This will happen naturally if you have datasets and papers that need writing up from your actual postdoc, but even after these papers are out of the door, having projects of your own helps you explore new ideas and keeps your eye in with coding and data analysis.


  • Try out a few different ways of running lab meetings and one-on-one meetings until you find a good routine that suits you and your group (and ask them for feedback on what they would prefer).


  • When supervising students, finding the right balance between guidance and freedom is hard. There’s no recipe for this, unfortunately – it differs between individuals, and you won’t always get it right. You will regularly need to dive in and help at the coal face – devoting several hours to writing or debugging code, fixing stimuli and re-running analyses. This is inevitable and doesn’t mean you’ve got the balance wrong – this is your job.


  • Projects are almost never linear, and that doesn’t mean you’re a bad scientist – in fact it means you’re a good scientist. Going back to square one on a project means you (and your student) have learnt something, and that what you do next will be much stronger as a result. The same goes for piloting – we often pilot several versions of an experiment before committing to a design. This can be frustrating if the initial assumption is that it will “work” out of the box. If you instead treat piloting as a learning process then it’s much less stressful.


  • Finding the right setting on the accelerator is hard. Resist the temptation to grow the lab for the sake of it – starting small is fine. But this is a delicate balance. Promotion criteria and future grant success are often linked to past grant success. So – once you’ve settled in and have a new idea you want to work on, go for it. Often there will be smaller internal grants for early career researchers at your institution, and these can be very helpful for getting a new line of work off the ground.


  • The people you hire as postdocs/RAs or take on as students are your lab, and will shape the culture of your group. These are probably the most critical decisions you will make in the first few months/years. So don’t rush into it, and trust your gut feeling – it’s usually better not to hire at all than to hire people you’re not sure about. It’s especially crucial that postdocs are passionate about the same kind of questions as you, so that you are both pulling in the same direction. This works both ways: remember that postdocs are taking a gamble by working with you rather than someone more established, so you should expect to work hard for them, rather than the other way around.


  • Share your code and data for each project (for an overview of how to do this and why, check out these slides from Laurence Hunt). We have a lab Github and before a paper is published we take a couple of days to make sure all the code and data are uploaded with accompanying notes. I’m fairly sure no-one outside our lab cares about this, but within the lab it’s an incredibly useful resource. I can point new students to where they can get code snippets and examples of various types of analysis. We’ve also started pre-registering protocols and hypotheses for empirical projects by uploading timestamped PDFs to OSF. I’m not militant about this – a purely theoretical project, for instance, probably can’t be meaningfully pre-registered and several empirical projects in our lab start out as exercises in the development of candidate computational models. But there is usually some point within the life cycle of a project when it makes sense to pre-register. This forces us to commit to what we’re doing and keeps both student and PI honest.


  • Write a lab wiki. I haven’t done this yet but am jealous of other lab wikis. Like sharing code, it will save you time in the long run.


  • Give yourself time to think and try not to be busy just to seem busy. In Michael Lewis’ book The Undoing Project, Amos Tversky tells us that “The secret of doing good research is always to be a little underemployed. You waste years by not being able to waste hours.”


  • Take holidays and breaks (see above) and encourage people in your group to take time off. The world won’t end. But try to switch off your email – set an out of office, remove it from your phone and change the password. That way the only way you can get online is via a laptop, by making a conscious decision to work. When I’m doing something active on holiday (such as sailing, with any luck) I don’t take the laptop. But if we go to sit by a pool I like to have my laptop with me in case inspiration or boredom strike. Just don’t let it make you anxious or stressed.


  • Being a PI is like playing good tennis – it takes a lot of repeated practice. You will struggle at the start not because you’re not a good scientist/academic but because you’ve got less experience than someone who has been in the game for 10 years. Seek the respect of those who you respect, and everything else will take care of itself.

False functional inference: what does it mean to understand the brain?

A few days ago Eric Jonas and Konrad Kording (J&K) posted a thought-provoking paper on bioRxiv entitled “Could a neuroscientist understand a microprocessor?” It’s been circling round my head for most of the weekend, and prompted some soul searching about what we’re trying to achieve in cognitive neuroscience.

The paper reports on a mischievous set of experiments in which J&K took a simulation of the MOS 6502 microchip (which ran the Apple I computer, and which has been the subject of some fascinating digital archaeology by the team), and then analysed the link between its function and behaviour much as we might do for the brain in cognitive and systems neuroscience. The chip’s “behaviour” was its ability to boot and run three different sets of instructions for different games: Donkey Kong, Space Invaders and Pitfall (as a side note, exploring this sent me down the rabbit hole of internet emulations including this one of Prince of Persia which occupied many hours of my childhood). While their findings will not necessarily be surprising for a chip designer, they are humbling for a neuroscientist.

By treating the chip like a mini-brain, albeit one in which the ground truth was fully known, J&K could apply some canonical analysis techniques and see what they revealed. The bottom line is that for most of these analyses, they were either downright misleading or ended up producing trivial results.


Here is one example: breaking individual transistors in the microprocessor (equivalent to “lesioning” parts of its brain) led to different patterns of boot failure in different games (see their Figure 4 copied above). We might talk about one such lesion as having “caused a deficit in playing Donkey Kong” in a typical cognitive neuroscience paper. J&K show that these parts of the system were not responsible for fundamental aspects of the game but instead implemented simple functions that, when omitted, led to catastrophic failure of a particular set of instructions. This is similar to the disclaimer about lesion studies I learned as a psychology undergraduate – just because removing part of a radio causes it to whistle doesn’t mean that its function was to stop the radio whistling. But I am just as susceptible to this kind of inference as the next person. For instance, last year we published a paper showing that individuals with lesions to the anterior prefrontal cortex had selective deficits in metacognition. We interpreted this as providing evidence of a “causal contribution of anterior prefrontal cortex to perceptual metacognition”. While this conclusion seems reasonable, it’s also important to remember that it tells us little about the region’s normal function, and that such a pattern of results could be due to the failure of an as-yet-unknown set of functions that manifest as a behavioural deficit, similar to the microprocessor’s failure to boot a game.

J&K acknowledge that the brain is probably not working like a microprocessor, it’s organic, plastic, etc – but if anything this may mean that we are more likely to fall into traps of false functional inference than if it were like a microprocessor. And while lesions are a relatively crude technique, things don’t get any better if you have access to the “firing” of every part of the circuit (as is the goal of the recent big data initiatives in neuroscience) – applying typical dimensionality reduction techniques to this data also failed to reveal how the circuit is running the game. In other words, as I’ve pointed out elsewhere on this blog, expecting big data approaches to succeed just because they have lots of data is like expecting to understand how Microsoft Word works by taking the back off your laptop and staring at the wiring.

These false inferences are critically different from false positives. If J&K ran their experiments again, they would get the same results. The science is robust. But the interpretation of the results – what we infer about the system from a particular pattern of behavioural change – is wrong. Often these inferences are not the core of a typical experimental paper – they come afterwards in the discussion, or are used to motivate the study in the introduction. But they form part of the scientific culture that generates hypotheses and builds implicit models of how we think things work. As J&K write, “analysis of this simple system implies that we should be far more humble at interpreting results from neural data analysis”.

This kind of soul-searching is exactly what we need to ensure neuroscience evolves in the right direction. There are also reasons to remain optimistic. First, understanding is only part of doing good science. Deriving robust predictions (e.g. “when I lesion this transistor, Donkey Kong will not run”) is an important goal in and of itself, and one that has real consequences for quality of life. As an example, my colleagues at UCL have shown that by applying machine learning techniques to a large corpus of MRI scans taken after stroke, it’s possible to predict with a high degree of accuracy who will recover the ability to speak. Second, understanding exists on different levels – we might be able to understand a particular computation or psychological function to a sufficient degree to effectively “fix” it when it goes wrong without understanding its implementation (equivalent to debugging the instructions that run Donkey Kong, without knowledge of the underlying microprocessor). For instance, there is robust evidence for the efficacy of psychological therapy in alleviating depression (which in turn was informed by psychological-level models), and yet how such therapy alters brain function remains unknown. But as neuroscience matures, it’s inevitable that alongside attempts to predict and intervene, we will also seek a transparent understanding of why things work the way that they do. J&K provide an important reminder that we should remain humble when making such inferences – and that even the best data may lead us astray.

A theory of consciousness worth attending to

There are multiple theories of how the brain produces conscious awareness. We have moved beyond the stage of intuitions and armchair ideas: current debates focus on hard empirical evidence to adjudicate between different models of consciousness. But the science is still very young, and there is a sense that more ideas are needed. At the recent Association for the Scientific Study of Consciousness meeting in Brisbane my friend and colleague Aaron Schurger told me about a new theory from Princeton neuroscientist Michael Graziano, outlined in his book Consciousness and the Social Brain. Aaron had recently reviewed Graziano’s book for Science, and was enthusiastic about it being a truly different theory – consciousness really explained.

I have just finished reading the book, and agree that it is a novel and insightful theory. As with all good theories, it has a “why didn’t I think of that before!” quality to it. It is a plausible sketch, rather than a detailed model. But it is a testable theory and one that may turn out to be broadly correct.

When constructing a theory of consciousness we can start from different premises. “Information integration” theory begins with axioms of what consciousness is like (private, rich) in order to build up the theory from the inside. In contrast, “global workspace” theory starts with the behavioural data – the “reportability” of conscious experience – and attempts to explain the presence or absence of reports of awareness. Each theory has different starting points but ultimately aims to explain the same underlying phenomenon (similar to physicists starting either with the very large – planets – or the very small – atoms, and yet ultimately aiming for a unified model of matter).

Dennett’s 1991 book Consciousness Explained took the reportability approach to its logical conclusion. Dennett proposed that once we account for the various behaviours associated with consciousness – the subjective reports – there is nothing left to explain. There is nothing “extra” that underpins first-person subjective experience (contrast this with the “hard problem” view: there is something to be explained that cannot be solved within the standard cognitive model, which is exactly why it’s a hard problem). I read Dennett’s book as an undergraduate and was captivated that there might be a theory that explains subjective reports from the ground up, reliant only on the nuts and bolts of cognitive psychology. Here was a potential roadmap for understanding consciousness: if we could show how A connects to B, B connects to C, and C connects to the verbalization “I am conscious of the green of the grass” then we have done our job as scientists. But there was a nagging doubt: does this really explain our inner, subjective experience? Sure, it might explain the report, but it seems to be throwing out the conscious baby with the bathwater. In playful mood, some philosophers have suggested that Dennett himself might be a zombie because he thinks the only relevant data on consciousness are the reports of others!

But the problem is that subjective reports are one of the few observable features we have to work with as scientists of consciousness. In Graziano’s theory, the report forms the starting point. He then goes deeper to propose a mechanism underpinning this report that explains conscious experience.

To ensure we’re on the same page, let’s start by defining the thing we are trying to explain. Consciousness is a confusing term – some people mean level of consciousness (e.g. coma vs. sleep vs. being awake), others mean self-consciousness, others mean the contents of awareness that we have when we’re awake – an awareness that contains some things, such as the green of an apple, but not others, such as the feeling of the clothes against my skin or my heartbeat. Graziano’s theory is about the latter: “The purpose of this book is to present a theory of awareness. How can we become aware of any information at all? What is added to produce awareness?” (p. 14).

What is added to produce awareness? Cognitive psychology and neuroscience assumes that the brain processes information. We don’t yet understand the details of how much of this processing works, but the roadmap is there. Consider a decision about whether you just saw a faint flash of light, such as a shooting star. Under the informational view, the flash causes changes to proteins in the retina, which lead to neural firing, information encoding in visual cortex and so on through a chain of synapses to the verbalization “I just saw a shooting star over there”. There is, in principle, nothing mysterious about this utterance. But why is it accompanied by awareness?

Scientists working on consciousness often begin with the input to the system. We say (perhaps to ourselves) “neural firing propagating across visual cortex doesn’t seem to be enough, so let’s look for something extra”. There have been various proposals for this “something extra”: oscillations, synchrony, recurrent activity. But these proposals shift the goalposts – neural oscillations may be associated with awareness, but why should these changes in brain state cause consciousness? Graziano takes the opposite tack, and works from the motor output, the report of consciousness, inwards (it is perhaps no coincidence that he has spent much of his career studying the motor system). Awareness does not emanate from additional processes that are laid on top of vanilla information processing. Instead, he argues, the only thing we can be sure of about consciousness is that it is information. We say “I am conscious of X”, and therefore consciousness causes – in a very mainstream, neuroscientific way – a behavioural report. Rather like finding the source of a river, he suggests that we should start with these reports and work backwards up the river until we find something that resembles its source. It’s a supercharged version of Dennett: the report is not the end-game; instead, the report is our objective starting point.

I recently heard a psychiatrist colleague describe a patient who believed that a beer can inside his head was receiving radio signals that were controlling his thoughts. There was little that could be done to shake the delusion – he admitted it was unusual, but he genuinely believed that the beer can was lodged in his skull. As scientist observers we know this can’t be true: we can even place the man inside a CT scanner and show him the absence of a beer can.

But – and this is the crucial move – the beer can does exist for the patient. The beer can is encoded as an internal brain state, and this information leads to the utterance “I have a beer can in my head”. Graziano proposes that consciousness is exactly like the beer can. Consciousness is real, in the sense it is an informational state that leads us to report “I am aware of X”. But there are no additional properties in the brain that make something conscious, beyond the informational state encoding the belief that the person is conscious. Consciousness is a collective delusion – if only one of us was constantly saying, “I am conscious” we might be as skeptical as we are in the case of the beer can, and scan his brain saying “But look! You don’t actually have anything that resembles consciousness in there”.

Hmm, I hear you say, this still sounds rather Dennettian. You’ve replaced consciousness with an informational state that leads to report. Surely there is more to it than that? In Graziano’s theory, the “something extra” is a model of attention, called the attention schema. The attention schema supplies the richness behind the report. Attention is the brain’s way of enhancing some signals but not others. If we’re driving along in the country and a sign appears warning of deer crossing the road, we might focus our attention on the grass verges. But attention is a process, of enhancement or suppression. The state of attention is not represented anywhere in the system [1]. Instead, awareness is the brain’s way of representing what attention is doing. This makes the state of attention explicit. By being aware of looking at my laptop while writing these words, the informational content of awareness is “My attention is pointed at my computer screen”.

Graziano suggests that the same process of modeling our own attentional state is applied to (and possibly evolved from) the ability to model the attentional focus of others [2]. And, because consciousness is a model, rather than a reality that either exists or does not, it has an appealing duality to its existence. We can attribute awareness to ourselves. But we can also attribute awareness to something else, such as a friend, our pet dog, or the computer program in the movie “Her”. Crucially, this attribution is independent of whether they each also attribute awareness to themselves.

The attention schema theory is a sketch for a testable theory of consciousness grounded in the one thing we can measure: subjective report. It provides a framework for new experiments on consciousness and attention, consciousness and social cognition, and so on.  On occasion I suspect it over-generalizes. For instance, free will is introduced as just another element of conscious experience. I found myself wondering how a model of attention could explain our experience of causing our actions, as required to account for the sense of agency. Instead, perhaps we could think of the attention schema as a prototype model for different elements of subjective report. For instance, a sense of agency could arise from a model of the decision-making process that allows us to say “I caused that to happen” – a decision schema, rather than an attention schema.

Like all good theories, it raises concrete questions. How does it account for unconscious perception? Does it predict when attention should dissociate from awareness? What would a mechanism for the attention schema look like? How is the modeling done? We may not yet have all the answers, but Graziano’s theory is an important contribution to framing the question.

[1] This importance of “representational redescription” of implicitly embedded knowledge was anticipated by Clark & Karmiloff-Smith (1992): “What seems certain is that a genuine cognizer must somehow manage a symbiosis of different modes of representation – the first-order connectionist and the multiple levels of more structured kinds” (p. 515). Importantly, representational redescription is not necessary to complete a particular task, but it is necessary to represent how the task is being completed. As Graziano says: “There is no reason for the brain to have any explicit knowledge about the process or dynamics of attention. Water boils but has no knowledge of how it does it. A car can move but has no knowledge of how it does it. I am suggesting, however, that in addition to doing attention, the brain also constructs a description of attention… and awareness is that description” (p. 25). And: “For a brain to be able to report on something, the relevant item can’t merely be present in the brain but must be encoded as information in the form of neural signals that can ultimately inform the speech circuitry.” (p. 147).

[2] Graziano suggests his theory shouldn’t be considered a metacognitive theory of consciousness because it accounts both for the abstract knowledge that we are aware and the inherent property of being aware. But this view seems to equate metacognition with abstract knowledge. Instead I suggest that a model of another cognitive process, such as the attention schema as a model of attention, is inherently metacognitive. Currently there is little work on metacognition of attention, but such experiments may provide crucial data for testing the theory.

*I am currently Executive Director of the ASSC. The views in this post are my own and should not be interpreted as representing those of the ASSC.

When tackling the brain, don’t forget the mind

The human brain is an incredibly complex object. With billions of cells each with thousands of connections, it is difficult to know where to begin. Neuroscientists can probe the brain with electrodes, see inside it with scanners, and observe what happens to people when bits of it are damaged in accidents and disease. But putting all this information together is rather like reconstructing a puzzle without the picture on the box for guidance.

We could take inspiration from the Human Genome Project. The genome is also extremely complex, with billions of building blocks. Despite these challenges, the genome was successfully unraveled at a cost of around $3.8 billion in 2003. The knowledge generated by the Human Genome Project is estimated to have produced $141 in the economy for every $1 spent on research.

Now the Obama administration plans to do the same for the human brain, on a similarly ambitious scale ($3 billion over ten years). The goal of the “Brain Activity Map” (BAM) is to map the activity of every neuron and connection in the living brain. Because the activity of the brain determines our mental lives, the hope is that a comprehensive roadmap will help us understand how memories are formed, how particular drugs might alleviate psychiatric disorders, and even how the brain generates consciousness. The relevant technologies (multi-electrode recording, optogenetics) are advancing rapidly, and large-scale studies are already providing new insights into how networks of cells interact with each other. A successful Brain Activity Map is well within our grasp.

But what will success look like? Will a map of the human brain be useful in the same way that a map of the human genome is useful? In genetics, success allows us to understand and control physical characteristics. In neuroscience, success should lead to an equivalent understanding of the mind. We would be able to use the map to help reduce aberrant emotions in post-traumatic stress disorder, to lift mood in depression, and to reverse the decline of Alzheimer’s. Yet all these applications rely on a thorough understanding of the mind as well as the brain.

The computer scientist David Marr noted that the mind can only be fully understood by linking three levels: the function of the system, the computations the system carries out, and how these computations are implemented in the brain. Recording brain cells firing away on their own, even thousands of them, will only get us so far. Imagine being able to visualize the electronics of your computer while tapping away at an email. The patterns you see might tell you broadly how things are working, but you could not divine that you had a web browser open, and certainly not that you were writing to an old friend. Instead, to gain a full understanding of the computer, you would need to understand the software itself, as well as how it is implemented in hardware. In an article in the journal Neuron, the scientists behind the BAM proposal remind us that brain function emerges “from complex interactions among constituents”. They seem to agree with Marr. But while we don’t know the full details of the proposal, in its current form the majority of BAM funding will be thrown at understanding only one of his three levels: implementation.

Studying one level without the others is rather like building the Large Hadron Collider without also investing in theoretical physics. Psychologists and cognitive scientists are experts at bridging the gap between the workings of the mind and brain. For example, by carefully designing behavioral tests that can probe mental dysfunction, they are beginning to delve beneath the traditional classifications of mental disorders to understand how particular components of the mind go awry. These individuals need to walk hand in hand with the technologists on the frontline of brain science. The new technologies championed by the BAM scientists will produce a rich harvest of data about the brain, and they are a crucial part of a long-term investment in the brain sciences. But without similar investment in the mind sciences we will be left puzzling over how the pieces fit into our everyday lives. Only by considering the mind when tackling the brain will we get more BAM for our buck.

Reviewing “The Ravenous Brain”

A shorter form of this review might be appearing in The Psychologist at some point, but I thought I’d post the whole thing here so that books on consciousness can fill some stockings this Christmas…

At the beginning of The Ravenous Brain, Daniel Bor reminds us “There is nothing more important to us than our own awareness”. Western society’s focus on brain, rather than cardiac, death as the natural endpoint to a meaningful life is testament to this assertion.

But only 20 years ago, consciousness science was regarded as a fringe endeavour. Now, particularly in the UK, consciousness is going mainstream, spearheaded by the Sackler Centre for Consciousness Science at the University of Sussex, where Bor is based. Of course, in varying degrees, all psychologists study consciousness: attention and working memory are core components of high-level conscious function. But only recently has a deeper question been tackled: how might these functions come together to underpin awareness? Why are humans blessed with a rich, private consciousness that might not be present in other animals? And how should we tackle the all-too-frequent disorders and distortions of consciousness in neuropsychiatric disorders?

With infectious enthusiasm, Bor takes us on a tour of the latest research into how the brain generates consciousness. His scope is broad, ranging from experiments on anaesthesia and subliminal priming, to our sense of self and progress on communicating with patients in a vegetative state. One of the most difficult questions in the field is addressing what consciousness is for. Circular answers often result: if language is usually associated with consciousness, for instance, then maybe consciousness is for producing language. Bor’s answer is that consciousness is for innovation, and dealing with novelty. Again, I am not convinced that this proposal completely slips the bonds of circularity – is innovation possible without awareness? – but it opens up new avenues for future research.

This is an accessible, engaging account from a practitioner who is well aware of the messy reality of science. Bor is that rare combination of working scientist, story-teller and lucid explainer. The Ravenous Brain reads as a dispatch from a foreign country engaged in a revolution – one that is far from over.


Are chimpanzees self-aware?

Awareness is a private affair. For instance, I can’t say for sure whether the other customers in the coffee shop where I’m sitting are conscious in the same way that I am. Perhaps instead they are zombies, doing a good impression of self-aware human beings.

By talking to each other, we can quickly disregard this possibility. When it comes to animals, however, the jury is out. Is a chimpanzee self-aware? How about a cow? An insect?

This is not idle speculation. These questions matter. Our moral intuitions are based on the assumption that the person we are interacting with is consciously aware. And our legal system is imbued with the notion that consciousness matters. If we were to find that another animal species had a consciousness very similar to that of humans, then it may be remiss of us not to extend the same rights and protections to that species.

Recently, a prominent group of neuroscientists signed a declaration stating that several non-human animal species are conscious. They reasoned that many mammals share brain structures – the thalamus, neocortex – that are involved in consciousness in humans, and display similar behavioral repertoires, such as attentiveness, sleep and capacity for decision-making. Therefore it is more likely than not that they have a similar consciousness.

While this seems intuitive, we need to stop and examine their reasoning. It all comes down to the kind of consciousness we are talking about. No one doubts, for example, that animals have periods of both sleep and wakefulness. What is at issue is whether they are aware in the same way that you and I are aware when we are awake.

Imagine you are in the cinema, engrossed in the latest blockbuster. There’s a good chance (especially if the film is any good) that while you are experiencing the film, you are not aware that you are experiencing the film. “Meta”-awareness is absent. Now imagine that you are condemned to spend the rest of your life without meta-awareness, continuously engrossed in the film of your own life. I’d wager it wouldn’t be much of an existence; as Socrates suggested, the unexamined life is not worth living.

Whether or not animals have this capacity for meta-awareness is unclear. Without the ability to report mental states, it is notoriously difficult to assess. But one particularly promising test involves judgments of control, or “agency”. Consider playing an arcade game after the money has run out – at some point, you realize that rather than steering your digital car through the streets of Monte Carlo, your efforts at the wheel are having no effect whatsoever. This realization – that you are no longer in control – is known as a judgment of agency, and may be intimately linked to meta-awareness.

In a recent study conducted in Kyoto, Japan, researchers asked whether chimpanzees could make judgments of agency. The task was to move a computer cursor to bump into another target displayed on the screen. The twist was that another decoy cursor was also present on the screen, whose movements were replayed from a previous trial. Thus the chimpanzee had control of one of the cursors, but not the other, even though visually they were identical. After the trial ended, the animals were trained to indicate the cursor that they had been controlling. All three chimpanzees correctly indicated this “self” cursor around 75% of the time. As the experimenters note, “Because both the self- and distractor cursor movements were produced by the same individual, the movements were presumably indistinguishable to a third person (and to the experimenters), who passively observed the display.” In other words, the only way to do the task is to monitor internal states, which is a pre-requisite for meta-awareness.

Judgments about another species’ consciousness should not be taken lightly. In particular, we should be careful about what kind of consciousness we are talking about. The kind that matters most from a moral and legal perspective is the capacity to be aware of our actions and intentions. Initial evidence suggests that some animals, particularly the great apes, may have this higher-order reflective capacity. This should give us greater pause for thought than the presence of primary or phenomenal consciousness in lower animals.

*This post was cross-posted from Psychology Today

The Compulsive Emailer

More than 100 years ago, the great neuroscientist Santiago Ramón y Cajal identified a clutch of “diseases of the will” that could derail a young scientist’s career. In today’s computerised world, dominated by smartphones and the Internet, this list deserves an update. In particular, a constant connection to the web has given rise to the Compulsive Emailer*.

The compulsive emailer has a particular rhythm and pattern to his day. Upon waking, he reaches for his smartphone, and, bleary-eyed, checks to see whether anything of import occurred during the course of the night. Science being a rather solitary and gradual endeavour, this is unlikely. Instead, a tinge of disappointment greets the usual automated adverts from journals, and the deluge of mail from obscure technical discussion groups.

Arriving at work, the compulsive emailer checks his smartphone in the elevator, and, after making coffee, sits down at his desk to deal promptly with any urgent missives received during the intervening five minutes. For the rest of the day, metronomic checking is never more than a couple of clicks away. The worst cases may even dedicate a separate screen to their inbox, allowing checking to be done with no more than a glance of the eyes.

Sometimes, in moments of reflection, the compulsive emailer will become frustrated with his lot, and yearn for a job in which email is center-stage, such as a political aide, or a journalist. At least then, his affliction would become useful, rather than be wasted on the continual archiving of dubious invitations to attend far-away conferences.

The curse reaches fever pitch, of course, a few weeks after submission of a paper. Convinced that the letter deciding the paper’s fate will arrive at any second, he hits the refresh button with renewed intensity. Fortunately such occasions are relatively rare, as time spent on any real work is dwarfed in comparison to that spent toiling at the inbox.

The compulsive emailer would do well to restrict e-communications to a particular time of day, perhaps the late afternoon, after time has been given for things “important and unread” to accumulate. He will be pleasantly surprised how quickly email can be dealt with, and dismissed for another day, while relishing the expanses of time that will open up for doing real science.

Retiring for the evening, he makes a few final checks of the smartphone, explaining to any company present that he is expecting to receive some new data from a research assistant.  What he is going to do with those files at midnight on a Sunday is anyone’s guess, but the implication is that they are really rather important.

*The author, being a Compulsive Emailer, is well-qualified to describe this condition.