A few days ago Eric Jonas and Konrad Kording (J&K) posted a thought-provoking paper on bioRxiv entitled “Could a neuroscientist understand a microprocessor?” It’s been circling round my head for most of the weekend, and prompted some soul searching about what we’re trying to achieve in cognitive neuroscience.
The paper reports on a mischievous set of experiments in which J&K took a simulation of the MOS 6502 microchip (which ran the Apple I computer, and which has been the subject of some fascinating digital archaeology by the http://www.visual6502.org/ team), and then analysed the link between its function and behaviour much as we might do for the brain in cognitive and systems neuroscience. The chip’s “behaviour” was its ability to boot and run three different sets of instructions for different games: Donkey Kong, Space Invaders and Pitfall (as a side note, exploring this sent me down the rabbit hole of internet emulations, including this one of Prince of Persia, which occupied many hours of my childhood). While their findings will not necessarily be surprising for a chip designer, they are humbling for a neuroscientist.
By treating the chip like a mini-brain, albeit one in which the ground truth was fully known, J&K could apply some canonical analysis techniques and see what they revealed. The bottom line is that most of these analyses were either downright misleading or produced trivial results.
Here is one example: breaking individual transistors in the microprocessor (equivalent to “lesioning” parts of its brain) led to different patterns of boot failure in different games (see their Figure 4 copied above). We might talk about one such lesion as having “caused a deficit in playing Donkey Kong” in a typical cognitive neuroscience paper. J&K show that these parts of the system were not responsible for fundamental aspects of the game but instead implemented simple functions that, when omitted, led to catastrophic failure of a particular set of instructions. This is similar to the disclaimer about lesion studies I learned as a psychology undergraduate – just because removing part of a radio causes it to whistle doesn’t mean that its function was to stop the radio whistling. But I am just as susceptible to this kind of inference as the next person. For instance, last year we published a paper showing that individuals with lesions to the anterior prefrontal cortex had selective deficits in metacognition. We interpreted this as providing evidence of a “causal contribution of anterior prefrontal cortex to perceptual metacognition”. While this conclusion seems reasonable, it’s also important to remember that it tells us little about the region’s normal function, and that such a pattern of results could be due to the failure of an as-yet-unknown set of functions that manifest as a behavioural deficit, similar to the microprocessor’s failure to boot a game.
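The logic of this fallacy is easy to reproduce in a few lines of code. Here is a minimal sketch – entirely my own toy, with made-up gate and game names, not taken from the J&K paper – in which “lesioning” a generic AND gate selectively breaks one “game”:

```python
# A toy illustration of the lesion fallacy: a tiny "chip" of boolean
# gates runs two hypothetical games. Lesioning one gate breaks game_B
# but not game_A -- yet that gate is a generic AND gate, not a
# "game_B module". (All names here are illustrative.)

def run_chip(program, lesioned=frozenset()):
    """Evaluate a tiny gate network; a lesioned gate outputs 0."""
    def gate(name, fn, *inputs):
        return 0 if name in lesioned else fn(*inputs)

    a, b = program["inputs"]
    g1 = gate("g1", lambda x, y: x & y, a, b)  # AND gate
    g2 = gate("g2", lambda x, y: x | y, a, b)  # OR gate
    # game_A happens to depend only on g2; game_B needs both gates
    return g2 if program["name"] == "game_A" else g1 & g2

game_A = {"name": "game_A", "inputs": (1, 1)}
game_B = {"name": "game_B", "inputs": (1, 1)}

print(run_chip(game_A), run_chip(game_B))                  # intact: 1 1
print(run_chip(game_A, {"g1"}), run_chip(game_B, {"g1"}))  # lesion g1: 1 0
```

The lesion-deficit mapping here is perfectly real and reproducible – but concluding that g1 is “the game_B gate” would mistake a shared low-level function for a game-specific one, which is exactly the trap described above.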
J&K acknowledge that the brain is probably not working like a microprocessor – it’s organic, plastic, and so on – but if anything this may mean that we are more likely to fall into traps of false functional inference than if it were like a microprocessor. And while lesions are a relatively crude technique, things don’t get any better if you have access to the “firing” of every part of the circuit (as is the goal of the recent big data initiatives in neuroscience) – applying typical dimensionality reduction techniques to these data also failed to reveal how the circuit is running the game. In other words, as I’ve pointed out elsewhere on this blog, expecting big data approaches to succeed just because they have lots of data is like expecting to understand how Microsoft Word works by taking the back off your laptop and staring at the wiring.
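It is worth being concrete about why recording everything doesn’t rescue us. Here is a minimal sketch – again my own toy example, not J&K’s analysis – in which principal component analysis on simulated “transistor” traces dutifully recovers the dominant direction of covariance, which turns out to reflect a shared driver (the clock) rather than the computation itself:

```python
# PCA on simulated "transistor" recordings. Most traces are dominated
# by a shared clock; a few carry a slow "computational" signal. The
# first principal component tracks the clock, not the computation --
# high variance explained, little insight into function.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1000)
clock = (t % 2).astype(float)            # fast clock toggling 0/1
signal = np.sin(2 * np.pi * t / 200)     # slow "computational" signal

# 50 transistors: 45 dominated by the clock, 5 by the signal
traces = np.array(
    [clock + 0.1 * rng.standard_normal(t.size) for _ in range(45)]
    + [signal + 0.1 * rng.standard_normal(t.size) for _ in range(5)]
)

# PCA via SVD on mean-centred data
X = traces - traces.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
var_explained = S**2 / np.sum(S**2)
pc1 = Vt[0]

print(f"PC1 variance explained: {var_explained[0]:.2f}")
print(f"|corr(PC1, clock)|:  {abs(np.corrcoef(pc1, clock)[0, 1]):.2f}")
print(f"|corr(PC1, signal)|: {abs(np.corrcoef(pc1, signal)[0, 1]):.2f}")
```

The analysis is statistically impeccable – it just answers a question about variance, not about what the circuit is doing, which is the gap between data and understanding that J&K are pointing at.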
These false inferences are critically different from false positives. If J&K ran their experiments again, they would get the same results. The science is robust. But the interpretation of the results – what we infer about the system from a particular pattern of behavioural change – is wrong. Often these inferences are not the core of a typical experimental paper – they come afterwards in the discussion, or are used to motivate the study in the introduction. But they form part of the scientific culture that generates hypotheses and builds implicit models of how we think things work. As J&K write, “analysis of this simple system implies that we should be far more humble at interpreting results from neural data analysis”.
This kind of soul-searching is exactly what we need to ensure neuroscience evolves in the right direction. There are also reasons to remain optimistic. First, understanding is only part of doing good science. Deriving robust predictions (e.g. “when I lesion this transistor, Donkey Kong will not run”) is an important goal in and of itself, and one that has real consequences for quality of life. As an example, my colleagues at UCL have shown that by applying machine learning techniques to a large corpus of MRI scans taken after stroke, it’s possible to predict with a high degree of accuracy who will recover the ability to speak. Second, understanding exists on different levels – we might be able to understand a particular computation or psychological function to a sufficient degree to effectively “fix” it when it goes wrong without understanding its implementation (equivalent to debugging the instructions that run Donkey Kong, without knowledge of the underlying microprocessor). For instance, there is robust evidence for the efficacy of psychological therapy in alleviating depression (which in turn was informed by psychological-level models), and yet how such therapy alters brain function remains unknown. But as neuroscience matures, it’s inevitable that alongside attempts to predict and intervene, we will also seek a transparent understanding of why things work the way that they do. J&K provide an important reminder that we should remain humble when making such inferences – and that even the best data may lead us astray.