XKCD versus neurobollocks




Obligatory post of the dead salmon fMRI: http://prefrontal.org/files/posters/Bennett-Salmon-2009.pdf


They are assuming that the salmon should not have any detected neurological activity. I would argue that they discovered the location of the salmon’s soul.

Thanks, salmon. Thalmon.


There are many methodological and conceptual flaws in the search for neural correlates, surely more yet to be discovered, and more still in the cognitive sciences broadly.

There’s a distinction to be made between flaws in naturalistic approaches to knowledge and the appropriateness of naturalism per se. As with any other field replete with bleeding edges, people often look at cognitive science and say, “Ah-ha, you’re ignorant of how or even whether human experiences reduce to physical phenomena, therefore [insert religious and/or religiose angle, old and new].”


There are a lot of caveats when interpreting fMRI, but unfortunately XKCD doesn’t actually nail any of them.

fMRI does not detect brain activity; it detects changes in brain activity. An fMRI experiment alternates a task with a control activity to produce a “contrast”. This usually happens several times within the experiment, e.g. 30 seconds on, 30 seconds off, 30 seconds on, 30 seconds off, and so on for ten minutes total. The brain image you see at the end displays the difference between what was measured in the on state and in the off state (more precisely, it’s a map of the statistical significance of the difference).
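As a toy illustration of that on/off logic (a pure simulation, not real fMRI data or any particular analysis package), the contrast for each voxel amounts to comparing the measurements taken during the on-blocks with those taken during the off-blocks:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy block design: 30 s on / 30 s off, repeated 10 times for
# 10 minutes total, sampled once per second (600 time points).
block = np.repeat([1, 0], 30)    # one on/off cycle
design = np.tile(block, 10)      # 10 cycles = 600 time points

# One voxel whose signal rises during the on-blocks, and one
# "quiet" voxel that is nothing but measurement noise.
task_voxel = 0.8 * design + rng.normal(0, 1, size=600)
quiet_voxel = rng.normal(0, 1, size=600)

def contrast_t(ts, design):
    """t-statistic for the on-vs-off difference in one voxel's time series."""
    on, off = ts[design == 1], ts[design == 0]
    return stats.ttest_ind(on, off).statistic

print(contrast_t(task_voxel, design))   # large t: shows up in the map
print(contrast_t(quiet_voxel, design))  # near zero: no "activation"
```

Only the on-vs-off *difference* reaches the final map, which is why anything present in both states contributes nothing to it.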

In the XKCD example, the only way claustrophobia, loud noise, or anything else could contribute to the activation pattern would be if they were only present during the memory task.


There are several ways to analyze fMRI images, depending on how willing you are to accept false positives. The dead salmon experiment used a technique that was known to have a high likelihood of false positives (specifically, they used uncorrected p values). In fact, they even point out that more stringent techniques (p correction with FDR or FWE) were more accurate, with no evidence of salmon “neural activity”.

In short, they weren’t criticizing fMRI itself; they were criticizing the use of uncorrected p thresholds. And the dead salmon results weren’t surprising: it was pretty much common knowledge that uncorrected p values lead to false positives.
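The multiple-comparisons point is easy to demonstrate with simulated data (a sketch only; the voxel count and thresholds here are illustrative, not those of the salmon paper). Testing thousands of pure-noise voxels at an uncorrected threshold reliably produces false “activations”, while a correction such as Benjamini-Hochberg FDR suppresses them:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# 10,000 "voxels" from a dead fish: pure noise, no task signal at all.
n_voxels, n_scans = 10_000, 60
null_data = rng.normal(0.0, 1.0, size=(n_voxels, n_scans))

# One-sample t-test per voxel against zero.
p_values = stats.ttest_1samp(null_data, 0.0, axis=1).pvalue

# Uncorrected threshold: with 10,000 tests at p < 0.001 we expect
# roughly 10 false "activations" even though nothing is there.
n_uncorrected = int((p_values < 0.001).sum())

def fdr_bh(p, q=0.05):
    """Benjamini-Hochberg step-up procedure; returns a discovery mask."""
    m = len(p)
    order = np.argsort(p)
    below = np.sort(p) <= q * np.arange(1, m + 1) / m
    k = int(below.nonzero()[0].max()) + 1 if below.any() else 0
    keep = np.zeros(m, dtype=bool)
    keep[order[:k]] = True
    return keep

n_fdr = int(fdr_bh(p_values).sum())
print(n_uncorrected, n_fdr)  # several uncorrected hits, (almost) none after FDR
```

The dead fish “activates” under the uncorrected threshold and goes dark once the correction is applied, which is exactly the paper’s point.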


The problem is that this leaves unchallenged, and so implicitly validates, the idea that the mind somehow sits in the brain waiting to be discovered with a good enough microscope. Fixing the methodology can’t fix that idea.

Probing a PC’s hardware won’t get you any closer to understanding Microsoft Word, and watching people’s brains won’t answer questions about consciousness. Neurological evidence is relevant to the enquiry, of course, but unless you are Roger Penrose and insist there are magical quantum unicorns hiding in our brains, it’s unlikely that understanding the hardware will crack the case.

Apart from funding issues, I think part of the bias towards neurobollocks is that most people-- including scientists-- don’t want to accept what philosophical enquiry alone can already tell us about consciousness; we’d rather spend our efforts on research that, by asking the wrong questions, guarantees to reassure us of how unknowable we are.


There are lots of problems with “neuro-bollocks” fMRI research, but this isn’t one of them. Functional activity is always a contrast between two scans. You scan a person doing X, and scan a person doing mostly the same thing but without actually doing X (i.e., a control), and you subtract the two images to find areas activated in one condition but not the other. “Loud noise” activation will occur in both conditions, and will therefore not show up in the contrast.
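A minimal numeric sketch of that subtraction (toy 8×8 arrays standing in for scans): any component present in both conditions, such as the scanner’s own noise or a constant loud environment, cancels out of the difference image, leaving only the region active during X.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (8, 8)  # tiny stand-in for a brain slice

scanner_noise = rng.normal(5.0, 0.1, shape)   # present in BOTH conditions
task_signal = np.zeros(shape)
task_signal[3:5, 3:5] = 2.0                   # region active only during X

scan_doing_x = scanner_noise + task_signal + rng.normal(0, 0.05, shape)
scan_control = scanner_noise + rng.normal(0, 0.05, shape)

contrast = scan_doing_x - scan_control        # shared components cancel
active = contrast > 1.0                       # True only in the [3:5, 3:5] block
print(active)
```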


XKCD’s dismissal of research in its infancy (I’ve been to lectures by Donald Hebb on the brain mapping he did with Wilder Penfield) is so premature that he might as well post a cartoon on the simplistic nature of toddlers.

It could be that part of the problem is the fear we have of being ‘understood.’ The idea of a computer-tech “singularity” provokes a similar fear. What if our computers suddenly build their own children, who can easily out-field Willie Mays while trouncing the 100 best grandmasters in chess? They might then point out the primitive meat-bag nature of our flawed evolutionary construction, rather than the intelligent designs they create.

We are not magical, but will we be the first to determine the exact nature of the non-mystery or will it be one of our unrelated children?


That’s disingenuous. If you examine the organization of PC hardware and the time course of activity across its elements in a well-designed experiment, there’s a lot that can be inferred about how MS Word is implemented. Just listening to hard-disk activity provides clues.

fMRI experiments can be compared to looking for thermal heating of different parts of a CPU during particular program operations, and inferring differences between those operations. It is in no way direct evidence of either hardware or program/algorithm structure, but there is information about both in such experiments.


I hear there’s a Kingdom Coming as well - not sure if it’s due before or after complete understanding of human consciousness via fMRI, the Singularity, the Socialist Workers’ Paradise, etc., etc.


This topic was automatically closed after 5 days. New replies are no longer allowed.