looking at the paper, it seems they tried three things.
the first was treating the brain like an expensive dac – having the person listen to a podcast, and then seeing if what the algorithm “heard” was what the podcast said
the second was having the person tell a story in their head, then repeat it aloud so the researchers could compare the decoded output against the spoken version. the third was to have the person watch a movie and let the algorithm try to interpret what they saw.
the first seems plausible to this non-scientist, the second and especially the third much less so.
they released most of their data, and the algorithm's on github. but their results (of the imagined and video tests) are stated like this:
We compared the decoded word sequences to language descriptions of the films for the visually impaired (Methods) and found that they were significantly more similar than expected by chance (q(FDR) < 0.05, one-sided non-parametric test; Extended Data Fig. 5a). Qualitatively, the decoded sequences accurately described events from the films
and they provide a few still frames with text
what they didn’t seem to have done* was take ambient mri scans or dead noise and run them through the machine, or do any kind of double blind with the imagined-story or movie-clip analyses. (they do seem to have done that for the first test of reconstructing heard sound – but that’s the least surprising one anyway.) there’s a rough sketch of that kind of null-input control after this post.
i think they’re likely overstating their results, and running afoul of the same sort of bias researchers had when they said koko the gorilla could communicate complex ideas by signing. (eta: i.e., seeing what they want to see.)
(* granted i didn’t do a super deep read, so i may have missed it if it’s there)
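for anyone who wants to try that control: here’s a minimal sketch of the idea in python. the decode() stub, the vocabulary, the array shapes, and the simulated “real scan” are all my own placeholders – i haven’t looked at how the released code is actually invoked, so treat this as pseudocode for the idea, not their pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trs, n_voxels = 400, 10000   # assumed run length (TRs) and voxel count

# Placeholder decoder: returns arbitrary words. Swap in the released
# decoding pipeline here; this stub exists only so the sketch runs.
VOCAB = ["the", "a", "man", "woman", "said", "went", "house", "road"]
def decode(bold):
    return rng.choice(VOCAB, size=100)

# Stand-in for one subject's real BOLD run (in practice, load it from the
# released data instead of simulating it).
real_scan = rng.standard_normal((n_trs, n_voxels))

# Three flavors of "dead noise" to feed the decoder:
null_inputs = {
    # pure Gaussian noise, no structure at all
    "gaussian_noise": rng.standard_normal((n_trs, n_voxels)),
    # roughly keeps each voxel's power spectrum but scrambles its timing
    "phase_scrambled": np.real(np.fft.ifft(
        np.abs(np.fft.fft(real_scan, axis=0))
        * np.exp(1j * rng.uniform(0, 2 * np.pi, real_scan.shape)),
        axis=0)),
    # real data with the time axis shuffled
    "shuffled_trs": rng.permutation(real_scan, axis=0),
}

for name, bold in null_inputs.items():
    words = decode(bold)
    print(f"{name}: {' '.join(words[:20])} ...")
    # Score these transcripts with the paper's own similarity metric; if
    # they land anywhere near the real decodes, "better than chance"
    # isn't saying much.
```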
So far, the magnetic field required for MRI would be hard to hide. The swarm of random metallic objects turned into deadly projectiles would be noticeable.
Neuroscientist w/ fMRI & advanced data analysis experience here:
The results of this journal article are being [massively] oversold & misinterpreted… both by the authors and by the media.
1.) it’s just not very good science, from an experimental-design perspective. The only attempts at a control are rubbish, and I’m shocked it made it through review without further scrutiny.
2.) the results are presented in a misleading way that obscures the actual effect size. The actual results appear only in figures, as statistics relative to their [esoteric] baseline, which was established using a rather weak ‘straw man’. (A toy illustration of this significance-vs-effect-size issue follows this post.)
3.) honestly just read the supplementary materials, and make your own judgement. Skip to the final pages and observe green passages that are deemed, by their chosen statistic, to be more accurate than chance.
Those passages are vague, jumbled, and uninterpretable, and bear no relationship to the claims advertised in the main paper and press coverage.
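To make the effect-size point in 2.) concrete, here is a toy permutation test on completely made-up numbers (nothing from the paper): a decoder whose similarity scores barely edge out the null distribution can still clear a “significantly better than chance” threshold, which tells you nothing about whether the decodes are any good.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up similarity scores (arbitrary 0-1 metric) standing in for
# decoded-segment-vs-reference comparisons. NOT the paper's data.
n_segments = 200
observed = rng.normal(loc=0.22, scale=0.05, size=n_segments)

# Made-up null scores standing in for decodes scored against
# mismatched / shuffled references.
null = rng.normal(loc=0.20, scale=0.05, size=(10_000, n_segments))

obs_mean = observed.mean()
null_means = null.mean(axis=1)

# One-sided permutation p-value for the mean similarity.
p = (1 + np.sum(null_means >= obs_mean)) / (1 + len(null_means))

print(f"observed mean similarity: {obs_mean:.3f}")
print(f"null mean similarity:     {null_means.mean():.3f}")
print(f"one-sided p-value:        {p:.4f}")
# With enough segments, a bump of ~0.02 on an arbitrary similarity scale
# comes out highly "significant" -- which says nothing about whether the
# decoded text is actually a usable description of the stimulus.
```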
Whew, thank you for patiently taking the time to summarize my frustrations.
(What's your hourly rate? …I have a serious backlog for you.)
Oh ya, and the part about alternative explanatory variables! Seriously, just export the estimated subject-motion parameters from the original MRI scans ([x, y, z, theta, etc.]; these are standard outputs from any fMRI preprocessing pipeline), then rejigger the analysis code to replace their GLM design matrix with the subject motion artifact estimates – roughly as in the sketch below.
I would put money on that being rather ‘informative’
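Something like this, for the simplest version of that check. File names and array shapes are placeholders, and this is plain OLS rather than the paper's actual modelling code; the six realignment parameters themselves are standard output from tools like FSL's mcflirt or SPM realign.

```python
import numpy as np

# Hypothetical file names; the six realignment parameters (3 translations,
# 3 rotations) come from standard fMRI preprocessing output.
motion = np.loadtxt("motion_params.txt")      # shape: (n_TRs, 6)
bold = np.load("bold.npy")                    # shape: (n_TRs, n_voxels)

# Design matrix built from motion alone: intercept + 6 parameters +
# their temporal derivatives, in place of the semantic features.
X = np.column_stack([
    np.ones(len(motion)),
    motion,
    np.vstack([np.zeros((1, 6)), np.diff(motion, axis=0)]),
])

# Ordinary least squares fit for every voxel at once.
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
pred = X @ beta

# Variance explained by head motion alone, per voxel.
ss_res = np.sum((bold - pred) ** 2, axis=0)
ss_tot = np.sum((bold - bold.mean(axis=0)) ** 2, axis=0)
r2 = 1 - ss_res / ss_tot

print(f"median R^2 from motion regressors alone: {np.median(r2):.3f}")
# If this lands in the same ballpark as the semantic model's fit, a chunk
# of the "decoding" may just be riding on motion-correlated artifact.
```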
Yeah, in terms of possible surreptitious brain scans, I’m thinking far, far in the future, when it’s conceivable that some other instrument could get similar readings without the need for such a magnetic field – but even then it seems unlikely.
Way back in 2011, researchers published a study in which they created rough videos of what their subjects were seeing, after training the system by having them watch a bunch of movie trailers while in an fMRI machine:
The next year a team tried to “decode dreams” by scanning sleeping subjects in an fMRI machine. Maybe there’s something more recent than this, but the results weren’t too impressive at the time:
Yah, this study has “garbage” written all over it, but I don’t have the domain knowledge here to say exactly why. The red flags are all there, though, and it’s the type of study the press is going to have a field day with. BB is sadly a culprit of this in general, with hyperbolic headlines like, well, this one, which have little or nothing to do with the results in the paper. Press outlets (again, sadly, BB does this constantly) also make no effort to determine whether a paper is p-hacked or misreported trash before breathlessly extrapolating from it.
This actively undermines people’s understanding of science when every single story is misreported.