I read the study. The U.S. section of the study had 199 participants from Georgetown - around 20 years old, two-thirds women, slightly more than half identifying as Christian, about a quarter reporting no religious affiliation. So most likely, mostly Georgetown University students. The Afghanistan section of the study had 149 participants from Kabul - around 27 years old, slightly more men than women, with no data collected on religious affiliation. So the sample size was very underpowered for a topic this complex. Like most studies run by psychologists.
When you read the assumptions scattered through the paper - assumptions resting on hypotheses from other severely underpowered studies - the authors' interpretation of what they are seeing starts to look like a house of cards. Like most studies run by psychologists.
It was a low-budget study of very few people, in two fairly homogeneous groups, who were given some simple tests that purport to quantify some very complex mental processes. The statistical correlations they describe with words like “strong” are in fact subtle, and as evidence this looks like a preliminary study to see whether there’s anything there that warrants further work. And sure, they should replicate this study. With larger cohorts. In completely different national groups. With cohorts that have much greater age spreads and sub-cultural spreads, including cultural groups with non-Abrahamic religious majorities. And with more comprehensive testing of the qualities they seek to compare, instead of the simple tests they used. That would determine whether this subtle effect was noise in the groups studied or a real correlation.
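To put a rough number on “underpowered”: here is a minimal sketch of the standard power calculation, assuming the conventional alpha = 0.05 (two-sided), 80% power, and the Fisher z approximation - the cohort sizes of 199 and 149 are the only figures taken from the study.

```python
# A rough sketch of the power argument above. Assumptions: alpha = 0.05
# (two-sided), 80% power, and the standard Fisher z-transform approximation.
# The only numbers taken from the study are the two cohort sizes.
import math
from scipy.stats import norm

def min_detectable_r(n, alpha=0.05, power=0.80):
    """Smallest Pearson correlation reliably detectable with n participants."""
    z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # ~0.84 for 80% power
    z_r = (z_alpha + z_beta) / math.sqrt(n - 3)  # required Fisher z of r
    return math.tanh(z_r)               # back-transform to a correlation

for n in (199, 149):
    print(f"n = {n}: smallest detectable r ≈ {min_detectable_r(n):.2f}")
# n = 199: smallest detectable r ≈ 0.20
# n = 149: smallest detectable r ≈ 0.23
```

On those assumptions, anything much weaker than r ≈ 0.2 is essentially invisible at these sample sizes - which is why the replications would need to be much larger to tell a subtle real effect from noise.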
And then they could design a study to see if there was any causation - which this study definitely did not do; it just assumes causation for the effect they think they found - like most studies run by psychologists.
Anything based on biology is extremely complex - and psychology rests on biological mechanisms that are completely unknown. Any neurologist will tell you that humans do not yet understand how brains work. We’re not even close. So psychological studies are being done in an environment of profound ignorance, and really good studies cost far more than most researchers can get funding for. So they do little studies that are stabs in the dark. Nothing wrong with that; science has to start somewhere. But they also get little, tentative results from their little studies - which some of them then claim prove grand, sweeping conclusions. Which is one of the main things that has brought psychology into disrepute.
It looks like they didn’t actually measure a general “belief in god” and correlate it with a pattern-matching test; rather, they measured belief in an interventionist god using three instruments, including a “Belief in Divine Intervention Scale”. So the more prone people were to subconscious pattern matching (as opposed to conscious rational thought), the (slightly) more they believed a god was intervening. My quick scan of the study couldn’t find any reference to how many participants, if any, reported zero belief in god. There is a link to the study data, but not in a format that I’m able to parse.
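And to put a number on “(slightly) more”: a purely illustrative calculation - the r = 0.25 below is a hypothetical value, not a figure from the paper - shows that a correlation most readers would call slight is comfortably “statistically significant” with 199 participants while explaining only about 6% of the variance.

```python
# Illustrative only: r = 0.25 is a made-up "slight" correlation, not a number
# from the paper; n = 199 is the size of the U.S. cohort mentioned above.
import math
from scipy.stats import t as t_dist

def p_value_for_r(r, n):
    """Two-sided p-value for a Pearson correlation r in a sample of size n."""
    t_stat = r * math.sqrt(n - 2) / math.sqrt(1 - r**2)
    return 2 * t_dist.sf(abs(t_stat), df=n - 2)

r, n = 0.25, 199
print(f"r = {r}: p ≈ {p_value_for_r(r, n):.4f}, shared variance ≈ {r**2:.0%}")
# prints something like: r = 0.25: p ≈ 0.0003, shared variance ≈ 6%
```

A small p-value says nothing about the size of the effect, which is exactly the gap between “significant” and “strong”.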
The study mentions and then overlooks that “scientific belief was inversely associated with [belief in an interventionist god]”.
So it’s definitely not that “smarter” people believe in god.
I dislike the seriously flawed psychological research industry - its studies tend to be underpowered and not reproducible. As anyone who has been paying attention to the replication problem in recent years knows, even many of the field’s foundational studies, it turns out, can’t be reproduced. But the psychology field is filled with gullible folks who are okay with that - and who blithely treat tentative conclusions from underpowered studies as proven facts. Science is a distinct process - and psychologists rarely do science.
Yes, there are some good studies run by psychologists. They tend to have larger, more diverse cohorts, control groups, a very narrow focus, and a reliance on observable results as evidence. And some neurologists are doing very good work, but it’s severely limited by the field’s current lack of tools and computing power. fMRI studies, for example, touted by some as the “gold standard”, rely on the very low-resolution blunt instrument of fMRI and then often make ludicrous claims that are not at all supported by their data.
When a psychologist or neurologist who approaches their work scientifically talks about their results, they do not claim more than they actually found. And they do not pretend that their assumptions were anything more than assumptions, or that the state of the profession is more advanced than it is. What they are doing is necessary groundwork, which is perfectly honorable. There is no need to pretend it is more than that.
Then the real damage is done once the university does a press release about it, and/or some media outlet catches wind of it and concocts a ridiculous headline around it. The latter is often so wrong that it draws the opposite conclusion of the actual paper, but millions of people stop reading at the headline and it becomes “science” to them. Bad science reporting and aggressive university PR departments carry as much blame as overzealous psych researchers, in my humble opinion.
Do polytheists, monotheists, atheists, and dentists have different pattern-recognition abilities? How about neurotics, psychotics, sociopaths, and naturopaths? Celibates, libertines, fetishists, bestialists? Monoglots vs polyglots? We need more studies. Can we get grants?
The U.S. section of the study enrolled a predominantly Christian group of 199 participants from Washington, DC. The Afghanistan section of the study enrolled a group of 149 Muslim participants in Kabul.