Academic publishing is a mess and it makes culture wars dumber

It’s almost as if it’s a good idea to read what you’re putting your name, and thus your reputation, behind.

And the humanities by no means have the market cornered.



Steven Pinker has been pointing out these things as recently as yesterday.


That is literally the sort of knee-jerk chum tweet I’m criticizing.


Well, academic publishing is a mess for many reasons, mostly to do with predatory publishers.

But I think what you mean here is that peer review is a mess. The thing is, peer review will always be a mess, because research itself is messy. There’s a pervasive myth outside academia that peer review is supposed to be some gold standard that only lets truth through. But it never has been, and never can be. Peer review is much more about making sure papers are clear and detailed enough that others can understand and replicate the work. The whole point is to get new work out into the research community so that others can poke and prod it.

When we train PhD students, we teach them to sort through the peer-reviewed literature in their area and learn how to pick out the handful of papers that are actually worthwhile: those which further the field. This sorting of wheat from chaff cannot be done at peer-review time, because it takes many studies (and hence many papers) to develop and evaluate theories. Researchers have to take a holistic view of all the work published in their field to make sense of it.

Contrast this with the popular view in the media that each new paper presents some new truth. Playing this system for shock value is therefore a ridiculously easy game. But it doesn’t tell us anywhere near as much about the peer-review system as those outside academia seem to think it does.


Well put. However, I’d add that peer review is a mess for another reason: there are more than a few people who, when they are meant to peer review a paper, merely scan it for their own names and then respond accordingly. If they even do that much. Peer review doesn’t count for much in terms of service or academic activity (two of the three areas profs are generally evaluated on. I don’t remember the third. I’m sure it doesn’t matter much; must not be too important [it’s teaching]). So profs know how to butter their bread: they spend the majority of their time on their own research, as little time as possible on service, and as little on teaching as they can get away with. As a general rule. Things are different at teaching institutions, but not necessarily better.


This may vary a lot from field to field. I do a lot of editing in my own field, and the vast majority of reviews that cross my desk are careful, detailed, and thoughtful, and lead to much better/clearer published papers. Some are merely superficial in the way you describe, but definitely fewer than 1 in 5; maybe fewer than 1 in 10.


All parties involved should be ashamed. It’s not a conspiracy; it’s the system working as intended (albeit a little slowly).


I’d like to see some of these stories you all have at the Ctrl-V.

Well written, Mr. Beschizza.

Having had extensive exposure to publishing scientists, I’ve never understood the Internet commentariat’s tendency to idolize peer review. Something is true or not true, or true under certain circumstances or from certain viewpoints, regardless of whether it has passed publication filters.


This is a bad example. However meritorious or un- the original paper was, the way it was pulled was a violation of publication ethics. In both cases the paper was peer-reviewed and accepted – and in the NYJM case the paper was published. If the editors in the latter case changed their minds they should have retracted it, not disappeared it.

FWIW, Ted Hill is quite a good and well-respected mathematician.


It seems to me that most of the blame lies with the editors for handling it badly. As you say, they should have owned their decision and printed a retraction. The academics (including at least one respected mathematician) who criticized the paper were exercising their own academic freedom, and the authors theirs. The reviewers would bear some of the blame, but, as @sme points out, research institutions don’t place much value on peer review, with a predictable and pervasive effect on quality control.

Even good and respected mathematicians are capable of faulty research. One retraction wouldn’t really be a prevailing reflection on him or his abilities.



I believe this ‘Cuck Philosopher’ would agree with you. I happened across this recently and was surprised YouTube could do philosophy. I also recognised all I knew of Sokal was the media spectacle. 10 mins.
(The “cuck” is undoubtedly ironic)


The math in the paper is not faulty (though not especially deep/groundbreaking), and is not what triggered the reactions. The Intelligencer (where he first meant to publish) is not really an academic journal at all, but a casual/recreational magazine for mathematicians. The NYJM is not normally where I’d look for a paper on probability or statistics or something recreational. The whole story – not just the shoddy process in the retraction – is kind of weird.

One retraction wouldn’t really be a prevailing reflection on him or his abilities.

He’s an emeritus, so beyond any kind of institutional reward or punishment for level of publication.


Good to know! I’m in a field that is high in BS, so perhaps that contributes. (I don’t tend to do peer reviews. I’m not required to do much more service than I actually do, and I can’t take a lot of poststructuralism so I try to avoid it as much as possible.)

Non-predatory publishers can be quite immoral too. I’ve had a paper rejected from one of the high impact factor journals (published by one of the most respected publishers), and it was explicitly stated that it was rejected for not citing papers from the right journals. It was their way of artificially pumping up the impact factor.


Contrary to what the author said, the magazine that published the Sokal hoax was not run by sociologists, and its editors were not employing faux science. Indeed, they were not sociologists at all. They were postmodernists who denied the existence of objective reality. That is, they rejected the epistemological foundations of empiricism on which science is based. While I think postmodernism is gibberish, I would hope that someone attempting to make a point about the flaws in their method would at least understand what it was.


My field was the behavioural neuroscience side of psychopharmacology. Rats, brains, drugs, medicine.

My experience of peer review heavily featured:

  • Obvious self-promotion (“why haven’t you cited my paper that isn’t even published yet?”)

  • Critique of things not actually in the paper (“why did you do X?” when there is a paragraph explaining why we did not do X)

  • Demands for things already done and in the paper (“why didn’t you do Y?” when a chunk of the results section is devoted to the findings of doing Y)

  • A preference for discredited but traditional statistical methods over more reliable modern ones (“we’ve always done it this way”; the stats used throughout much of bioscience are garbage)

It was rather obvious that many of the reviewers barely glanced at the paper, and that even those who did offered critique that was seriously flawed.

Academic publishing is broken.