Do Your Part! Illegally Download Scientific Papers

If you’re a journal in a position to use newsworthiness as a selection criterion without dropping the quality of the science, then why not? A scientific advance is not worse just because it is interesting to nonspecialists. If journals like Science and Nature really do have a bias towards papers that will attract journalists, and if this is a problem in your field, then the solution is for the leaders in your field to stop publishing in these journals. Mathematicians rarely publish in these two journals, and as a result nobody looks for publications in them as a criterion for tenure or a grant.

This is not consistent with my experience; if you’ve been editor-in-chief of a journal and didn’t find it a substantial amount of work, then you must be better at it (or in a saner field) than I am.

For the drug that was my major research topic, the entire corpus at the start of the investigation consisted of one decades-old brief synthesis note.

The advantage of working in a field which is narrowly focused is that the problem-specific literature will be sparse and easily mastered. The disadvantage is that it will be less interesting to nonspecialists, which makes it harder to publish in non-specialist journals, and harder to get grants and jobs. It feels unfair, but it’s not. It just is.

1 Like

About a year and a half ago, Elsevier (seemingly out of the blue) dropped the published prices on all its journals by about a third. This meant that we had to find new brands of cars to compare to the price of Brain Research, whose price had dropped from $24,000 to under $20,000. Of course, it wasn’t enough of a drop to tamp down our general rage.

5 Likes

Impact factor is not a bad measure of journals and whether they are worth subscribing to. It is a TERRIBLE way of trying to measure the worth of individual articles, even if we regard citations as a good measure of the quality of individual articles. The problem is the heavily skewed distribution of article citations: a very small number of seminal articles get most of the citations, so the impact factor of a journal is mostly determined by the publication of a few superstar articles. Impact factors are thus little influenced by the median number of citations, and they say nothing much at all about the minimum quality of article published in a journal title. It’s the Bill Gates in a room full of homeless guys problem: if Gates or Lloyd Blankfein shows up in a room, the average income is huge, but we don’t really know what the minimum income is to get into the room. But administrators really WANT a way to measure the quality of research without waiting a few years to see how well it is being cited, so they put impact factors to an inappropriate use.
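
A minimal simulation sketch of that skew (assuming a log-normal citation distribution and invented parameters; this is an illustration, not data from any real journal):

```python
import random

random.seed(42)

# Hypothetical journal: 200 articles, with citation counts drawn from a
# heavy-tailed (log-normal) distribution. Both choices are assumptions.
citations = sorted(int(random.lognormvariate(1.0, 1.5)) for _ in range(200))

mean = sum(citations) / len(citations)           # what impact factor tracks
median = citations[len(citations) // 2]          # the "typical" article
top_share = sum(citations[-10:]) / max(sum(citations), 1)  # top 5% of articles

print(f"mean (what IF reflects): {mean:.1f}")
print(f"median article:          {median}")
print(f"least-cited article:     {citations[0]}")
print(f"citation share of the top 5% of articles: {top_share:.0%}")
```

With these parameters the mean comes out several times the median, and the minimum tells you nothing: exactly the Gates-in-the-room effect.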

1 Like

Because it typically takes years for citations of articles to accrue. What administrators want to know is “What did you do last year and how significant is it?” So they fall back on “You got published in journal x with an impact factor of y”, and use y to judge the quality of the research because it is available right away.

1 Like

Many grants don’t include specifically earmarked money for publication. My current funding does not. A close colleague of mine recently got a large grant, but the funding agency asked them to cut 7% from it (which is not a lot to cut, in the scheme of things), and the choice came down to not having summer personnel to do the work vs. open access fees.

I think it’s really important to publish work and have it not get stuck behind a paywall. I think you’re ultimately right, but regulation on that is really hard, since funding tends to cover the work, not the dissemination of it. I’m really not sure how to attack that - OA fees are often really exorbitant. A recent paper I published was free to me as a society member, but the OA fee was a little shy of two thousand dollars (though I’m allowed to host a free copy on my website). Maybe each researcher on taxpayer-funded research could get one OA waiver a year? Or designate several outputs at the start of the grant to be open access? Or disallow publishers from charging OA fees if the project was funded by the government?

2 Likes

“Why not” is because it introduces a systematic bias into the publication of results that substantially distorts the perception of the underlying facts. Both historically and presently, publication biases have had real, measurable and destructive effects as an impediment to scientific knowledge. False results are promulgated, important findings are buried. Ineffective and dangerous medications stay on the market, life-saving discoveries are ignored.

The problem extends far beyond just Science and Nature. These issues affect journals across the scale of generalist to specialist (The Lancet, New England Journal of Medicine, Neuroscience, Addiction Biology, Journal of Analytical Toxicology…). Large or small, the journals are all going to publish the most “exciting” of what is submitted (given a basic standard of quality, although even that is doubtful once you get into the pay-to-publish “Frontiers in…” rubbish). The bias also creates a chilling effect; nobody bothers doing replication work or submitting negative results because they know that it isn’t going to get published.

If you’ve time for a book, I’d strongly recommend that you have a look at the work of Ben Goldacre (Bad Pharma in particular). The flaws in the current system of scientific publication are literally killing people.

Again: I am not saying that there is no work or expense involved in journal production. What I am saying is that the expense of managing scientific publications within academia on a collaborative/collective non-profit basis would be vastly less than the costs of paying for exorbitant commercial journal fees, and that the total workload involved in publishing this research would not be substantially increased by eliminating Elsevier et al and correcting some of the existing publication biases.

[quote=“d_r, post:41, topic:84185, full:true”]The advantage of working in a field which is narrowly focused is that the problem-specific literature will be sparse and easily mastered. The disadvantage is that it will be less interesting to nonspecialists, which makes it harder to publish in non-specialist journals, and harder to get grants and jobs. It feels unfair, but it’s not. It just is.
[/quote]

It has nothing to do with “fair” or not. What it has to do with are systematic biases in the literature that impede the advancement of scientific knowledge, resulting in direct and lethal real world consequences.

The type of universal scientific database that I discuss upthread would be a tool of immense value to the advancement of human knowledge worldwide. The challenges of the current world will need human knowledge stretched to its utmost if we’re to cope with them. We can’t afford to leave a tool like this on the shelf just for the sake of maintaining some corporations’ unearned profits.

2 Likes

What a journal’s IF does reflect to some extent, even for an individual article or author, is the difficulty of publishing in that journal. So even if your paper is eventually doomed to very few cites, getting it into the highly ranked journal provides evidence that it was attractive to a very picky editorial board. Moreover, it maximizes the chance of active researchers seeing it, and elevates the school’s visibility. All of these are reasons why, from an administrator’s POV, journal IF is a reasonable thing to use when looking at a faculty member.

Some problems specifically with the IF are that it can be gamed and that it gives a private entity a stranglehold over the measurement of a journal’s prestige.

If you are referring to the dearth of replication studies - this is the only relevant thing I can think of that I recall Goldacre discussing over at Badscience - that is a problem, but it is not a problem of journal ownership; it is a problem of who funds some areas of research, and how.

Yes, of course, all else equal they will publish the research that is newest and most interesting. They publish the research that other scientists are interested in seeing. There is nothing keeping anyone from publishing a Journal of Results You Don’t Care About, except that nobody will want to subscribe to it.

Of course the indexing already exists: SCI and Web of Science for science, Zentralblatt and MathSciNet in math, etc. Inter-article connectivity infrastructure also exists, thanks to organizations like Crossref. What is new in your proposal is that the index links should go to live articles, and even that exists if you’re behind the firewall of a major research university. (On my campus, for journals we can’t afford it can take as long as 2 days to get such an article, through interlibrary loan. This requires around 5 mouse clicks. That is much better than what I had when I started out, where I had to fill in a request, hand-carry it over to the appropriate office at the library, and wait sometimes weeks for a bad copy of the possibly-correct article to show up.)
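
As a sketch of what that Crossref plumbing already makes possible (the REST endpoint is Crossref’s public API; the DOI below, the 1935 EPR paper in Physical Review, is just an illustrative example of a Crossref-registered article):

```python
import json
import urllib.request

def crossref_metadata(doi: str) -> dict:
    """Fetch public metadata for a DOI from the Crossref REST API."""
    url = f"https://api.crossref.org/works/{doi}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)["message"]

# Illustrative DOI; any Crossref-registered DOI works here.
meta = crossref_metadata("10.1103/PhysRev.47.777")
print(meta.get("title"), meta.get("container-title"))
print("times cited (per Crossref):", meta.get("is-referenced-by-count"))
```

The metadata and citation links are free; it is only the final hop to the article text that sits behind the paywall.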

You’re not going to nationalize and democratize the big journals: their owners don’t want it, but more importantly most scientists don’t want it. Having a de facto ranking of journals is useful, even if it sometimes feels oppressive. What is possible is for governments to stop creating extra artificial incentives for publishing in such journals, and to reduce the artificial leverage the private indexing companies have over all of science. Require research done under government grants to be published in open-access form (as NIH is already doing), but don’t allow ridiculous page charges to be charged to grants. Stop paying bonuses to departments based on the number of SCI-indexed journals they publish in (some governments do this). Don’t rank departments exclusively on private indexes like SCI and Scopus; include DOAJ and specialist/society indexes. Give incentives for new no-fee open-access journals.

The rest has to come from scientists themselves, with a serious commitment to accept at least some open-access and professional society journals on an equal footing with private journals when it comes to hiring, tenure, and funding decisions.

That is the common assumption, but I think that the correlation is much weaker than generally believed. Certainly ISI themselves warn against the use of IF in this way. Certainly high IF journals (Physical Review, Journal of Geophysical Research, etc.) are more difficult to publish in. But they also publish a HUGE number of papers. High IF drives high IF, because people preferentially submit to those journals. They get to select a smaller proportion of the articles submitted, but they still publish a large number of poorly cited works. The distribution of citations approximates a hyperbolic distribution, and IF is largely determined by the publication of a few highly cited works, not nearly as much by the editors’ ability to set a floor for quality. But IF gives one the illusion of being an empirical measure of articles, rather than just a judgement call by the editors of high IF journals.
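
A quick sketch of that sensitivity, using a synthetic Pareto-like (hyperbolic-tailed) citation sample; the shape parameter is an assumption for illustration only:

```python
import random

random.seed(7)

# Synthetic citation counts with a hyperbolic (Pareto-like) tail.
citations = sorted(int(random.paretovariate(1.2)) for _ in range(500))

def mean(xs):
    return sum(xs) / len(xs)

print(f"IF-like mean, all 500 articles:       {mean(citations):.2f}")
print(f"mean with the top 5 articles removed: {mean(citations[:-5]):.2f}")
print(f"median of the full sample:            {citations[len(citations) // 2]}")
```

Dropping a handful of the most-cited papers moves the IF-like mean substantially, while the typical (median) article is untouched: the editors’ quality floor barely registers in the number.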

1 Like

I agree with this; I didn’t mean to suggest that the editorial boards of such journals are especially discerning, only that they have a huge set of papers from which to select.

2 Likes

Pirates tend to enormously UNDERestimate the costs, and OVERestimate the ability of crowdsourcing or institutions to support the things that they want.

1 Like

Scientific research may be grant funded. Scientific papers are not. Nor are they likely to be.

Costs include things like:
– Staff to select possible papers. These can’t be random secretaries; they must have not only a good sense of the science but also good judgment of the writing.
– Staff to send papers out for review, get them back, forward the results to the authors for correction, and send them around again.
– Staff to make sure that you have the right balance for each issue, and that you’re not missing critical new areas.
– Staff to do the copyediting, to ensure that there are no typos or infelicities in the language, and to request clarification. These, again, must be familiar with the science.
– Staff to do the layout. (You didn’t think you could just print a manuscript, did you? Because that’s not even CLOSE to true.)
– Staff to compare the page proofs to the final draft, ensuring that no changes have accidentally been made during layout or printing or file conversion. (This is the real meaning of proofreading – which has nothing to do with finding spelling mistakes.)
– Staff to spread the word about the journal, including representing the company at major conferences.
– Hosting conferences (a non-trivial expense!)
– Staff to handle subscriptions and billing, and customer service.

And all of this may be supported by a few hundred or at most a thousand subscriptions.
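
For a rough sense of the arithmetic, here is a back-of-the-envelope sketch; every staffing and cost figure below is an invented assumption, and only the subscription count echoes the estimate above:

```python
# All figures are invented assumptions for illustration.
staff_headcount = {
    "editorial screening": 2,
    "review management": 2,
    "copyediting": 2,
    "layout and proofreading": 2,
    "marketing and conferences": 1,
    "subscriptions and billing": 1,
}
cost_per_staffer = 90_000   # assumed salary plus overhead, USD/year
other_costs = 150_000       # assumed hosting, conference costs, etc.
subscriptions = 800         # "a few hundred or at most a thousand"

total = sum(staff_headcount.values()) * cost_per_staffer + other_costs
print(f"annual operating cost:             ${total:,}")
print(f"break-even price per subscription: ${total / subscriptions:,.0f}")
```

Under these made-up numbers the break-even price is already north of $1,300 per subscription before any profit at all.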

Get a grip. This is profitable, but it’s not nearly as much so as you seem to think. And pirates simply ensure that the prices rise toward a singularity.

RELX (corporate mothership of Elsevier) has had much higher growth in profits than in revenue for years now. But I’m sure there’s a totally innocent explanation, absolutely not related to “because we can, also our shareholders love money spent by public libraries SOO much”.

5 Likes

I think Elsevier has since abandoned this practice but they used to add a 10% subscription charge for North American libraries “to adjust for fluctuations of multiple European currencies”. When most of Europe went to a single currency the explanation was “to adjust for fluctuations of the Euro”.

Why North American libraries were supposed to defray the cost of European subscriptions was never answered, but the shareholders sure did love the practice.

4 Likes

Well, I am.

Peer review’s “filtering and evaluation” is the gatekeeper function that props up the whole corrupt science publishing system. It was apparently introduced originally because too many papers were being submitted to journals and some filter was required, given the cost of paper. Now that all papers can be published to the cloud for less than the cost of printing a single run of magazines, peer review’s major function is to stifle voices.

A post on Gelman’s blog, “When does peer review make no damn sense?” (Statistical Modeling, Causal Inference, and Social Science), starts out:

This post is not peer reviewed in the traditional sense of being vetted for publication by three people with backgrounds similar to mine. Instead, thousands of commenters, many of whom are not my peers—in the useful sense that, not being my peers, your perspectives are different from mine, and you might catch big conceptual errors or omissions that I never even noticed—have the opportunity to point out errors and gaps in my reasoning, to ask questions, and to draw out various implications of what I wrote. Not “peer reviewed”; actually peer reviewed and more; better than peer reviewed.

Traditional peer review no longer provides value worth its cost. It’s just a shibboleth. It needs to die.

1 Like

This topic was automatically closed after 5 days. New replies are no longer allowed.