…you can write a paper and not send along your data and precisely what statistical methods you used?
…
brb getting published everywhere.
Jesus. Clearly, I’ve been doing it wrong all these years.
I would like to see the original responses from that survey but Wagenmaker refused to share the data.
So Psychology is not a science. It’s just a modern recreation of Phrenology. Look at how many psychologists have declared a mentally ill killer “cured” so he could be turned loose on the streets to repeat his crime.
Isn’t “the C.I.A. psychologist explains to the association official that the contractors ‘are doing special things to special people in special places’” a phrase of impressive ominousness?
I’m currently finishing a PhD in Psychology and I would welcome scrutiny of my data. Why? It would set my publications apart.
The hardest part of doing research now isn’t doing the research itself, but coming up with a high quality review of literature. No matter what point you want to make, you can find research to support it. There’s so much crap out there that it’s genuinely difficult to find good, sound research. And you can’t just rely on heavily cited articles because there are a lot of sloppy researchers citing other sloppy researchers. If I get the chance to publish my research in a journal that analyzes the data, I’ll jump at it.
And then this happened:
Shorter version: Prominent dietary researcher, provider of much patronising policy advice on how to manage the population’s eating habits, has reported numerically identical results from quite different experiments. Refuses to share data, because reasons.
two studies published by Wansink in 2001 and 2003 present uncannily similar results, with 39 out of 45 outcomes identical to the decimal point, despite being drawn from different samples. In the 2001 study, Wansink reports recruiting “153 members of the Brand Revitalisation Consumer Panel” while in the second study the reported sample consisted of 654 respondents to a nationwide survey “based on addresses obtained from census records”. How such similar results could emerge from two distinct studies, and two distinct samples, remains unexplained. At the time of publication, Wansink had not responded to requests for comment.
I’ll bite: How many?
He’s not committing fraud; he’s just advocating for smaller minimum publishable units as the foundation of a healthy career!
While I am with everyone so far in this thread that data should be accessible, the cluelessness of many comments is striking.
There are quite a few fields where you do not send your data along for peer review, in most cases. There are many more where you do not publish the data, in most cases. And yes, there are plenty of reasons for that, and most of them are excruciatingly based on the culture of the specific branch of the specific field of the specific discipline of science.
While indignation in the current case seems to be in order, since they asked a member of the editorial board to step down over policies they could and should have known about, this is certainly no reason for condescending remarks about the whole discipline.
And, just for the record, it is my belief that if everyone tried to do science properly from now on, a dissertation would take at least eight years, the number of published studies would drop to about five percent of the current output, and most universities would shut down within a couple of years because the funding system would collapse. Not gonna happen.
This is nothing you can discuss for everyone at once, and very certainly nothing
IKR? You f*ck just one goat…
[quote=“Alfred_Packer1, post:23, topic:96184”][/quote]
Please, in the name of pedantry, change your nym to Alferd_Packer1.
I don’t think that anyone doubts that there are reasons; it’s just that there being good reasons seems far less evident. I can’t say that ‘excruciatingly based on the culture’ sounds promising as a source of reasons that are good, rather than merely deeply entrenched and likely to be ferociously defended by the natives.
Nah, I wouldn’t say there would be tribal wars if review or publication required open data. At least in my field, however, the community is sometimes rather small, and chances are that there are reasons why you don’t give your data to anonymous reviewers which I, for one, find somewhat understandable… And if there is one such reason I can understand, or sympathise with, then there might be more in other fields and disciplines.
What is the field? What are those reasons?
I’m in a specific branch in a field of biology where every PI basically knows every other PI, and there is quite a bit of competition amongst them.
One of my PIs during my graduate studies told me he got scooped by a colleague who, as he suspects but cannot prove, stalled a paper on the matter and changed his own project to incorporate the ideas he had found in the manuscript. That’s just anecdotal, of course, but it gives you an idea of the problem.
In other fields of biology you would have other reasons. Someone who works in bionics, for example, might have trouble sharing raw data because of their funding sources in the military or industry. I knew people who worked on biological IR sensors, others on surface coatings, others on water-absorbing materials. They told me of some weird situations. They do publish - but only aggregated data and stats go into their papers.
The weirdest story so far was that the committee had to sign an NDA before they could take a PhD candidate’s viva, and the candidate had to redact his thesis after the viva for the public version, i.e. the one which went to the libraries, including our national library. Thus, his committee gave him his degree based on a different thesis than the one available to the public. Seriously, so I was told - I have no proof of that, and it sounds absolutely absurd to me. And no, there was nothing even remotely military- or security-related in that thesis.
Maybe you should contact the Belgian psychologist from the article…? Seems like he might want to.
Or is he now ‘persona non grata’ in the field because APA turned them down?
There was the Bridges case a few decades ago:
As a reviewer he saw a paper on rhodopsin metabolism in vitro and immediately realized its significance. He tried, with some success, to denigrate the manuscript, thus delaying its publication. Meanwhile he rushed through some experiments of his own (later claiming they predated his viewing of the other manuscript) and submitted his own paper to Nature – which was rejected. Bridges denied all the evidence suggesting this summary of the events, but was eventually debarred from all Federal funding for three years.
And the episode last year when a paper on diet and obesity was rejected, followed by one of the peer reviewers republishing the results and manuscript as his own work:
I absolutely love that! 10 likes!
The issue I would foresee is less about convincing study participants to grant this kind of open-ended consent for the sharing of data with qualified reviewers, which seems reasonable enough at face value. Rather, I can definitely see a University IRB (or an equivalent ethics committee in other settings) balking at the suggestion that confidential information would be shared, as a matter of course, with individuals they have no supervision or control over.
Psychologists have long used pseudonyms for cases. Anna O, Rat Man, Dora, HD, and Wolf Man were all clients of Freud’s.
Doctor-patient privacy is still protected. Yes, they could make up clients, but if you don’t even have to do that much it makes it a bajillion times shadier.