Originally published at: Is Precognition Real? | Boing Boing
…
So that’s what it all boils down to, right? If you’re trying to convince folks that a novel, poorly understood phenomenon is real, then you need to come up with a type of experiment that generates strong, repeatable, and undeniable results, or else people will just be debating whether or not the effect is real ad infinitum. Going back and re-parsing old studies with admittedly small effects in the hope of convincing skeptics doesn’t seem like a productive strategy.
I knew you would say that
What a load of twaddle
Unless one is an attention-seeking woo peddler trying to sell books claiming that nonsense like “The Law of Attraction” and “The Secret” are real phenomena. Then it’s not so much about convincing skeptics as about preventing them from exposing the grift by throwing mud at them. Case in point:
Nope, it isn’t
Statistical significance is thoroughly misunderstood. It does not mean something is real or important!
Everyone who uses p-values in their daily life should read the ASA statement on p-values. It’s written in plain language, so it can be interesting to non-stats people too.
http://amstat.tandfonline.com/doi/abs/10.1080/00031305.2016.1154108#.Vt2XIOaE2MN
If you use p-values to make important decisions and don’t understand them… stop using p-values until you’ve consulted a statistician
The statement’s six principles, many of which address misconceptions and misuse of the p-value, are the following (a quick simulation after the list illustrates the fifth):
- P-values can indicate how incompatible the data are with a specified statistical model.
- P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.
- Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.
- Proper inference requires full reporting and transparency.
- A p-value, or statistical significance, does not measure the size of an effect or the importance of a result.
- By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.
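Not from the ASA statement itself, just a minimal sketch of that fifth principle (assuming numpy and scipy are available, with made-up data): given a big enough sample, a vanishingly small effect still produces a tiny p-value.

```python
# Hypothetical illustration, not real data: a trivially small effect
# becomes "statistically significant" once the sample is large enough.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1_000_000
a = rng.normal(loc=0.000, scale=1.0, size=n)   # control group
b = rng.normal(loc=0.005, scale=1.0, size=n)   # true effect: 0.005 SD

t, p = stats.ttest_ind(a, b)
d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
print(f"p = {p:.2g}, Cohen's d = {d:.4f}")
# Expect p to land well below 0.05 most of the time, while d stays
# around 0.005: "significant", yet neither large nor important.
```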
(Edited a couple typos)
I have a master’s in data analysis, and I still feel unqualified to weigh in here without knowing the sample sizes or the number of experiments performed…
Boing Boing has posted parapsychology nonsense before. Woo woo woo
Well… yes.
If the null hypothesis persists for over four hours, stop taking null hypothesis and consult a statistician immediately.
My reaction while reading the article was “what a load of twaddle”. Imagine how freaked out I was to come to the comments and find that someone had read my mind!
My feelings on precognition are mixed because, for the most part, I’m a skeptic. However, I have had a few odd moments over my life where I’ve experienced things that were unexplainable. Where I land at the end of the day is that I’m open to the idea of some mysteries possibly being true, but I live my life without putting any stake in such things.
I consulted a statistician for my student’s persistent null hypothesis, and they only made it worse!
Maybe I need a naturostat?
(Apologies for that terrible joke)
We have good twaddle detectors!
This is wonderful! I’m so glad other scientists have been able to replicate his experiments and prove…
… well darn. Hey, that’s just one - okay, three - attempts…
… crap. Well, at least he didn’t use p-hacking to…
… and he did. Well fuck.
I got nothing. Except to say: come on, I love wacky stories and interesting videos and learning new things, but woo peddling feels a bit beneath the usual honesty of Boing Boing.
Pseudoscience. Beyond the lack of scientific evidence, there are scientific arguments that contradict “precognition”. I can cite physics: precognition presupposes retrocausality, meaning you receive information from the future, but how is it transported? It isn’t possible. An effect preceding its cause violates the principle of causality.
Betteridge’s law of headlines says “No”.
Having just read the 2011 paper out of curiosity, I can say it’s an excellent example of everything we shouldn’t do: one-sided p-values, multiple comparisons, p-hacking, and so on. It is really quite awful. (A toy simulation of the multiple-comparisons problem is below.)
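For anyone curious what that does in practice, here’s a toy simulation (numpy only, nothing from the actual paper): test twenty outcome measures one-sided at α = 0.05 on pure noise and see how often at least one comes up “significant”.

```python
# Hypothetical sketch: many comparisons on pure noise, one-sided alpha = 0.05.
import numpy as np

rng = np.random.default_rng(42)
n_runs, n_tests, n_subjects = 10_000, 20, 50
hits = 0
for _ in range(n_runs):
    # Twenty outcome measures per "experiment", no real effect anywhere.
    data = rng.normal(size=(n_tests, n_subjects))
    t = data.mean(axis=1) / (data.std(axis=1, ddof=1) / np.sqrt(n_subjects))
    # Critical t for one-sided alpha = 0.05 at df = 49 is roughly 1.68.
    if (t > 1.68).any():
        hits += 1

print(f"At least one 'significant' result in {hits / n_runs:.0%} of runs")
# Expect roughly 1 - 0.95**20, i.e. about 64%, despite zero true effects.
```

Report only the measures that “worked” and the write-up looks full of positive results.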
As the Firesign Theatre said, “Many words but few to the point.”