Absolutely agreed. It’s so easy to dismiss anything outside of the realm of your personal experience as “woo”, stick your head in the sand, and walk away feeling elitist.
Jimmy Wales tells "energy workers" that Wikipedia won't publish woo, "the work of lunatic charlatans isn't the equivalent of 'true scientific discourse'"
Turns out deception isn’t actually necessary for prescription of placebos.
For many of us, woo is well within the realm of our personal experience. Some of us were raised on woo, and have significant exposure through family and friends.
Are we allowed to dismiss it? If you’ve been immersed in that culture, are you allowed to disparage the fact that its proponents see magical thinking, fads, and identity politics as more important than science and truth? How much experience do we have to have before we’re allowed to comment, exactly?
What is woo? This was clearly established years ago.
See “Potter on Lifemanship”, by Stephen Potter. I have in front of me the second impression (1951), published by Rupert Hart-Davis of 36 Soho Square, London, W1.
Page 53 - “Some notes on contemporary woomanship”. The principles, outlined in perfectly clear language, cover how to woo someone without necessarily having any sincere purpose or integrity, and in fact, how to get your way without a moment’s pause for any ideological consideration whatsoever. Just getting what you want is what counts.
The essence of lifemanship itself is how to win at life without actually being better at something than anyone else; and more likely than not, being a lot worse, if you ever actually demonstrate the thing itself.
I think that’s a beautifully summed up meaning of “woo”.
Wow - that’s Mr. Wiki all over. Were your words a mirror, he would comb his hair in their woven reflection.
Well, there is the Quantum Uncertainty Theory of Comment Threads:
“You can either fully understand an online discussion, or you can participate in it.”
As a principle, the Comment Uncertainty relationship must be something that is in accord with all experience. However, humans do not form an intuitive understanding of this indeterminacy in online life, so it may be helpful to deploy moderators and/or graphite ban-hammers to dampen the ensuing chain-reactions (“flame wars”).
Now, while there is the timelike-discussion interpretation (“most recent first” or “Flat Earth model”), and the nested-reply-discussion interpretation (“Yggdrasil model”), do not confuse this with the Disinterested Observer Effect. Both of these alternative conceptualizations of quantum commenting can be examined with the goal of demonstrating the key role the uncertainty principle plays.
The major criticisms of these models rely on the thought experiments “Pandora’s Box” and “Pandora’s Slit”, but they are generally considered NSFW and somewhat trollish. The other major school of thought relies on the Copenhagen Interpretation, which holds that online comments are a hive of scum and villainy and should never be enabled.
Tapas Acupressure Technique sounds delicious. I would like a Sangria Bloodletting to go with that. And a Taquito Scan.
No! At no point can you dismiss it. That is the thing that makes woo what it is. No failure to demonstrate real effect is enough. If something actually makes it into science it only proves that EVERYTHING that hasn’t yet is also science. Personal experience matters in confirming woo but not in dismissing it.
It does not matter if people die of cancer or sepsis because someone was healing them with lights and rainbow silks to balance their chakras. You must have the experience that confirms the belief or else you can’t participate!
You just caused me to STAR this thread.
Isn’t that what conservapedia is?
That was splendid. Bravo.
How is that different in any reasonable way than the abomination that is ‘Continental Breakfast’?
People who want to write about their nonscientific “woo” beliefs should go make their own encyclopedia and stay off Wikipedia.
While, on the other hand, people who think things are “true” because they are in a peer reviewed journal should stop being so naive.
Wikipedia should be permitted to do whatever it wishes with information. They claim to be an encyclopedia. We should let them be.
For that matter, TED may also probably be justified in censoring Rupert Sheldrake – and I don’t say that lightly, because Sheldrake has taught me a lot about materialist philosophy.
What concerns me about the responses I see here in this thread is that people seem to not realize that, in a general sense, there is a trade-off between censorship and innovation. There is no getting around that: It may feel good to cut off certain lines of research – and honestly, people might ask themselves why alternative ideas actually bother them – but ultimately, science needs more than just peer review as a mechanism for originating new ideas. After all, the problems with peer review have been discussed at great length in special medical journal issues for a couple of decades now. People who extoll the virtues of peer review are at this point clearly not paying attention.
But what perplexes me a little is that anybody who actually looks at peer review can plainly see that it was not designed as a platform to create and elaborate new ideas in science. I was under the impression that designers understood the meaning of that: Good things don’t happen by accident. They are designed to happen, based upon smart, intentional design and insight into how things work. So, what is with all of this weirdo mob mentality reaction here on BB?
David Shatz, author of Peer Review: A Critical Inquiry, suggests that there are a number of problems with the open marketplace of ideas concept. The two which seem most serious to me, of those which he mentions, are inundation and product identification – two sides of the same coin. What I would encourage people to do is to maybe dial down the mob mentality a little bit, and attempt to apply the same principles of innovation that designers and programmers use at their jobs to the peer review problem. Inundation is obviously a problem of reading, whereas product identification is the ability to track down that which matters to a person in a huge pile of junk.
In both cases, the paper format would seem to actually be the source of this problem. What really needs to be happening here is people need to be taking a closer look at the process for how proto-theories turn into actual theories, and for how theory changes occur, and to build a system which reflects and supports those actual processes.
The mob voice which shouts down all that exists at the periphery of conventional wisdom does not truly know about every idea it is shouting down. We need systems of communication that support a distinction between pseudoscience and critical thinking – if we are going to continue to lead the world in scientific innovation.
Isn’t that called ‘the scientific method’?
The complaints leveled against peer review are in fact numerous …
(a) being prone to bias, including reviewer bias, editor bias, various forms of publication bias; (b) unscientific and lacking in evidence for its benefits; (c) having no measurable outcome, and when research is conducted, it is typically on the quality of the review, rather than the quality of the manuscript; (d) conservative, tending to accept for publication articles that are less controversial and less innovative; (e) slow and expensive; (f) yielding papers that are often grossly flawed … (g) unable to detect fraud; (h) sloppy; (i) subjective; (j) secretive; (k) having many reviewers who are incompetent; (l) having relatively low agreement among reviewers of the same manuscript; (m) having difficulty in handling dissent; (n) unnecessary; (o) leading to potential dishonesty among the reviewers; (p) stifling scientific communication and hence slowing the advancement of knowledge; (q) subject to various forms of political pressure; (r) incestuous, with a small group of reviewers reviewing each other’s work, particularly in small narrowly defined specialty areas; and (s) having reviewers who are caustic, nasty, overcritical, arbitrary, self-serving, savage, uncivil, irresponsible, arrogant, inappropriate “and there are probably a few other choice adjectives out there in the literature.”
(from Liora Schmelkin, “Peer Review: Standard or Delusion”, Division 5 Presidential Address to the American Psychological Association, August 9, 2003)
So, the question is why people imagine that this is a process which cannot be improved … If that were the feedback on a project you were working on, you’d be toast.
Here’s another assessment, from Richard Smith …
‘If peer review was a drug it would never be allowed onto the market,’ says Drummond Rennie, deputy editor of the Journal of the American Medical Association and intellectual father of the international congresses of peer review that have been held every four years since 1989. Peer review would not get onto the market because we have no convincing evidence of its benefits but a lot of evidence of its flaws.
I’m talking more about ‘producing repeatable results’ than academic politicking. Of course there are problems, egos, and idiots. It’s done by humans, who are problematic, egotistical idiots. And you can level all the same claims at woo peddlers and pseudoscience – only, you know, their results aren’t repeatable. What’s your point?
Maybe, maybe not. I don’t consider this statement to be truthful or neutral:
“placebo pills made of an inert substance, like sugar pills, that have been shown in clinical studies to produce significant improvement in IBS symptoms through mind-body self-healing processes”
Since there is no reliable evidence suggesting that a “mind-body self-healing process” has demonstrated physiological effects beyond time and the natural healing process.
There is evidence that there really is no such thing as a placebo effect, since all measurements of the so-called effect are self-reported, subjective assessments of symptoms, and there is no way to control for self-deception or the desire to please those administering a trial by reporting improvement.
Even the study that the article you cite references in support of a demonstrated placebo effect says:
Recent research shows that placebo effects are genuine psychobiological events attributable to the overall therapeutic context … There is also evidence that placebo effects can exist in clinical practice, even if no placebo is given. [emphasis added]
Which really supports the idea that increased therapeutic interaction with any sort of “health care provider”, woo or otherwise, can reduce stress and anxiety, which in turn can contribute to a perceived reduction of symptoms and, at best, a reduction in stress-related chemicals in the body. All of which could be achieved through better access to established medical practices, even if it is just to discuss symptoms and progress with no treatment whatsoever.
There is, however, a problem here. By training, I’m a geologist. There was a time when Continental Drift, the forerunner of Plate Tectonics, was considered crackpot pseudo-science. Today, it’s accepted as (pardon the term) gospel in geology.
The solution, of course, is the use of these techniques in an objective, blind test, to see if any differences can be detected.
Every once in a while, there’s a nugget of truth wrapped in layers of crack-pottery. It may be worth looking at it, on the chance that this may be the case. . .
Keep in mind that the comment you replied to was the statement:
“We need systems of communication that support a distinction between pseudoscience and critical thinking”
On this …
Actually, there has been a lot of research to suggest that a substantial percentage of published papers are just outright inaccurate. People who are commenting about pseudoscience online should be paying close attention to this …
what evidence we do have shows almost universally that peer review is a waste of time and resources and that it really doesn’t achieve very much at all. It doesn’t effectively guarantee accuracy, it fails dismally at predicting importance, and it’s not really supporting any effective filtering. If I appeal to authority I’ll go for one with some domain credibility, let’s say the Cochrane Reviews, which conclude the summary of a study of peer review with “At present, little empirical evidence is available to support the use of editorial peer review as a mechanism to ensure quality of biomedical research.” Or perhaps Richard Smith, a previous editor of the British Medical Journal, who describes the quite terrifying ineffectiveness of referees in finding errors deliberately inserted into a paper. Smith’s article is a good entry into the relevant literature, as is a Research Information Network study that, while broadly supportive of the use of peer review to award grants, notably doesn’t address the issue of whether peer review of papers helps to maintain accuracy.