But the valley of ambiguity is where all the cool stuff is going on. If there's no ambiguity in an issue there's not much reason for scientists to be working on it.
Like cold fusion, for example? Ambiguity be damned!
So what, concisely, is the horizontal axis on the napkin diagram? Is it really a continuous independent variable?
If she's right, the real trick with science reporting on the Internet is to write accurate stories that aren't all reported from deep in the Valley of Ambiguity... or that are written entirely with LOLcats.
This article is an excellent find. It raises a very important problem which social media has thus far failed to address: thumbs-up and thumbs-down may be appropriate for measuring how a person feels about a conversation, but does it make sense to adopt that sort of metric when the goal is to think like a scientist? Consensus is not a value in science the way it is in the domain of conversation. The values of science have been studied. I've posted this before, but it's worth referring to again. "Science Education and Scientific Attitudes" by Pravin Singh:
The current set of scientific attitudes of objectivity, open-mindedness, unbiassedness, curiosity, suspended judgement, critical mindedness, and rationality has evolved from a systematic identification of scientific norms and values. The earliest papers of any importance in the field of scientific attitudes are those of R.K. Merton (1957). He conceptualized the norms or institutional imperatives on the basis of evidence taken mainly from statements by scientists about science and their scientific activity. He then identified four norms. These are universalism, communality, disinterestedness and organized skepticism.
Universalism requires that information presented to the scientific community be assessed independently of the character of the scientist who presents the information. The norm of communality requires that scientific knowledge be held in common, in other words, the researcher is expected to share his findings with other scientists freely and without favour. The norm of disinterestedness requires scientists to pursue scientific knowledge without considering their career or their reputation. Scientists are exhorted by the norm of organized skepticism never to take results on trust. They are expected to be consistently critical of knowledge.
To this list of institutional imperatives Barber (1962: 122-142) later added two more — rationality and emotional neutrality. Rationality relates essentially to having faith in reason and depending on empirical tests rather than on tradition when substantiating hypotheses. Scientists are encouraged also to conform to the norm of emotional neutrality i.e. to avoid emotional involvement which may colour their judgement.
Is it not possible that these scientific attitudes have been popularised and then reified as a set of ideal attitudes, but in reality are not often found in actual scientific practice? The following studies raise serious doubts about scientists' adherence to institutional imperatives.
What I'd like to suggest is that until we implement ways of rating contributions which more accurately emulate the actual values associated with thinking like a scientist, our scientific discourse will suffer.
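To make the suggestion concrete, here is a minimal sketch of what a multi-axis rating scheme might look like, as an alternative to a single thumbs-up/down score. The axis names below are hypothetical, loosely drawn from the Mertonian norms quoted above; a real system would need to choose and validate its own.

```python
# Sketch: rate a contribution along several independent axes instead of
# collapsing judgment into one thumbs-up/down signal. Axis names are
# illustrative assumptions, not a proposal from the article.
from dataclasses import dataclass, field
from statistics import mean

AXES = ("evidence", "skepticism", "openness", "novelty")

@dataclass
class Contribution:
    text: str
    # Each axis collects independent ratings in [0, 1].
    ratings: dict = field(default_factory=lambda: {a: [] for a in AXES})

    def rate(self, axis: str, score: float) -> None:
        if axis not in self.ratings:
            raise ValueError(f"unknown axis: {axis}")
        self.ratings[axis].append(max(0.0, min(1.0, score)))

    def profile(self) -> dict:
        # Per-axis means rather than one collapsed score: a post can be
        # highly novel yet weakly evidenced, and readers see both.
        return {a: round(mean(s), 2) if s else None
                for a, s in self.ratings.items()}

post = Contribution("Cold fusion revisited")
post.rate("novelty", 0.9)
post.rate("evidence", 0.2)
print(post.profile())
```

The design point is that the profile is never flattened into a single number, so "popular" and "well-evidenced" stay distinguishable.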
I would also add that scientific discourse has different goals than social chit-chat, one of the most important being the identification and spread of information about models with predictive potential. To be clear: our pre-existing knowledge, our misconceptions, our educations, our worldviews, (as the article points out) concern for our reputations, and our inherent resistance to change all stand in the way of the spread of this novel information. An effective scientific social network would therefore create a space for the proliferation of new ideas in our scientific discourse, in direct opposition to those very strong forces.
Today's science reporting is understandably in a bad place. There is real potential to use the Internet as a tool for identifying good ideas within the long tail of scientific discourse. But instead of solving that problem, what we get is reporting that sticks to the predictably safe stories. In practice that means conventional theories are rarely questioned on high-profile sites which value their reputations. Our failure to devise systems which effectively distinguish critical thinking from pseudoscience invites people to abandon all of it and defer to authority. But this undermines the rationale for believing in science to begin with: its empirical nature. And it leaves the public vulnerable to bad ideas which originate within our universities. This dogma is actually far more dangerous than the cranks and crackpots, because we cannot identify it. It lurks within our own minds, inviting us to ridicule the good idea which should replace it, if we are to answer the biggest, most perplexing questions in science.
There are ways to fix this problem. It is possible to create a system of discourse which can distinguish between critical thinking and pseudoscience.
Thank you for posting this particularly poignant article.
This topic was automatically closed after 5 days. New replies are no longer allowed.