Demarcation problem

The demarcation problem is the philosophical problem of determining what types of hypotheses should be considered scientific and what types should be considered pseudoscientific or non-scientific. It also concerns the ongoing struggle between science and religion, in particular the question of which elements of religious doctrine can and should be subjected to scientific scrutiny. This is one of the central topics of the philosophy of science, and it has never been fully resolved. In general, though, a hypothesis must be falsifiable, parsimonious, consistent, and reproducible to be scientific. It should be emphasized, however, that these features are general indicators rather than absolute requirements: dark matter, for example, may not currently be falsifiable, yet it is still classed as scientific.


No one in the history of the world has ever self-identified as a pseudoscientist. There is no person who wakes up in the morning and thinks to himself, "I'll just head into my pseudolaboratory and perform some pseudoexperiments to try to confirm my pseudotheories with pseudofacts."
—Michael Gordin[1]

Obviously nobody identifies their own beliefs as pseudoscience, because that would imply that what they believed was wrong; anyone who thought their beliefs were wrong would change them, and so would avoid believing what they consider pseudoscience. Since self-identification is out, a better method is clearly needed to determine whether something is pseudoscience.

Attempts to resolve the problem

Methodological naturalism

See the main article on this topic: Methodological naturalism

The assumption of methodological naturalism is arguably the most basic and important foundation of the scientific method. While it does not explicitly reject the existence of the supernatural — a position associated with philosophical naturalism — it limits the applicability of science to the natural world and the observable laws shaping it. In a nutshell, this means that an explanation that has to resort to a supernatural cause, like Intelligent design, must be considered unscientific.

Positivism

See the main article on this topic: Logical positivism

The Viennese philosophers who introduced the positivist paradigm effectively laid the groundwork for the modern philosophy of science and one of its most important strands of thought. The early positivists favored a rather strict approach to demarcation and strongly affirmed the empirical nature of science: questions that cannot be empirically verified or falsified are irrelevant to scientific thought. This obviously set science in stark contrast to religion, but also to philosophical schools in the classical rationalist tradition that emphasized pure thought.

Falsifiability

See the main article on this topic: Falsifiability

In his book The Logic of Scientific Discovery, Karl Popper proposed that scientific hypotheses must be falsifiable; unfalsifiable hypotheses should be considered non-scientific. Popper's emphasis on falsifiability changed the way scientists viewed the demarcation problem, and his impact on the philosophy of science was enormous. The concept of falsifiability came under attack from Willard Van Orman Quine, who argued that it is impossible to test a hypothesis in isolation, because testing requires the assumption of certain background hypotheses (this is known as the Duhem-Quine thesis), such as 'the equipment is working the way I think it does' and 'the laws of thermodynamics hold'. This means that a falsifying observation only guarantees that the hypothesis or one of the background assumptions is incorrect; it says nothing about which one.

A good example of this comes from the British astronomer Sir Fred Hoyle. Hoyle believed in Steady State theory, which opposes Big Bang theory in arguing that the universe is eternal. The vast majority of scientists felt that the debate between Steady State and Big Bang was settled in the 1960s with the observation of the cosmic microwave background (CMB) radiation, which was considered decisive evidence in favour of the Big Bang and a falsification of Steady State. Hoyle dissented from his colleagues by arguing that the observation of the CMB did not disprove Steady State but instead disproved the First Law of Thermodynamics: matter/energy would not remain constant in a closed system, and there had to be a source of energy somewhere in the universe. In other words, Hoyle resolved the incompatibility between his theory and observation by rejecting one of his background assumptions. Almost no scientist took this seriously, and Hoyle died in 2001 as a scientific outcast, still rejecting the Big Bang.

Despite these problems, Popper's concept of falsification has seen wide adoption and is often given by practicing scientists as the solution to the demarcation problem.

Puzzle solving

See the main article on this topic: Paradigm shift

Thomas Kuhn introduced the concept of a paradigm, the sum total of background assumptions, auxiliary hypotheses, and accepted theories within a given domain of science, along with the paradigm shift by which one paradigm replaces another. These concepts characterize general tendencies across the history of science: periods of what Kuhn called "normal science" are punctuated by periods of "revolutionary science", in which the old paradigm must be replaced with a new one due to mounting difficulties with the existing theories and the accumulation of unexplained anomalies in the evidence.[2] Periods of "normal science" are periods in which the dominant paradigm of a field goes unquestioned, while the small limitations and difficulties within the accepted theories are worked out in what Kuhn called "puzzle solving". It is the absence of this puzzle solving, in fields that declare themselves sciences with paradigms of their own, that Kuhn classifies as the mark of pseudoscience.[3]

As an example, Kuhn cites how in astronomy the existing limitations and difficulties of accepted theories are actively worked on as problems for those theories, whereas in fields like astrology the limitations, problems, and inconsistencies are ignored or dismissed.[3]

Research programs

Imre Lakatos combined elements of Popper and Kuhn's philosophies with his concept of research programs. Programs that succeed at predicting novel facts are scientific, while ones that fail ultimately lapse into pseudoscience.

Theory progression

The philosopher and cognitive scientist Paul Thagard argues that demarcation requires weighing three major components: first, the theory's foundation as a physical explanation; second, whether the community that advocates for the theory is in general consensus about what the theory means and what it entails; and lastly, how successful the theory is compared to other theories explaining the same phenomena.[4] With those considerations in mind, Thagard proposes the following criterion:

"A theory or discipline which purports to be scientific is pseudoscientific if and only if:

  1. It has been less progressive than alternative theories over a long period of time, and faces many unsolved problems; but
  2. The community of practitioners makes little attempt to develop the theory towards solutions of the problems, shows no concern for attempts to evaluate the theory in relation to others, and is selective in considering confirmations and disconfirmations."
—Paul Thagard (1978)[4]

A problem with this demarcation criterion is that it makes the status of a discipline or theory as pseudoscientific temporally contextual. For example, creationism is only pseudoscientific now because the theory of natural selection exists. Some would object to this implication on the grounds that a theory classed as pseudoscience should be eternal and unchanging in its status as pseudoscience.
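
Read as a decision rule, Thagard's criterion is simply a conjunction of a historical condition (1) and a sociological condition (2). The sketch below, in Python with hypothetical attribute names that are not taken from Thagard's paper, only illustrates that boolean structure and how re-evaluating it at a later time can flip the verdict once a more progressive rival theory appears.

```python
from dataclasses import dataclass

@dataclass
class Discipline:
    # Hypothetical attributes chosen to mirror the wording of the criterion
    less_progressive_than_rivals: bool   # (1) lags alternative theories over a long period
    many_unsolved_problems: bool         # (1) faces many unsolved problems
    works_on_problems: bool              # (2) community tries to solve those problems
    evaluates_against_rivals: bool       # (2) community compares the theory with others
    cherry_picks_evidence: bool          # (2) selective about confirmations/disconfirmations

def is_pseudoscientific(d: Discipline) -> bool:
    """Thagard-style verdict: pseudoscientific iff conditions (1) AND (2) both hold."""
    condition_1 = d.less_progressive_than_rivals and d.many_unsolved_problems
    condition_2 = (not d.works_on_problems
                   and not d.evaluates_against_rivals
                   and d.cherry_picks_evidence)
    return condition_1 and condition_2

# The verdict is time-dependent: before a progressive rival exists, condition (1) fails.
creationism_pre_darwin = Discipline(False, True, False, False, True)
creationism_today = Discipline(True, True, False, False, True)
print(is_pseudoscientific(creationism_pre_darwin))  # False
print(is_pseudoscientific(creationism_today))       # True
```

The toy example makes the objection above concrete: nothing about the discipline itself changed between the two evaluations, only the availability of a more progressive alternative, yet the classification flips.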

NOMA

See the main article on this topic: Non-Overlapping Magisteria

The concept of Non-Overlapping Magisteria (NOMA) is a relatively recent attempt at proposing a clear demarcation between science and religion. It explicitly restricts science to its naturalistic foundations, meaning that no conclusions about supernatural phenomena like gods may be drawn from within the confines of science. The idea has come under heavy criticism for ignoring the blatantly irrational nature of modern-day fundamentalism, whose adherents have unfortunately not paid science the same respect.

Threat to science

It's been noted that people often call something pseudoscience if it threatens something important to science.[1] For example, young earth creationism is a threat to science education and funding and confuses the public on what evolution and science actually are. This is opposed to, for example, string theory, which is probably unfalsifiable but doesn't actively hurt science.

This approach is problematic, not least because what a creationist and an "evolutionist" consider to be sane and useful to science are radically different, and so the demarcation of pseudoscience becomes an issue of ideology.

Rejection of the problem

Epistemological anarchism

Paul Feyerabend "solves" the problem by arguing that there is no distinction, "science" is meaningless, and so anything is valid.

As an opposite worldview, Imre Lakatos stated:

Many philosophers have tried to solve the problem of demarcation in the following terms: a statement constitutes knowledge if sufficiently many people believe it sufficiently strongly. But the history of thought shows us that many people were totally committed to absurd beliefs. If the strengths of beliefs were a hallmark of knowledge, we should have to rank some tales about demons, angels, devils, and of Heaven and Hell as knowledge. Scientists, on the other hand, are very sceptical even of their best theories. Newton's is the most powerful theory science has yet produced, but Newton himself never believed that bodies attract each other at a distance. So no degree of commitment to beliefs makes them knowledge. Indeed, the hallmark of scientific behaviour is a certain scepticism even towards one's most cherished theories. Blind commitment to a theory is not an intellectual virtue: it is an intellectual crime.

Thus a statement may be pseudoscientific even if it is eminently 'plausible' and everybody believes in it, and it may be scientifically valuable even if it is unbelievable and nobody believes in it. A theory may even be of supreme scientific value even if no one understands it, let alone believes in it.
—Imre Lakatos, Science and Pseudoscience[5]

Others

Larry Laudan has proposed that there is no firm line of demarcation between science and non-science, and that any attempt to draw such a line is a pointless exercise.[6]

Others like Susan Haack, while not rejecting the problem wholesale, argue that a misleading emphasis has been placed on the problem that results in getting bogged down in arguments over definitions rather than evidence.[7]

References

  1. What Is Pseudoscience?, Scientific American. http://www.scientificamerican.com/article/what-is-pseudoscience/
  2. Kuhn, T. S. (1962). The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
  3. Science and Pseudo-Science, Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/pseudo-science/
  4. Thagard, P. (1978). Why Astrology Is a Pseudoscience. http://cogsci.uwaterloo.ca/Articles/astrology.pdf
  5. Imre Lakatos, Science and Pseudoscience (transcript), Dept of Philosophy, Logic and Scientific Method, 1973.
  6. Laudan on the Demarcation Problem, Philosophy of Science.
  7. Susan Haack, Six Signs of Scientism.

Licensed under CC BY-SA 3.0 | Source: https://rationalwiki.org/wiki/Demarcation_problem