Publication bias

Publication bias, or the file-drawer effect, refers to the possibility of a systematic bias in a field's published research due to the over-reporting of positive results. Possible reasons include:

  • A positive result is usually more interesting than a study finding no effect, so positive results tend to get published more often, even when they are spurious. Peer-reviewed journals have to reject many otherwise high-quality studies and must rank submissions not only on scientific merit but on perceived scientific importance; a positive result is one factor that can make a study seem more important.
  • The researcher, having found nothing, may simply be uninterested in writing up and reporting the results.
  • The researcher may have a personal stake in not publishing negative results; for example, a curriculum vitae full of negative results is likely less promotion-worthy or grant-worthy than one full of positive results. A researcher might also have a financial interest in a positive result, e.g., a pharmaceutical trial in which the researcher is financially invested.

A study of published and unpublished clinical trials found that positive results in the sample were three times more likely to be published.[1] Publication bias is a known issue with research funded by "Big Pharma": pharmaceutical companies are often required to register their clinical trials in a public database but not to report all of the results, allowing them to inflate the perceived efficacy of treatments. Some journals have been launched specifically to report negative results and so combat publication bias.[2] John Ioannidis, a medical researcher, has provoked substantial academic debate with his claim that most published research findings are false.[3]
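
To see how this inflation works mechanically, here is a minimal simulation sketch (the sample sizes, effect size, and publication probabilities are invented for illustration; this is not a model of any real trial registry): many small two-arm trials of a treatment with a modest true effect, where significant positive results are far more likely to be "published" than null ones.

```python
# Minimal sketch of the file-drawer effect. The sample sizes, effect
# size, and publication probabilities are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_per_arm = 5_000, 30
true_effect = 0.2                         # modest true benefit, in SD units

published = []
for _ in range(n_trials):
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(true_effect, 1.0, n_per_arm)
    diff = treated.mean() - control.mean()
    _, p = stats.ttest_ind(treated, control)
    positive = p < 0.05 and diff > 0
    # Assumed publication model: positive results are published far
    # more often than null or negative ones.
    if rng.random() < (0.9 if positive else 0.1):
        published.append(diff)

print(f"true effect:           {true_effect:.2f}")
print(f"mean published effect: {np.mean(published):.2f}")   # noticeably larger
```

Because significant trials are overrepresented in the published pool, the literature's average effect overshoots the true one even though every individual trial is honestly reported.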

A large research project had seventy-three independent research teams analyze the same data to test the same hypothesis. The teams were free to use whatever additional data and statistical methodologies they thought appropriate. The results were surprisingly diverse: some teams found a large negative effect of the independent variable, some found a large positive effect, and some concluded there was not enough data to test the hypothesis. The authors described their findings as revealing a "hidden universe of uncertainty."[4] Although the authors did not specifically discuss publication bias, researchers who find significant results using the same or similar data have better chances of being published and cited than researchers who do not.
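
As a toy illustration of that diversity, the sketch below (with an invented data-generating process and invented model specifications) fits two defensible regressions to the same data and gets opposite signs for the coefficient of interest:

```python
# Toy "many analysts, one dataset" sketch: same data, two defensible
# regression specifications, opposite signs on the variable of
# interest. Everything here is invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 500
z = rng.normal(size=n)                     # confounder
x = z + rng.normal(scale=0.5, size=n)      # independent variable of interest
y = -0.3 * x + 1.0 * z + rng.normal(size=n)

def coef_on_x(*columns):
    """OLS fit; return the coefficient on x (the first regressor)."""
    X = np.column_stack((np.ones(n),) + columns)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

print(f"Team A (controls for z): {coef_on_x(x, z):+.2f}")   # about -0.3
print(f"Team B (ignores z):      {coef_on_x(x):+.2f}")      # about +0.5
```

If only the teams with striking results get published, the literature records the extremes of this hidden distribution rather than its spread.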

A secondary impact of publication bias is on review articles and meta-analyses. Both types of publication summarize the existing literature and thus rely either on including all studies or, at the very least, on unpublished studies being similar to published ones. If unpublished studies are disproportionately negative, publication bias carries over into the reviews and meta-analyses built on top of them.
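
As a rough sketch of that propagation (invented numbers, and a crude publish-only-significant-positives rule), compare a fixed-effect inverse-variance pooled estimate over all simulated studies with one computed from the "published" subset alone:

```python
# Sketch of publication bias propagating into a meta-analysis:
# inverse-variance fixed-effect pooling over all simulated studies
# vs. only the "published" (significant and positive) ones. The
# effect size, study counts, and publication rule are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_effect, n_studies, n_per_arm = 0.15, 80, 40

effects, variances, is_published = [], [], []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(true_effect, 1.0, n_per_arm)
    diff = treated.mean() - control.mean()
    var = treated.var(ddof=1) / n_per_arm + control.var(ddof=1) / n_per_arm
    _, p = stats.ttest_ind(treated, control)
    effects.append(diff)
    variances.append(var)
    is_published.append(p < 0.05 and diff > 0)  # the file drawer gets the rest

effects, variances = np.array(effects), np.array(variances)
pub = np.array(is_published)

def pooled(mask):
    w = 1.0 / variances[mask]                   # inverse-variance weights
    return np.sum(w * effects[mask]) / np.sum(w)

print(f"pooled over all {n_studies} studies:     {pooled(np.ones_like(pub)):.2f}")
print(f"pooled over {pub.sum()} published studies: {pooled(pub):.2f}")  # inflated
```

The pooled estimate from the published subset lands well above the true effect, which is why meta-analysts probe for missing studies with tools such as funnel plots.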

References

  1. K. Dickersin et al. Publication bias and clinical trials. Controlled Clinical Trials 1987; 8(4): 343–353.
  2. Your Experiment Didn't Work Out? The Journal of Errology Wants to Hear From You. Retraction Watch.
  3. John P.A. Ioannidis. Why Most Published Research Findings Are False. PLoS Med 2005; 2(8): e124.
  4. Nate Breznau, Eike Mark Rinke, Alexander Wuttke, Tomasz Żółtak, and 162 members of individual research teams. Observing many researchers using the same data and hypothesis reveals a hidden universe of uncertainty. PNAS 2022; 119(44).

Licensed under CC BY-SA 3.0 | Source: https://rationalwiki.org/wiki/Publication_bias