This is a list of important publications in statistics, organized by field.
Some reasons why a particular publication might be regarded as important:
Topic creator – A publication that created a new topic
Breakthrough – A publication that changed scientific knowledge significantly
Influence – A publication which has significantly influenced the world or has had a massive impact on the teaching of statistics.
Contents
1 Probability
2 Mathematical statistics
3 Bayesian statistics
4 Time series
5 Applied statistics
6 Statistical learning theory
7 Variance component estimation
8 Survival analysis
9 Meta analysis
10 Experimental design
11 See also
12 References
13 External links
Probability
Théorie analytique des probabilités
Author: Pierre-Simon Laplace
Publication data: 1820 (3rd ed.)
Online version: Internet Archive; CNRS, with more accurate character recognition; Gallica-Math, complete PDF and PDFs by section
Description: Introduced the Laplace transform, exponential families, and conjugate priors in Bayesian statistics. Pioneered asymptotic statistics and proved an early version of the Bernstein–von Mises theorem on the irrelevance of the (regular) prior distribution to the limiting posterior distribution, highlighting the asymptotic role of the Fisher information. Studied the roles of the median and of skewness in regression analysis. Inspired the field of robust regression, proposed the Laplace distribution, and was the first to provide alternatives to Carl Friedrich Gauss's work on statistics.
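As a pointer to one of the contributions named above, the density now called the Laplace distribution, written in modern notation rather than Laplace's own (a sketch for orientation, not a quotation from the book):

```latex
f(x \mid \mu, b) = \frac{1}{2b}\,\exp\!\left(-\frac{|x-\mu|}{b}\right), \qquad b > 0.
```

Maximizing the likelihood of the location parameter \mu under this density amounts to minimizing the sum of absolute deviations, which is one reason the distribution is tied to the median and to the later field of robust regression.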
Mathematical statistics
Mathematical Methods of Statistics
Author: Harald Cramér
Publication data: Princeton Mathematical Series, vol. 9. Princeton University Press, Princeton, N. J., 1946. xvi+575 pp. (A first version was published by Almqvist & Wiksell in Uppsala, Sweden, but had little circulation because of World War II.)
Description: Carefully written and extensive account of measure-theoretic probability for statisticians, along with careful mathematical treatment of classical statistics.
Importance: Made measure-theoretic probability the standard language for advanced statistics in the English-speaking world, following its earlier adoption in France and the USSR.
Statistical Decision Functions
Author: Abraham Wald
Publication data: 1950. John Wiley & Sons.
Description: Exposition of statistical decision theory as a foundation of statistics. Included Wald's earlier results on sequential analysis and the sequential probability ratio test (sketched after this entry), and his complete class theorem characterizing admissible decision rules as limits of Bayesian procedures.
Importance: Raised the mathematical status of statistical theory and attracted mathematical statisticians like John von Neumann, Aryeh Dvoretzky, Jacob Wolfowitz, Jack C. Kiefer, and David Blackwell, providing greater ties with economic theory and operations research. Spurred further work on decision theory.
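For orientation, the sequential probability ratio test mentioned in the description, stated in its now-standard form (notation and thresholds as usually presented, not quoted from Wald's book):

```latex
\Lambda_n = \prod_{i=1}^{n} \frac{f_1(x_i)}{f_0(x_i)}, \qquad
\text{continue sampling while } A < \Lambda_n < B,
```

accepting H_0 when \Lambda_n \le A and H_1 when \Lambda_n \ge B, with Wald's approximate thresholds A \approx \beta/(1-\alpha) and B \approx (1-\beta)/\alpha for target error probabilities \alpha and \beta.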
Testing Statistical Hypotheses
Author: Erich Leo Lehmann
Publication data: 1959. John Wiley & Sons.
Description: Exposition of statistical hypothesis testing using the statistical decision theory of Abraham Wald, with some use of measure-theoretic probability.
Importance: Made Wald's ideas accessible. Collected and organized many results of statistical theory that were scattered throughout journal articles, civilizing statistics.
Bayesian statistics
An Essay towards solving a Problem in the Doctrine of Chances
Author: Thomas Bayes
Publication data: 1763-12-23
Online version:"An Essay towards solving a Problem in the Doctrine of Chances. By the late Rev. Mr. Bayes, F.R.S. communicated by Mr. Price, in a Letter to John Canton, A.M. F.R.S.". Department of Mathematics, University of York. http://www.york.ac.uk/depts/maths/histstat/essay.pdf.
Description: In this paper Bayes addresses the problem of using a sequence of identical "trials" to determine the per-trial probability of "success" – the so-called inverse probability problem. It later inspired the theorem that bears his name (Bayes' theorem). See also Pierre Simon de Laplace.
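In modern notation (not Bayes's own), the inverse probability problem of the essay can be summarized as follows: with a uniform prior on the per-trial success probability p and k successes observed in n independent trials, the posterior density is

```latex
\pi(p \mid k, n) \;\propto\; p^{k}(1-p)^{n-k}, \qquad 0 \le p \le 1,
```

that is, a Beta(k+1, n-k+1) distribution; this is a special case of the general rule P(A \mid B) = P(B \mid A)\,P(A)/P(B) now known as Bayes' theorem.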
Time series
Time Series Analysis Forecasting and Control
Authors: George E.P. Box and Gwilym M. Jenkins
Publication data: Holden-Day, 1970
Description: Systematic approach to ARIMA and ARMAX modelling (a minimal fitting sketch follows this entry)
Importance: This book introduces ARIMA and associated input-output models, studies how to fit them and develops a methodology for time series forecasting and control. It has changed econometrics, process control and forecasting.
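A minimal sketch of fitting an ARIMA model in the Box–Jenkins spirit, using the statsmodels Python library on simulated data; the library, the simulated series, and the chosen order (1, 1, 1) are illustrative assumptions, not taken from the book:

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.arima.model import ARIMA

np.random.seed(0)

# Simulate an ARMA(1,1) series and integrate it once, so the true model is ARIMA(1,1,1).
ar = np.array([1.0, -0.7])   # AR lag polynomial: 1 - 0.7L
ma = np.array([1.0, 0.4])    # MA lag polynomial: 1 + 0.4L
stationary = ArmaProcess(ar, ma).generate_sample(nsample=500)
y = np.cumsum(stationary)    # one round of integration (d = 1)

# Estimate an ARIMA(1, 1, 1) model and forecast, mirroring the identify/estimate/forecast cycle.
result = ARIMA(y, order=(1, 1, 1)).fit()
print(result.summary())
print(result.forecast(steps=10))  # ten-step-ahead point forecasts
```

In practice the order (p, d, q) would be chosen by examining autocorrelation and partial autocorrelation plots and residual diagnostics, which is the part of the methodology the book develops at length.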
Applied statistics
Statistical Methods for Research Workers
Author: Ronald A. Fisher
Publication data: 1925, Oliver and Boyd, Edinburgh
Description: The original manual for researchers, especially biologists, on how to statistically evaluate numerical data.
Importance: Hugely influential text by the father of modern statistics that remained in print for more than 50 years.[2] Responsible for the widespread use of tests of statistical significance.
Statistical Methods
Author: George W. Snedecor
Publication data: 1937, Collegiate Press
Description: One of the first comprehensive texts on statistical methods. Reissued as Statistical Methods Applied to Experiments in Agriculture and Biology in 1940, and again as Statistical Methods, with W. G. Cochran as co-author, in 1967. A classic text.
Importance: Influence
Principles and Procedures of Statistics with Special Reference to the Biological Sciences.
Authors: Steel, R.G.D., and Torrie, J. H.
Publication data: McGraw Hill (1960) 481 pages
Description: Excellent introductory text for analysis of variance (one-way, multi-way, factorial, split-plot, and unbalanced designs), as well as analysis of covariance, multiple and partial regression and correlation, non-linear regression, and non-parametric analyses. This book was written before computer programmes were available, so it gives the detail needed to make the calculations manually. Cited in more than 1,381 publications between 1961 and 1975.[3]
Importance: Influence
Biometry: The Principles and Practices of Statistics in Biological Research
Authors: Robert R. Sokal; F. J. Rohlf
Publication data: 1st ed. W. H. Freemann (1969); 2nd ed. W. H. Freemann (1981); 3rd ed. Freeman & Co. (1994)
Description: Key textbook on biometry: the application of statistical methods to the descriptive, experimental, and analytical study of biological phenomena.
Importance: Cited in more than 7,000 publications.[4]
Statistical learning theory
On the uniform convergence of relative frequencies of events to their probabilities
Authors: V. Vapnik, A. Chervonenkis
Publication data: Theory of Probability and Its Applications, 16(2):264–280, 1971 doi:10.1137/1116025
Description: Computational learning theory, VC theory, statistical uniform convergence and the VC dimension (the key bound is sketched after this entry).
Importance: Breakthrough, Influence
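The flavour of the paper's central uniform convergence bound, stated loosely (constants differ across presentations; this is a sketch, not a quotation):

```latex
\Pr\Big(\sup_{A \in \mathcal{C}} \big|\nu_n(A) - P(A)\big| > \varepsilon\Big)
\;\le\; 4\, m_{\mathcal{C}}(2n)\, e^{-n\varepsilon^{2}/8},
```

where \nu_n(A) is the relative frequency of the event A in n trials and m_{\mathcal{C}} is the growth (shatter) function of the class \mathcal{C}. The bound is useful exactly when the class has finite VC dimension, since m_{\mathcal{C}}(n) then grows only polynomially in n.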
Variance component estimation
On the mathematical foundations of theoretical statistics
Author: Fisher, RA
Publication data: 1922, Philosophical Transactions of the Royal Society of London, Series A, volume 222, pages 309–368
Description: First comprehensive treatise of estimation by maximum likelihood.[5]
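For orientation, maximum likelihood estimation in the form the paper made standard (modern notation):

```latex
L(\theta) = \prod_{i=1}^{n} f(x_i;\theta), \qquad
\hat{\theta} = \arg\max_{\theta} \log L(\theta),
```

with the Fisher information I(\theta) = \mathrm{E}\big[-\partial^{2}\log f(X;\theta)/\partial\theta^{2}\big] governing the large-sample variance of \hat{\theta}, approximately 1/(n\,I(\theta)) under regularity conditions.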
Estimation of Variance and Covariance Components
Author: Henderson, CR
Publication data: 1953, Biometrics, volume 9, pages 226–252
Description: First description of three methods of estimation of variance components in mixed linear models for unbalanced data. "One of the most frequently cited papers in the scientific literature."[6][7]
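The setting of this and the following entry, written in modern mixed linear model notation (a sketch for orientation; the notation is not taken from the papers):

```latex
y = X\beta + Z_1 u_1 + \cdots + Z_k u_k + e, \qquad
u_i \sim (0, \sigma_i^{2} I), \quad e \sim (0, \sigma_e^{2} I),
```

so that \operatorname{Var}(y) = \sum_i \sigma_i^{2} Z_i Z_i' + \sigma_e^{2} I. The variance components to be estimated from unbalanced data are the \sigma_i^{2} and \sigma_e^{2}.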
Estimation of Variance and Covariance Components in Linear Models
Author: Rao, CR
Publication data: 1972, Journal of the American Statistical Association, volume 67, pages 112–115
Description: First description of Minimum Variance Quadratic Unbiased Estimation (MIVQUE) and Minimum Norm Quadratic Unbiased Estimation (MINQUE) for unbalanced data
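Loosely, and in the same mixed-model notation as above (a sketch of the idea rather than Rao's exact formulation): a linear combination \sum_i p_i \sigma_i^{2} of the variance components is estimated by a quadratic form

```latex
\hat{\vartheta} = y' A y, \qquad A X = 0, \qquad \operatorname{tr}(A V_i) = p_i \ \text{for each component},
```

where V_i is the covariance contribution of the i-th component (Z_i Z_i', or the identity for the residual). Among all such A, MINQUE minimizes a matrix norm of A, while MIVQUE minimizes the variance of y'Ay under normality.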
Survival analysis
The Statistical Analysis of Failure Time Data
Authors: Kalbfleisch, JD and Prentice, RL
Publication data: 1980, John Wiley & Sons, New York
Description: First comprehensive text covering the methods of estimation and inference for time-to-event analyses (a few standard quantities from this area are sketched after this entry)
Importance: Influence
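A few of the standard quantities such a text treats, in the usual notation (a general sketch, not specific to this book):

```latex
S(t) = \Pr(T > t), \qquad
\lambda(t) = \lim_{\Delta \downarrow 0}\frac{\Pr(t \le T < t + \Delta \mid T \ge t)}{\Delta} = \frac{f(t)}{S(t)},
```

with covariates x most commonly entering through the proportional hazards form \lambda(t \mid x) = \lambda_0(t)\exp(x'\beta), estimated from possibly censored observation times.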
Meta analysis
Report on Certain Enteric Fever Inoculation Statistics
Author: Pearson, K
Publication data: 1904, British Medical Journal, volume 2, pages 1243-1246 PMID 20761760
Description: Generally considered to be the first synthesis of results from separate studies, although no formal statistical methods for combining results are presented.
Importance: Breakthrough, Influence
The Probability Integral Transformation for Testing Goodness of Fit and Combining Independent Tests of Significance
Author: Pearson, ES
Publication data: 1938, Biometrika, volume 30, pages 134–148
Description: A comprehensive treatment of the various methods for formally combining results from different experiments (one classical combination rule is sketched after this entry)
Importance: Breakthrough, Influence
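One classical combination rule of this kind rests on the probability integral transformation: under its null hypothesis each p-value is uniform on (0, 1), so for k independent tests

```latex
-2 \sum_{i=1}^{k} \ln p_i \;\sim\; \chi^{2}_{2k},
```

giving a single combined test of significance (Fisher's combination method). This is shown for orientation only and is not a summary of the paper's own derivations.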
Experimental design
On Small Differences in Sensation
Authors: Charles Sanders Peirce and Joseph Jastrow
Publication data: Peirce, Charles Sanders; Jastrow, Joseph (1885). "On Small Differences in Sensation". Memoirs of the National Academy of Sciences 3: 73–83. http://psychclassics.yorku.ca/Peirce/small-diffs.htm.
Description: Peirce and Jastrow used logistic regression to estimate subjective probabilities of subjects' judgments of the heavier of two weights, following a randomized controlled repeated measures design (the model form is sketched after this entry).[9][10]
Importance: The first randomized experiment, which also used blinding; it seems also to have been the first experiment for estimating subjective probabilities.[9][10]
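The model form referred to in the description, in modern notation (a sketch; the authors' own fitting procedure differs in detail):

```latex
\Pr(\text{correct judgment} \mid d) = \frac{1}{1 + e^{-(\alpha + \beta d)}},
```

where d is the small difference between the two weights being compared; the fitted curve relates discrimination performance to the size of the stimulus difference.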
The Design of Experiments
Author: Fisher, RA
Publication data: 1935, Oliver and Boyd, Edinburgh
Description: The first textbook on experimental design
Importance: Influence[11][12][13]
The Design and Analysis of Experiments
Author: Oscar Kempthorne
Publication data: 1950, John Wiley & Sons, New York (Reprinted with corrections in 1979 by Robert E. Krieger)
Description: Early exposition of the general linear model using matrix algebra (following lecture notes of George W. Brown). Bases inference on the randomization distribution objectively defined by the experimental protocol, rather than a so-called "statistical model" expressing the subjective beliefs of a statistician: The normal model is regarded as a convenient approximation to the randomization-distribution, whose quality is assessed by theorems about moments and simulation experiments.
Importance: The first and most extensive discussion of randomization-based inference in the field of design of experiments until the recent two-volume work by Hinkelmann and Kempthorne; randomization-based inference is called "design-based" inference in survey sampling of finite populations. Introduced the treatment-unit additivity hypothesis, which was discussed in chapter 2 of David R. Cox's book on experiments (1958) and which has influenced Donald Rubin and Paul Rosenbaum's analysis of observational data. (A toy randomization test is sketched after this entry.)
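A toy illustration of randomization-based inference as described above (a sketch under simple assumptions: the data, the completely randomized two-group design, and the difference-in-means statistic are all illustrative, not taken from the book):

```python
# Randomization (permutation) test for a treatment effect in a completely randomized design.
import numpy as np

rng = np.random.default_rng(1)

treated = np.array([12.1, 14.3, 13.8, 15.0, 12.9])   # hypothetical responses of treated units
control = np.array([11.2, 12.0, 11.7, 12.5, 11.9])   # hypothetical responses of control units
observed = treated.mean() - control.mean()

pooled = np.concatenate([treated, control])
n_treated = len(treated)

# Re-randomize the treatment labels many times; under the sharp null of no unit-level effect,
# every re-labelling is equally probable under the experimental protocol itself.
diffs = np.empty(10000)
for i in range(diffs.size):
    perm = rng.permutation(pooled)
    diffs[i] = perm[:n_treated].mean() - perm[n_treated:].mean()

# Two-sided randomization p-value: the share of re-randomizations at least as extreme as observed.
p_value = np.mean(np.abs(diffs) >= abs(observed))
print(f"observed difference = {observed:.2f}, randomization p-value = {p_value:.3f}")
```

The reference distribution here is generated by the randomization itself, which is the sense in which the inference is "design-based" rather than model-based.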
On the Experimental Attainment of Optimum Conditions (with discussion)
Authors: George E. P. Box and K. B. Wilson.
Publication data: (1951) Journal of the Royal Statistical Society Series B 13(1):1–45.
Description: Introduced the Box–Wilson central composite design for fitting a quadratic polynomial in several variables to experimental data, used when an initial affine (first-order) model has failed to yield a direction of ascent. The design and analysis are motivated by a problem in chemical engineering. (The second-order model is sketched after this entry.)
Importance: Introduced response surface methodology for approximating local optima of systems with noisy observations of responses.
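The second-order model such a design is built to fit, in k coded factors (standard response surface notation, not a quotation from the paper):

```latex
y = \beta_0 + \sum_{i=1}^{k} \beta_i x_i + \sum_{i=1}^{k} \beta_{ii} x_i^{2}
  + \sum_{i<j} \beta_{ij} x_i x_j + \varepsilon,
```

with the central composite design supplying the factorial, axial ("star") and centre points needed to estimate all of these coefficients economically.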
See also
List of scientific journals in statistics
References
↑ Schervish, Mark J. (November 1987). "A Review of Multivariate Analysis". Statistical Science 2 (4): 396–413. doi:10.1214/ss/1177013111. ISSN 0883-4237.
↑ "Statistical Methods for Research Workers". Encyclopædia Britannica, Inc. http://www.britannica.com/EBchecked/topic/564160/Statistical-Methods-for-Research-Workers.
↑ "Steel, Robert GD & Torrie, JH. Principles and procedures of statistics". Current Contents/Life Sciences 39: 20. 1977. http://garfield.library.upenn.edu/classics1977/A1977DU23500002.pdf.
↑ "Sokal RR and Rohlf FI. Biometry: the principles and practice of statistics in biological research". Current Contents/Agriculture, Biology, Environment 41: 22. 1982. http://garfield.library.upenn.edu/classics1982/A1982PJ14400001.pdf.
↑ Aldrich, John (1997). "R.A. Fisher and the making of maximum likelihood 1912-1922". Statistical Science 12 (3): 162–176. doi:10.1214/ss/1030037906. http://projecteuclid.org/Dienst/UI/1.0/Summarize/euclid.ss/1030037906.
↑ Searle, SR (November 1991). "C.R. Henderson, the statistician; and his contributions to variance components estimation". Journal of Dairy Science 74 (11): 4035–4044. doi:10.3168/jds.S0022-0302(91)78599-8. ISSN 0022-0302. PMID 1757641.
↑ "Henderson, CR: Estimation of variance and covariance components". Current Contents/Agriculture Biology & Environmental Sciences 24: 10. 1980. http://garfield.library.upenn.edu/classics1980/A1980JU47400001.pdf.
↑ "Mantel N. Evaluation of survival data and two new rank order statistics arising in its consideration". Current Contents/Life Sciences 8: 19. 1983. http://garfield.library.upenn.edu/classics1983/A1983QB30100002.pdf.
↑ 9.0 9.1 Cite error: Invalid <ref> tag; no text was provided for refs named projecteuclid
↑ 10.0 10.1 Cite error: Invalid <ref> tag; no text was provided for refs named Stephen1992
↑ Stanley, J. C. (1966). "The Influence of Fisher's "The Design of Experiments" on Educational Research Thirty Years Later". American Educational Research Journal 3 (3): 223–229. doi:10.3102/00028312003003223.
↑ Box, JF (February 1980). "R. A. Fisher and the Design of Experiments, 1922-1926". The American Statistician 34 (1): 1–7. doi:10.2307/2682986.
↑ Yates, F (June 1964). "Sir Ronald Fisher and the Design of Experiments". Biometrics 20 (2): 307–321. doi:10.2307/2528399.
External links
Eugene Garfield. "What is a Citation Classic?". University of Pennsylvania. http://www.garfield.library.upenn.edu/classics.html.
Ryan, TP; Woodall, WH (July 2005). "The Most-Cited Statistical Papers". Journal of Applied Statistics 32 (5): 461–474. doi:10.1080/02664760500079373.