Statistical hypothesis testing


Editor-In-Chief: C. Michael Gibson, M.S., M.D.


A statistical hypothesis test is a method of making statistical decisions from and about experimental data. Null-hypothesis testing just answers the question of "how well the findings fit the possibility that chance factors alone might be responsible."[1] This is done by asking and answering a hypothetical question: how likely would the observed findings be if only chance were at work? One use is deciding whether experimental results contain enough information to cast doubt on conventional wisdom.

As an example, consider determining whether a suitcase contains some radioactive material. Placed under a Geiger counter, it produces 10 counts per minute. The null hypothesis is that there is no radioactive material in the suitcase and that all measured counts are due to ambient radioactivity typical of the surrounding air and harmless objects in the suitcase. We can then calculate how likely it is that we would observe 10 counts per minute if the null hypothesis were true. If the null hypothesis predicts, say, an average of 9 counts per minute with a standard deviation of 1 count per minute, we say that the suitcase is compatible with the null hypothesis (which does not imply that there is no radioactive material; we simply cannot determine that from this measurement). On the other hand, if the null hypothesis predicts, say, 1 count per minute with a standard deviation of 1 count per minute, then the suitcase is not compatible with the null hypothesis, and factors other than ambient radiation are likely responsible for the measurement.
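
To make the comparison concrete, here is a minimal Python sketch of the two calculations. It assumes, for illustration only, that the count rate is approximately normally distributed under each candidate null hypothesis, and it uses a conventional 5% cutoff for "compatible"; neither assumption comes from the example itself.

```python
# Minimal sketch of the suitcase example: how surprising are 10 counts
# per minute under each candidate null hypothesis? Normality of the
# count rate and the 5% cutoff are illustrative assumptions.
from scipy.stats import norm

observed = 10  # counts per minute measured under the Geiger counter

for mean, sd in [(9, 1), (1, 1)]:
    # Upper-tail probability of at least the observed count rate if the
    # null hypothesis (ambient radioactivity only) were true.
    p = norm.sf(observed, loc=mean, scale=sd)
    verdict = "compatible" if p > 0.05 else "not compatible"
    print(f"null mean={mean}, sd={sd}: P(X >= {observed}) = {p:.2e} ({verdict})")
```

For the null model with mean 9, the upper-tail probability is about 0.16, so the observation is unremarkable; for the model with mean 1, the observation lies 9 standard deviations out and the tail probability is vanishingly small.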

The test described here is more fully the null-hypothesis statistical significance test. The null hypothesis is a conjecture that exists solely to be falsified by the sample. Statistical significance is a possible finding of the test: that the sample is unlikely to have occurred by chance given the truth of the null hypothesis. The name of the test describes its formulation and its possible outcome. One characteristic of the test is its crisp decision: reject or do not reject (which is not the same as accept). A calculated value is compared to a threshold.

Details

One may be faced with the problem of making a definite decision with respect to an uncertain hypothesis which is known only through its observable consequences. A statistical hypothesis test, or more briefly, a hypothesis test, is an algorithm for choosing between the alternatives (for or against the hypothesis) in a way that minimizes certain risks.

This article describes the commonly used frequentist treatment of hypothesis testing. From the Bayesian point of view, it is appropriate to treat hypothesis testing as a special case of normative decision theory (specifically a model selection problem) and it is possible to accumulate evidence in favor of (or against) a hypothesis using concepts such as likelihood ratios known as Bayes factors.

There are several preparations we make before we observe the data; a minimal numerical sketch of the whole procedure follows the list.

  1. The null hypothesis must be stated in mathematical/statistical terms that make it possible to calculate the probability of possible samples assuming the hypothesis is correct. For example: the mean response to the treatment being tested equals the mean response to the placebo in the control group, and both responses have a normal distribution with this unknown mean and the same known standard deviation … (value).
  2. A test statistic must be chosen that summarizes the information in the sample that is relevant to the hypothesis. In the example above, it might be the numerical difference between the two sample means, <math>m_1 - m_2</math>.
  3. The distribution of the test statistic is used to calculate the probabilities of sets of possible values (usually an interval or a union of intervals). In this example, the difference between sample means would have a normal distribution with a standard deviation equal to the common standard deviation times the factor <math>\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}</math>, where <math>n_1</math> and <math>n_2</math> are the sample sizes.
  4. Among all the sets of possible values, we must choose one that we think represents the most extreme evidence against the hypothesis. That is called the critical region of the test statistic. The probability that the test statistic falls in the critical region when the null hypothesis is correct is called the alpha value (or size) of the test.
  5. The probability that a sample falls in the critical region when the parameter is <math>\theta</math>, where <math>\theta</math> is a value consistent with the alternative hypothesis, is called the power of the test at <math>\theta</math>. The power function of a critical region maps each <math>\theta</math> to the power of the test at <math>\theta</math>.
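
The following Python sketch walks through these five preparations and the subsequent decision for the two-sample example above. The numerical values (the common standard deviation, sample sizes, significance level, and observed means) are illustrative assumptions; none are fixed by the text.

```python
# Sketch of the five preparations for the two-sample example.
# sigma, n1, n2, alpha, and the observed means are assumed values.
from math import sqrt
from scipy.stats import norm

sigma, n1, n2 = 4.0, 30, 30   # step 1: H0: mu1 == mu2, known common sigma
alpha = 0.05                  # step 4: chosen size of the test

# Step 3: under H0, m1 - m2 is normal with mean 0 and standard deviation
# sigma * sqrt(1/n1 + 1/n2).
se = sigma * sqrt(1.0 / n1 + 1.0 / n2)

# Step 4: the two-sided critical region |m1 - m2| > threshold has
# probability alpha under H0.
threshold = norm.ppf(1 - alpha / 2) * se

# Step 5: power at a specific alternative theta = mu1 - mu2.
def power(theta):
    return norm.sf(threshold, loc=theta, scale=se) + norm.cdf(-threshold, loc=theta, scale=se)

# Step 2, after the data arrive: compute the statistic and decide.
m1, m2 = 27.3, 24.9           # hypothetical sample means
print("reject H0" if abs(m1 - m2) > threshold else "fail to reject H0")
print(f"power at theta = 3.0: {power(3.0):.3f}")
```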

After the data are available, the test statistic is calculated and we determine whether it is inside the critical region.

If the test statistic is inside the critical region, then our conclusion is one of the following:

  1. Reject the null hypothesis. (Therefore the critical region is sometimes called the rejection region, while its complement is the acceptance region.)
  2. An event of probability less than or equal to alpha has occurred.

The researcher has to choose between these logical alternatives. In the example we would say: the observed response to treatment is statistically significant.

If the test statistic is outside the critical region, the only conclusion is that there is not enough evidence to reject the null hypothesis. This is not the same as evidence in favor of the null hypothesis; we cannot obtain that using these arguments, since lack of evidence against a hypothesis is not evidence for it. On this basis, statistical research progresses by eliminating error, not by finding the truth.

Definition of terms

Following the exposition in Lehmann and Romano[2], we shall make some definitions:

Simple hypothesis
Any hypothesis which specifies the population distribution completely.
Composite hypothesis
Any hypothesis which does not specify the population distribution completely.
Statistical test
A decision function that takes its values in the set of hypotheses.
Region of acceptance
The set of values of the test statistic for which we fail to reject the null hypothesis.
Region of rejection / Critical region
The set of values of the test statistic for which the null hypothesis is rejected.
Power of a test (1-<math>\beta</math>)
The test's probability of correctly rejecting the null hypothesis; the complement of the false negative rate.
Size / Significance level of a test (<math>\alpha</math>)
For simple hypotheses, this is the test's probability of incorrectly rejecting the null hypothesis. The false positive rate. For composite hypotheses this is the upper bound of the probability of rejecting the null hypothesis over all cases covered by the null hypothesis.
Most powerful test
For a given size or significance level, the test with the greatest power.
Uniformly most powerful test (UMP)
A test with the greatest power for all values of the parameter being tested.
Unbiased test
For a specific alternative hypothesis, a test is said to be unbiased when the probability of rejecting the null hypothesis is not less than the significance level when the alternative is true, and is less than or equal to the significance level when the null hypothesis is true.
Uniformly most powerful unbiased (UMPU)
A test which is UMP in the set of all unbiased tests.
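
To illustrate the size and power definitions, here is a minimal sketch for a one-sided one-sample z-test of H0: μ = 0 against μ > 0; the sample size, standard deviation, and significance level are assumed for illustration.

```python
# Size and power of a one-sided one-sample z-test of H0: mu = 0
# versus mu > 0. All numeric values are illustrative assumptions.
from math import sqrt
from scipy.stats import norm

n, sigma, alpha = 25, 1.0, 0.05
crit = norm.ppf(1 - alpha)            # reject H0 when z > crit

def power(theta):
    """Probability of rejecting H0 when the true mean is theta."""
    # z = xbar / (sigma / sqrt(n)) has mean theta * sqrt(n) / sigma
    # and unit variance when the true mean is theta.
    return norm.sf(crit - theta * sqrt(n) / sigma)

print(f"size (power at theta = 0): {power(0.0):.3f}")   # equals alpha
for theta in (0.2, 0.4, 0.6):
    print(f"power at theta = {theta}: {power(theta):.3f}")
```

The power function here increases with θ, so this test is unbiased in the sense defined above: its rejection probability never falls below α for any alternative θ > 0.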

Common test statistics

See the legend defining symbols at the bottom of the table. The statistics for some other tests, such as the Wald test and the likelihood ratio test, are described in articles of their own.

One-sample z-test
<math>z=\frac{\overline{x}-\mu_0}{\frac{\sigma}{\sqrt{n}}}</math>
Assumptions: (normal distribution or n > 30) and σ known. (z is the distance from the mean in units of the standard deviation of the mean.) For non-normal distributions it is possible to calculate a minimum proportion of a population that falls within k standard deviations for any k (see Chebyshev's inequality).

Two-sample z-test
<math>z=\frac{(\overline{x}_1 - \overline{x}_2) - (\mu_1 - \mu_2)}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}}</math>
Assumptions: normal distributions, independent observations, and σ1 and σ2 known.

One-sample t-test
<math>t=\frac{\overline{x}-\mu_0}{\frac{s}{\sqrt{n}}},</math> <math>df=n-1 \ </math>
Assumptions: (normal population or n > 30) and σ unknown.

Paired t-test
<math>t=\frac{\overline{d}-d_0}{\frac{s_d}{\sqrt{n}}},</math> <math>df=n-1 \ </math>
Assumptions: (normal population of differences or n > 30) and σ unknown.

One-proportion z-test
<math>z=\frac{\hat{p} - p}{\sqrt{\frac{p(1-p)}{n}}}</math>
Assumptions: np > 10, n(1 − p) > 10, and a simple random sample (SRS).

Two-proportion z-test, pooled (equal variances)
<math>z=\frac{\hat{p}_1 - \hat{p}_2}{\sqrt{\hat{p}(1 - \hat{p})(\frac{1}{n_1} + \frac{1}{n_2})}}</math> with <math>\hat{p}=\frac{x_1 + x_2}{n_1 + n_2}</math>
Assumptions: n1 p1 > 5, n1(1 − p1) > 5, n2 p2 > 5, n2(1 − p2) > 5, and independent observations.

Two-proportion z-test, unpooled (unequal variances)
<math>z=\frac{(\hat{p}_1 - \hat{p}_2) - (p_1 - p_2)}{\sqrt{\frac{\hat{p}_1(1 - \hat{p}_1)}{n_1} + \frac{\hat{p}_2(1 - \hat{p}_2)}{n_2}}}</math>
Assumptions: n1 p1 > 5, n1(1 − p1) > 5, n2 p2 > 5, n2(1 − p2) > 5, and independent observations.

Two-sample pooled t-test
<math>t=\frac{(\overline{x}_1 - \overline{x}_2) - (\mu_1 - \mu_2)}{s_p\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}},</math> <math>s_p^2=\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2},</math> <math>df=n_1 + n_2 - 2 \ </math>
Assumptions: (normal populations or n1 + n2 > 40), independent observations, σ1 = σ2, and σ1 and σ2 unknown.

Two-sample unpooled t-test
<math>t=\frac{(\overline{x}_1 - \overline{x}_2) - (\mu_1 - \mu_2)}{\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}},</math> <math>df=\frac{(n_1 - 1)(n_2 - 1)}{(n_2 - 1)c^2 + (n_1 - 1)(1 - c)^2},</math> <math>c=\frac{\frac{s_1^2}{n_1}}{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}},</math> or, conservatively, <math>df=\min\{n_1,n_2\} - 1\ </math>
Assumptions: (normal populations or n1 + n2 > 40), independent observations, σ1 ≠ σ2, and σ1 and σ2 unknown.

Definition of symbols
<math>n</math> = sample size
<math>\overline{x}</math> = sample mean
<math>\mu_0</math> = population mean
<math>\sigma</math> = population standard deviation
<math>t</math> = t statistic
<math>df</math> = degrees of freedom
<math>n_1</math> = sample 1 size
<math>n_2</math> = sample 2 size
<math>s_1</math> = sample 1 std. deviation
<math>s_2</math> = sample 2 std. deviation

<math>\overline{d}</math> = sample mean of differences
<math>d_0</math> = population mean difference
<math>s_d</math> = std. deviation of differences
<math>p_1</math> = proportion 1
<math>p_2</math> = proportion 2
<math>\mu_1</math> = population 1 mean
<math>\mu_2</math> = population 2 mean
<math>\min\{n_1,n_2\} </math> = minimum of n1 and n2
<math>x_1 = n_1 p_1</math>
<math>x_2 = n_2 p_2</math>
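
As a worked instance of the table's formulas, the sketch below computes the two-sample pooled t-test by hand and cross-checks the result against scipy.stats.ttest_ind. The data are invented for illustration.

```python
# Two-sample pooled t-test computed from the table's formulas and
# cross-checked against scipy. The data below are made up.
from math import sqrt
import numpy as np
from scipy import stats

x1 = np.array([5.1, 4.9, 6.0, 5.5, 5.8, 5.2])
x2 = np.array([4.4, 4.8, 4.6, 5.0, 4.3, 4.7])
n1, n2 = len(x1), len(x2)

# Pooled variance s_p^2 and t statistic, exactly as in the table,
# testing H0: mu1 - mu2 = 0.
sp2 = ((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1)) / (n1 + n2 - 2)
t = (x1.mean() - x2.mean()) / (sqrt(sp2) * sqrt(1 / n1 + 1 / n2))
df = n1 + n2 - 2
p = 2 * stats.t.sf(abs(t), df)        # two-sided p-value

t_ref, p_ref = stats.ttest_ind(x1, x2, equal_var=True)
print(f"by hand: t = {t:.4f}, df = {df}, p = {p:.4f}")
print(f"scipy:   t = {t_ref:.4f}, p = {p_ref:.4f}")  # should agree
```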

Origins

Hypothesis testing is largely the product of Ronald Fisher, Jerzy Neyman, Karl Pearson and Pearson's son, Egon Pearson. Fisher was an agricultural statistician who emphasized rigorous experimental design and methods to extract a result from few samples assuming Gaussian distributions. Neyman (who teamed with the younger Pearson) emphasized mathematical rigor and methods to obtain more results from many samples and a wider range of distributions. Modern hypothesis testing is an (extended) hybrid of the Fisher and Neyman/Pearson formulations, methods and terminology developed in the early 20th century.

Example

The following example is summarized from Fisher.[3] Fisher thoroughly explained his method in a proposed experiment to test a Lady's claimed ability to determine the means of tea preparation by taste. The article is less than 10 pages in length and is notable for its simplicity and completeness regarding terminology, calculations and design of the experiment. The example is loosely based on an event in Fisher's life. The Lady proved him wrong.

  1. The null hypothesis was that the Lady had no such ability.
  2. The test statistic was a simple count of the number of successes in 8 trials.
  3. The distribution associated with the null hypothesis was the binomial distribution familiar from coin flipping experiments.
  4. The critical region was the single case of 8 successes in 8 trials based on a conventional probability criterion (< 5%).
  5. Fisher asserted that no alternative hypothesis was (ever) required.

If, and only if, the 8 trials produced 8 successes was Fisher willing to reject the null hypothesis, effectively acknowledging the Lady's ability with > 98% confidence (but without quantifying her ability). Fisher later discussed the benefits of more trials and repeated tests.
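
The chance probability behind that critical region is short enough to verify directly. The sketch below computes it under the binomial simplification used in this summary, and also under Fisher's actual design, in which the Lady knew there were 4 cups of each kind, making the all-correct probability hypergeometric.

```python
# Chance probability of 8 successes in 8 trials under two models of
# the tea-tasting experiment.
from math import comb

# Binomial simplification (as in the summary above): each cup is an
# independent 50/50 guess.
p_binomial = 0.5 ** 8            # 1/256, about 0.4%

# Fisher's actual design: the Lady knows there are 4 cups of each kind,
# so a blind guesser picks 4 cups out of 8 at random.
p_hypergeom = 1 / comb(8, 4)     # 1/70, about 1.4%

print(f"binomial:       P(8 of 8) = {p_binomial:.4f}")
print(f"hypergeometric: P(8 of 8) = {p_hypergeom:.4f}")
# Both fall below the conventional 5% criterion, so 8 successes in
# 8 trials leads to rejection of the null hypothesis either way.
```

The hypergeometric figure, 1/70 ≈ 1.4%, is consistent with the "> 98% confidence" quoted above.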

Meta-criticism

Little criticism of the technique appears in introductory statistics texts. Criticism is of the application, or of the interpretation, rather than of the method.

Criticism of null-hypothesis significance testing is available in other articles ("Null-hypothesis" and "Statistical significance") and their references. Attacks and defenses of the null-hypothesis significance test are collected in Harlow et al.[4]

The original purpose of Fisher's formulation, as a tool for the experimenter, was to plan the experiment and to easily assess the information content of the small sample. There is little criticism, Bayesian in nature, of the formulation in its original context.

In other contexts, complaints focus on flawed interpretations of the results and over-dependence/emphasis on one test.

Numerous attacks on the formulation have failed to supplant it as a criterion for publication in scholarly journals. The most persistent attacks originated from the field of psychology. After review, the American Psychological Association did not explicitly deprecate the use of null-hypothesis significance testing, but adopted enhanced publication guidelines which implicitly reduced the relative importance of such testing. The International Committee of Medical Journal Editors recognizes an obligation to publish negative (not statistically significant) studies under some circumstances. The applicability of null-hypothesis testing to the publication of observational (as contrasted with experimental) studies is doubtful.

Criticism

Some statisticians have commented that pure "significance testing" has what is actually a rather strange goal of detecting the existence of a "real" difference between two populations. In practice, a difference can almost always be found given a large enough sample; what is typically the more relevant goal of science is a determination of causal effect size. The amount and nature of the difference, in other words, is what should be studied. Many researchers also feel that "hypothesis testing" is something of a misnomer: in practice, a single statistical test in a single study never "proves" anything.

Even when a null hypothesis is rejected, effect sizes should be taken into consideration. If the effect is statistically significant but the effect size is very small, it is a stretch to consider the effect theoretically important, as the sketch below illustrates.
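
A minimal simulation of this point, with the true effect size and sample size chosen arbitrarily for illustration: a trivially small shift becomes "significant" once the sample is large enough, while the standardized effect size remains negligible.

```python
# Statistically significant but practically trivial: a tiny true mean
# shift (0.005 standard deviations, an arbitrary choice) tested with a
# very large sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.005, scale=1.0, size=1_000_000)

t, p = stats.ttest_1samp(sample, popmean=0.0)
cohens_d = sample.mean() / sample.std(ddof=1)   # standardized effect size

print(f"p-value = {p:.3g} (significant at 0.05: {p < 0.05})")
print(f"Cohen's d = {cohens_d:.4f} (negligible in practical terms)")
```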

Philosophical criticism

Philosophical criticism of hypothesis testing includes consideration of borderline cases.

Any process that produces a crisp decision from uncertainty is subject to claims of unfairness near the decision threshold. (Consider close election results.) The premature death of a laboratory rat during testing can impact doctoral theses and academic tenure decisions.

"... surely, God loves the .06 nearly as much as the .05"[5]

The statistical significance required for publication has no mathematical basis, but is based on long tradition.

"It is usual and convenient for experimenters to take 5% as a standard level of significance, in the sense that they are prepared to ignore all results which fail to reach this standard, and, by this means, to eliminate from further discussion the greater part of the fluctuations which chance causes have introduced into their experimental results."[3]

Fisher, in the cited article, designed an experiment to achieve a statistically significant result based on sampling 8 cups of tea.

Ambivalence attacks all forms of decision making. A mathematical decision-making process is attractive because it is objective and transparent. It is repulsive because it allows authority to avoid taking personal responsibility for decisions.

Pedagogic criticism

Pedagogic criticism of the null-hypothesis testing includes the counter-intuitive formulation, the terminology and confusion about the interpretation of results.

"Despite the stranglehold that hypothesis testing has on experimental psychology, I find it difficult to imagine a less insightful means of transiting from data to conclusions."[6]

Students find it difficult to understand the formulation of statistical null-hypothesis testing. In rhetoric, examples often support an argument, but a mathematical proof "is a logical argument, not an empirical one". A single counterexample results in the rejection of a conjecture. Karl Popper defined science by its vulnerability to disproof by data. Null-hypothesis testing shares the mathematical and scientific perspective rather than the more familiar rhetorical one. Students expect hypothesis testing to be a statistical tool for illumination of the research hypothesis by the sample; it is not. The test asks indirectly whether the sample can illuminate the research hypothesis.

Students also find the terminology confusing. While Fisher disagreed with Neyman and Pearson about the theory of testing, their terminologies have been blended. The blend is not seamless or standardized. While this article teaches a pure Fisher formulation, even it mentions Neyman and Pearson terminology (Type II error and the alternative hypothesis). The typical introductory statistics text is less consistent. The Sage Dictionary of Statistics would not agree with the title of this article, which it would call null-hypothesis testing.[1] "...there is no alternate hypothesis in Fisher's scheme: Indeed, he violently opposed its inclusion by Neyman and Pearson."[7] In discussing test results, "significance" often has two distinct meanings in the same sentence; one is a probability, the other is a subject-matter measurement (such as currency). The significance (meaning) of (statistical) significance is significant (important).

There is widespread and fundamental disagreement on the interpretation of test results.

"A little thought reveals a fact widely understood among statisticians: The null hypothesis, taken literally (and that's the only way you can take it in formal hypothesis testing), is almost always false in the real world.... If it is false, even to a tiny degree, it must be the case that a large enough sample will produce a significant result and lead to its rejection. So if the null hypothesis is always false, what's the big deal about rejecting it?"[7] (The above criticism only applies to point hypothesis tests. If one were testing, for example, whether a parameter is greater than zero, it would not apply.)

"How has the virtually barren technique of hypothesis testing come to assume such importance in the process by which we arrive at our conclusions from our data?"[6]

Null-hypothesis testing just answers the question of "how well the findings fit the possibility that chance factors alone might be responsible."[1]

Null-hypothesis significance testing does not determine the truth or falseness of claims. It determines whether confidence in a claim based solely on a sample-based estimate exceeds a threshold. It is a research quality assurance test, widely used as one requirement for publication of experimental research with statistical results. It is uniformly agreed that statistical significance is not the only consideration in assessing the importance of research results. Rejecting the null hypothesis is not a sufficient condition for publication.

"Statistical significance does not necessarily imply practical significance!"[8]

Practical criticism

Practical criticism of hypothesis testing includes the sobering observation that published test results are often contradicted. Mathematical models support the conjecture that most published medical research test results are flawed. Null-hypothesis testing has not achieved the goal of a low error probability in medical journals.[9][10]

Improvements

Jones and Tukey suggested a modest improvement in the original null-hypothesis formulation to formalize the handling of one-tailed tests. In the example tea test, Fisher ignored the 8-failure case (just as improbable as the 8-success case), which altered the claimed significance by a factor of 2.[11]

Killeen proposed an alternative statistic that estimates the probability of duplicating an experimental result. It "provides all of the information now used in evaluating research, while avoiding many of the pitfalls of traditional statistical inference."[12]

References

  1. Cramer, Duncan & Howitt, Dennis (2004). The Sage Dictionary of Statistics, p. 76. ISBN 076194138X.
  2. Lehmann, E.L. & Romano, Joseph P. Testing Statistical Hypotheses (3rd ed.). ISBN 0387988645.
  3. Fisher, Sir Ronald A. (1956) [1935]. "Mathematics of a Lady Tasting Tea". In James Roy Newman, The World of Mathematics, volume 3.
  4. Harlow, Mulaik & Steiger (1997). What If There Were No Significance Tests? ISBN 978-0-8058-2634-0.
  5. Rosnow, R.L. & Rosenthal, R. (1989). Statistical procedures and the justification of knowledge in psychological science. American Psychologist 44: 1276-1284.
  6. Loftus, G.R. (1991). On the tyranny of hypothesis testing in the social sciences. Contemporary Psychology 36: 102-105.
  7. Cohen, J. (1990). Things I have learned (so far). American Psychologist 45: 1304-1312.
  8. Weiss, Neil A. (1999). Introductory Statistics (5th ed.), p. 521. ISBN 0-201-59877-9.
  9. Ioannidis JPA (2005). Contradicted and initially stronger effects in highly cited clinical research. JAMA 294: 218-228.
  10. Ioannidis JPA (2005). Why most published research findings are false. PLoS Med 2(8): e124.
  11. Jones & Tukey (2000). A sensible formulation of the significance test. Psychological Methods 5(4): 411-414.
  12. Killeen (2005). An alternative to null-hypothesis significance tests. Psychological Science 16(5): 345-353.

Licensed under CC BY-SA 3.0 | Source: https://www.wikidoc.org/index.php/Statistical_hypothesis_testing