Error exponents in hypothesis testing


In statistical hypothesis testing, the error exponent of a hypothesis testing procedure is the rate at which the probabilities of type I and type II errors decay exponentially with the size of the sample used in the test. For example, if the probability of error [math]\displaystyle{ P_{\mathrm{error}} }[/math] of a test decays as [math]\displaystyle{ e^{-n \beta} }[/math], where [math]\displaystyle{ n }[/math] is the sample size, the error exponent is [math]\displaystyle{ \beta }[/math]. Formally, the error exponent of a test is defined as the limiting value of the ratio of the negative logarithm of the error probability to the sample size for large sample sizes: [math]\displaystyle{ \lim_{n \to \infty}\frac{-\ln P_\text{error}}{n} }[/math]. Error exponents for different hypothesis tests are computed using Sanov's theorem and other results from large deviations theory.
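As a small numerical illustration (a minimal Python sketch, not part of the original article; the exponent [math]\displaystyle{ \beta = 0.5 }[/math] is a hypothetical value chosen for demonstration), the ratio [math]\displaystyle{ -\ln P_\text{error}/n }[/math] converges to [math]\displaystyle{ \beta }[/math] even when the error probability carries a subexponential prefactor:

    import numpy as np

    beta = 0.5  # hypothetical error exponent, assumed purely for illustration
    for n in [10, 100, 1000]:
        # A subexponential prefactor (here n^2) does not change the exponent.
        p_error = n**2 * np.exp(-n * beta)
        print(n, -np.log(p_error) / n)  # approaches beta as n grows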

Error exponents in binary hypothesis testing

Consider a binary hypothesis testing problem in which observations are modeled as independent and identically distributed random variables under each hypothesis. Let [math]\displaystyle{ Y_1, Y_2, \ldots, Y_n }[/math] denote the observations. Let [math]\displaystyle{ f_0 }[/math] denote the probability density function of each observation [math]\displaystyle{ Y_i }[/math] under the null hypothesis [math]\displaystyle{ H_0 }[/math] and let [math]\displaystyle{ f_1 }[/math] denote the probability density function of each observation [math]\displaystyle{ Y_i }[/math] under the alternative hypothesis [math]\displaystyle{ H_1 }[/math].

In this case there are two possible error events. A type 1 error, also called a false positive, occurs when the null hypothesis is true but is wrongly rejected. A type 2 error, also called a false negative, occurs when the alternative hypothesis is true but the null hypothesis is not rejected. The probability of a type 1 error is denoted [math]\displaystyle{ P (\mathrm{error}\mid H_0) }[/math] and the probability of a type 2 error is denoted [math]\displaystyle{ P (\mathrm{error}\mid H_1) }[/math].
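As an illustrative sketch (the Gaussian model, sample size, and threshold below are assumptions made for this example, not taken from the article), the two error probabilities can be estimated by Monte Carlo for the pair [math]\displaystyle{ f_0 = \mathcal{N}(0,1) }[/math] and [math]\displaystyle{ f_1 = \mathcal{N}(1,1) }[/math], for which a likelihood-ratio test reduces to comparing the sample mean with a threshold:

    import numpy as np

    rng = np.random.default_rng(0)
    n, trials, tau = 20, 100_000, 0.5  # sample size, Monte Carlo trials, threshold (all assumed)

    # f0 = N(0,1) under H0 and f1 = N(1,1) under H1; for this Gaussian pair the
    # likelihood-ratio test is equivalent to thresholding the sample mean at tau.
    means_h0 = rng.normal(0.0, 1.0, size=(trials, n)).mean(axis=1)
    means_h1 = rng.normal(1.0, 1.0, size=(trials, n)).mean(axis=1)

    p_type1 = np.mean(means_h0 > tau)   # H0 true but rejected (false positive)
    p_type2 = np.mean(means_h1 <= tau)  # H1 true but H0 kept (false negative)
    print(p_type1, p_type2)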

Optimal error exponent for Neyman–Pearson testing

In the Neyman–Pearson[1] version of binary hypothesis testing, one is interested in minimizing the probability of type 2 error [math]\displaystyle{ P (\text{error}\mid H_1) }[/math] subject to the constraint that the probability of type 1 error [math]\displaystyle{ P (\text{error}\mid H_0) }[/math] is less than or equal to a pre-specified level [math]\displaystyle{ \alpha }[/math]. In this setting, the optimal testing procedure is a likelihood-ratio test.[2] Furthermore, the optimal test guarantees that the type 2 error probability decays exponentially in the sample size [math]\displaystyle{ n }[/math] according to [math]\displaystyle{ \lim_{n \to \infty} \frac{- \ln P (\mathrm{error}\mid H_1)}{n} = D(f_0\parallel f_1) }[/math].[3] The error exponent [math]\displaystyle{ D(f_0\parallel f_1) }[/math] is the Kullback–Leibler divergence between the probability distributions of the observations under the two hypotheses. This exponent is also referred to as the Chernoff–Stein lemma exponent.
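Under the same illustrative Gaussian assumption ([math]\displaystyle{ f_0 = \mathcal{N}(0,1) }[/math], [math]\displaystyle{ f_1 = \mathcal{N}(1,1) }[/math]), the exponent [math]\displaystyle{ D(f_0\parallel f_1) }[/math] can be evaluated by numerical integration; for equal-variance Gaussians it also has the closed form [math]\displaystyle{ (\mu_1-\mu_0)^2/(2\sigma^2) = 0.5 }[/math], which the sketch below reproduces:

    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import norm

    # Illustrative densities: f0 = N(0,1) under H0, f1 = N(1,1) under H1.
    f0 = norm(0.0, 1.0).pdf
    f1 = norm(1.0, 1.0).pdf

    # Kullback-Leibler divergence D(f0 || f1) = integral of f0(x) * ln(f0(x)/f1(x)) dx
    kl, _ = quad(lambda x: f0(x) * np.log(f0(x) / f1(x)), -10.0, 10.0)
    print(kl)  # ~ 0.5 = (mu1 - mu0)^2 / (2 sigma^2) for this equal-variance pair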

Optimal error exponent for average error probability in Bayesian hypothesis testing

In the Bayesian version of binary hypothesis testing, one is interested in minimizing the average error probability under both hypotheses, assuming a prior probability of occurrence for each hypothesis. Let [math]\displaystyle{ \pi_0 }[/math] denote the prior probability of hypothesis [math]\displaystyle{ H_0 }[/math]. In this case the average error probability is given by [math]\displaystyle{ P_\text{ave} = \pi_0 P (\text{error}\mid H_0) + (1-\pi_0)P (\text{error}\mid H_1) }[/math]. In this setting, a likelihood-ratio test is again optimal, and the optimal error probability decays as [math]\displaystyle{ \lim_{n \to \infty} \frac{- \ln P_\text{ave} }{n} = C(f_0,f_1) }[/math], where [math]\displaystyle{ C(f_0,f_1) }[/math] is the Chernoff information between the two distributions, defined as [math]\displaystyle{ C(f_0,f_1) = \max_{\lambda \in [0,1]} \left[-\ln \int (f_0(x))^\lambda (f_1(x))^{(1-\lambda)} \, dx \right] }[/math].[3]
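Continuing the same illustrative Gaussian example, the Chernoff information can be approximated by numerically maximizing [math]\displaystyle{ -\ln \int (f_0(x))^\lambda (f_1(x))^{(1-\lambda)} \, dx }[/math] over [math]\displaystyle{ \lambda \in [0,1] }[/math]; for two equal-variance Gaussians the maximum is attained at [math]\displaystyle{ \lambda = 1/2 }[/math] and equals [math]\displaystyle{ (\mu_1-\mu_0)^2/(8\sigma^2) = 0.125 }[/math]:

    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import minimize_scalar
    from scipy.stats import norm

    # Illustrative densities: f0 = N(0,1), f1 = N(1,1).
    f0 = norm(0.0, 1.0).pdf
    f1 = norm(1.0, 1.0).pdf

    def neg_chernoff_objective(lam):
        # ln of the integral of f0^lam * f1^(1-lam); minimizing this over lam
        # is equivalent to maximizing the Chernoff objective -ln(integral).
        integral, _ = quad(lambda x: f0(x) ** lam * f1(x) ** (1.0 - lam), -10.0, 10.0)
        return np.log(integral)

    res = minimize_scalar(neg_chernoff_objective, bounds=(0.0, 1.0), method="bounded")
    print(res.x, -res.fun)  # ~ 0.5 and 0.125 = (mu1 - mu0)^2 / (8 sigma^2)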

References

  1. Neyman, J.; Pearson, E. S. (1933). "On the problem of the most efficient tests of statistical hypotheses". Philosophical Transactions of the Royal Society of London A 231 (694–706): 289–337. doi:10.1098/rsta.1933.0009. Bibcode: 1933RSPTA.231..289N. http://www.stats.org.uk/statistical-inference/NeymanPearson1933.pdf
  2. Lehmann, E. L.; Romano, Joseph P. (2005). Testing Statistical Hypotheses (3rd ed.). New York: Springer. ISBN 978-0-387-98864-1.
  3. Cover, Thomas M.; Thomas, Joy A. (2006). Elements of Information Theory (2nd ed.). New York: Wiley-Interscience.



