Error exponents in hypothesis testing

In statistical hypothesis testing, the error exponent of a hypothesis testing procedure is the rate at which the probabilities of Type I and Type II errors decay exponentially with the size of the sample used in the test. For example, if the probability of error $P_{\text{error}}$ of a test decays as $e^{-n\beta}$, where $n$ is the sample size, the error exponent is $\beta$.

Formally, the error exponent of a test is defined as the limiting value of the ratio of the negative logarithm of the error probability to the sample size for large sample sizes: $\lim_{n\to\infty} \frac{-\ln P_{\text{error}}}{n}$. Error exponents for different hypothesis tests are computed using Sanov's theorem and other results from large deviations theory. There are various methods used to show that an error exponent is achievable, including the likelihood-ratio test (which is known to be optimal in certain circumstances) and tests based on the empirical distribution.[1] Error exponents are sometimes referred to as error rates, due to the connection between hypothesis testing and information theory.[2]
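This definition can be illustrated numerically. The sketch below assumes, purely as a worked example (not part of the article), testing $N(0,1)$ against $N(\mu,1)$ with the maximum-likelihood test that compares the sample mean to $\mu/2$; both error probabilities are then exactly $Q(\sqrt{n}\,\mu/2)$, so $-\ln P_{\text{error}}/n$ can be compared with the limiting exponent $\mu^2/8$:

```python
import math

def q_tail(x):
    # Gaussian upper-tail probability Q(x) = P(Z > x) for Z ~ N(0, 1)
    return 0.5 * math.erfc(x / math.sqrt(2))

# Illustrative setup: H0: N(0,1) vs H1: N(mu,1), maximum-likelihood test
# that thresholds the sample mean at mu/2. The error probability after n
# samples is Q(sqrt(n) * mu / 2), so the error exponent is mu^2 / 8.
mu = 1.0
exact_exponent = mu**2 / 8  # 0.125

for n in (10, 100, 1000):
    p_err = q_tail(math.sqrt(n) * mu / 2)
    print(n, -math.log(p_err) / n)  # slowly approaches exact_exponent
```

The ratio $-\ln P_{\text{error}}/n$ converges only slowly because of the sub-exponential $1/\sqrt{n}$ factor in the Gaussian tail, which is exactly why the exponent is defined as a limit.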

Error exponents in binary hypothesis testing

Consider a binary hypothesis testing problem in which observations are modeled as independent and identically distributed random variables under each hypothesis. Let $Y_1, Y_2, \ldots, Y_n$ denote the observations. Let $f_0$ denote the probability density function of each observation $Y_i$ under the null hypothesis $H_0$ and let $f_1$ denote the probability density function of each observation $Y_i$ under the alternate hypothesis $H_1$.

In this case there are two possible error events. An error of type I, also called a false positive, occurs when the null hypothesis is true and it is wrongly rejected. An error of type II, also called a false negative, occurs when the alternate hypothesis is true and the null hypothesis is not rejected. The probability of type I error is denoted $P(\text{error}\mid H_0)$ and the probability of type II error is denoted $P(\text{error}\mid H_1)$. In some fields, the type I error probability is denoted $\alpha_n$ and the type II error probability is denoted $\beta_n$.
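Both error probabilities can be estimated by Monte Carlo simulation. The sketch below assumes, purely for illustration, $f_0 = N(0,1)$, $f_1 = N(1,1)$, and a test that rejects $H_0$ when the sample mean exceeds $1/2$ (the likelihood-ratio threshold for equal priors in this Gaussian case):

```python
import random
import statistics

random.seed(0)

# Illustrative setup: H0: N(0,1), H1: N(1,1); reject H0 when the sample
# mean of n observations exceeds 1/2.
n, trials = 25, 20000

def sample_mean(mu):
    # Mean of n i.i.d. draws from N(mu, 1)
    return statistics.fmean(random.gauss(mu, 1) for _ in range(n))

# Type I error alpha_n: H0 true, but the test rejects H0.
alpha_n = sum(sample_mean(0.0) > 0.5 for _ in range(trials)) / trials
# Type II error beta_n: H1 true, but the test fails to reject H0.
beta_n = sum(sample_mean(1.0) <= 0.5 for _ in range(trials)) / trials
print(alpha_n, beta_n)
```

By symmetry of this particular setup, both estimates should be close to $Q(2.5) \approx 0.006$; for an asymmetric threshold the two error probabilities would differ.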

Optimal error exponent for Neyman–Pearson testing

In the Neyman–Pearson[3] version of binary hypothesis testing, one is interested in minimizing the probability of type II error $P(\text{error}\mid H_1)$ subject to the constraint that the probability of type I error $P(\text{error}\mid H_0)$ is less than or equal to a pre-specified level $\alpha$. In this setting, the optimal testing procedure is a likelihood-ratio test.[4] Furthermore, the optimal test guarantees that the type II error probability decays exponentially in the sample size $n$ according to $\lim_{n\to\infty} \frac{-\ln P(\text{error}\mid H_1)}{n} = D(f_0\|f_1)$.[5] The error exponent $D(f_0\|f_1)$ is the Kullback–Leibler divergence between the probability distributions of the observations under the two hypotheses. This exponent is also referred to as the Chernoff–Stein lemma exponent.
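The Chernoff–Stein exponent can be evaluated numerically. The sketch below computes $D(f_0\|f_1)$ by simple quadrature for two unit-variance Gaussians (an illustrative choice, for which the divergence is $(\mu_0-\mu_1)^2/2$ in closed form):

```python
import math

def f(x, mu):
    # Density of N(mu, 1) at x
    return math.exp(-(x - mu)**2 / 2) / math.sqrt(2 * math.pi)

def kl(mu0, mu1, lo=-10.0, hi=10.0, steps=20000):
    # D(f0 || f1) = integral of f0(x) * ln(f0(x)/f1(x)) dx, midpoint rule
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * h
        p, q = f(x, mu0), f(x, mu1)
        total += p * math.log(p / q) * h
    return total

mu0, mu1 = 0.0, 1.0
print(kl(mu0, mu1))        # numerical value
print((mu0 - mu1)**2 / 2)  # closed form for unit-variance Gaussians
```

The integration interval $[-10, 10]$ is wide enough here that the truncated tails contribute negligibly.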

Optimal error exponent for average error probability in Bayesian hypothesis testing

In the Bayesian version of binary hypothesis testing, one is interested in minimizing the average error probability under both hypotheses, assuming a prior probability of occurrence on each hypothesis. Let $\pi_0$ denote the prior probability of hypothesis $H_0$. In this case the average error probability is given by $P_{\text{ave}} = \pi_0 P(\text{error}\mid H_0) + (1-\pi_0) P(\text{error}\mid H_1)$. In this setting again a likelihood-ratio test is optimal, and the optimal error decays as $\lim_{n\to\infty} \frac{-\ln P_{\text{ave}}}{n} = C(f_0,f_1)$, where $C(f_0,f_1)$ represents the Chernoff information between the two distributions, defined as $C(f_0,f_1) = \max_{\lambda\in[0,1]} \left[ -\ln \int f_0(x)^{\lambda} f_1(x)^{1-\lambda} \, dx \right]$.[5]
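The maximization over $\lambda$ can be carried out on a grid. The sketch below assumes the same illustrative pair $f_0 = N(0,1)$, $f_1 = N(1,1)$, for which the Chernoff information is $(\mu_1-\mu_0)^2/8$ in closed form, attained at $\lambda = 1/2$:

```python
import math

def f(x, mu):
    # Density of N(mu, 1) at x
    return math.exp(-(x - mu)**2 / 2) / math.sqrt(2 * math.pi)

def neg_log_moment(lam, mu0, mu1, lo=-10.0, hi=10.0, steps=5000):
    # -ln integral of f0(x)^lam * f1(x)^(1-lam) dx, midpoint rule
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * h
        total += f(x, mu0)**lam * f(x, mu1)**(1 - lam) * h
    return -math.log(total)

mu0, mu1 = 0.0, 1.0
# Grid search over lambda in [0, 1]
chernoff = max(neg_log_moment(k / 100, mu0, mu1) for k in range(101))
print(chernoff)  # closed form gives (mu1 - mu0)**2 / 8 = 0.125 here
```

Note that the endpoints $\lambda = 0$ and $\lambda = 1$ give $-\ln 1 = 0$, since each density integrates to one; the maximum is interior.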

Trade-off between type I and II error

A more explicit tradeoff between the type I and type II errors is observed when the type I error is constrained to decay exponentially and the type II error is minimized. If we require $P(\text{error}\mid H_0) < e^{-nr}$ for some $r < D(f_1\|f_0)$, then the optimal type II error exponent is described by $\limsup_{n\to\infty} -\frac{1}{n}\ln P(\text{error}\mid H_1) = H_r(f_0\|f_1)$. Here $H_r(f_0\|f_1)$ is the Hoeffding divergence[6][7][2] described by

$$H_r(f_0\|f_1) = \max_{0 < s \leq 1} \frac{-\Psi(s) - (1-s)r}{s} \qquad (1)$$

where $\Psi(s) = \ln \int f_0(x)^{1-s} f_1(x)^{s} \, dx$.
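The exponent in equation (1) can likewise be evaluated on a grid over $s$. The sketch below again assumes the illustrative pair $f_0 = N(0,1)$, $f_1 = N(1,1)$; as a sanity check, at $r$ equal to the Chernoff information of this pair ($1/8$) the type I and type II exponents coincide, so $H_r$ should also equal $1/8$:

```python
import math

def f(x, mu):
    # Density of N(mu, 1) at x
    return math.exp(-(x - mu)**2 / 2) / math.sqrt(2 * math.pi)

def psi(s, lo=-10.0, hi=10.0, steps=2000):
    # Psi(s) = ln integral of f0(x)^(1-s) * f1(x)^s dx, midpoint rule,
    # with f0 = N(0,1) and f1 = N(1,1) assumed for this example
    h = (hi - lo) / steps
    total = sum(f(lo + (i + 0.5) * h, 0.0)**(1 - s)
                * f(lo + (i + 0.5) * h, 1.0)**s * h
                for i in range(steps))
    return math.log(total)

def hoeffding(r, grid=200):
    # H_r = max over 0 < s <= 1 of (-Psi(s) - (1 - s) * r) / s
    return max((-psi(s) - (1 - s) * r) / s
               for s in (k / grid for k in range(1, grid + 1)))

# At r = Chernoff information = 1/8 for this pair, both exponents agree.
print(hoeffding(0.125))
```

Setting $r = 0$ recovers the Chernoff–Stein exponent $D(f_0\|f_1)$ in the limit $s \to 0$, which connects equation (1) back to the Neyman–Pearson setting above.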

References

  1. Hoeffding, Wassily (1965). "Asymptotically Optimal Tests for Multinomial Distributions". The Annals of Mathematical Statistics 36 (2): 369–401. ISSN 0003-4851. https://www.jstor.org/stable/2238145.
  2. Blahut, R. (1974). "Hypothesis testing and information theory". IEEE Transactions on Information Theory 20 (4): 405–417. doi:10.1109/TIT.1974.1055254. ISSN 1557-9654.
  3. Neyman, J.; Pearson, E. S. (1933). "On the problem of the most efficient tests of statistical hypotheses". Philosophical Transactions of the Royal Society of London A 231 (694–706): 289–337. doi:10.1098/rsta.1933.0009. Bibcode: 1933RSPTA.231..289N. http://www.stats.org.uk/statistical-inference/NeymanPearson1933.pdf.
  4. Lehmann, E. L.; Romano, Joseph P. (2005). Testing Statistical Hypotheses (3rd ed.). New York: Springer. ISBN 978-0-387-98864-1.
  5. Cover, Thomas M.; Thomas, Joy A. (2006). Elements of Information Theory (2nd ed.). New York: Wiley-Interscience.
  6. Ogawa, Tomohiro; Hayashi, Masahito (2002). "On Error Exponents in Quantum Hypothesis Testing". arXiv:quant-ph/0206151. doi:10.48550/arXiv.quant-ph/0206151.
  7. Hoeffding, Wassily (1994). "On Probabilities of Large Deviations". In Fisher, N. I.; Sen, P. K. (eds.), The Collected Works of Wassily Hoeffding. New York: Springer. pp. 473–490. doi:10.1007/978-1-4612-0865-5_29. ISBN 978-1-4612-0865-5.




Licensed under CC BY-SA 3.0 | Source: https://handwiki.org/wiki/Error_exponents_in_hypothesis_testing