A region of a sample space with the property that the tested hypothesis is rejected whenever the observed value of a random variable, whose distribution is connected with that hypothesis, falls in the region (cf. Statistical hypotheses, verification of). Let $ H _ {0} $
be the tested hypothesis concerning the distribution of a random variable $ X $,
taking values in a sample space $ ( \mathfrak X , \mathfrak B ) $.
In constructing a non-randomized test for $ H _ {0} $,
one divides the space $ \mathfrak X $
into two disjoint sets $ K $
and $ \overline{K} $
such that $ K \cup \overline{K} = \mathfrak X $,
$ K \in \mathfrak B $.
The test amounts to the following rule: Reject $ H _ {0} $
if an experimental realization $ x $
of $ X $
falls in the set $ K $;
accept $ H _ {0} $
otherwise (i.e. if $ x \in \overline{K} $).
The set $ K $
is called the critical region of the test; its complement $ \overline{K} $
is called the acceptance region. In this sense, the problem of selecting a critical region is equivalent to the construction of a non-randomized statistical test for the hypothesis $ H _ {0} $.
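For example, suppose (purely for illustration) that $ X = ( X _ {1} \dots X _ {n} ) $ is a sample from a normal distribution $ N ( a , \sigma ^ {2} ) $ with known variance $ \sigma ^ {2} $, and that $ H _ {0} $ is the hypothesis $ a = a _ {0} $, tested against the one-sided alternative $ a > a _ {0} $. A natural critical region of significance level $ \alpha $ is

$$
K = \left \{ ( x _ {1} \dots x _ {n} ) : \
\frac{\overline{x} - a _ {0} }{\sigma / \sqrt n } > z _ {1 - \alpha } \right \} ,\ \
\overline{x} = \frac{1}{n} \sum _ {i = 1 } ^ { n } x _ {i} ,
$$

where $ z _ {1 - \alpha } $ is the $ ( 1 - \alpha ) $-quantile of the standard normal distribution; $ H _ {0} $ is rejected precisely when the observed sample falls in $ K $.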
Naturally, the critical region is chosen before the sample used to test $ H _ {0} $
is drawn; on the other hand, within the framework of the Neyman–Pearson theory, the actual choice of the critical region is determined by the probabilities of the errors of the first and second kind that arise in problems of statistical hypothesis testing (cf. Neyman–Pearson lemma).
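For instance, in the simplest setting of this theory, where a simple hypothesis $ H _ {0} $, under which $ X $ has density $ p _ {0} $, is tested against a simple alternative $ H _ {1} $, under which $ X $ has density $ p _ {1} $ (both densities taken with respect to a common measure on $ ( \mathfrak X , \mathfrak B ) $), the Neyman–Pearson lemma states that a most powerful critical region of a given level can be taken of the form

$$
K = \{ x \in \mathfrak X : p _ {1} ( x) \geq c p _ {0} ( x) \} ,
$$

the constant $ c > 0 $ being chosen so that the probability of an error of the first kind, $ {\mathsf P} _ {0} \{ X \in K \} $, equals the prescribed significance level (if no such $ c $ exists, a randomized test is required).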