In probability theory, an empirical measure is a random measure arising from a particular realization of a (usually finite) sequence of random variables. The precise definition is found below. Empirical measures are relevant to mathematical statistics.
The motivation for studying empirical measures is that it is often impossible to know the true underlying probability measure [math]\displaystyle{ P }[/math]. We collect observations [math]\displaystyle{ X_1, X_2, \dots , X_n }[/math] and compute relative frequencies. We can estimate [math]\displaystyle{ P }[/math], or a related distribution function [math]\displaystyle{ F }[/math], by means of the empirical measure or empirical distribution function, respectively. These are uniformly good estimates under certain conditions. Theorems in the area of empirical processes provide rates of this convergence.
Let [math]\displaystyle{ X_1, X_2, \dots }[/math] be a sequence of independent identically distributed random variables with values in the state space S with probability distribution P.
Definition

The empirical measure [math]\displaystyle{ P_n }[/math] is defined for measurable subsets of S and given by

[math]\displaystyle{ P_n(A) = \frac{1}{n} \sum_{i=1}^n I_A(X_i) = \frac{1}{n} \sum_{i=1}^n \delta_{X_i}(A), }[/math]

where [math]\displaystyle{ I_A }[/math] is the indicator function and [math]\displaystyle{ \delta_X }[/math] is the Dirac measure.

Properties

For a fixed measurable set A, nPn(A) is a binomial random variable with mean nP(A) and variance nP(A)(1 − P(A)). In particular, Pn(A) is an unbiased estimator of P(A).
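To make the definition concrete, here is a minimal sketch in Python (numpy assumed; the helper name empirical_measure is ours, not a standard routine) that computes Pn(A) as the relative frequency of samples landing in a set A, passed as its indicator function:

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_measure(samples, indicator):
    """P_n(A): the fraction of samples falling in the set A.

    The set A is passed as a boolean predicate (its indicator function).
    """
    return np.mean(indicator(np.asarray(samples)))

# Illustrative choice: X_i ~ N(0, 1) and A = (-inf, 0], so P(A) = 0.5.
n = 10_000
x = rng.standard_normal(n)
print(empirical_measure(x, lambda s: s <= 0.0))
# ~ 0.5; by the binomial property above, n * P_n(A) ~ Binomial(n, P(A)).
```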
Definition
To generalize this notion further, observe that the empirical measure [math]\displaystyle{ P_n }[/math] maps measurable functions [math]\displaystyle{ f:S\to \mathbb{R} }[/math] to their empirical mean,

[math]\displaystyle{ f \mapsto P_n f = \int_S f \, dP_n = \frac{1}{n} \sum_{i=1}^n f(X_i). }[/math]
In particular, the empirical measure of A is simply the empirical mean of the indicator function, [math]\displaystyle{ P_n(A) = P_n I_A }[/math].
For a fixed measurable function [math]\displaystyle{ f }[/math], [math]\displaystyle{ P_nf }[/math] is a random variable with mean [math]\displaystyle{ \mathbb{E}f }[/math] and variance [math]\displaystyle{ \frac{1}{n}\mathbb{E}(f -\mathbb{E} f)^2 }[/math].
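These moment formulas are easy to check numerically. The following sketch (numpy assumed; names ours) repeatedly draws samples, computes Pn f for f(x) = x² with standard normal data, and compares the mean and variance of Pn f against E f = 1 and (1/n)E(f − E f)² = 2/n:

```python
import numpy as np

rng = np.random.default_rng(1)

def empirical_mean(samples, f):
    """P_n f = (1/n) * sum_i f(X_i)."""
    return np.mean(f(np.asarray(samples)))

# f(x) = x**2 with X_i ~ N(0, 1): E f = 1 and E(f - E f)^2 = 2,
# so P_n f should concentrate around 1 with variance 2/n.
f = lambda s: s ** 2
n, reps = 1_000, 2_000
draws = np.array([empirical_mean(rng.standard_normal(n), f)
                  for _ in range(reps)])
print(draws.mean())  # ~ 1.0   (= E f)
print(draws.var())   # ~ 0.002 (= 2/n)
```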
By the strong law of large numbers, Pn(A) converges to P(A) almost surely for fixed A. Similarly [math]\displaystyle{ P_nf }[/math] converges to [math]\displaystyle{ \mathbb{E} f }[/math] almost surely for a fixed measurable function [math]\displaystyle{ f }[/math]. The problem of uniform convergence of Pn to P was open until Vapnik and Chervonenkis solved it in 1968.[1]
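The almost-sure convergence can be watched along a single growing sample. In the illustrative sketch below (numpy assumed), the running value of Pn(A) for A = (−∞, 0] and standard normal draws settles at P(A) = 0.5:

```python
import numpy as np

rng = np.random.default_rng(3)

# Strong law in action: along one growing sample, the running value of
# P_n(A) for A = (-inf, 0] approaches P(A) = 0.5 for N(0, 1) samples.
x = rng.standard_normal(100_000)
running = np.cumsum(x <= 0.0) / np.arange(1, len(x) + 1)
for n in (100, 1_000, 10_000, 100_000):
    print(n, running[n - 1])
```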
If the class [math]\displaystyle{ \mathcal{C} }[/math] (or [math]\displaystyle{ \mathcal{F} }[/math]) is Glivenko–Cantelli with respect to P then Pn converges to P uniformly over [math]\displaystyle{ c\in\mathcal{C} }[/math] (or [math]\displaystyle{ f\in \mathcal{F} }[/math]). In other words, with probability 1 we have

[math]\displaystyle{ \|P_n - P\|_{\mathcal{C}} = \sup_{c\in\mathcal{C}} |P_n(c) - P(c)| \to 0, }[/math]

[math]\displaystyle{ \|P_n - P\|_{\mathcal{F}} = \sup_{f\in\mathcal{F}} |P_n f - Pf| \to 0. }[/math]
Empirical distribution function

The empirical distribution function provides an example of empirical measures. For real-valued iid random variables [math]\displaystyle{ X_1,\dots,X_n }[/math] it is given by

[math]\displaystyle{ F_n(x) = P_n((-\infty, x]) = P_n I_{(-\infty, x]}. }[/math]
In this case, empirical measures are indexed by a class [math]\displaystyle{ \mathcal{C}=\{(-\infty,x]:x\in\mathbb{R}\}. }[/math] It has been shown that [math]\displaystyle{ \mathcal{C} }[/math] is a uniform Glivenko–Cantelli class; in particular,

[math]\displaystyle{ \sup_F \|F_n - F\|_\infty = \sup_F \sup_{x\in\mathbb{R}} |F_n(x) - F(x)| \to 0 }[/math]

with probability 1.
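The supremum here is the Kolmogorov–Smirnov statistic, and since Fn is a step function it is attained at a sample point, so it suffices to compare F against the order statistics. A hypothetical sketch (numpy and scipy assumed; the name edf_sup_distance is ours) computes it and shows the uniform distance shrinking as n grows:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def edf_sup_distance(samples, cdf):
    """sup_x |F_n(x) - F(x)| for the empirical distribution function F_n."""
    x = np.sort(np.asarray(samples))
    n = len(x)
    F = cdf(x)
    # F_n jumps from (i-1)/n to i/n at the i-th order statistic,
    # so check both sides of each jump.
    upper = np.max(np.arange(1, n + 1) / n - F)
    lower = np.max(F - np.arange(0, n) / n)
    return max(upper, lower)

# Glivenko-Cantelli in action: the sup distance shrinks as n grows.
for n in (100, 1_000, 10_000):
    x = rng.standard_normal(n)
    print(n, edf_sup_distance(x, norm.cdf))
```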