In statistics, the algebra of random variables provides rules for the symbolic manipulation of random variables, while avoiding delving too deeply into the mathematically sophisticated ideas of probability theory. Its symbolism allows the treatment of sums, products, ratios and general functions of random variables, as well as operations such as finding the probability distributions and the expectations (or expected values), variances and covariances of such combinations.
In principle, the elementary algebra of random variables is equivalent to that of conventional non-random (or deterministic) variables. However, an algebraic operation changes the probability distribution of the resulting random variable in ways that are not straightforward. Therefore, operators on the probability distribution, such as expected values, variances, covariances, and moments, may behave differently from the random variables themselves under symbolic algebra. It is possible to identify key rules for each of these operators, resulting in different types of algebra for random variables beyond the elementary symbolic algebra: expectation algebra, variance algebra, covariance algebra, moment algebra, and so on.
Considering two random variables [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math], the following algebraic operations are possible:
- Addition: [math]\displaystyle{ Z = X + Y }[/math]
- Subtraction: [math]\displaystyle{ Z = X - Y }[/math]
- Multiplication: [math]\displaystyle{ Z = XY }[/math]
- Division: [math]\displaystyle{ Z = X/Y }[/math]
- Exponentiation: [math]\displaystyle{ Z = X^Y }[/math]
In all cases, the variable [math]\displaystyle{ Z }[/math] resulting from each operation is also a random variable. All commutative and associative properties of conventional algebraic operations are also valid for random variables. If any of the random variables is replaced by a deterministic variable or by a constant value, all the previous properties remain valid.
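As a quick numerical illustration, closure under these operations can be checked by Monte Carlo sampling: applying an operation elementwise to samples of [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] yields samples of the resulting random variable [math]\displaystyle{ Z }[/math]. The following is a minimal sketch in Python with NumPy; the particular distributions are arbitrary assumptions chosen only for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Arbitrary example distributions for X and Y (an assumption for this demo).
X = rng.normal(loc=2.0, scale=1.0, size=n)
Y = rng.exponential(scale=0.5, size=n)

# Each elementwise operation yields samples of a new random variable Z.
Z_sum  = X + Y   # addition
Z_diff = X - Y   # subtraction
Z_prod = X * Y   # multiplication
Z_div  = X / Y   # division (Y > 0 here, so this is well defined)

# Z is itself a random variable: it has its own (empirical) distribution.
for name, Z in [("X+Y", Z_sum), ("X-Y", Z_diff), ("X*Y", Z_prod), ("X/Y", Z_div)]:
    print(f"{name}: mean={Z.mean():.3f}, std={Z.std():.3f}")
```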
The expected value [math]\displaystyle{ E }[/math] of the random variable [math]\displaystyle{ Z }[/math] resulting from an algebraic operation between two random variables can be calculated using the following set of rules:
- Addition: [math]\displaystyle{ E[Z] = E[X+Y] = E[X] + E[Y] }[/math]
- Subtraction: [math]\displaystyle{ E[Z] = E[X-Y] = E[X] - E[Y] }[/math]
- Multiplication: [math]\displaystyle{ E[Z] = E[XY] = E[X]E[Y] + \mathrm{Cov}[X,Y] }[/math]. In particular, if [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] are independent, then [math]\displaystyle{ E[XY] = E[X]E[Y] }[/math].
- Division: if [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] are independent, [math]\displaystyle{ E[Z] = E[X/Y] = E[X]\,E[1/Y] }[/math].
If any of the random variables is replaced by a deterministic variable or by a constant value ([math]\displaystyle{ k }[/math]), the previous properties remain valid considering that [math]\displaystyle{ P[X = k] = 1 }[/math] and, therefore, [math]\displaystyle{ E[X]=k }[/math].
If [math]\displaystyle{ Z }[/math] is defined as a general non-linear algebraic function [math]\displaystyle{ f }[/math] of a random variable [math]\displaystyle{ X }[/math], then:
[math]\displaystyle{ E[Z]=E[f(X)] \neq f(E[X]) }[/math]
Some examples of this property include:
- [math]\displaystyle{ E[X^2] \neq E[X]^2 }[/math]
- [math]\displaystyle{ E[1/X] \neq 1/E[X] }[/math]
- [math]\displaystyle{ E[e^X] \neq e^{E[X]} }[/math]
- [math]\displaystyle{ E[\ln(X)] \neq \ln(E[X]) }[/math]
The exact value of the expectation of the non-linear function will depend on the particular probability distribution of the random variable [math]\displaystyle{ X }[/math].
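These rules can be checked by simulation. The sketch below (the normal distributions are arbitrary assumptions for the demonstration) verifies the linearity of expectation and shows the gap between [math]\displaystyle{ E[f(X)] }[/math] and [math]\displaystyle{ f(E[X]) }[/math] for the non-linear function [math]\displaystyle{ f(x)=e^x }[/math]:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

X = rng.normal(2.0, 1.0, size=n)   # assumed example distribution
Y = rng.normal(-1.0, 0.5, size=n)  # independent of X by construction

# Linearity of expectation holds regardless of dependence:
print(np.mean(X + Y), np.mean(X) + np.mean(Y))   # ~equal
print(np.mean(X - Y), np.mean(X) - np.mean(Y))   # ~equal

# For independent X and Y, E[XY] = E[X]E[Y]:
print(np.mean(X * Y), np.mean(X) * np.mean(Y))   # ~equal

# For a non-linear f, E[f(X)] != f(E[X]) in general:
f = np.exp
print(np.mean(f(X)), f(np.mean(X)))  # ~e^{mu + sigma^2/2} vs e^{mu}
```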
The variance [math]\displaystyle{ \mathrm{Var} }[/math] of the random variable [math]\displaystyle{ Z }[/math] resulting from an algebraic operation between random variables can be calculated using the following set of rules:
- Addition: [math]\displaystyle{ \mathrm{Var}[Z] = \mathrm{Var}[X+Y] = \mathrm{Var}[X] + \mathrm{Var}[Y] + 2\mathrm{Cov}[X,Y] }[/math]
- Subtraction: [math]\displaystyle{ \mathrm{Var}[Z] = \mathrm{Var}[X-Y] = \mathrm{Var}[X] + \mathrm{Var}[Y] - 2\mathrm{Cov}[X,Y] }[/math]
- Multiplication: if [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] are independent, [math]\displaystyle{ \mathrm{Var}[Z] = \mathrm{Var}[XY] = \mathrm{Var}[X]\mathrm{Var}[Y] + \mathrm{Var}[X]E[Y]^2 + \mathrm{Var}[Y]E[X]^2 }[/math]
where [math]\displaystyle{ \mathrm{Cov}[X,Y]=\mathrm{Cov}[Y,X] }[/math] represents the covariance operator between random variables [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math].
The variance of a random variable can also be expressed directly in terms of the covariance or in terms of the expected value:
[math]\displaystyle{ \mathrm{Var}[X] = \mathrm{Cov}(X,X) = E[X^2] - E[X]^2 }[/math]
If any of the random variables is replaced by a deterministic variable or by a constant value ([math]\displaystyle{ k }[/math]), the previous properties remain valid considering that [math]\displaystyle{ P[X = k] = 1 }[/math] and [math]\displaystyle{ E[X]=k }[/math], [math]\displaystyle{ \mathrm{Var}[X]=0 }[/math] and [math]\displaystyle{ \mathrm{Cov}[Y,k]=0 }[/math]. Special cases are the addition and multiplication of a random variable by a deterministic variable or a constant, where:
- [math]\displaystyle{ \mathrm{Var}[X+k] = \mathrm{Var}[X] }[/math]
- [math]\displaystyle{ \mathrm{Var}[kX] = k^2\,\mathrm{Var}[X] }[/math]
If [math]\displaystyle{ Z }[/math] is defined as a general non-linear algebraic function [math]\displaystyle{ f }[/math] of a random variable [math]\displaystyle{ X }[/math], then:
[math]\displaystyle{ \mathrm{Var}[Z]=\mathrm{Var}[f(X)] \neq f(\mathrm{Var}[X]) }[/math]
The exact value of the variance of the non-linear function will depend on the particular probability distribution of the random variable [math]\displaystyle{ X }[/math].
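The variance rules can be checked the same way. This sketch (with an assumed construction that correlates [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] through a shared component) verifies the addition and subtraction rules, the constant special cases, and the non-linear inequality:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# Build correlated X and Y from a shared component (an assumed construction).
W = rng.normal(size=n)
X = 1.0 + W + rng.normal(size=n)
Y = -2.0 + 0.5 * W + rng.normal(size=n)

cov_xy = np.cov(X, Y)[0, 1]

# Var[X + Y] = Var[X] + Var[Y] + 2 Cov[X, Y]
print(np.var(X + Y), np.var(X) + np.var(Y) + 2 * cov_xy)   # ~equal

# Var[X - Y] = Var[X] + Var[Y] - 2 Cov[X, Y]
print(np.var(X - Y), np.var(X) + np.var(Y) - 2 * cov_xy)   # ~equal

# Constants: Var[X + k] = Var[X] and Var[kX] = k^2 Var[X]
k = 3.0
print(np.var(X + k), np.var(X))          # ~equal
print(np.var(k * X), k**2 * np.var(X))   # ~equal

# Non-linear f: Var[f(X)] != f(Var[X]) in general
print(np.var(np.exp(X)), np.exp(np.var(X)))   # clearly different
```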
The covariance ([math]\displaystyle{ \mathrm{Cov} }[/math]) between the random variable [math]\displaystyle{ Z }[/math] resulting from an algebraic operation and the random variable [math]\displaystyle{ X }[/math] can be calculated using the following set of rules:
- Addition: [math]\displaystyle{ \mathrm{Cov}[Z,X] = \mathrm{Cov}[X+Y,X] = \mathrm{Var}[X] + \mathrm{Cov}[X,Y] }[/math]
- Subtraction: [math]\displaystyle{ \mathrm{Cov}[Z,X] = \mathrm{Cov}[X-Y,X] = \mathrm{Var}[X] - \mathrm{Cov}[X,Y] }[/math]
- Multiplication: [math]\displaystyle{ \mathrm{Cov}[Z,X] = \mathrm{Cov}[XY,X] = E[X^2Y] - E[XY]E[X] }[/math]
The covariance of a random variable can also be expressed directly in terms of the expected value:
[math]\displaystyle{ \mathrm{Cov}(X,Y) = E[XY] - E[X]E[Y] }[/math]
If any of the random variables is replaced by a deterministic variable or by a constant value ([math]\displaystyle{ k }[/math]), the previous properties remain valid considering that [math]\displaystyle{ E[k]=k }[/math], [math]\displaystyle{ \mathrm{Var}[k]=0 }[/math] and [math]\displaystyle{ \mathrm{Cov}[X,k]=0 }[/math].
If [math]\displaystyle{ Z }[/math] is defined as a general non-linear algebraic function [math]\displaystyle{ f }[/math] of a random variable [math]\displaystyle{ X }[/math], then:
[math]\displaystyle{ \mathrm{Cov}[Z,X]=\mathrm{Cov}[f(X),X]=E[Xf(X)]-E[f(X)]E[X] }[/math]
The exact value of the covariance of the non-linear function will depend on the particular probability distribution of the random variable [math]\displaystyle{ X }[/math].
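A numerical check of the covariance identities, with an assumed normal [math]\displaystyle{ X }[/math] and a [math]\displaystyle{ Y }[/math] constructed to be correlated with it:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

X = rng.normal(1.0, 1.0, size=n)
Y = 0.5 * X + rng.normal(size=n)   # correlated with X by construction

# Cov(X, Y) = E[XY] - E[X]E[Y]
print(np.cov(X, Y)[0, 1], np.mean(X * Y) - np.mean(X) * np.mean(Y))  # ~equal

# Cov[X + Y, X] = Var[X] + Cov[X, Y]
print(np.cov(X + Y, X)[0, 1], np.var(X) + np.cov(X, Y)[0, 1])        # ~equal

# Non-linear f: Cov[f(X), X] = E[X f(X)] - E[f(X)] E[X]
fX = X**3
print(np.cov(fX, X)[0, 1], np.mean(X * fX) - np.mean(fX) * np.mean(X))  # ~equal
```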
If the moments of a certain random variable [math]\displaystyle{ X }[/math] are known (or can be determined by integration if the probability density function is known), then it is possible to approximate the expected value of any general non-linear function [math]\displaystyle{ f(X) }[/math] by a Taylor series expansion in terms of the moments, as follows:
[math]\displaystyle{ f(X)= \displaystyle \sum_{n=0}^\infty \displaystyle \frac{1}{n!}\biggl({d^nf \over dX^n}\biggr)_{X=\mu}(X-\mu)^n }[/math], where [math]\displaystyle{ \mu=E[X] }[/math] is the mean value of [math]\displaystyle{ X }[/math].
[math]\displaystyle{ E[f(X)]=E\biggl(\textstyle \sum_{n=0}^\infty \displaystyle {1 \over n!}\biggl({d^nf \over dX^n}\biggr)_{X=\mu}(X-\mu)^n\biggr)=\displaystyle \sum_{n=0}^\infty \displaystyle {1 \over n!}\biggl({d^nf \over dX^n}\biggr)_{X=\mu}E[(X-\mu)^n]=\textstyle \sum_{n=0}^\infty \displaystyle \frac{1}{n!}\biggl({d^nf \over dX^n}\biggr)_{X=\mu}\mu_n(X) }[/math], where [math]\displaystyle{ \mu_n(X)=E[(X-\mu)^n] }[/math] is the n-th moment of [math]\displaystyle{ X }[/math] about its mean. Note that, by definition, [math]\displaystyle{ \mu_0(X)=1 }[/math] and [math]\displaystyle{ \mu_1(X)=0 }[/math]. The first-order term always vanishes but is kept here to obtain a closed-form expression.
Then,
[math]\displaystyle{ E[f(X)]\approx \textstyle \sum_{n=0}^{n_{max}} \displaystyle {1 \over n!}\biggl({d^nf \over dX^n}\biggr)_{X=\mu}\mu_n(X) }[/math], where the Taylor expansion is truncated after the [math]\displaystyle{ n_{max} }[/math]-th moment.
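As a sketch of the truncated expansion in practice, the example below (an assumption chosen for convenience) takes [math]\displaystyle{ X }[/math] uniform on [0, 2] and [math]\displaystyle{ f(x)=e^x }[/math], so that every derivative of [math]\displaystyle{ f }[/math] at [math]\displaystyle{ \mu }[/math] equals [math]\displaystyle{ e^\mu }[/math]; the central moments are estimated from samples rather than derived analytically:

```python
import math
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000

# Assumed example: X uniform on [0, 2], f(x) = exp(x).
X = rng.uniform(0.0, 2.0, size=n)
mu = X.mean()

# Empirical central moments mu_n(X) = E[(X - mu)^n].
def central_moment(samples, k):
    return np.mean((samples - samples.mean()) ** k)

# f = exp, so every derivative of f at mu is exp(mu).
n_max = 8
approx = sum(
    math.exp(mu) / math.factorial(k) * central_moment(X, k)
    for k in range(n_max + 1)
)

print(approx, np.mean(np.exp(X)))  # truncated series vs Monte Carlo estimate
```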
Particularly for functions of normal random variables, it is possible to obtain a Taylor expansion in terms of the standard normal distribution:[1]
[math]\displaystyle{ f(X)= \textstyle \sum_{n=0}^\infty \displaystyle {\sigma^n \over n!}\biggl({d^nf \over dX^n}\biggr)_{X=\mu}Z^n }[/math], where [math]\displaystyle{ X\sim N(\mu,\sigma ^2) }[/math] is a normal random variable, [math]\displaystyle{ Z\sim N(0,1) }[/math] is a standard normal random variable, and [math]\displaystyle{ X-\mu=\sigma Z }[/math]. Thus,
[math]\displaystyle{ E[f(X)]\approx \textstyle \sum_{n=0}^{n_{max}} \displaystyle {\sigma ^n \over n!}\biggl({d^nf \over dX^n}\biggr)_{X=\mu}\mu_n(Z) }[/math], where the moments of the standard normal distribution are given by:
[math]\displaystyle{ \mu_n(Z)= \begin{cases} \prod_{i=1}^{n/2}(2i-1), & \text{if }n\text{ is even} \\ 0, & \text{if }n\text{ is odd} \end{cases} }[/math]
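The normal-case formula can be tested against a case with a known closed form: for [math]\displaystyle{ f(x)=e^x }[/math] and [math]\displaystyle{ X\sim N(\mu,\sigma^2) }[/math], the exact value is [math]\displaystyle{ E[e^X]=e^{\mu+\sigma^2/2} }[/math] (the standard lognormal mean). A minimal sketch:

```python
import math

def std_normal_moment(k):
    """E[Z^k] for Z ~ N(0,1): (k-1)!! for even k, 0 for odd k."""
    if k % 2 == 1:
        return 0
    return math.prod(2 * i - 1 for i in range(1, k // 2 + 1))

# Assumed example: X ~ N(mu, sigma^2) with f(x) = exp(x),
# so (d^n f / dx^n) at mu is exp(mu) for every n.
mu, sigma = 1.0, 0.5
n_max = 10

approx = sum(
    sigma**k / math.factorial(k) * math.exp(mu) * std_normal_moment(k)
    for k in range(n_max + 1)
)
exact = math.exp(mu + sigma**2 / 2)   # E[exp(X)] for X ~ N(mu, sigma^2)

print(approx, exact)   # the truncated series approaches the exact value
```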
Similarly, for normal random variables, it is also possible to approximate the variance of the non-linear function by a Taylor series expansion, as:
[math]\displaystyle{ \mathrm{Var}[f(X)]\approx \textstyle \sum_{n=1}^{n_{max}} \displaystyle \biggl({\sigma^n \over n!}\biggl({d^nf \over dX^n}\biggr)_{X=\mu}\biggr)^2\mathrm{Var}[Z^n]+\textstyle \sum_{n=1}^{n_{max}} \displaystyle \textstyle \sum_{m \neq n} \displaystyle {\sigma^{n+m} \over {n!m!}}\biggl({d^nf \over dX^n}\biggr)_{X=\mu}\biggl({d^mf \over dX^m}\biggr)_{X=\mu}\mathrm{Cov}[Z^n,Z^m] }[/math], where
[math]\displaystyle{ \mathrm{Var}[Z^n]= \begin{cases} \prod_{i=1}^{n}(2i-1) -\biggl(\prod_{i=1}^{n/2}(2i-1)\biggr)^2, & \text{if }n\text{ is even} \\ \prod_{i=1}^{n}(2i-1), & \text{if }n\text{ is odd} \end{cases} }[/math], and
[math]\displaystyle{ \mathrm{Cov}[Z^n,Z^m]= \begin{cases} \prod_{i=1}^{(n+m)/2}(2i-1) -\prod_{i=1}^{n/2}(2i-1)\prod_{j=1}^{m/2}(2j-1), & \text{if }n\text{ and }m \text{ are even} \\ \prod_{i=1}^{(n+m)/2}(2i-1), & \text{if }n\text{ and }m\text{ are odd} \\ 0, & \text{otherwise} \end{cases} }[/math]
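A sketch implementing these variance formulas for the same assumed example [math]\displaystyle{ f(x)=e^x }[/math] and [math]\displaystyle{ X\sim N(\mu,\sigma^2) }[/math], compared against a Monte Carlo estimate of [math]\displaystyle{ \mathrm{Var}[f(X)] }[/math]:

```python
import math
import numpy as np

def std_normal_moment(k):
    """E[Z^k] for Z ~ N(0,1)."""
    return 0 if k % 2 else math.prod(2 * i - 1 for i in range(1, k // 2 + 1))

def var_Zn(n):
    """Var[Z^n] = E[Z^(2n)] - E[Z^n]^2."""
    return std_normal_moment(2 * n) - std_normal_moment(n) ** 2

def cov_ZnZm(n, m):
    """Cov[Z^n, Z^m] = E[Z^(n+m)] - E[Z^n]E[Z^m]."""
    return std_normal_moment(n + m) - std_normal_moment(n) * std_normal_moment(m)

# Assumed example: X ~ N(mu, sigma^2), f(x) = exp(x), so f^(n)(mu) = exp(mu).
mu, sigma, n_max = 0.0, 0.4, 8
c = [sigma**n / math.factorial(n) * math.exp(mu) for n in range(n_max + 1)]

approx = sum(c[n] ** 2 * var_Zn(n) for n in range(1, n_max + 1))
approx += sum(
    c[n] * c[m] * cov_ZnZm(n, m)
    for n in range(1, n_max + 1)
    for m in range(1, n_max + 1)
    if m != n
)

# Monte Carlo reference for Var[exp(X)].
rng = np.random.default_rng(5)
X = rng.normal(mu, sigma, size=1_000_000)
print(approx, np.var(np.exp(X)))   # ~equal for this small sigma
```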
In the algebraic axiomatization of probability theory, the primary concept is not that of probability of an event, but rather that of a random variable. Probability distributions are determined by assigning an expectation to each random variable. The measurable space and the probability measure arise from the random variables and expectations by means of well-known representation theorems of analysis. One of the important features of the algebraic approach is that apparently infinite-dimensional probability distributions are not harder to formalize than finite-dimensional ones.
Random variables are assumed to have the following properties:
- complex constants are possible realizations of a random variable;
- the sum of two random variables is a random variable;
- the product of two random variables is a random variable;
- addition and multiplication of random variables are both commutative; and
- there is a notion of conjugation of random variables, satisfying (XY)* = Y*X* and X** = X for all random variables X, Y, and coinciding with complex conjugation if X is a constant.
This means that random variables form complex commutative *-algebras. If X = X* then the random variable X is called "real".
An expectation E on an algebra A of random variables is a normalized, positive linear functional. What this means is that
- E[k] = k where k is a constant;
- E[X*X] ≥ 0 for all random variables X;
- E[X + Y] = E[X] + E[Y] for all random variables X and Y; and
- E[kX] = kE[X] if k is a constant.
One may generalize this setup, allowing the algebra to be noncommutative. This leads to other areas of noncommutative probability such as quantum probability, random matrix theory, and free probability.
Original source: https://en.wikipedia.org/wiki/Algebra_of_random_variables