Control variates


Short description: Technique for increasing the precision of estimates in Monte Carlo experiments

The control variates method is a variance reduction technique used in Monte Carlo methods. It exploits information about the errors in estimates of known quantities to reduce the error of an estimate of an unknown quantity.[1][2][3]

Underlying principle

Let the unknown parameter of interest be μ, and assume we have a statistic m such that the expected value of m is μ: E[m]=μ, i.e. m is an unbiased estimator for μ. Suppose we calculate another statistic t such that E[t]=τ is a known value. Then

m^{\star} = m + c\,(t - \tau)

is also an unbiased estimator for μ for any choice of the coefficient c. The variance of the resulting estimator m^{\star} is

\operatorname{Var}(m^{\star}) = \operatorname{Var}(m) + c^2\,\operatorname{Var}(t) + 2c\,\operatorname{Cov}(m,t).

By differentiating the above expression with respect to c, it can be shown that choosing the optimal coefficient

c^{\star} = -\frac{\operatorname{Cov}(m,t)}{\operatorname{Var}(t)}

minimizes the variance of m^{\star}. (Note that this coefficient is the same as the coefficient obtained from a linear regression.) With this choice,

\operatorname{Var}(m^{\star}) = \operatorname{Var}(m) - \frac{\bigl[\operatorname{Cov}(m,t)\bigr]^2}{\operatorname{Var}(t)} = \bigl(1 - \rho_{m,t}^2\bigr)\operatorname{Var}(m)

where

\rho_{m,t} = \operatorname{Corr}(m,t)

is the correlation coefficient of m and t. The greater the value of |ρ_{m,t}|, the greater the variance reduction achieved.
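
Explicitly, the differentiation step mentioned above amounts to setting the derivative of the variance expression with respect to c equal to zero,

\frac{d}{dc}\Bigl[\operatorname{Var}(m) + c^2\,\operatorname{Var}(t) + 2c\,\operatorname{Cov}(m,t)\Bigr] = 2c\,\operatorname{Var}(t) + 2\,\operatorname{Cov}(m,t) = 0,

which is solved by c^{\star} above; since the second derivative 2\,\operatorname{Var}(t) is positive, this stationary point is indeed a minimum.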

In the case that Cov(m,t), Var(t), and/or ρ_{m,t} are unknown, they can be estimated across the Monte Carlo replicates. This is equivalent to solving a certain least squares system; therefore this technique is also known as regression sampling.
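
As a minimal illustration of this regression view, the coefficient can be estimated from sample moments of the paired replicates and plugged into the corrected mean. The sketch below is NumPy-based; the helper name and its arguments are illustrative assumptions, not a standard API:

    import numpy as np

    def control_variate_estimate(m_samples, t_samples, tau):
        """Estimate E[m] from paired Monte Carlo replicates of m and t,
        where tau = E[t] is known exactly."""
        m = np.asarray(m_samples, dtype=float)
        t = np.asarray(t_samples, dtype=float)
        # Sample covariance and variance across the replicates.
        cov_mt = np.cov(m, t, ddof=1)[0, 1]
        var_t = np.var(t, ddof=1)
        c_star = -cov_mt / var_t  # minus the least-squares slope of m regressed on t
        return m.mean() + c_star * (t.mean() - tau)

Estimating c from the same replicates introduces a small bias of order 1/n, which is typically negligible compared with the variance reduction.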

When the expectation of the control variable, E[t]=τ, is not known analytically, it is still possible to increase the precision in estimating μ (for a given fixed simulation budget), provided that two conditions are met: (1) evaluating t is significantly cheaper than computing m; (2) the magnitude of the correlation coefficient |ρ_{m,t}| is close to unity.[3]
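
A sketch of how this can look in code, under the added assumption that τ is simply replaced by the mean of a separate, much larger sample of the cheap statistic t (the function and variable names are again illustrative):

    import numpy as np

    def cv_estimate_unknown_tau(m_samples, t_samples, t_extra):
        """Control-variate estimate when E[t] is not known analytically:
        tau is approximated by the mean of a large independent sample of t,
        which is affordable because t is cheap to evaluate."""
        m = np.asarray(m_samples, dtype=float)
        t = np.asarray(t_samples, dtype=float)
        tau_hat = np.asarray(t_extra, dtype=float).mean()
        c_star = -np.cov(m, t, ddof=1)[0, 1] / np.var(t, ddof=1)
        return m.mean() + c_star * (t.mean() - tau_hat)

The extra variance contributed by estimating τ this way is roughly c² Var(t)/N for an auxiliary sample of size N, so the usual variance-reduction argument still applies when t is cheap and |ρ_{m,t}| is close to one.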

Example

We would like to estimate

I = \int_0^1 \frac{1}{1+x}\,dx

using Monte Carlo integration. This integral is the expected value of f(U), where

f(U) = \frac{1}{1+U}

and U follows a uniform distribution on [0, 1]. Using a sample of size n, denote the points in the sample as u_1, …, u_n. Then the estimate is given by

I \approx \frac{1}{n} \sum_{i} f(u_i).

Now we introduce g(U) = 1 + U as a control variate with a known expected value E[g(U)] = ∫₀¹ (1 + x) dx = 3/2 and combine the two into a new estimate

I \approx \frac{1}{n} \sum_{i} f(u_i) + c\left(\frac{1}{n} \sum_{i} g(u_i) - \frac{3}{2}\right).

Using n = 1500 realizations and an estimated optimal coefficient c* ≈ 0.4773, we obtain the following results:

                     Estimate   Variance
Classical estimate   0.69475    0.01947
Control variates     0.69295    0.00060

The variance was significantly reduced after using the control variates technique. (The exact result is I = ln 2 ≈ 0.69314718.)
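
The numbers in the table can be reproduced with a short simulation. A minimal NumPy-based sketch (the random seed and variable names are arbitrary choices, and the printed values will differ slightly from run to run):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1500

    u = rng.uniform(0.0, 1.0, size=n)
    f = 1.0 / (1.0 + u)   # integrand samples; E[f(U)] = ln 2
    g = 1.0 + u           # control variate; E[g(U)] = 3/2 exactly

    # Classical Monte Carlo estimate.
    classical = f.mean()

    # Control-variate estimate with c fitted from the same sample.
    c_star = -np.cov(f, g, ddof=1)[0, 1] / np.var(g, ddof=1)
    controlled = f.mean() + c_star * (g.mean() - 1.5)

    # Per-sample variances, as reported in the table above
    # (the variances of the sample means are these divided by n).
    var_classical = np.var(f, ddof=1)
    var_controlled = np.var(f + c_star * (g - 1.5), ddof=1)

    print(f"classical:        {classical:.5f}   variance {var_classical:.5f}")
    print(f"control variates: {controlled:.5f}   variance {var_controlled:.5f}")
    print(f"exact value ln 2: {np.log(2.0):.8f}   c* ≈ {c_star:.4f}")

With this setup the exact optimal coefficient is 12(3/2 · ln 2 − 1) ≈ 0.4766, close to the estimate 0.4773 quoted above.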

Notes

  1. Lemieux, C. (2017). "Control Variates". Wiley StatsRef: Statistics Reference Online: 1–8. doi:10.1002/9781118445112.stat07947. ISBN 9781118445112.
  2. Glasserman, P. (2004). Monte Carlo Methods in Financial Engineering. New York: Springer. p. 185. ISBN 0-387-00451-3.
  3. Botev, Z.; Ridder, A. (2017). "Variance Reduction". Wiley StatsRef: Statistics Reference Online: 1–6. doi:10.1002/9781118445112.stat07975. ISBN 9781118445112.




