Twisting properties

Twisting properties, in general terms, are properties of samples that can be exchanged with properties of parameters, through statistics that are suitable for this exchange.

Description

Starting with a sample [math]\displaystyle{ \{x_1,\ldots,x_m\} }[/math] observed from a random variable X having a given distribution law with an unknown parameter, a parametric inference problem consists of computing suitable values – call them estimates – of this parameter on the basis of the sample. An estimate is suitable if replacing the unknown parameter with it does not cause major damage in subsequent computations. In algorithmic inference, suitability of an estimate reads in terms of compatibility with the observed sample.

In turn, parameter compatibility is a probability measure that we derive from the probability distribution of the random variable to which the parameter refers. In this way we identify a random parameter Θ compatible with an observed sample. Given a sampling mechanism [math]\displaystyle{ M_X=(g_\theta,Z) }[/math], the rationale of this operation lies in using the Z seed distribution law to determine both the X distribution law for the given θ and the Θ distribution law given an X sample. Hence, we may derive the latter distribution directly from the former if we are able to relate domains of the sample space to subsets of the Θ support. In more abstract terms, we speak of twisting properties of samples with properties of parameters, and we identify the former with statistics that are suitable for this exchange, thus denoting good behavior w.r.t. the unknown parameters. The operational goal is to write the analytic expression of the cumulative distribution function [math]\displaystyle{ F_\Theta(\theta) }[/math], in light of the observed value s of a statistic S, as a function of the S distribution law when the X parameter is exactly θ.

Method

Given a sampling mechanism [math]\displaystyle{ M_X=(g_\theta,Z) }[/math] for the random variable X, we model [math]\displaystyle{ \boldsymbol X=\{X_1,\ldots,X_m\} }[/math] to be equal to [math]\displaystyle{ \{g_\theta(Z_1),\ldots,g_\theta(Z_m)\} }[/math]. Focusing on a relevant statistic [math]\displaystyle{ S=h_1(X_1,\ldots,X_m) }[/math] for the parameter θ, the master equation reads

[math]\displaystyle{ s= h(g_\theta(z_1),\ldots, g_\theta(z_m))= \rho(\theta;z_1,\ldots,z_m). }[/math]
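For instance, consider X exponential with parameter λ (a standard illustration, not specific to the sources cited here): a sampling mechanism is [math]\displaystyle{ M_X=(g_\lambda,U) }[/math] with [math]\displaystyle{ g_\lambda(u)=-\ln u/\lambda }[/math] and U uniform on [0,1]. Choosing the statistic [math]\displaystyle{ S=\sum_{i=1}^m X_i }[/math], the master equation reads

[math]\displaystyle{ s=\sum_{i=1}^m \frac{-\ln u_i}{\lambda}=\rho(\lambda;u_1,\ldots,u_m), }[/math]

which, for every fixed seed vector, is a decreasing function of λ.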

When s is a well-behaved statistic w.r.t. the parameter, we are sure that a monotone relation exists for each [math]\displaystyle{ \boldsymbol z=\{z_1,\ldots,z_m\} }[/math] between s and θ. We are also assured that Θ, as a function of [math]\displaystyle{ \boldsymbol Z }[/math] for given s, is a random variable, since the master equation provides solutions that are feasible and independent of other (hidden) parameters.[1]

The direction of the monotonicity determines for any [math]\displaystyle{ \boldsymbol z }[/math] a relation between events of the type [math]\displaystyle{ s\geq s'\leftrightarrow \theta\geq \theta' }[/math] or vice versa [math]\displaystyle{ s\geq s'\leftrightarrow \theta\leq \theta' }[/math], where [math]\displaystyle{ s' }[/math] is computed by the master equation with [math]\displaystyle{ \theta' }[/math]. In the case that s assumes discrete values, the first relation changes into [math]\displaystyle{ s\geq s'\rightarrow \theta\geq \theta'\rightarrow s\geq s'+\ell }[/math], where [math]\displaystyle{ \ell\gt 0 }[/math] is the size of the s discretization grain, and analogously with the opposite monotonicity trend. Collecting these relations over all seeds, for s continuous we have either

[math]\displaystyle{ F_{\Theta\mid S=s}(\theta)= F_{S\mid \Theta=\theta}(s) }[/math]

or

[math]\displaystyle{ F_{\Theta\mid S=s}(\theta)= 1-F_{S\mid\Theta=\theta}(s) }[/math]
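In the exponential illustration above, s decreases with λ for each fixed seed vector, so the first identity applies: [math]\displaystyle{ F_{\Lambda\mid S=s}(\lambda)=F_{S\mid\Lambda=\lambda}(s) }[/math], where S given Λ = λ follows a gamma law with shape m and rate λ. A minimal numerical sketch in Python (the sample size and observed sum are hypothetical):

    import numpy as np
    from scipy.stats import gamma

    # Minimal sketch: twisted CDF of Lambda for an exponential sample.
    # S = X_1 + ... + X_m given Lambda = lam is Gamma(shape=m, rate=lam),
    # and s decreases with lam, so F_Lambda(lam) = F_{S|Lambda=lam}(s_obs).
    def lambda_cdf(lam, s_obs, m):
        # scipy parametrizes the gamma law by shape a and scale = 1 / rate
        return gamma.cdf(s_obs, a=m, scale=1.0 / lam)

    # hypothetical observed sum of m = 20 exponential draws
    print(lambda_cdf(np.linspace(0.5, 6.0, 12), s_obs=8.4, m=20))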

For s discrete we have an interval where [math]\displaystyle{ F_{\Theta\mid S=s}(\theta) }[/math] lies, because of [math]\displaystyle{ \ell\gt 0 }[/math]. The whole logical contrivance is called a twisting argument. A procedure implementing it is as follows.

Algorithm

Generating a parameter distribution law through a twisting argument
Given a sample [math]\displaystyle{ \{x_1,\ldots,x_m\} }[/math] from a random variable with parameter θ unknown,
  1. identify a well-behaved statistic S for the parameter θ and its discretization grain [math]\displaystyle{ \ell }[/math] (if any);
  2. decide the direction of the monotonicity;
  3. compute [math]\displaystyle{ F_{\Theta}(\theta)\in\left(q_1(F_{S|\Theta=\theta}(s)),q_2(F_{S|\Theta=\theta}(s))\right) }[/math] where:
    • if S is continuous, [math]\displaystyle{ q_1=q_2 }[/math];
    • if S is discrete,
      1. [math]\displaystyle{ q_2(F_S(s))=q_1(F_S(s-\ell)) }[/math] if s does not decrease with θ,
      2. [math]\displaystyle{ q_1(F_S(s))=q_2(F_S(s-\ell)) }[/math] if s does not increase with θ, and
      3. [math]\displaystyle{ q_i(F_S)= 1-F_S }[/math] if s does not decrease with θ and [math]\displaystyle{ q_i(F_S)= F_S }[/math] if s does not increase with θ, for [math]\displaystyle{ i=1,2 }[/math] (see the sketch after this list).
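As a sketch of the discrete branch, assume a Bernoulli sample with unknown parameter p (a hypothetical case, not treated in this article) and statistic [math]\displaystyle{ s=\sum_{i=1}^m x_i }[/math], which does not decrease with p and has grain [math]\displaystyle{ \ell=1 }[/math]; step 3 then brackets [math]\displaystyle{ F_P(p) }[/math] between [math]\displaystyle{ 1-F_{S|P=p}(s) }[/math] and [math]\displaystyle{ 1-F_{S|P=p}(s-1) }[/math]:

    from scipy.stats import binom

    # Minimal sketch (hypothetical Bernoulli case): S = sum(x_i) does not
    # decrease with p and has grain ell = 1, so step 3 of the algorithm gives
    # 1 - F_{S|p}(s) < F_P(p) < 1 - F_{S|p}(s - 1).
    def p_cdf_interval(p, s_obs, m):
        q1 = 1.0 - binom.cdf(s_obs, m, p)      # q1(F_S(s))
        q2 = 1.0 - binom.cdf(s_obs - 1, m, p)  # q2(F_S(s)) = q1(F_S(s - ell))
        return q1, q2

    # hypothetical sample: m = 10 trials with s_obs = 3 successes
    print(p_cdf_interval(0.4, s_obs=3, m=10))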

Remark

The rationale behind twisting arguments does not change when parameters are vectors, though some complication arises from the management of joint inequalities. By contrast, the difficulty of dealing with a vector of parameters proved to be the Achilles heel of Fisher's approach to the fiducial distribution of parameters.[2] Fraser's constructive probabilities,[3] devised for the same purpose, do not treat this point completely either.

Example

For [math]\displaystyle{ \boldsymbol x }[/math] drawn from a gamma distribution, whose specification requires values for the parameters λ and k, a twisting argument may be stated by following the procedure above. Given the meaning of these parameters, we know that

[math]\displaystyle{ (k\leq k')\leftrightarrow(s_k \leq s_{k'}) \text{ for fixed } \lambda, }[/math]
[math]\displaystyle{ (\lambda\leq\lambda')\leftrightarrow(s_{\lambda'}\leq s_\lambda) \text{ for fixed } k, }[/math]

where [math]\displaystyle{ s_k=\prod_{i=1}^m x_i }[/math] and [math]\displaystyle{ s_\lambda=\sum_{i=1}^m x_i }[/math]. This leads to a joint cumulative distribution function

[math]\displaystyle{ F_{\Lambda,K}(\lambda,k)=F_{\Lambda\,\mid\,K=k}(\lambda) F_K(k) = F_{K\,\mid\,\Lambda = \lambda}(k) F_\Lambda(\lambda). }[/math]

Using the first factorization and replacing [math]\displaystyle{ s_k }[/math] with [math]\displaystyle{ r_k=\frac{s_k}{s_\lambda^m} }[/math] in order to have a distribution of [math]\displaystyle{ K }[/math] that is independent of [math]\displaystyle{ \Lambda }[/math], we have

[math]\displaystyle{ F_{\Lambda\,\mid\,K=k}(\lambda)=1 - \frac{\Gamma(k m, \lambda s_\Lambda)}{\Gamma(k m)} }[/math]
[math]\displaystyle{ F_K(k)=1-F_{R_k}(r_K) }[/math]

where m denotes the sample size, [math]\displaystyle{ s_\Lambda }[/math] and [math]\displaystyle{ r_K }[/math] are the observed statistics (hence with indices denoted by capital letters), [math]\displaystyle{ \Gamma(a,b) }[/math] is the incomplete gamma function, and [math]\displaystyle{ F_{R_k}(r_K) }[/math] is the Fox's H function, which can again be approximated with a gamma distribution with proper parameters (for instance estimated through the method of moments) as a function of k and m.
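Since [math]\displaystyle{ \Gamma(a,b)/\Gamma(a) }[/math] is the regularized upper incomplete gamma function, the first factor can be evaluated directly; below is a minimal sketch in which the value of k is hypothetical and the [math]\displaystyle{ F_K(k) }[/math] factor is omitted, as it requires the moment-based approximation of [math]\displaystyle{ F_{R_k} }[/math] mentioned above:

    import numpy as np
    from scipy.special import gammaincc

    # Minimal sketch of the first factor: scipy's gammaincc(a, x) computes
    # the regularized upper incomplete gamma function Gamma(a, x) / Gamma(a),
    # so F_{Lambda|K=k}(lam) = 1 - gammaincc(k * m, lam * s_Lambda).
    def lambda_cdf_given_k(lam, k, m, s_Lambda):
        return 1.0 - gammaincc(k * m, lam * s_Lambda)

    # sample statistics quoted below: m = 30, s_Lambda = 72.82; k is hypothetical
    print(lambda_cdf_given_k(np.linspace(0.1, 1.0, 10), k=1.5, m=30, s_Lambda=72.82))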

Left figure: joint probability density function of the parameters [math]\displaystyle{ (K,\Lambda) }[/math] of a gamma random variable.
Right figure: marginal cumulative distribution function of the parameter K.

With a sample of size [math]\displaystyle{ m=30 }[/math] giving [math]\displaystyle{ s_\Lambda=72.82 }[/math] and [math]\displaystyle{ r_K=4.5\times 10^{-46} }[/math], you may find the joint p.d.f. of the gamma parameters K and [math]\displaystyle{ \Lambda }[/math] in the figure on the left. The marginal distribution of K is reported in the figure on the right.

Notes

  1. By default, capital letters (such as U, X) will denote random variables and small letters (u, x) their corresponding realizations.
  2. Fisher 1935.
  3. Fraser 1966.

References

  • Fisher, R. A. (1935). "The fiducial argument in statistical inference". Annals of Eugenics 6 (4): 391–398. doi:10.1111/j.1469-1809.1935.tb02120.x.
  • Fraser, D. A. S. (1966). "Structural probability and generalization". Biometrika 53 (1/2): 1–9. doi:10.2307/2334048. 
  • Apolloni, B.; Malchiodi, D.; Gaito, S. (2006). Algorithmic Inference in Machine Learning. International Series on Advanced Intelligence 5 (2nd ed.). Adelaide: Magill, Advanced Knowledge International.



