Inequalities in information theory

Inequalities are central to the study of information theory, and they arise in a number of different contexts.

Entropic inequalities

Main page: Entropic vector

Consider a tuple [math]\displaystyle{ X_1,X_2,\dots,X_n }[/math] of [math]\displaystyle{ n }[/math] finitely (or at most countably) supported random variables on the same probability space. There are [math]\displaystyle{ 2^n }[/math] subsets, for which (joint) entropies can be computed. For example, when n = 2, we may consider the entropies [math]\displaystyle{ H(X_1), }[/math] [math]\displaystyle{ H(X_2), }[/math] and [math]\displaystyle{ H(X_1, X_2) }[/math]. They satisfy the following inequalities (which together characterize the range of the marginal and joint entropies of two random variables; a numerical check is sketched after the list):

  • [math]\displaystyle{ H(X_1) \ge 0 }[/math]
  • [math]\displaystyle{ H(X_2) \ge 0 }[/math]
  • [math]\displaystyle{ H(X_1) \le H(X_1, X_2) }[/math]
  • [math]\displaystyle{ H(X_2) \le H(X_1, X_2) }[/math]
  • [math]\displaystyle{ H(X_1, X_2) \le H(X_1) + H(X_2). }[/math]
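
To make the two-variable case concrete, the following minimal Python sketch (the joint distribution and the use of NumPy are illustrative choices, not part of the article) computes the three entropies in bits and checks the five inequalities:

    import numpy as np

    def entropy(p):
        """Shannon entropy in bits of a probability vector (zero entries are ignored)."""
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    # Example joint distribution of (X1, X2): rows index X1, columns index X2.
    joint = np.array([[0.30, 0.10],
                      [0.05, 0.55]])

    H1  = entropy(joint.sum(axis=1))   # H(X1): marginalize out X2
    H2  = entropy(joint.sum(axis=0))   # H(X2): marginalize out X1
    H12 = entropy(joint.ravel())       # H(X1, X2): joint entropy

    assert H1 >= 0 and H2 >= 0                       # non-negativity
    assert H1 <= H12 + 1e-12 and H2 <= H12 + 1e-12   # monotonicity: joint >= each marginal
    assert H12 <= H1 + H2 + 1e-12                    # subadditivity
    print(H1, H2, H12)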

In fact, these can all be expressed as special cases of a single inequality involving the conditional mutual information, namely

[math]\displaystyle{ I(A;B|C) \ge 0, }[/math]

where [math]\displaystyle{ A }[/math], [math]\displaystyle{ B }[/math], and [math]\displaystyle{ C }[/math] each denote the joint distribution of some arbitrary (possibly empty) subset of our collection of random variables. Inequalities that can be derived as nonnegative linear combinations of instances of this inequality are known as Shannon-type inequalities.
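
A similar sketch (again with an arbitrary example distribution) computes [math]\displaystyle{ I(A;B|C) = H(A,C) + H(B,C) - H(A,B,C) - H(C) }[/math] directly from joint entropies and confirms that it is non-negative:

    import numpy as np

    def entropy(p):
        """Shannon entropy in bits of a (flattened) probability array."""
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    # Random joint pmf over three binary variables (A, B, C).
    rng = np.random.default_rng(0)
    pABC = rng.random((2, 2, 2))
    pABC /= pABC.sum()

    H_ABC = entropy(pABC.ravel())
    H_AC  = entropy(pABC.sum(axis=1).ravel())   # marginalize out B
    H_BC  = entropy(pABC.sum(axis=0).ravel())   # marginalize out A
    H_C   = entropy(pABC.sum(axis=(0, 1)))      # marginalize out A and B

    I_AB_given_C = H_AC + H_BC - H_ABC - H_C
    assert I_AB_given_C >= -1e-12               # conditional mutual information is non-negative
    print(I_AB_given_C)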

For larger [math]\displaystyle{ n }[/math] there are further restrictions on the possible values of entropy. To make this precise, a vector [math]\displaystyle{ h }[/math] in [math]\displaystyle{ \mathbb{R}^{2^n} }[/math] indexed by subsets of [math]\displaystyle{ \{1,\dots,n\} }[/math] is said to be entropic if there is a joint distribution of [math]\displaystyle{ n }[/math] discrete random variables [math]\displaystyle{ X_1,\dots,X_n }[/math] such that [math]\displaystyle{ h_I = H(X_i \colon i \in I) }[/math] is their joint entropy for each subset [math]\displaystyle{ I }[/math]. The set of entropic vectors is denoted [math]\displaystyle{ \Gamma^*_n }[/math], following the notation of Yeung.[1] It is neither closed nor convex for [math]\displaystyle{ n \geq 3 }[/math], but its topological closure [math]\displaystyle{ \overline{\Gamma^*_n} }[/math] is known to be convex and hence can be characterized by the (infinitely many) linear inequalities satisfied by all entropic vectors, called entropic inequalities.
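
As a small illustration (the example distribution below is an arbitrary choice), the following sketch computes the entropic vector of three binary random variables, indexed by the non-empty subsets of [math]\displaystyle{ \{1,2,3\} }[/math]:

    import numpy as np
    from itertools import combinations

    def entropy_bits(p):
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    # Example joint pmf of (X1, X2, X3), each binary.
    rng = np.random.default_rng(7)
    p = rng.random((2, 2, 2))
    p /= p.sum()

    n = 3
    entropic_vector = {}
    for r in range(1, n + 1):
        for I in combinations(range(n), r):
            # Marginalize out the variables not in I, then take the joint entropy.
            marginal = p.sum(axis=tuple(k for k in range(n) if k not in I))
            entropic_vector[I] = entropy_bits(marginal.ravel())

    for I, h in sorted(entropic_vector.items(), key=lambda kv: (len(kv[0]), kv[0])):
        print({i + 1 for i in I}, round(h, 4))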

The set of all vectors that satisfy Shannon-type inequalities (but not necessarily other entropic inequalities) contains [math]\displaystyle{ \overline{\Gamma^*_n} }[/math]. This containment is strict for [math]\displaystyle{ n \geq 4 }[/math], and the further inequalities that hold on [math]\displaystyle{ \overline{\Gamma^*_n} }[/math] are known as non-Shannon-type inequalities. Zhang and Yeung reported the first non-Shannon-type inequality,[2] often referred to as the Zhang–Yeung inequality. Matus[3] proved that no finite set of inequalities can characterize (by linear combinations) all entropic inequalities. In other words, the region [math]\displaystyle{ \overline{\Gamma^*_n} }[/math] is not a polyhedral cone.
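
For reference, the Zhang–Yeung inequality is commonly quoted (in one of its equivalent forms, for four random variables [math]\displaystyle{ A, B, C, D }[/math]; see [2] for the original formulation) as

[math]\displaystyle{ 2I(C;D) \le I(A;B) + I(A;C,D) + 3I(C;D|A) + I(C;D|B). }[/math]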

Lower bounds for the Kullback–Leibler divergence

A great many important inequalities in information theory are actually lower bounds for the Kullback–Leibler divergence. Even the Shannon-type inequalities can be considered part of this category, since the mutual information can be expressed as the Kullback–Leibler divergence of the joint distribution with respect to the product of the marginals, and thus these inequalities can be seen as a special case of Gibbs' inequality.

On the other hand, it seems to be much more difficult to derive useful upper bounds for the Kullback–Leibler divergence. This is because the Kullback–Leibler divergence DKL(P||Q) depends very sensitively on events that are very rare in the reference distribution Q. DKL(P||Q) increases without bound as an event of finite non-zero probability in the distribution P becomes exceedingly rare in the reference distribution Q, and in fact DKL(P||Q) is not even defined if an event of non-zero probability in P has zero probability in Q. (Hence the requirement that P be absolutely continuous with respect to Q.)
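
A small numerical illustration of this sensitivity (the binary distributions below are arbitrary examples; the divergence is computed in nats):

    import numpy as np

    def kl(p, q):
        """D_KL(P || Q) in nats for discrete distributions with q > 0 wherever p > 0."""
        p, q = np.asarray(p, float), np.asarray(q, float)
        mask = p > 0
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

    p = np.array([0.5, 0.5])
    for eps in [1e-1, 1e-3, 1e-6, 1e-9]:
        q = np.array([1.0 - eps, eps])   # the second outcome becomes rare under Q
        print(eps, kl(p, q))             # grows without bound, roughly like -0.5 * log(eps)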

Gibbs' inequality

Main page: Gibbs' inequality

This fundamental inequality states that the Kullback–Leibler divergence is non-negative.
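
For discrete distributions [math]\displaystyle{ P = (p_1, p_2, \dots) }[/math] and [math]\displaystyle{ Q = (q_1, q_2, \dots) }[/math] this reads

[math]\displaystyle{ D_{KL}(P\parallel Q) = \sum_i p_i \log \frac{p_i}{q_i} \ge 0, }[/math]

with equality if and only if [math]\displaystyle{ P = Q }[/math].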

Kullback's inequality

Main page: Kullback's inequality

Another inequality concerning the Kullback–Leibler divergence is known as Kullback's inequality.[4] If P and Q are probability distributions on the real line with P absolutely continuous with respect to Q, and whose first moments exist, then

[math]\displaystyle{ D_{KL}(P\parallel Q) \ge \Psi_Q^*(\mu'_1(P)), }[/math]

where [math]\displaystyle{ \Psi_Q^* }[/math] is the large-deviations rate function of [math]\displaystyle{ Q }[/math], i.e. the convex conjugate of its cumulant-generating function, and [math]\displaystyle{ \mu'_1(P) }[/math] is the first moment of [math]\displaystyle{ P }[/math].

The Cramér–Rao bound is a corollary of this result.
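
For a concrete illustration (an example chosen here, not taken from the article), let [math]\displaystyle{ Q }[/math] be the standard normal distribution and [math]\displaystyle{ P = N(\mu, 1) }[/math]. The cumulant-generating function of [math]\displaystyle{ Q }[/math] is [math]\displaystyle{ \Psi_Q(t) = t^2/2 }[/math], so [math]\displaystyle{ \Psi_Q^*(x) = x^2/2 }[/math], while [math]\displaystyle{ D_{KL}(P\parallel Q) = \mu^2/2 }[/math]; Kullback's inequality therefore holds with equality in this case.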

Pinsker's inequality

Main page: Pinsker's inequality

Pinsker's inequality relates Kullback–Leibler divergence and total variation distance. It states that if P, Q are two probability distributions, then

[math]\displaystyle{ \sqrt{\frac{1}{2}D_{KL}^{(e)}(P\parallel Q)} \ge \sup \{ |P(A) - Q(A)| : A\text{ is an event to which probabilities are assigned} \}, }[/math]

where [math]\displaystyle{ D_{KL}^{(e)}(P\parallel Q) }[/math] is the Kullback–Leibler divergence in nats and [math]\displaystyle{ \sup_A |P(A) - Q(A)| }[/math] is the total variation distance.
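
A quick numerical sanity check (the two three-point distributions below are arbitrary examples):

    import numpy as np

    def kl_nats(p, q):
        """D_KL(P || Q) in nats for discrete distributions on the same finite set."""
        mask = p > 0
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

    def total_variation(p, q):
        """sup_A |P(A) - Q(A)|, which for finite distributions is half the L1 distance."""
        return 0.5 * float(np.abs(p - q).sum())

    p = np.array([0.2, 0.5, 0.3])
    q = np.array([0.4, 0.4, 0.2])
    lhs = np.sqrt(0.5 * kl_nats(p, q))
    rhs = total_variation(p, q)
    assert lhs >= rhs                    # Pinsker's inequality
    print(lhs, rhs)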

Other inequalities

Hirschman uncertainty

In 1957,[5] Hirschman showed that for a (reasonably well-behaved) function [math]\displaystyle{ f:\mathbb R \rightarrow \mathbb C }[/math] such that [math]\displaystyle{ \int_{-\infty}^\infty |f(x)|^2\,dx = 1, }[/math] and its Fourier transform [math]\displaystyle{ g(y)=\int_{-\infty}^\infty f(x) e^{-2 \pi i x y}\,dx, }[/math] the sum of the differential entropies of [math]\displaystyle{ |f|^2 }[/math] and [math]\displaystyle{ |g|^2 }[/math] is non-negative, i.e.

[math]\displaystyle{ -\int_{-\infty}^\infty |f(x)|^2 \log |f(x)|^2 \,dx -\int_{-\infty}^\infty |g(y)|^2 \log |g(y)|^2 \,dy \ge 0. }[/math]

Hirschman conjectured, and it was later proved,[6] that a sharper bound of [math]\displaystyle{ \log(e/2), }[/math] which is attained in the case of a Gaussian distribution, could replace the right-hand side of this inequality. This is especially significant since it implies, and is stronger than, Weyl's formulation of Heisenberg's uncertainty principle.
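
As a quick check of the Gaussian case (using natural logarithms), suppose [math]\displaystyle{ |f|^2 }[/math] is the density of [math]\displaystyle{ N(0,\sigma^2) }[/math]. With the Fourier convention above, [math]\displaystyle{ |g|^2 }[/math] is the density of [math]\displaystyle{ N(0, 1/(16\pi^2\sigma^2)) }[/math], so the sum of the two differential entropies is [math]\displaystyle{ \tfrac{1}{2}\ln(2\pi e\sigma^2) + \tfrac{1}{2}\ln\big(2\pi e/(16\pi^2\sigma^2)\big) = \ln(e/2) }[/math], attaining the sharper bound.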

Tao's inequality

Given discrete random variables [math]\displaystyle{ X }[/math], [math]\displaystyle{ Y }[/math], and [math]\displaystyle{ Y' }[/math], such that [math]\displaystyle{ X }[/math] takes values only in the interval [−1, 1] and [math]\displaystyle{ Y' }[/math] is determined by [math]\displaystyle{ Y }[/math] (so that [math]\displaystyle{ H(Y'|Y)=0 }[/math]), we have[7][8]

[math]\displaystyle{ \operatorname E \big( \big| \operatorname E(X|Y') - \operatorname E(X\mid Y) \big| \big) \le \sqrt {I(X;Y\mid Y') \, 2 \log 2 }, }[/math]

relating the conditional expectation to the conditional mutual information. This is a simple consequence of Pinsker's inequality. (Note: the correction factor log 2 inside the radical arises because we are measuring the conditional mutual information in bits rather than nats.)
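
A small numerical check (the joint distribution and the function defining [math]\displaystyle{ Y' }[/math] below are arbitrary illustrative choices; mutual information is computed in bits, consistent with the log 2 factor):

    import numpy as np

    def H_bits(p):
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    # Joint pmf of (X, Y): rows are X in {-1, +1}, columns are Y in {0, 1, 2, 3}.
    rng = np.random.default_rng(1)
    pXY = rng.random((2, 4))
    pXY /= pXY.sum()
    x_vals = np.array([-1.0, 1.0])
    f = np.array([0, 1, 0, 1])           # Y' = f(Y), a deterministic function of Y

    pY = pXY.sum(axis=0)
    E_X_given_Y = (x_vals[:, None] * pXY).sum(axis=0) / pY          # E(X | Y = y)

    # Aggregate the fibres of f to get the joint pmf of (X, Y') and E(X | Y').
    pXYp = np.stack([pXY[:, f == v].sum(axis=1) for v in (0, 1)], axis=1)
    pYp = pXYp.sum(axis=0)
    E_X_given_Yp = (x_vals[:, None] * pXYp).sum(axis=0) / pYp       # E(X | Y' = y')

    lhs = float(np.sum(pY * np.abs(E_X_given_Yp[f] - E_X_given_Y)))

    # Since Y' is determined by Y, I(X;Y | Y') = H(X | Y') - H(X | Y), here in bits.
    H_X_given_Y  = float(pY  @ np.array([H_bits(pXY[:, y]  / pY[y])  for y in range(4)]))
    H_X_given_Yp = float(pYp @ np.array([H_bits(pXYp[:, v] / pYp[v]) for v in range(2)]))
    I_bits = H_X_given_Yp - H_X_given_Y

    rhs = np.sqrt(2 * np.log(2) * I_bits)
    assert lhs <= rhs + 1e-12            # Tao's inequality
    print(lhs, rhs)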

Machine-based proof checkers for information-theoretic inequalities

Several machine-based proof checkers are now available. Proof-checker algorithms typically verify an inequality as either true or false; more advanced ones can also produce a proof or a counterexample.[9] ITIP is a MATLAB-based proof checker for all Shannon-type inequalities. Xitip is an open-source, faster version of the same algorithm implemented in C with a graphical front end; it also has a built-in language-parsing feature that supports a broader range of random-variable descriptions as input. AITIP and oXitip are cloud-based implementations for validating Shannon-type inequalities. oXitip uses the GLPK optimizer and has a C++ backend based on Xitip with a web-based user interface. AITIP uses the Gurobi solver for optimization and a mix of Python and C++ in the backend; it can also provide a canonical breakdown of an inequality in terms of basic information measures.[9]
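
These tools are standalone software packages; the sketch below is only a rough illustration of the underlying idea (it is not the ITIP or Xitip code). It enumerates Yeung's elemental Shannon inequalities for small [math]\displaystyle{ n }[/math] and uses a linear program (here via scipy.optimize.linprog) to test whether a candidate linear inequality is implied by them, i.e. whether it is Shannon-type; the two example inequalities at the end are chosen for illustration.

    import numpy as np
    from itertools import combinations
    from scipy.optimize import linprog

    def elemental_inequalities(n):
        # Rows G such that G @ h >= 0 are the elemental Shannon inequalities;
        # h is indexed by the non-empty subsets of {0, ..., n-1} via (bitmask - 1).
        dim = 2 ** n - 1
        full = dim                               # bitmask of the full set
        def vec(*terms):                         # terms are (coefficient, subset bitmask)
            v = np.zeros(dim)
            for c, mask in terms:
                if mask:                         # H(empty set) = 0, so skip mask 0
                    v[mask - 1] += c
            return v
        rows = []
        for i in range(n):                       # H(X_i | all other variables) >= 0
            rows.append(vec((1, full), (-1, full ^ (1 << i))))
        for i, j in combinations(range(n), 2):   # I(X_i; X_j | X_K) >= 0 for every K
            others = [k for k in range(n) if k not in (i, j)]
            for r in range(len(others) + 1):
                for K in combinations(others, r):
                    m = sum(1 << k for k in K)
                    rows.append(vec((1, m | (1 << i)), (1, m | (1 << j)),
                                    (-1, m | (1 << i) | (1 << j)), (-1, m)))
        return np.array(rows)

    def is_shannon_type(b, n):
        # b @ h >= 0 is Shannon-type iff its minimum over the cone {h : G h >= 0} is 0
        # (otherwise the linear program is unbounded below).
        G = elemental_inequalities(n)
        res = linprog(c=b, A_ub=-G, b_ub=np.zeros(len(G)),
                      bounds=[(None, None)] * (2 ** n - 1), method="highs")
        return res.status == 0 and res.fun >= -1e-9

    def coeff(n, *terms):                        # build a coefficient vector b
        v = np.zeros(2 ** n - 1)
        for c, mask in terms:
            v[mask - 1] += c
        return v

    # H(X1) + H(X2) + H(X3) - H(X1,X2,X3) >= 0 is Shannon-type: prints True.
    print(is_shannon_type(coeff(3, (1, 0b001), (1, 0b010), (1, 0b100), (-1, 0b111)), 3))
    # I(X1;X2) - I(X1;X2|X3) >= 0 can fail, so it is not Shannon-type: prints False.
    print(is_shannon_type(coeff(3, (1, 0b001), (1, 0b010), (-1, 0b011), (1, 0b100),
                                   (-1, 0b101), (-1, 0b110), (1, 0b111)), 3))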

References

  1. Yeung, R.W. (1997). "A framework for linear information inequalities". IEEE Transactions on Information Theory 43 (6): 1924–1934. doi:10.1109/18.641556.
  2. Zhang, Z.; Yeung, R. W. (1998). "On characterization of entropy function via information inequalities". IEEE Transactions on Information Theory 44 (4): 1440–1452. doi:10.1109/18.681320. 
  3. Matus, F. (2007). "Infinitely many information inequalities". 2007 IEEE International Symposium on Information Theory. 
  4. Fuchs, Aimé; Letta, Giorgio (1970). "L'Inégalité de KULLBACK. Application à la théorie de l'estimation". Séminaire de Probabilités IV Université de Strasbourg. Lecture Notes in Mathematics. 4. Strasbourg. 108–131. doi:10.1007/bfb0059338. ISBN 978-3-540-04913-5. http://www.numdam.org/item/SPS_1970__4__108_0/. 
  5. Hirschman, I. I. (1957). "A Note on Entropy". American Journal of Mathematics 79 (1): 152–156. doi:10.2307/2372390. 
  6. Beckner, W. (1975). "Inequalities in Fourier Analysis". Annals of Mathematics 102 (6): 159–182. doi:10.2307/1970980. 
  7. Tao, T. (2006). "Szemerédi's regularity lemma revisited". Contrib. Discrete Math. 1: 8–28. Bibcode: 2005math......4472T.
  8. Ahlswede, Rudolf (2007). "The final form of Tao's inequality relating conditional expectation and conditional mutual information". Advances in Mathematics of Communications 1 (2): 239–242. doi:10.3934/amc.2007.1.239. 
  9. 9.0 9.1 Ho, S.W.; Ling, L.; Tan, C.W.; Yeung, R.W. (2020). "Proving and Disproving Information Inequalities: Theory and Scalable Algorithms". IEEE Transactions on Information Theory 66 (9): 5525–5536. doi:10.1109/TIT.2020.2982642. 

External links

  • Thomas M. Cover, Joy A. Thomas. Elements of Information Theory, Chapter 16, "Inequalities in Information Theory", John Wiley & Sons, Inc., 1991. Print ISBN 0-471-06259-6, Online ISBN 0-471-20061-1.
  • Amir Dembo, Thomas M. Cover, Joy A. Thomas. Information Theoretic Inequalities. IEEE Transactions on Information Theory, Vol. 37, No. 6, November 1991.
  • ITIP: http://user-www.ie.cuhk.edu.hk/~ITIP/
  • XITIP: http://xitip.epfl.ch
  • N. R. Pai, Suhas Diggavi, T. Gläßle, E. Perron, R.Pulikkoonattu, R. W. Yeung, Y. Yan, oXitip: An Online Information Theoretic Inequalities Prover http://www.oxitip.com
  • Siu Wai Ho, Lin Ling, Chee Wei Tan and Raymond W. Yeung, AITIP (Information Theoretic Inequality Prover): https://aitip.org
  • Nivedita Rethnakar, Suhas Diggavi, Raymond W. Yeung, InformationInequalities.jl: Exploring Information-Theoretic Inequalities, Julia Package, 2021



