Distribution learning theory

The distributional learning theory or learning of probability distribution is a framework in computational learning theory. It was proposed by Michael Kearns, Yishay Mansour, Dana Ron, Ronitt Rubinfeld, Robert Schapire and Linda Sellie in 1994 [1] and was inspired by the PAC framework introduced by Leslie Valiant.[2] In this framework the input is a number of samples drawn from a distribution that belongs to a specific class of distributions. The goal is to find an efficient algorithm that, based on these samples, determines with high probability a good approximation of the distribution from which the samples have been drawn. Because of its generality, this framework has been used in a wide variety of fields such as machine learning, approximation algorithms, applied probability and statistics.

This article explains the basic definitions, tools and results in this framework from the theory of computation point of view.

Definitions

Let [math]\displaystyle{ \textstyle X }[/math] be the support of the distributions of interest. As in the original work of Kearns et al.,[1] if [math]\displaystyle{ \textstyle X }[/math] is finite it can be assumed without loss of generality that [math]\displaystyle{ \textstyle X = \{0, 1\}^n }[/math] where [math]\displaystyle{ \textstyle n }[/math] is the number of bits that have to be used in order to represent any [math]\displaystyle{ \textstyle y \in X }[/math]. We focus on probability distributions over [math]\displaystyle{ \textstyle X }[/math].

There are two possible representations of a probability distribution [math]\displaystyle{ \textstyle D }[/math] over [math]\displaystyle{ \textstyle X }[/math].

  • probability distribution function (or evaluator) an evaluator [math]\displaystyle{ \textstyle E_D }[/math] for [math]\displaystyle{ \textstyle D }[/math] takes as input any [math]\displaystyle{ \textstyle y \in X }[/math] and outputs a real number [math]\displaystyle{ \textstyle E_D[y] }[/math] which denotes the probability of [math]\displaystyle{ \textstyle y }[/math] according to [math]\displaystyle{ \textstyle D }[/math], i.e. [math]\displaystyle{ \textstyle E_D[y] = \Pr [Y = y] }[/math] if [math]\displaystyle{ \textstyle Y \sim D }[/math].
  • generator a generator [math]\displaystyle{ \textstyle G_D }[/math] for [math]\displaystyle{ \textstyle D }[/math] takes as input a string of truly random bits [math]\displaystyle{ \textstyle y }[/math] and outputs [math]\displaystyle{ \textstyle G_D[y] \in X }[/math] according to the distribution [math]\displaystyle{ \textstyle D }[/math]. A generator can be interpreted as a routine that simulates sampling from the distribution [math]\displaystyle{ \textstyle D }[/math] given a sequence of fair coin tosses.

A distribution [math]\displaystyle{ \textstyle D }[/math] is said to have a polynomial generator (respectively evaluator) if its generator (respectively evaluator) exists and can be computed in polynomial time.

Let [math]\displaystyle{ \textstyle C_X }[/math] be a class of distributions over X; that is, [math]\displaystyle{ \textstyle C_X }[/math] is a set such that every [math]\displaystyle{ \textstyle D \in C_X }[/math] is a probability distribution with support [math]\displaystyle{ \textstyle X }[/math]. [math]\displaystyle{ \textstyle C_X }[/math] is also written as [math]\displaystyle{ \textstyle C }[/math] for simplicity.

Before defining learnability, it is necessary to define good approximations of a distribution [math]\displaystyle{ \textstyle D }[/math]. There are several ways to measure the distance between two distributions. The three most common choices are

  • Kullback-Leibler divergence: [math]\displaystyle{ \text{KL-distance}(D, D') = \sum_{x \in X} D(x) \ln \frac{D(x)}{D'(x)} }[/math]
  • Total variation distance: [math]\displaystyle{ \text{TV-distance}(D, D') = \frac{1}{2} \sum_{x \in X} | D(x) - D'(x) | }[/math]
  • Kolmogorov distance: [math]\displaystyle{ \text{Kolmogorov-distance}(D, D') = \max_{x \in X} | F_D(x) - F_{D'}(x) | }[/math], where [math]\displaystyle{ \textstyle F_D, F_{D'} }[/math] are the cumulative distribution functions of [math]\displaystyle{ \textstyle D, D' }[/math] (this distance assumes an ordering of the support).

The strongest of these distances is the Kullback-Leibler divergence and the weakest is the Kolmogorov distance. In particular, for any pair of distributions [math]\displaystyle{ \textstyle D }[/math], [math]\displaystyle{ \textstyle D' }[/math] :

[math]\displaystyle{ \text{Kolmogorov-distance}(D, D') \le \text{TV-distance}(D, D') \le \sqrt{\tfrac{1}{2} \text{KL-distance}(D, D')} }[/math]

where the second inequality is Pinsker's inequality. Therefore, for example, if [math]\displaystyle{ \textstyle D }[/math] and [math]\displaystyle{ \textstyle D' }[/math] are close with respect to the Kullback-Leibler divergence then they are also close with respect to all the other distances.
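The following is a minimal Python sketch of these three distances, assuming the two distributions are given as probability arrays over a common ordered finite support (the data below is illustrative only):

```python
import numpy as np

def kl_distance(d, d_prime):
    """Kullback-Leibler divergence KL(D || D') for discrete distributions."""
    # Sum only over points where D puts mass; assumes D'(x) > 0 wherever D(x) > 0.
    mask = d > 0
    return float(np.sum(d[mask] * np.log(d[mask] / d_prime[mask])))

def tv_distance(d, d_prime):
    """Total variation distance: half the L1 distance between the mass functions."""
    return 0.5 * float(np.sum(np.abs(d - d_prime)))

def kolmogorov_distance(d, d_prime):
    """Kolmogorov distance: largest gap between the cumulative distribution functions."""
    return float(np.max(np.abs(np.cumsum(d) - np.cumsum(d_prime))))

d = np.array([0.5, 0.3, 0.2])
d_prime = np.array([0.4, 0.4, 0.2])
# The relations stated above: TV dominates Kolmogorov, and Pinsker's inequality.
assert kolmogorov_distance(d, d_prime) <= tv_distance(d, d_prime)
assert tv_distance(d, d_prime) <= (0.5 * kl_distance(d, d_prime)) ** 0.5
```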

The definitions that follow hold for all of these distances, so the symbol [math]\displaystyle{ \textstyle d(D, D') }[/math] denotes the distance between the distribution [math]\displaystyle{ \textstyle D }[/math] and the distribution [math]\displaystyle{ \textstyle D' }[/math] using one of the distances described above. Although learnability of a class of distributions can be defined using any of these distances, applications refer to a specific distance.

The basic input used in order to learn a distribution is a number of samples drawn from this distribution. From the computational point of view the assumption is that such a sample is given in a constant amount of time, i.e. the learner has access to an oracle [math]\displaystyle{ \textstyle GEN(D) }[/math] that returns a sample from the distribution [math]\displaystyle{ \textstyle D }[/math]. Sometimes the interest is, apart from measuring the time complexity, to measure the number of samples that have to be used in order to learn a specific distribution [math]\displaystyle{ \textstyle D }[/math] in a class of distributions [math]\displaystyle{ \textstyle C }[/math]. This quantity is called the sample complexity of the learning algorithm.
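In code, [math]\displaystyle{ \textstyle GEN(D) }[/math] is simply a sampling routine. A minimal sketch, assuming a finite support listed explicitly (the class name and data here are illustrative only):

```python
import numpy as np

class GenOracle:
    """Sampling oracle GEN(D): each call returns one sample from D."""
    def __init__(self, support, probabilities, seed=0):
        self.support = support              # the elements of X
        self.probabilities = probabilities  # D(x) for each x in the support
        self.rng = np.random.default_rng(seed)

    def sample(self):
        idx = self.rng.choice(len(self.support), p=self.probabilities)
        return self.support[idx]

# A distribution over X = {0, 1}^2, written as bit strings.
gen = GenOracle(["00", "01", "10", "11"], [0.1, 0.2, 0.3, 0.4])
samples = [gen.sample() for _ in range(1000)]  # 1000 calls = sample complexity 1000
```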

To put the problem of distribution learning in context, consider the problem of supervised learning as defined in [3]. In this framework of statistical learning theory a training set [math]\displaystyle{ \textstyle S = \{(x_1, y_1), \dots, (x_n, y_n) \} }[/math] is given and the goal is to find a target function [math]\displaystyle{ \textstyle f : X \rightarrow Y }[/math] that minimizes some loss function, e.g. the square loss function. More formally [math]\displaystyle{ f = \arg \min_{g} \int V(y, g(x)) d\rho(x, y) }[/math], where [math]\displaystyle{ V(\cdot, \cdot) }[/math] is the loss function, e.g. [math]\displaystyle{ V(y, z) = (y - z)^2 }[/math], and [math]\displaystyle{ \rho(x, y) }[/math] is the probability distribution according to which the elements of the training set are sampled. If the conditional probability distribution [math]\displaystyle{ \rho_x(y) }[/math] is known then the target function has the closed form [math]\displaystyle{ f(x) = \int_y y \, d\rho_x(y) }[/math]. So the set [math]\displaystyle{ S }[/math] is a set of samples from the probability distribution [math]\displaystyle{ \rho(x, y) }[/math]. The goal of distributional learning theory is to find [math]\displaystyle{ \rho }[/math] given [math]\displaystyle{ S }[/math], which can then be used to find the target function [math]\displaystyle{ f }[/math].
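A toy sketch of this connection, with hypothetical discrete data: estimate the conditional distribution [math]\displaystyle{ \rho_x(y) }[/math] empirically from [math]\displaystyle{ S }[/math] and read off the square-loss target [math]\displaystyle{ f(x) }[/math] as a conditional mean.

```python
from collections import defaultdict

# Hypothetical training set S of (x, y) pairs sampled from an unknown rho(x, y).
S = [(0, 1.0), (0, 3.0), (1, 2.0), (1, 2.0), (1, 5.0)]

# Estimate the conditional distribution rho_x(y) by grouping the samples by x ...
groups = defaultdict(list)
for x, y in S:
    groups[x].append(y)

# ... then the square-loss target function is the conditional mean of y given x.
f = {x: sum(ys) / len(ys) for x, ys in groups.items()}
print(f)  # {0: 2.0, 1: 3.0}
```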

Definition of learnability

A class of distributions [math]\displaystyle{ \textstyle C }[/math] is called efficiently learnable if for every [math]\displaystyle{ \textstyle \epsilon \gt 0 }[/math] and [math]\displaystyle{ \textstyle 0 \lt \delta \le 1 }[/math], given access to [math]\displaystyle{ \textstyle GEN(D) }[/math] for an unknown distribution [math]\displaystyle{ \textstyle D \in C }[/math], there exists a polynomial time algorithm [math]\displaystyle{ \textstyle A }[/math], called a learning algorithm of [math]\displaystyle{ \textstyle C }[/math], that outputs a generator or an evaluator of a distribution [math]\displaystyle{ \textstyle D' }[/math] such that

[math]\displaystyle{ \Pr[ d(D, D') \le \epsilon ] \ge 1 - \delta }[/math]

If we know that [math]\displaystyle{ \textstyle D' \in C }[/math] then [math]\displaystyle{ \textstyle A }[/math] is called a proper learning algorithm, otherwise it is called an improper learning algorithm.

In some settings the class of distributions [math]\displaystyle{ \textstyle C }[/math] is a class of well-known distributions which can be described by a set of parameters. For instance [math]\displaystyle{ \textstyle C }[/math] could be the class of all the Gaussian distributions [math]\displaystyle{ \textstyle N(\mu, \sigma^2) }[/math]. In this case the algorithm [math]\displaystyle{ \textstyle A }[/math] should be able to estimate the parameters [math]\displaystyle{ \textstyle \mu, \sigma }[/math], and [math]\displaystyle{ \textstyle A }[/math] is called a parameter learning algorithm.
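For one-dimensional Gaussians, a parameter learning algorithm is just classical estimation of [math]\displaystyle{ \textstyle \mu, \sigma }[/math] from samples; a minimal sketch with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 3.0, 2.0
samples = rng.normal(mu, sigma, size=10_000)  # plays the role of GEN(N(mu, sigma^2))

mu_hat = samples.mean()          # the sample mean estimates mu
sigma_hat = samples.std(ddof=1)  # the sample standard deviation estimates sigma
print(mu_hat, sigma_hat)         # close to 3.0 and 2.0
```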

Parameter learning for simple distributions is a very well studied field, called statistical estimation, and there is a very long bibliography on different estimators for different kinds of simple, known distributions. Distribution learning theory, however, deals with learning classes of distributions that have more complicated descriptions.

First results

In their seminal work, Kearns et al. deal with the case where the distributions are described in terms of finite polynomial-sized circuits, and they proved the following for some specific classes of distributions.[1]

  • OR gate distributions: for this kind of distribution there is no polynomial-sized evaluator, unless [math]\displaystyle{ \textstyle \#P \subseteq P/\text{poly} }[/math]. On the other hand, this class is efficiently learnable with a generator.
  • Parity gate distributions: this class is efficiently learnable with both generator and evaluator.
  • Mixtures of Hamming balls: this class is efficiently learnable with both generator and evaluator.
  • Probabilistic finite automata: this class is not efficiently learnable with an evaluator under the Noisy Parity Assumption, a hardness assumption from the PAC learning framework.

[math]\displaystyle{ \textstyle \epsilon }[/math]-Covers

One very common technique in order to find a learning algorithm for a class of distributions [math]\displaystyle{ \textstyle C }[/math] is to first find a small [math]\displaystyle{ \textstyle \epsilon- }[/math]cover of [math]\displaystyle{ \textstyle C }[/math].

Definition

A set [math]\displaystyle{ \textstyle C_{\epsilon} }[/math] is called [math]\displaystyle{ \textstyle \epsilon }[/math]-cover of [math]\displaystyle{ \textstyle C }[/math] if for every [math]\displaystyle{ \textstyle D \in C }[/math] there is a [math]\displaystyle{ \textstyle D' \in C_{\epsilon} }[/math] such that [math]\displaystyle{ \textstyle d(D, D') \le \epsilon }[/math]. An [math]\displaystyle{ \textstyle \epsilon }[/math]-cover is small if it has polynomial size with respect to the parameters that describe [math]\displaystyle{ \textstyle D }[/math].

Once there is an efficient procedure that for every [math]\displaystyle{ \textstyle \epsilon \gt 0 }[/math] finds a small [math]\displaystyle{ \textstyle \epsilon- }[/math]cover [math]\displaystyle{ \textstyle C_{\epsilon} }[/math] of C, the only remaining task is to select from [math]\displaystyle{ \textstyle C_{\epsilon} }[/math] the distribution [math]\displaystyle{ \textstyle D' \in C_{\epsilon} }[/math] that is closest to the distribution [math]\displaystyle{ \textstyle D \in C }[/math] that has to be learned.

The problem is that given [math]\displaystyle{ \textstyle D', D'' \in C_{\epsilon} }[/math] it is not trivial to compare [math]\displaystyle{ \textstyle d(D, D') }[/math] and [math]\displaystyle{ \textstyle d(D, D'') }[/math] in order to decide which one is closest to [math]\displaystyle{ \textstyle D }[/math], because [math]\displaystyle{ \textstyle D }[/math] is unknown. Therefore, the samples from [math]\displaystyle{ \textstyle D }[/math] have to be used to do these comparisons, and the result of each comparison always has a probability of error. The task is thus similar to finding the minimum in a set of elements using noisy comparisons. There are a lot of classical algorithms that achieve this goal. The most recent one, which achieves the best guarantees, was proposed by Daskalakis and Kamath.[4] This algorithm sets up a fast tournament between the elements of [math]\displaystyle{ \textstyle C_{\epsilon} }[/math] where the winner [math]\displaystyle{ \textstyle D^* }[/math] of this tournament is the element which is [math]\displaystyle{ \textstyle \epsilon- }[/math]close to [math]\displaystyle{ \textstyle D }[/math] (i.e. [math]\displaystyle{ \textstyle d(D^*, D) \le \epsilon }[/math]) with probability at least [math]\displaystyle{ \textstyle 1 - \delta }[/math]. In order to do so their algorithm uses [math]\displaystyle{ \textstyle O(\log N / \epsilon^2) }[/math] samples from [math]\displaystyle{ \textstyle D }[/math] and runs in [math]\displaystyle{ \textstyle O(N \log N / \epsilon^2) }[/math] time, where [math]\displaystyle{ \textstyle N = |C_{\epsilon}| }[/math].
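A simplified sketch of such a selection step, using a Scheffé-style pairwise test inside a naive round-robin tournament (this is not the Daskalakis-Kamath algorithm itself, which organizes the tournament far more efficiently; the cover and samples below are illustrative):

```python
import numpy as np

def scheffe_compare(d1, d2, samples):
    """Noisy comparison: which of two candidate mass functions better matches D?

    Uses the Scheffe set A = {x : d1(x) > d2(x)}: the candidate whose mass on A
    is closer to the empirical mass of A (under samples from D) wins.
    """
    A = d1 > d2
    empirical = np.mean([A[x] for x in samples])  # fraction of samples falling in A
    return 1 if abs(d1[A].sum() - empirical) <= abs(d2[A].sum() - empirical) else 2

def tournament(candidates, samples):
    """Round-robin tournament: return the candidate with the most wins."""
    wins = [0] * len(candidates)
    for i in range(len(candidates)):
        for j in range(i + 1, len(candidates)):
            winner = scheffe_compare(candidates[i], candidates[j], samples)
            wins[i if winner == 1 else j] += 1
    return candidates[int(np.argmax(wins))]

# Hypothetical cover of three candidates over the support {0, 1, 2},
# with samples from the unknown D encoded as support indices.
cover = [np.array([0.5, 0.3, 0.2]), np.array([0.2, 0.5, 0.3]), np.array([0.1, 0.1, 0.8])]
samples = np.random.default_rng(0).choice(3, size=500, p=[0.45, 0.35, 0.2])
best = tournament(cover, samples)  # should pick the first candidate
```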

Learning sums of random variables

Learning simple, well known distributions is a well studied field and there are a lot of estimators that can be used. A more complicated class of distributions is the class of distributions of sums of variables that follow simple distributions. These learning procedures have a close relation with limit theorems like the central limit theorem, because both examine the sum as the number of summands grows. Two recent results, described here, concern learning Poisson binomial distributions and learning sums of independent integer random variables. All the results below hold using the total variation distance as a distance measure.

Learning Poisson binomial distributions

Consider [math]\displaystyle{ \textstyle n }[/math] independent Bernoulli random variables [math]\displaystyle{ \textstyle X_1, \dots, X_n }[/math] with probabilities of success [math]\displaystyle{ \textstyle p_1, \dots, p_n }[/math]. A Poisson binomial distribution of order [math]\displaystyle{ \textstyle n }[/math] is the distribution of the sum [math]\displaystyle{ \textstyle X = \sum_i X_i }[/math]. The class to be learned is [math]\displaystyle{ \textstyle PBD = \{ D : D \text{ is a Poisson binomial distribution} \} }[/math]. The first of the following results deals with the improper learning of [math]\displaystyle{ \textstyle PBD }[/math] and the second with the proper learning of [math]\displaystyle{ \textstyle PBD }[/math].[5]
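A generator for a Poisson binomial distribution is immediate from the definition; a minimal sketch with hypothetical success probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.1, 0.5, 0.9, 0.3])  # hypothetical success probabilities p_1, ..., p_n

def gen_pbd():
    """GEN(D) for a Poisson binomial D: the sum of n independent Bernoulli trials."""
    return int((rng.random(p.size) < p).sum())

samples = [gen_pbd() for _ in range(1000)]  # values in {0, ..., n}
```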

Theorem

Let [math]\displaystyle{ \textstyle D \in PBD }[/math]. Then there is an algorithm which, given [math]\displaystyle{ \textstyle n }[/math], [math]\displaystyle{ \textstyle \epsilon \gt 0 }[/math], [math]\displaystyle{ \textstyle 0 \lt \delta \le 1 }[/math] and access to [math]\displaystyle{ \textstyle GEN(D) }[/math], finds a [math]\displaystyle{ \textstyle D' }[/math] such that [math]\displaystyle{ \textstyle \Pr[ d(D, D') \le \epsilon ] \ge 1 - \delta }[/math]. The sample complexity of this algorithm is [math]\displaystyle{ \textstyle \tilde{O}( ( 1 / \epsilon^3 ) \log (1 / \delta) ) }[/math] and the running time is [math]\displaystyle{ \textstyle \tilde{O}( (1 / \epsilon^3) \log n \log^2 (1 / \delta) ) }[/math].

Theorem

Let [math]\displaystyle{ \textstyle D \in PBD }[/math]. Then there is an algorithm which, given [math]\displaystyle{ \textstyle n }[/math], [math]\displaystyle{ \textstyle \epsilon \gt 0 }[/math], [math]\displaystyle{ \textstyle 0 \lt \delta \le 1 }[/math] and access to [math]\displaystyle{ \textstyle GEN(D) }[/math], finds a [math]\displaystyle{ \textstyle D' \in PBD }[/math] such that [math]\displaystyle{ \textstyle \Pr[ d(D, D') \le \epsilon ] \ge 1 - \delta }[/math]. The sample complexity of this algorithm is [math]\displaystyle{ \textstyle \tilde{O}( ( 1 / \epsilon^2 ) \log (1 / \delta) ) }[/math] and the running time is [math]\displaystyle{ \textstyle (1 / \epsilon)^{O(\log^2 (1 / \epsilon))} \tilde{O}( \log n \log (1 / \delta) ) }[/math].

A notable aspect of the above results is that the sample complexity of the learning algorithm doesn't depend on [math]\displaystyle{ \textstyle n }[/math], although the description of [math]\displaystyle{ \textstyle D }[/math] is linear in [math]\displaystyle{ \textstyle n }[/math]. Also the second result is almost optimal with respect to the sample complexity, because there is also a lower bound of [math]\displaystyle{ \textstyle \Omega(1 / \epsilon^2) }[/math].

The proofs use a small [math]\displaystyle{ \textstyle \epsilon- }[/math]cover of [math]\displaystyle{ \textstyle PBD }[/math], produced by Daskalakis and Papadimitriou,[6] in order to obtain these algorithms.

Learning Sums of Independent Integer Random Variables

Consider [math]\displaystyle{ \textstyle n }[/math] independent random variables [math]\displaystyle{ \textstyle X_1, \dots, X_n }[/math] each of which follows an arbitrary distribution with support [math]\displaystyle{ \textstyle \{0, 1, \dots, k - 1\} }[/math]. A [math]\displaystyle{ \textstyle k- }[/math]sum of independent integer random variables of order [math]\displaystyle{ \textstyle n }[/math] is the distribution of the sum [math]\displaystyle{ \textstyle X = \sum_i X_i }[/math]. For learning the class

[math]\displaystyle{ \textstyle k\text{-}SIIRV = \{ D : D \text{ is a } k\text{-sum of independent integer random variables} \} }[/math]

there is the following result.

Theorem

Let [math]\displaystyle{ \textstyle D \in k\text{-}SIIRV }[/math]. Then there is an algorithm which, given [math]\displaystyle{ \textstyle n }[/math], [math]\displaystyle{ \textstyle \epsilon \gt 0 }[/math], [math]\displaystyle{ \textstyle 0 \lt \delta \le 1 }[/math] and access to [math]\displaystyle{ \textstyle GEN(D) }[/math], finds a [math]\displaystyle{ \textstyle D' }[/math] such that [math]\displaystyle{ \textstyle \Pr[ d(D, D') \le \epsilon ] \ge 1 - \delta }[/math]. The sample complexity of this algorithm is [math]\displaystyle{ \textstyle \text{poly}(k / \epsilon) }[/math] and the running time is also [math]\displaystyle{ \textstyle \text{poly}(k / \epsilon) }[/math].

Again, the sample and the time complexity do not depend on [math]\displaystyle{ \textstyle n }[/math]. It is possible to conclude the same independence for the previous section by setting [math]\displaystyle{ \textstyle k = 2 }[/math].[7]
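A generator for a [math]\displaystyle{ \textstyle k- }[/math]SIIRV is a direct generalization of the Poisson binomial sampler above; a minimal sketch with hypothetical per-variable distributions:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 3, 4
# Hypothetical distributions: row i is the mass function of X_i over {0, ..., k-1}.
P = np.array([[0.2, 0.5, 0.3],
              [0.6, 0.3, 0.1],
              [0.1, 0.1, 0.8],
              [0.3, 0.4, 0.3]])

def gen_ksiirv():
    """GEN(D) for a k-SIIRV D: the sum of n independent {0,...,k-1}-valued draws."""
    return int(sum(rng.choice(k, p=P[i]) for i in range(n)))

samples = [gen_ksiirv() for _ in range(1000)]  # values in {0, ..., n*(k-1)}
```

Setting [math]\displaystyle{ \textstyle k = 2 }[/math] recovers the Poisson binomial case.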

Learning mixtures of Gaussians

Let the random variables [math]\displaystyle{ \textstyle X \sim N(\mu_1, \Sigma_1) }[/math] and [math]\displaystyle{ \textstyle Y \sim N(\mu_2, \Sigma_2) }[/math]. Define the random variable [math]\displaystyle{ \textstyle Z }[/math] which takes the same value as [math]\displaystyle{ \textstyle X }[/math] with probability [math]\displaystyle{ \textstyle w_1 }[/math] and the same value as [math]\displaystyle{ \textstyle Y }[/math] with probability [math]\displaystyle{ \textstyle w_2 = 1 - w_1 }[/math]. Then if [math]\displaystyle{ \textstyle F_1 }[/math] is the density of [math]\displaystyle{ \textstyle X }[/math] and [math]\displaystyle{ \textstyle F_2 }[/math] is the density of [math]\displaystyle{ \textstyle Y }[/math], the density of [math]\displaystyle{ \textstyle Z }[/math] is [math]\displaystyle{ \textstyle F = w_1 F_1 + w_2 F_2 }[/math]. In this case [math]\displaystyle{ \textstyle Z }[/math] is said to follow a mixture of Gaussians. Pearson[8] was the first to introduce the notion of a mixture of Gaussians, in his attempt to explain the probability distribution from which some data that he wanted to analyze had come. After doing a lot of calculations by hand, he finally fitted his data to a mixture of Gaussians. The learning task in this case is to determine the parameters of the mixture [math]\displaystyle{ \textstyle w_1, w_2, \mu_1, \mu_2, \Sigma_1, \Sigma_2 }[/math].

The first attempt to solve this problem was by Dasgupta.[9] In this work Dasgupta assumes that the two means of the Gaussians are far enough from each other, i.e. there is a lower bound on the distance [math]\displaystyle{ \textstyle ||\mu_1 - \mu_2|| }[/math]. Using this assumption Dasgupta, and many researchers after him, were able to learn the parameters of the mixture. The learning procedure starts with clustering the samples into two different clusters by minimizing some metric. Using the assumption that the means of the Gaussians are far away from each other, with high probability the samples in the first cluster correspond to samples from the first Gaussian and the samples in the second cluster to samples from the second one. Once the samples are partitioned, the [math]\displaystyle{ \textstyle \mu_i, \Sigma_i }[/math] can be computed from simple statistical estimators and [math]\displaystyle{ \textstyle w_i }[/math] from the relative sizes of the clusters.
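A simplified sketch of this cluster-then-estimate pipeline on synthetic data (plain 2-means clustering stands in for the more careful clustering step of the actual algorithms; all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic mixture of two well-separated spherical Gaussians in R^2.
w1, mu1, mu2 = 0.3, np.array([0.0, 0.0]), np.array([10.0, 10.0])
n_samples = 2000
from_first = rng.random(n_samples) < w1
Z = np.where(from_first[:, None],
             rng.normal(mu1, 1.0, (n_samples, 2)),
             rng.normal(mu2, 1.0, (n_samples, 2)))

# Step 1: cluster the samples with plain 2-means (a few Lloyd iterations).
centers = Z[rng.choice(n_samples, 2, replace=False)]
for _ in range(20):
    assign = np.linalg.norm(Z[:, None] - centers[None], axis=2).argmin(axis=1)
    centers = np.array([Z[assign == c].mean(axis=0) for c in (0, 1)])

# Step 2: estimate the parameters of each Gaussian from its cluster.
for c in (0, 1):
    cluster = Z[assign == c]
    w_hat = len(cluster) / n_samples           # mixing weight from cluster size
    mu_hat = cluster.mean(axis=0)              # empirical mean
    sigma_hat = np.cov(cluster, rowvar=False)  # empirical covariance
    print(w_hat, mu_hat, sigma_hat)
```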

If [math]\displaystyle{ \textstyle GM }[/math] is the set of all the mixtures of two Gaussians, using the above procedure theorems like the following can be proved.

Theorem [9]

Let [math]\displaystyle{ \textstyle D \in GM }[/math] with [math]\displaystyle{ \textstyle ||\mu_1 - \mu_2|| \ge c \sqrt{n \max (\lambda_{max}(\Sigma_1), \lambda_{max}(\Sigma_2))} }[/math], where [math]\displaystyle{ \textstyle c \gt 1/2 }[/math] and [math]\displaystyle{ \textstyle \lambda_{max}(A) }[/math] is the largest eigenvalue of [math]\displaystyle{ \textstyle A }[/math]. Then there is an algorithm which, given [math]\displaystyle{ \textstyle \epsilon \gt 0 }[/math], [math]\displaystyle{ \textstyle 0 \lt \delta \le 1 }[/math] and access to [math]\displaystyle{ \textstyle GEN(D) }[/math], finds an approximation [math]\displaystyle{ \textstyle w'_i, \mu'_i, \Sigma'_i }[/math] of the parameters such that [math]\displaystyle{ \textstyle \Pr[ ||w_i - w'_i|| \le \epsilon ] \ge 1 - \delta }[/math] (and respectively for [math]\displaystyle{ \textstyle \mu_i }[/math] and [math]\displaystyle{ \textstyle \Sigma_i }[/math]). The sample complexity of this algorithm is [math]\displaystyle{ \textstyle M = 2^{O(\log^2 ( 1 / ( \epsilon \delta ) ))} }[/math] and the running time is [math]\displaystyle{ \textstyle O(M^2 d + M d n) }[/math].

The above result can also be generalized to mixtures of [math]\displaystyle{ \textstyle k }[/math] Gaussians.[9]

For the case of a mixture of two Gaussians there are learning results without the assumption on the distance between their means, like the following one which uses the total variation distance as a distance measure.

Theorem [10]

Let [math]\displaystyle{ \textstyle F \in GM }[/math]. Then there is an algorithm which, given [math]\displaystyle{ \textstyle \epsilon \gt 0 }[/math], [math]\displaystyle{ \textstyle 0 \lt \delta \le 1 }[/math] and access to [math]\displaystyle{ \textstyle GEN(F) }[/math], finds [math]\displaystyle{ \textstyle w'_i, \mu'_i, \Sigma'_i }[/math] such that if [math]\displaystyle{ \textstyle F' = w'_1 F'_1 + w'_2 F'_2 }[/math], where [math]\displaystyle{ \textstyle F'_i = N(\mu'_i, \Sigma'_i) }[/math], then [math]\displaystyle{ \textstyle \Pr[ d(F, F') \le \epsilon ] \ge 1 - \delta }[/math]. The sample complexity and the running time of this algorithm are [math]\displaystyle{ \textstyle \text{poly}(n, 1 / \epsilon, 1 / \delta, 1 / w_1, 1 / w_2, 1 / d(F_1, F_2)) }[/math].

The distance between [math]\displaystyle{ \textstyle F_1 }[/math] and [math]\displaystyle{ \textstyle F_2 }[/math] affects only the sample complexity and the running time of the algorithm, not the quality of its result.[9][10]

References

  1. M. Kearns, Y. Mansour, D. Ron, R. Rubinfeld, R. Schapire, L. Sellie. On the Learnability of Discrete Distributions. ACM Symposium on Theory of Computing, 1994.
  2. L. Valiant. A Theory of the Learnable. Communications of the ACM, 1984.
  3. L. Rosasco, T. Poggio. "A Regularization Tour of Machine Learning — MIT-9.520 Lectures Notes". Manuscript, Dec. 2014.
  4. C. Daskalakis, G. Kamath. Faster and Sample Near-Optimal Algorithms for Proper Learning Mixtures of Gaussians. Annual Conference on Learning Theory, 2014.
  5. C. Daskalakis, I. Diakonikolas, R. Servedio. Learning Poisson Binomial Distributions. ACM Symposium on Theory of Computing, 2012.
  6. C. Daskalakis, C. Papadimitriou. Sparse Covers for Sums of Indicators. Probability Theory and Related Fields, 2014.
  7. C. Daskalakis, I. Diakonikolas, R. O'Donnell, R. Servedio, L. Tan. Learning Sums of Independent Integer Random Variables. IEEE Symposium on Foundations of Computer Science, 2013.
  8. K. Pearson. Contributions to the Mathematical Theory of Evolution. Philosophical Transactions of the Royal Society of London, 1894.
  9. S. Dasgupta. Learning Mixtures of Gaussians. IEEE Symposium on Foundations of Computer Science, 1999.
  10. A. Kalai, A. Moitra, G. Valiant. Efficiently Learning Mixtures of Two Gaussians. ACM Symposium on Theory of Computing, 2010.



