In the mathematical theory of stochastic processes, variable-order Markov (VOM) models are an important class of models that extend the well-known Markov chain models. In contrast to Markov chain models, where each random variable in a sequence with the Markov property depends on a fixed number of preceding random variables, in VOM models this number of conditioning random variables may vary based on the specific observed realization.
This realization sequence is often called the context; therefore, VOM models are also called context trees.[1] VOM models can be rendered graphically as probabilistic suffix trees (PSTs).[2] The flexibility in the number of conditioning random variables turns out to be a real advantage for many applications, such as statistical analysis, classification, and prediction.[3][4][5]
Consider, for example, a sequence of random variables, each of which takes a value from the ternary alphabet {a, b, c}. Specifically, consider the string constructed by infinitely concatenating the sub-string aaabc: aaabcaaabcaaabcaaabc…aaabc.
The VOM model of maximal order 2 can approximate the above string using only the following five conditional probability components: Pr(a | aa) = 0.5, Pr(b | aa) = 0.5, Pr(c | b) = 1.0, Pr(a | c) = 1.0, Pr(a | ca) = 1.0.
In this example, Pr(c | ab) = Pr(c | b) = 1.0; therefore, the shorter context b is sufficient to determine the next character. Similarly, the VOM model of maximal order 3 can generate the string exactly using only five conditional probability components, which are all equal to 1.0.
To construct the Markov chain of order 1 for the next character in that string, one must estimate the following 9 conditional probability components: Pr(a | a), Pr(a | b), Pr(a | c), Pr(b | a), Pr(b | b), Pr(b | c), Pr(c | a), Pr(c | b), Pr(c | c). To construct the Markov chain of order 2 for the next character, one must estimate the following 27 conditional probability components: Pr(a | aa), Pr(a | ab), …, Pr(c | cc). Finally, to construct the Markov chain of order 3 for the next character, one must estimate the following 81 conditional probability components: Pr(a | aaa), Pr(a | aab), …, Pr(c | ccc).
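In general, a Markov chain of order D over an alphabet of size [math]\displaystyle{ |A| }[/math] has [math]\displaystyle{ |A|^{D+1} }[/math] conditional probability components, one for each of the [math]\displaystyle{ |A|^{D} }[/math] possible contexts paired with each of the [math]\displaystyle{ |A| }[/math] possible next symbols. For the ternary alphabet above this gives [math]\displaystyle{ 3^{2}=9 }[/math], [math]\displaystyle{ 3^{3}=27 }[/math], and [math]\displaystyle{ 3^{4}=81 }[/math] components for orders 1, 2, and 3, respectively.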
In practical settings there is seldom sufficient data to accurately estimate the exponentially increasing number of conditional probability components as the order of the Markov chain increases.
The variable-order Markov model assumes that in realistic settings, there are certain realizations of states (represented by contexts) in which some past states are independent from the future states; accordingly, "a great reduction in the number of model parameters can be achieved."[1]
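This reduction can be checked empirically on the example string. The following Python sketch (an illustrative toy, not code from the cited references) counts next-symbol frequencies for every context of length 1 and 2 that actually occurs in the string and prints the resulting empirical conditional probabilities:

```python
from collections import defaultdict

# Training data: many repetitions of the sub-string "aaabc".
seq = "aaabc" * 1000

# Count next-symbol occurrences for every context of length 1 and 2.
counts = defaultdict(lambda: defaultdict(int))
for d in (1, 2):
    for i in range(d, len(seq)):
        counts[seq[i - d:i]][seq[i]] += 1

# Print the empirical conditional probabilities Pr(symbol | context).
for ctx in sorted(counts, key=lambda c: (len(c), c)):
    total = sum(counts[ctx].values())
    for sym, n in sorted(counts[ctx].items()):
        print(f"Pr({sym} | {ctx}) = {n / total:.2f}")
```

Running it recovers Pr(a | aa) = Pr(b | aa) = 0.50 and the deterministic components listed above, and shows that only four distinct length-2 contexts (aa, ab, bc, ca) ever occur in the string: a fixed order-2 model would have to estimate most of its 27 components from no data at all.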
Let A be a state space (finite alphabet) of size [math]\displaystyle{ |A| }[/math].
Consider a sequence with the Markov property [math]\displaystyle{ x_1^{n}=x_1x_2\dots x_n }[/math] of n realizations of random variables, where [math]\displaystyle{ x_i\in A }[/math] is the state (symbol) at position i [math]\displaystyle{ \scriptstyle (1 \le i \le n) }[/math], and the concatenation of states [math]\displaystyle{ x_i }[/math] and [math]\displaystyle{ x_{i+1} }[/math] is denoted by [math]\displaystyle{ x_ix_{i+1} }[/math].
Given a training set of observed states, [math]\displaystyle{ x_1^{n} }[/math], the construction algorithm of the VOM models[3][4][5] learns a model P that provides a probability assignment for each state in the sequence given its past (previously observed symbols) or future states.
Specifically, the learner generates a conditional probability distribution [math]\displaystyle{ P(x_i\mid s) }[/math] for a symbol [math]\displaystyle{ x_i \in A }[/math] given a context [math]\displaystyle{ s\in A^* }[/math], where the * sign represents a sequence of states of any length, including the empty context.
VOM models attempt to estimate conditional distributions of the form [math]\displaystyle{ P(x_i\mid s) }[/math] where the context length [math]\displaystyle{ |s| \le D }[/math] varies depending on the available statistics. In contrast, conventional Markov models attempt to estimate these conditional distributions by assuming a fixed context length [math]\displaystyle{ |s| = D }[/math] and, hence, can be considered special cases of VOM models.
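A minimal sketch of this scheme in Python (illustrative only; the published construction algorithms[3][4][5] additionally prune uninformative contexts and smooth the counts) stores counts for every context of length at most D and, at prediction time, falls back from the full context to successively shorter suffixes until one that was observed in training is found:

```python
from collections import defaultdict

class SimpleVOM:
    """Toy variable-order Markov model with maximal order D."""

    def __init__(self, max_order):
        self.D = max_order
        # counts[context][symbol]: how often `symbol` followed `context`
        self.counts = defaultdict(lambda: defaultdict(int))

    def fit(self, seq):
        # Record every context of length 0..D preceding each symbol.
        for i, sym in enumerate(seq):
            for d in range(min(self.D, i) + 1):
                self.counts[seq[i - d:i]][sym] += 1
        return self

    def prob(self, sym, context):
        # Use the longest observed suffix of `context`; the empty
        # context (plain symbol frequencies) is the final fallback.
        for d in range(min(self.D, len(context)), -1, -1):
            ctx = context[len(context) - d:]
            if ctx in self.counts:
                total = sum(self.counts[ctx].values())
                return self.counts[ctx][sym] / total
        return 0.0

model = SimpleVOM(max_order=2).fit("aaabc" * 1000)
print(model.prob("b", "aa"))  # 0.5, as in the example above
print(model.prob("c", "cb"))  # 1.0: unseen context cb falls back to b
```

The second query illustrates the variable-order behaviour: the length-2 context cb never occurs in the training string, so the model answers from the length-1 suffix b, in the spirit of the Pr(c | ab) = Pr(c | b) observation in the example above.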
Effectively, for a given training sequence, VOM models have been found to obtain better model parameterization than fixed-order Markov models, leading to a better bias-variance tradeoff of the learned models.[3][4][5]
Various efficient algorithms have been devised for estimating the parameters of the VOM model.[4]
VOM models have been successfully applied to areas such as machine learning, information theory and bioinformatics, including specific applications such as coding and data compression,[1] document compression,[4] classification and identification of DNA and protein sequences,[6][1][3] statistical process control,[5] spam filtering,[7] haplotyping,[8] speech recognition,[9] sequence analysis in social sciences,[2] and others.
Original source: https://en.wikipedia.org/wiki/Variable-order_Markov_model