Context mixing is a type of data compression algorithm in which the next-symbol predictions of two or more statistical models are combined to yield a prediction that is often more accurate than any of the individual predictions. For example, one simple method (not necessarily the best) is to average the probabilities assigned by each model. The random forest is another method: it outputs the prediction that is the mode of the predictions made by the individual models. Combining models is an active area of research in machine learning.[citation needed] The PAQ series of data compression programs uses context mixing to assign probabilities to individual bits of the input.
Suppose that we are given two conditional probabilities, [math]\displaystyle{ P(X|A) }[/math] and [math]\displaystyle{ P(X|B) }[/math], and we wish to estimate [math]\displaystyle{ P(X|A,B) }[/math], the probability of event X given both conditions [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math]. There is insufficient information for probability theory to give a result. In fact, it is possible to construct scenarios in which the result could be anything at all. But intuitively, we would expect the result to be some kind of average of the two.
The problem is important for data compression. In this application, [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math] are contexts, [math]\displaystyle{ X }[/math] is the event that the next bit or symbol of the data to be compressed has a particular value, and [math]\displaystyle{ P(X|A) }[/math] and [math]\displaystyle{ P(X|B) }[/math] are the probability estimates by two independent models. The compression ratio depends on how closely the estimated probability approaches the true but unknown probability of event [math]\displaystyle{ X }[/math]. It is often the case that contexts [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math] have occurred often enough to accurately estimate [math]\displaystyle{ P(X|A) }[/math] and [math]\displaystyle{ P(X|B) }[/math] by counting occurrences of [math]\displaystyle{ X }[/math] in each context, but the two contexts either have not occurred together frequently, or there are insufficient computing resources (time and memory) to collect statistics for the combined case.
For example, suppose that we are compressing a text file. We wish to predict whether the next character will be a linefeed, given that the previous character was a period (context [math]\displaystyle{ A }[/math]) and that the last linefeed occurred 72 characters ago (context [math]\displaystyle{ B }[/math]). Suppose that a linefeed previously occurred after 1 of the last 5 periods ([math]\displaystyle{ P(X|A)=0.2 }[/math]) and in 5 out of the last 10 lines at column 72 ([math]\displaystyle{ P(X|B)=0.5 }[/math]). How should these predictions be combined?
Two general approaches have been used: linear and logistic mixing. Linear mixing uses a weighted average of the predictions, weighted by evidence. In this example, [math]\displaystyle{ P(X|B) }[/math] gets more weight than [math]\displaystyle{ P(X|A) }[/math] because [math]\displaystyle{ P(X|B) }[/math] is based on a greater number of tests. Older versions of PAQ used this approach.[1] Newer versions use logistic (or neural network) mixing by first transforming the predictions into the logistic domain, [math]\displaystyle{ \log(p/(1-p)) }[/math], before averaging.[2] This effectively gives greater weight to predictions near 0 or 1, in this case [math]\displaystyle{ P(X|A) }[/math]. In both cases, additional weights may be given to each of the input models and adapted to favor the models that have given the most accurate predictions in the past. All but the oldest versions of PAQ use adaptive weighting.
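As a rough illustration of the difference (a sketch, not code from any PAQ version; the helper names are mine), the snippet below combines the two estimates from the example above both ways: a linear average weighted by the number of observations behind each estimate, and an equal-weight average in the logistic domain.

```python
import math

def stretch(p):
    """Map a probability into the logistic domain: ln(p / (1 - p))."""
    return math.log(p / (1 - p))

def squash(x):
    """Inverse of stretch: 1 / (1 + e^-x)."""
    return 1 / (1 + math.exp(-x))

# Estimates from the example: P(X|A) = 0.2 from 5 periods, P(X|B) = 0.5 from 10 lines.
estimates = [(0.2, 5), (0.5, 10)]

# Linear mixing: weight each prediction by the evidence (observation count) behind it.
total = sum(n for _, n in estimates)
linear = sum(p * n for p, n in estimates) / total                            # 0.4

# Logistic mixing with equal weights: average in the stretched domain, then squash.
logistic = squash(sum(stretch(p) for p, _ in estimates) / len(estimates))    # ~0.33

print(f"linear mix: {linear:.2f}, logistic mix: {logistic:.2f}")
```

The linear mix gives 0.4, while the logistic mix gives about 0.33, pulled toward the more extreme estimate of 0.2, consistent with the behavior described above.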
Most context mixing compressors predict one bit of input at a time. The output probability is simply the probability that the next bit will be a 1.
We are given a set of predictions [math]\displaystyle{ P_i(1) = n_{1i}/n_i }[/math], where [math]\displaystyle{ n_i = n_{0i} + n_{1i} }[/math], and [math]\displaystyle{ n_{0i} }[/math] and [math]\displaystyle{ n_{1i} }[/math] are the counts of 0 and 1 bits respectively for the i'th model. The probabilities are computed by weighted addition of the 0 and 1 counts:

[math]\displaystyle{ S_0 = \sum_i w_i n_{0i} }[/math] (weighted evidence for 0)
[math]\displaystyle{ S_1 = \sum_i w_i n_{1i} }[/math] (weighted evidence for 1)
[math]\displaystyle{ S = S_0 + S_1 }[/math] (total evidence)
[math]\displaystyle{ P(1) = S_1/S }[/math] (probability that the next bit is a 1)
The weights [math]\displaystyle{ w_i }[/math] are initially equal and always sum to 1. Under the initial conditions, each model is weighted in proportion to evidence. The weights are then adjusted to favor the more accurate models. Suppose we are given that the actual bit being predicted is [math]\displaystyle{ y }[/math] (0 or 1). Then the weight adjustment is:

[math]\displaystyle{ \text{error} = y \cdot S - S_1 }[/math]
[math]\displaystyle{ w_i \leftarrow \max[0,\, w_i + (S \cdot n_{1i} - S_1 \cdot n_i) \cdot \text{error} / (S_0 \cdot S_1)] }[/math]

which moves each weight in the direction that reduces the coding cost of [math]\displaystyle{ y }[/math], while keeping the weights non-negative.
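A minimal sketch of this scheme (not code from PAQ; the small constant EPS that keeps the sums positive and the step size `rate` are illustrative additions, not part of the formulas above):

```python
EPS = 0.5   # small additive constant so S0, S1 stay positive (illustrative value)

def mix_linear(counts, weights):
    """counts: one (n0, n1) pair per model; weights: one weight per model.
    Returns (P(1), S0, S1) from the weighted count sums described above."""
    s0 = EPS + sum(w * n0 for (n0, _), w in zip(counts, weights))
    s1 = EPS + sum(w * n1 for (_, n1), w in zip(counts, weights))
    return s1 / (s0 + s1), s0, s1

def adjust_weights(counts, weights, s0, s1, y, rate=0.001):
    """Move each weight toward lower coding cost for the actual bit y,
    keeping weights non-negative; `rate` is an illustrative step size."""
    s = s0 + s1
    error = y * s - s1
    new_w = []
    for (n0, n1), w in zip(counts, weights):
        n = n0 + n1
        new_w.append(max(0.0, w + rate * (s * n1 - s1 * n) * error / (s0 * s1)))
    return new_w

# Example: two models with counts (1 one out of 5) and (5 ones out of 10), equal weights.
p1, s0, s1 = mix_linear([(4, 1), (5, 5)], [0.5, 0.5])
weights = adjust_weights([(4, 1), (5, 5)], [0.5, 0.5], s0, s1, y=1)
```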
Compression can be improved by bounding ni so that the model weighting is better balanced. In PAQ6, whenever one of the bit counts is incremented, the part of the other count that exceeds 2 is halved. For example, after the sequence 000000001, the counts would go from (n0, n1) = (8, 0) to (5, 1).
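A sketch of that count-update rule (the function name is mine, not PAQ6's):

```python
def update_counts(n0, n1, bit):
    """PAQ6-style bounded count update as described above: increment the count
    of the observed bit; halve the part of the other count that exceeds 2."""
    if bit:
        n1 += 1
        if n0 > 2:
            n0 = 2 + (n0 - 2) // 2
    else:
        n0 += 1
        if n1 > 2:
            n1 = 2 + (n1 - 2) // 2
    return n0, n1

# After the sequence 000000001 the counts go (0, 0) -> (8, 0) -> (5, 1), as in the example.
n0, n1 = 0, 0
for b in [0] * 8 + [1]:
    n0, n1 = update_counts(n0, n1, b)
assert (n0, n1) == (5, 1)
```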
Let [math]\displaystyle{ P_i(1) }[/math] be the prediction by the i'th model that the next bit will be a 1. Then the final prediction [math]\displaystyle{ P(1) }[/math] is calculated:

[math]\displaystyle{ x_i = \text{stretch}(P_i(1)) }[/math]
[math]\displaystyle{ P(1) = \text{squash}(\textstyle\sum_i w_i x_i) }[/math]
where [math]\displaystyle{ P(1) }[/math] is the probability that the next bit will be a 1, [math]\displaystyle{ P_i(1) }[/math] is the probability estimated by the i'th model, and

[math]\displaystyle{ \text{stretch}(x) = \ln(x/(1-x)) }[/math]
[math]\displaystyle{ \text{squash}(x) = 1/(1+e^{-x}) }[/math] (the inverse of stretch).
After each prediction, the model is updated by adjusting the weights to minimize coding cost:

[math]\displaystyle{ w_i \leftarrow w_i + \eta x_i (y - P(1)) }[/math]
where [math]\displaystyle{ \eta }[/math] is the learning rate (typically 0.002 to 0.01), [math]\displaystyle{ y }[/math] is the actual value of the bit being predicted, and [math]\displaystyle{ (y - P(1)) }[/math] is the prediction error.
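A compact sketch of such a logistic mixer, assuming the stretch/squash definitions above (the class name, the clamping constant, and the default learning rate are illustrative, not taken from any PAQ release):

```python
import math

class LogisticMixer:
    """Sketch of logistic mixing: mix stretched predictions with adaptive weights."""

    def __init__(self, n_models, rate=0.005):
        self.w = [0.0] * n_models   # adaptive weight per model
        self.rate = rate            # learning rate eta (typically 0.002 to 0.01)
        self.x = [0.0] * n_models   # last stretched inputs, kept for the update

    def predict(self, probs):
        """probs: the models' P_i(1). Returns the mixed P(1)."""
        # Clamp so that stretch() stays finite (a safeguard, not part of the formula).
        ps = [min(max(p, 1e-6), 1 - 1e-6) for p in probs]
        self.x = [math.log(p / (1 - p)) for p in ps]                  # stretch
        t = sum(w * x for w, x in zip(self.w, self.x))
        return 1 / (1 + math.exp(-t))                                 # squash

    def update(self, p1, y):
        """Adjust weights to reduce coding cost: w_i += eta * x_i * (y - P(1))."""
        err = y - p1
        self.w = [w + self.rate * x * err for w, x in zip(self.w, self.x)]
```

A compressor would call `predict` once per bit, pass the result to an arithmetic coder, and then call `update` with the bit that was actually coded.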
All versions below use logistic mixing unless otherwise indicated.