Mixture of experts (MoE) is a machine learning technique where multiple expert networks (learners) are used to divide a problem space into homogeneous regions.[1] It differs from ensemble techniques in that for MoE, typically only one or a few expert models are run for each input, whereas in ensemble techniques, all models are run on every input.
In mixture of experts, we always have the following ingredients, but they are constructed and combined differently: experts [math]\displaystyle{ f_1, ..., f_n }[/math], each taking the same input [math]\displaystyle{ x }[/math] and producing outputs [math]\displaystyle{ f_1(x), ..., f_n(x) }[/math]; a weighting function (also known as a gating function) [math]\displaystyle{ w }[/math], which takes input [math]\displaystyle{ x }[/math] and produces a vector of weights [math]\displaystyle{ (w(x)_1, ..., w(x)_n) }[/math]; and a rule for combining the experts' outputs according to these weights into a single output.
Both the experts and the weighting function are trained by minimizing some form of loss function, generally by gradient descent. There is a lot of freedom in choosing the precise form of experts, the weighting function, and the loss function.
The meta-pi network, reported by Hampshire and Waibel,[2] uses [math]\displaystyle{ f(x) = \sum_i w(x)_i f_i(x) }[/math] as the output. The model is trained by performing gradient descent on the mean-squared error loss [math]\displaystyle{ L := \frac 1N \sum_k \|y_k - f(x_k)\|^2 }[/math]. The experts may be arbitrary functions.
In their original publication, they were solving the problem of classifying phonemes in a speech signal from 6 different Japanese speakers, 2 female and 4 male. They trained 6 experts, each being a "time-delayed neural network"[3] (essentially a multilayered convolution network over the mel spectrogram). They found that the resulting mixture of experts dedicated 5 experts to 5 of the speakers, but the 6th (male) speaker did not have a dedicated expert; instead, his voice was classified by a linear combination of the experts for the other 3 male speakers.
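A minimal sketch of this construction (hypothetical PyTorch code; the MLP experts, the linear gate, and names such as MetaPiMoE are illustrative placeholders rather than the original time-delayed networks):

```python
import torch
import torch.nn as nn

class MetaPiMoE(nn.Module):
    """Dense mixture: output is the weighted sum of all experts' outputs."""
    def __init__(self, dim_in, dim_out, n_experts, hidden=64):
        super().__init__()
        # Experts may be arbitrary functions; small MLPs are used here for illustration.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim_in, hidden), nn.ReLU(), nn.Linear(hidden, dim_out))
            for _ in range(n_experts)
        ])
        self.gate = nn.Linear(dim_in, n_experts)  # weighting function w(x)

    def forward(self, x):
        w = torch.softmax(self.gate(x), dim=-1)                   # (batch, n_experts)
        outs = torch.stack([e(x) for e in self.experts], dim=1)   # (batch, n_experts, dim_out)
        return (w.unsqueeze(-1) * outs).sum(dim=1)                # f(x) = sum_i w(x)_i f_i(x)

# Training on mean-squared error, as in the meta-pi network:
# loss = ((model(x_batch) - y_batch) ** 2).mean(); loss.backward(); optimizer.step()
```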
The adaptive mixtures of local experts[4][5] uses a Gaussian mixture model. Each expert simply predicts a Gaussian distribution and ignores the input entirely. Specifically, the [math]\displaystyle{ i }[/math]-th expert predicts that the output is [math]\displaystyle{ y \sim N(\mu_i, I) }[/math], where [math]\displaystyle{ \mu_i }[/math] is a learnable parameter. The weighting function is a linear-softmax function: [math]\displaystyle{ w(x)_i = \frac{e^{k_i^T x + b_i}}{\sum_j e^{k_j^T x + b_j}} }[/math] The mixture of experts predicts that the output has the log-probability density [math]\displaystyle{ f_\theta(y|x) = \ln\left[\sum_i \frac{e^{k_i^T x + b_i}}{\sum_j e^{k_j^T x + b_j}} N(y | \mu_i, I)\right] = \ln\left[(2\pi)^{-d/2} \sum_i \frac{e^{k_i^T x + b_i}}{\sum_j e^{k_j^T x + b_j}} e^{-\frac 12 \|y-\mu_i\|^2}\right] }[/math] It is trained by maximum likelihood estimation, that is, gradient ascent on [math]\displaystyle{ f_\theta(y|x) }[/math]. The gradient for the [math]\displaystyle{ i }[/math]-th expert is
[math]\displaystyle{ \nabla_{\mu_i} f_\theta(y|x) = \frac{w(x)_i N(y|\mu_i, I)}{\sum_j w(x)_j N(y|\mu_j, I)}\; (y-\mu_i) }[/math]
and the gradient for the weighting function is [math]\displaystyle{ \nabla_{[k_i, b_i]} f_\theta(y|x) = \begin{bmatrix}x\\ 1\end{bmatrix} \left( \frac{w(x)_i N(y|\mu_i, I)}{\sum_j w(x)_j N(y|\mu_j, I)} - w(x)_i \right) }[/math] that is, the posterior weight of expert [math]\displaystyle{ i }[/math] minus its prior weight [math]\displaystyle{ w(x)_i }[/math].
For each input-output pair [math]\displaystyle{ (x, y) }[/math], the weighting function is changed to increase the weight on all experts that performed above average, and decrease the weight on all experts that performed below average. This encourages the weighting function to learn to select only the experts that make the right predictions for each input.
The [math]\displaystyle{ i }[/math]-th expert is changed to make its prediction closer to [math]\displaystyle{ y }[/math], but the amount of change is proportional to [math]\displaystyle{ w(x)_i N(y|\mu_i, I) }[/math]. This has a Bayesian interpretation. Given input [math]\displaystyle{ x }[/math], the prior probability that expert [math]\displaystyle{ i }[/math] is the right one is [math]\displaystyle{ w(x)_i }[/math], and [math]\displaystyle{ N(y|\mu_i, I) }[/math] is the likelihood of evidence [math]\displaystyle{ y }[/math]. So, [math]\displaystyle{ \frac{w(x)_i N(y|\mu_i, I)}{\sum_j w(x)_j N(y|\mu_j, I)} }[/math] is the posterior probability for expert [math]\displaystyle{ i }[/math], and so the rate of change for the [math]\displaystyle{ i }[/math]-th expert is proportional to its posterior probability.
In words, the experts that, in hindsight, seemed like the right ones to consult are asked to learn from the example, while those that were not are left alone.
The combined effect is that the experts become specialized: suppose two experts are both good at predicting a certain kind of input, but one is slightly better. The weighting function eventually learns to favor the better one. After that happens, the lesser expert is unable to obtain a high gradient signal and becomes even worse at predicting that kind of input. Conversely, the lesser expert can become better at predicting other kinds of input, and is increasingly pulled away into another region. This has a positive feedback effect, causing each expert to move apart from the rest and take care of a local region alone (thus the name "local experts").
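A small numerical sketch of these updates for a single input-output pair (hypothetical NumPy code; the function names and learning rate are illustrative, and the gate update uses the posterior-minus-prior form of the gradient given above):

```python
import numpy as np

def gaussian_density(y, mu):
    """Density of N(mu, I) evaluated at y."""
    d = y.shape[-1]
    return (2 * np.pi) ** (-d / 2) * np.exp(-0.5 * np.sum((y - mu) ** 2, axis=-1))

def update(x, y, K, b, mus, lr=0.1):
    """One gradient-ascent step on the log-likelihood for a single (x, y) pair."""
    logits = K @ x + b                                    # gate logits k_i^T x + b_i
    w = np.exp(logits - logits.max()); w /= w.sum()       # prior weights w(x)_i
    dens = np.array([gaussian_density(y, mu) for mu in mus])   # N(y | mu_i, I)
    posterior = w * dens / np.sum(w * dens)               # posterior responsibility of expert i
    for i in range(len(mus)):
        mus[i] += lr * posterior[i] * (y - mus[i])        # expert update, scaled by its posterior
        K[i]   += lr * (posterior[i] - w[i]) * x          # gate update: posterior minus prior
        b[i]   += lr * (posterior[i] - w[i])
    return K, b, mus
```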
Hierarchical mixtures of experts[6][7] use multiple levels of gating in a tree. Each gating is a probability distribution over the next level of gatings, and the experts are at the leaf nodes of the tree. They are similar to decision trees.
For example, a 2-level hierarchical MoE would have a first-order gating function [math]\displaystyle{ w_i }[/math], second-order gating functions [math]\displaystyle{ w_{j|i} }[/math], and experts [math]\displaystyle{ f_{j|i} }[/math]. The total prediction is then [math]\displaystyle{ \sum_i w_i(x) \sum_j w_{j|i}(x) f_{j|i}(x) }[/math].
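An illustrative sketch of the two-level computation (hypothetical code; the gates and experts are assumed to be arbitrary callables supplied by the caller):

```python
import torch

def hierarchical_moe(x, top_gate, sub_gates, experts):
    """Two-level hierarchical MoE: sum_i w_i(x) sum_j w_{j|i}(x) f_{j|i}(x).

    top_gate:  module mapping x to logits over the first-level branches
    sub_gates: list of modules, one per branch, mapping x to logits over that branch's experts
    experts:   experts[i][j] is the j-th expert under branch i
    """
    w_top = torch.softmax(top_gate(x), dim=-1)          # w_i(x)
    out = 0.0
    for i, (gate_i, experts_i) in enumerate(zip(sub_gates, experts)):
        w_sub = torch.softmax(gate_i(x), dim=-1)        # w_{j|i}(x)
        for j, f_ij in enumerate(experts_i):
            out = out + w_top[..., i:i+1] * w_sub[..., j:j+1] * f_ij(x)
    return out
```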
Being similar to the Gaussian mixture model, the mixture of experts can also be trained by the expectation-maximization (EM) algorithm. Specifically, during the expectation step, the "burden" for explaining each data point is distributed over the experts, and during the maximization step, the experts are trained to improve the explanations for which they received a high burden, while the gate is trained to improve its burden assignment. This can converge faster than gradient ascent on the log-likelihood.[7][8]
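A sketch of one such EM-style pass for the Gaussian model above (hypothetical NumPy code; the gate's maximization step has no closed form here, so it is approximated by a single gradient step, a common generalized-EM simplification):

```python
import numpy as np

def em_step(X, Y, K, b, mus, lr=0.1):
    """X: (T, d_in) inputs, Y: (T, d_out) targets. One E-step and one (generalized) M-step."""
    logits = X @ K.T + b                                   # (T, n_experts)
    W = np.exp(logits - logits.max(axis=1, keepdims=True))
    W /= W.sum(axis=1, keepdims=True)                      # prior weights w(x_t)_i
    # Proportional to N(y_t | mu_i, I); the constant cancels after normalization.
    dens = np.exp(-0.5 * ((Y[:, None, :] - mus[None]) ** 2).sum(-1))
    R = W * dens
    R /= R.sum(axis=1, keepdims=True)                      # E-step: burdens (responsibilities)
    # M-step for the experts: responsibility-weighted mean of the targets.
    mus = (R.T @ Y) / R.sum(axis=0)[:, None]
    # Generalized M-step for the gate: one gradient step toward matching the burdens.
    K += lr * (R - W).T @ X / len(X)
    b += lr * (R - W).mean(axis=0)
    return K, b, mus
```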
The gating function is most often a softmax gate. Beyond that, [9] proposed using Gaussian distributions, and [8] proposed using exponential families.
Instead of performing a weighted sum of all the experts, in hard MoE[10] only the highest-ranked expert is chosen. That is, [math]\displaystyle{ f(x) = f_{\arg\max_i w_i(x)}(x) }[/math]. This can speed up training and inference.[11]
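A minimal sketch of hard routing (hypothetical PyTorch code; the gate and experts are assumed placeholder modules):

```python
import torch

def hard_moe(x, gate, experts):
    """f(x) = f_{argmax_i w_i(x)}(x): only the top-ranked expert runs for each input."""
    idx = gate(x).argmax(dim=-1)                        # (batch,) chosen expert per input
    chunks, positions = [], []
    for i, expert in enumerate(experts):
        pos = (idx == i).nonzero(as_tuple=True)[0]
        if pos.numel() > 0:
            chunks.append(expert(x[pos]))               # each expert only sees its own inputs
            positions.append(pos)
    out = torch.empty(x.shape[0], chunks[0].shape[-1], dtype=chunks[0].dtype, device=x.device)
    out[torch.cat(positions)] = torch.cat(chunks)       # scatter results back to batch order
    return out
```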
The experts can use more general forms of multivariate Gaussian distributions. For example, [6] proposed [math]\displaystyle{ f_i(y|x) = N(y | A_i x + b_i, \Sigma_i) }[/math], where [math]\displaystyle{ A_i, b_i, \Sigma_i }[/math] are learnable parameters. In words, each expert learns to do linear regression, with a learnable uncertainty estimate.
One can also use experts other than Gaussian distributions. For example, one can use the Laplace distribution[12] or Student's t-distribution.[13] For binary classification, logistic regression experts have also been proposed, with [math]\displaystyle{ f_i(y|x) = \begin{cases} \frac{1}{1+e^{\beta_i^T x + \beta_{i,0}}}, & y = 0 \\ 1-\frac{1}{1+e^{\beta_i^T x + \beta_{i,0}}}, & y= 1 \end{cases} }[/math] where [math]\displaystyle{ \beta_{i}, \beta_{i, 0} }[/math] are learnable parameters. This was later generalized to multi-class classification, with multinomial logistic regression experts.[14]
The previous section described MoE as it was used before the era of deep learning. After the advent of deep learning, MoE found applications in running the largest models, as a simple way to perform conditional computation: only parts of the model are used, with the parts chosen according to the input.[15]
The earliest paper applying MoE to deep learning[16] proposed using a different gating network at each layer of a deep neural network. Specifically, each gating is a linear-ReLU-linear-softmax network, and each expert is a linear-ReLU network.
The key design desideratum for MoE in deep learning is to reduce computing cost. Consequently, for each query, only a small subset of the experts should be queried. This makes MoE in deep learning different from classical MoE. In classical MoE, the output for each query is a weighted sum of all experts' outputs. In deep learning MoE, the output for each query can only involve a few experts' outputs. Consequently, the key design choice in MoE becomes routing: given a batch of queries, how to route the queries to the best experts.
The sparsely-gated MoE layer,[17] published by researchers from Google Brain, uses feedforward networks as experts, and linear-softmax gating. Similar to the previously proposed hard MoE, they achieve sparsity by a weighted sum of only the top-k experts, instead of the weighted sum of all of them. Specifically, in a MoE layer, there are feedforward networks [math]\displaystyle{ f_1, ..., f_n }[/math], and a gating network [math]\displaystyle{ w }[/math]. The gating network is defined by [math]\displaystyle{ w(x) = \mathrm{softmax}(\mathrm{top}_k(W x + \text{noise})) }[/math], where [math]\displaystyle{ \mathrm{top}_k }[/math] is a function that keeps the top-k entries of a vector the same, but sets all other entries to [math]\displaystyle{ -\infty }[/math]. The addition of noise helps with load balancing.
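A sketch of the noisy top-k gate (hypothetical code; the simple Gaussian noise used here is a simplification of the noisy gating in the original paper, which uses a learned, input-dependent noise scale):

```python
import torch

def topk_gate(x, W_gate, k, noise_std=1.0, training=True):
    """w(x) = softmax(top_k(W x + noise)): keep the top-k logits, set the rest to -inf."""
    logits = x @ W_gate.T
    if training:
        logits = logits + noise_std * torch.randn_like(logits)   # noise helps with load balancing
    topk_vals, topk_idx = logits.topk(k, dim=-1)
    masked = torch.full_like(logits, float('-inf'))
    masked.scatter_(-1, topk_idx, topk_vals)            # non-top-k entries become -inf
    return torch.softmax(masked, dim=-1)                # zero weight outside the top-k

# The MoE layer output is then sum_i w(x)_i f_i(x), where only the (at most k)
# experts with nonzero weight actually need to be evaluated.
```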
The choice of [math]\displaystyle{ k }[/math] is a hyperparameter that is chosen according to application. Typical values are [math]\displaystyle{ k = 1, 2 }[/math]. The [math]\displaystyle{ k = 1 }[/math] version is also called the Switch Transformer.[18]
As a demonstration, they trained a series of models for machine translation with alternating layers of MoE and LSTM, and compared them with deep LSTM models.[19] Their results show that the MoE models used less inference-time compute, despite having 30x more parameters.
Vanilla MoE tends to have load-balancing issues: some experts are consulted often, while other experts are consulted rarely or not at all. To encourage the gate to select each expert with equal frequency (proper load balancing) within each batch, each MoE layer has two auxiliary loss functions. This is improved by [18] into a single auxiliary loss function. Specifically, let [math]\displaystyle{ n }[/math] be the number of experts; then for a given batch of queries [math]\displaystyle{ \{x_1, x_2, ..., x_T\} }[/math], the auxiliary loss for the batch is [math]\displaystyle{ n\sum_{i=1}^n f_i P_i }[/math] Here, [math]\displaystyle{ f_i = \frac 1T \#(\text{queries sent to expert }i) }[/math] is the fraction of queries for which expert [math]\displaystyle{ i }[/math] is ranked highest, and [math]\displaystyle{ P_i = \frac 1T \sum_{j=1}^T w_i(x_j) }[/math] is the fraction of gate weight received by expert [math]\displaystyle{ i }[/math]. This loss is minimized at [math]\displaystyle{ 1 }[/math], precisely when every expert has equal weight [math]\displaystyle{ 1/n }[/math] in all situations.
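A sketch of this auxiliary loss, following the formula above (hypothetical code; the function name and tensor layout are illustrative):

```python
import torch

def load_balancing_loss(gate_probs):
    """gate_probs: (T, n) gating weights w_i(x_j) for a batch of T queries and n experts."""
    T, n = gate_probs.shape
    top1 = gate_probs.argmax(dim=-1)                    # expert ranked highest for each query
    f = torch.bincount(top1, minlength=n).float() / T   # fraction of queries sent to each expert
    P = gate_probs.mean(dim=0)                          # fraction of gate weight per expert
    # Note: f involves an argmax and carries no gradient; the gradient flows through P.
    return n * torch.sum(f * P)                         # minimized at 1 under perfect balance
```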
In sparsely-gated MoE, only the top-k experts are queried, and their outputs are combined via a weighted sum. There are other methods.[20]
In Hash MoE,[21] routing is performed deterministically by a hash function, fixed before learning begins. For example, if the model is a 4-layered Transformer, and the input is a token for the word "eat", and the hash of "eat" is [math]\displaystyle{ (1, 4, 2, 3) }[/math], then the token would be routed to the 1st expert in layer 1, the 4th expert in layer 2, etc. Despite its simplicity, it achieves performance competitive with sparsely gated MoE with [math]\displaystyle{ k = 1 }[/math].
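An illustrative sketch of hash routing (hypothetical code; the particular hash function and token encoding are arbitrary choices, not those of the cited paper):

```python
import hashlib

def hash_route(token, layer, n_experts):
    """Deterministically map a token to one expert per layer, fixed before training."""
    digest = hashlib.md5(f"{token}:{layer}".encode()).hexdigest()
    return int(digest, 16) % n_experts

# e.g. the token "eat" is always sent to the same expert in each layer:
# [hash_route("eat", layer, n_experts=8) for layer in range(4)]
```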
In soft MoE, suppose that in each batch each expert can process [math]\displaystyle{ p }[/math] queries; then there are [math]\displaystyle{ n\times p }[/math] query slots that can be assigned per batch. Now, for each batch of queries [math]\displaystyle{ \{x_1, x_2, ..., x_T\} }[/math], the soft MoE layer computes an array [math]\displaystyle{ w_{i, j, k} }[/math] such that [math]\displaystyle{ (w_{i, j, 1}, ..., w_{i, j, T}) }[/math] is a probability distribution over queries, and the [math]\displaystyle{ i }[/math]-th expert's [math]\displaystyle{ j }[/math]-th query is [math]\displaystyle{ \sum_k w_{i,j,k}x_k }[/math].[22] However, this does not work with autoregressive modelling, since the weights [math]\displaystyle{ w_{i, j, k} }[/math] for one token depend on all the other tokens.[23]
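A sketch of the soft dispatch step (hypothetical code; the slot-parameter matrix Phi and its use are assumptions consistent with the description above, and the combine step that mixes expert outputs back to the tokens is omitted):

```python
import torch

def soft_dispatch(X, Phi):
    """X: (T, d) queries; Phi: (d, n_experts * p) slot parameters (assumed learnable).

    Returns slot inputs of shape (n_experts * p, d): the (i, j)-th slot's input is
    sum_k w_{i,j,k} x_k, where (w_{i,j,1}, ..., w_{i,j,T}) is a distribution over queries.
    """
    logits = X @ Phi                          # (T, n_experts * p)
    dispatch = torch.softmax(logits, dim=0)   # normalize over the T queries, one column per slot
    return dispatch.T @ X                     # each slot is a convex combination of the queries
```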
Other approaches include formulating routing as a constrained linear programming problem,[24] making each expert choose the top-k queries it wants (instead of each query choosing the top-k experts for it),[25] and using reinforcement learning to train the routing algorithm (since picking an expert is a discrete action, as in RL).[26]
Suppose there are [math]\displaystyle{ n }[/math] experts in a layer. For a given batch of queries [math]\displaystyle{ \{x_1, x_2, ..., x_T\} }[/math], each query is routed to one or more experts. For example, if each query is routed to one expert as in Switch Transformers, and if the experts are load-balanced, then each expert should expect on average [math]\displaystyle{ T/n }[/math] queries in a batch. In practice, perfect load balancing cannot be expected: in some batches one expert might be underworked, while in others it is overworked.
Since the inputs cannot move through the layer until every expert in the layer has finished the queries it is assigned, load balancing is important. As a hard constraint on load balancing, there is the capacity factor: each expert is only allowed to process up to [math]\displaystyle{ c \cdot T/n }[/math] queries in a batch. [20] found [math]\displaystyle{ c \in [1.25, 2] }[/math] to work in practice.
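A sketch of capacity enforcement (hypothetical code; overflow handling varies between implementations; here excess queries are simply dropped and skip the expert computation):

```python
import math

def assign_with_capacity(expert_ids, n_experts, capacity_factor=1.25):
    """expert_ids: list of chosen expert indices, one per query in the batch.

    Each expert may process at most c * T / n queries; later queries that overflow
    an expert's capacity are dropped (marked as -1).
    """
    T = len(expert_ids)
    capacity = math.ceil(capacity_factor * T / n_experts)
    load = [0] * n_experts
    assignment = []
    for e in expert_ids:
        if load[e] < capacity:
            load[e] += 1
            assignment.append(e)
        else:
            assignment.append(-1)   # dropped: this query bypasses the expert computation
    return assignment
```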
MoE layers are used in very large Transformer models, for which learning and inferring over the full model is too costly. In Transformer models, the MoE layers are often used to select the feedforward layers (typically a linear-ReLU-linear network), which appear in each Transformer block after the multiheaded attention. This is because the feedforward layers take up an increasing portion of the computing cost as models grow larger. For example, in the PaLM 540B model, 90% of parameters are in its feedforward layers.[27]
A series of large language models from Google used MoE. GShard[28] uses MoE with up to top-2 experts per layer. Specifically, the top-1 expert is always selected, and the second-ranked expert is selected with probability proportional to that expert's weight according to the gating function. Later, GLaM[29] demonstrated a language model with 1.2 trillion parameters, each MoE layer using the top-2 out of 64 experts. Switch Transformers[18] use top-1 in all MoE layers.
The NLLB-200 by Meta AI is a machine translation model for 200 languages.[30] Each MoE layer uses a hierarchical MoE with two levels. On the first level, the gating function chooses to use either a "shared" feedforward layer, or to use the experts. If using the experts, then another gating function computes the weights and chooses the top-2 experts.[31]
MoE large language models can be adapted for downstream tasks by instruction tuning.[32]
Generally, MoE is used when dense models have become too costly. As of 2023, the largest models tend to be large language models. Outside of those, Vision MoE[33] is a Transformer model with MoE layers. The authors demonstrated it by training a model with 15 billion parameters.
In December 2023, the French startup Mistral AI released Mixtral 8x7B, an open-weight sparse mixture-of-experts (SMoE) model. It is licensed under Apache 2.0 and, according to the company's blog post, outperforms Llama 2 70B on most benchmarks with 6x faster inference, and also outperforms GPT-3.5.[34] Mixtral 8x7B is noted for its cost/performance trade-offs, its handling of multiple languages (English, French, Italian, German, and Spanish), and its strong code generation performance. The model is a decoder-only model with 46.7B total parameters, of which only 12.9B are used per token. Mixtral 8x7B is also available in an instruct version fine-tuned for instruction following.
Original source: https://en.wikipedia.org/wiki/Mixture of experts.