Dilution (neural networks)


Dilution and dropout (also called DropConnect[1]) are regularization techniques for reducing overfitting in artificial neural networks by preventing complex co-adaptations on training data. They are an efficient way of performing model averaging with neural networks.[2] Dilution refers to thinning weights,[3] while dropout refers to randomly "dropping out", or omitting, units (both hidden and visible) during the training process of a neural network.[4][5][2] Both trigger the same type of regularization.

Types and uses

Dilution is usually split into weak dilution and strong dilution. Weak dilution describes the process in which the finite fraction of removed connections is small, and strong dilution refers to when this fraction is large. There is no sharp boundary between the two; the distinction usually follows the precedent of a specific use case and has implications for how exact solutions are obtained.

Sometimes dilution is used for adding damping noise to the inputs. In that case, weak dilution refers to adding a small amount of damping noise, while strong dilution refers to adding a greater amount of damping noise. Both can be rewritten as variants of weight dilution.

These techniques are also sometimes referred to as random pruning of weights, but pruning is usually a non-recurring, one-way operation: the network is pruned, and the result is kept if it improves on the previous model. Dilution and dropout, by contrast, are iterative; the random removal is reapplied throughout training, and the network continues to learn after the technique is applied, whereas pruning typically does not imply further learning.

Generalized linear network

The output from a layer of linear nodes in an artificial neural network can be described as

[math]\displaystyle{ y_i = \sum_j w_{ij} x_j }[/math]        (1)

  • [math]\displaystyle{ y_i }[/math] – output from node [math]\displaystyle{ i }[/math]
  • [math]\displaystyle{ w_{ij} }[/math] – real weight before dilution, also called the Hebb connection strength
  • [math]\displaystyle{ x_j }[/math] – input from node [math]\displaystyle{ j }[/math]

This can be written in vector notation as

[math]\displaystyle{ \mathbf{y} = \mathbf{W} \mathbf{x} }[/math]        (2)

  • [math]\displaystyle{ \mathbf{y} }[/math] – output vector
  • [math]\displaystyle{ \mathbf{W} }[/math] – weight matrix
  • [math]\displaystyle{ \mathbf{x} }[/math] – input vector

Equations (1) and (2) are used in the subsequent sections.
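
As a concrete illustration, the following minimal sketch (in Python with NumPy; the layer sizes and values are illustrative assumptions, not taken from any reference) computes the output of such a layer both weight by weight, as in equation (1), and in vector notation, as in equation (2), and checks that the two agree.

import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 4, 3                  # hypothetical layer sizes
W = rng.normal(size=(n_out, n_in))  # weight matrix W with entries w_ij
x = rng.normal(size=n_in)           # input vector x

# Equation (1), node by node: y_i = sum_j w_ij * x_j
y_elementwise = np.array([sum(W[i, j] * x[j] for j in range(n_in))
                          for i in range(n_out)])

# Equation (2), vector notation: y = W x
y_vector = W @ x

assert np.allclose(y_elementwise, y_vector)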

Weak dilution

During weak dilution, the finite fraction of removed connections (the weights) is small, giving rise to only a small uncertainty. This limiting case can be solved exactly with mean field theory. In weak dilution the impact on the weights can be described as

[math]\displaystyle{ \hat{w_{ij}} = \begin{cases} w_{ij}, & \mbox{with } P(c) \\ 0, & \mbox{otherwise} \end{cases} }[/math]        (3)

  • [math]\displaystyle{ \hat{w_{ij}} }[/math] – diluted weight
  • [math]\displaystyle{ w_{ij} }[/math] – real weight before dilution
  • [math]\displaystyle{ P(c) }[/math] – the probability [math]\displaystyle{ c }[/math] of keeping a weight

The probability [math]\displaystyle{ P(c) }[/math] can also be defined as the probability of pruning a weight instead of keeping it.

In vector notation this can be written as

[math]\displaystyle{ \hat{\mathbf{W}} = \operatorname{g} \left ( \mathbf{W}, c \right ) }[/math]        (4)

where the function [math]\displaystyle{ \operatorname{g} ( \cdot ) }[/math] imposes the previous dilution.
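
A minimal sketch of equations (3) and (4), again in Python with NumPy and with an assumed keep probability, implements g(W, c) as an independent Bernoulli mask over the individual weights; the function name and values are illustrative only.

import numpy as np

rng = np.random.default_rng(0)

def dilute_weights(W, c):
    """Equation (4): keep each weight w_ij with probability c, zero it otherwise."""
    mask = rng.random(W.shape) < c      # True (keep) with probability c
    return W * mask

W = rng.normal(size=(3, 4))             # weights before dilution
c = 0.9                                 # weak dilution: only a small fraction removed
W_hat = dilute_weights(W, c)            # diluted weights, as in equation (3)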

In weak dilution only a small and fixed fraction of the weights is diluted. As the number of terms in the sum (the weights feeding each node) goes to infinity, the number of remaining terms is still infinite (because the removed fraction is fixed), so mean field theory can be applied. In the notation of Hertz et al.[3] this is written as

[math]\displaystyle{ \left \langle h_i \right \rangle = c \sum_j w_{ij} \left \langle S_j \right \rangle }[/math]        (5)

  • [math]\displaystyle{ \left \langle h_i \right \rangle }[/math] – the mean field at node [math]\displaystyle{ i }[/math] (the thermal average of its local field)
  • [math]\displaystyle{ c }[/math] – the probability of keeping a weight, which enters as a scaling factor on the mean field
  • [math]\displaystyle{ w_{ij} }[/math] – real weight before dilution, also called the Hebb connection strength
  • [math]\displaystyle{ \left \langle S_j \right \rangle }[/math] – the mean stable equilibrium state of node [math]\displaystyle{ j }[/math]

There are some assumptions for this to hold, which are not listed here.[6][7]
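
As a rough numerical illustration of the scaling in equation (5), and not a substitute for the mean field assumptions cited above, the diluted sum averaged over many independent dilutions approaches c times the undiluted sum; the sizes and keep probability below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

n = 1000
w = rng.normal(size=n)              # weights w_ij feeding one node i
s = rng.normal(size=n)              # states S_j of the presynaptic nodes
c = 0.8                             # probability of keeping a weight

undiluted = w @ s
samples = [(w * (rng.random(n) < c)) @ s for _ in range(2000)]

# The empirical mean of the diluted field is close to c times the undiluted sum.
print(np.mean(samples), c * undiluted)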

Strong dilution

When the dilution is strong, the finite fraction of removed connections (the weights) is large, giving rise to a huge uncertainty.

Dropout

Dropout is a special case of the previous weight equation (3), adjusted so that a whole row of the weight matrix is removed at once, rather than only individual random weights

[math]\displaystyle{ \hat{\mathbf{w}_{j}} = \begin{cases} \mathbf{w}_{j}, & \mbox{with } P(c) \\ \mathbf{0}, & \mbox{otherwise} \end{cases} }[/math]        (6)

  • [math]\displaystyle{ P(c) }[/math] – the probability [math]\displaystyle{ c }[/math] of keeping a row of the weight matrix
  • [math]\displaystyle{ \mathbf{w}_{j} }[/math] – real row in the weight matrix before dropout
  • [math]\displaystyle{ \hat{\mathbf{w}_{j}} }[/math] – diluted row in the weight matrix

Because dropout removes a whole row of the weight matrix at once, the previously mentioned (unlisted) assumptions for weak dilution, and therefore the use of mean field theory, are not applicable.
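
A minimal sketch of equation (6), assuming (as in the article's wording) that the weights attached to unit j are stored as row j of the weight matrix, draws one keep-or-drop decision per row instead of one per individual weight; names and values are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def dropout_rows(W, c):
    """Equation (6): keep each whole row w_j with probability c, zero it otherwise."""
    keep = rng.random(W.shape[0]) < c   # one decision per row (per unit)
    return W * keep[:, None]            # broadcast the decision across the row

W = rng.normal(size=(4, 3))
W_hat = dropout_rows(W, c=0.5)          # roughly half of the rows are zeroed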

The process by which a node is driven to zero, whether by setting its weights to zero, by “removing the node”, or by some other means, does not affect the end result and does not create a new and unique case. If the neural net is processed by a high-performance digital array multiplier, it is likely more efficient to drive the value to zero late in the processing graph. If the net is processed by a constrained processor, perhaps even an analog neuromorphic processor, a more power-efficient solution is likely to drive the value to zero early in the processing graph.
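
The sketch below illustrates this equivalence under the y = Wx layout of equation (2): masking a unit's value before the multiplication (“early”) and masking all the weights that multiply that value (“late”) give the same output, so the choice is an implementation detail; the sizes and keep probability are again illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

W = rng.normal(size=(3, 4))
x = rng.normal(size=4)
keep = rng.random(4) < 0.5          # one keep/drop decision per input unit

early = W @ (x * keep)              # drive the dropped units' values to zero first
late = (W * keep[None, :]) @ x      # or zero every weight that multiplies them

assert np.allclose(early, late)     # identical results either way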

Google's patent

Although there have been earlier examples of randomly removing connections between neurons in a neural network to improve models,[3] the technique was first introduced under the name dropout by Geoffrey Hinton et al. in 2012.[2] Google currently holds the patent for the dropout technique.[8][note 1]

Notes

  1. The patent is most likely not valid due to prior art. “Dropout” has been described as “dilution” in earlier publications. It is described by Hertz, Krogh, and Palmer in Introduction to the Theory of Neural Computation (1991), ISBN 0-201-51560-1, p. 45, under “Weak Dilution”. The text references Sompolinsky, “The Theory of Neural Networks: The Hebb Rule and Beyond”, in Heidelberg Colloquium on Glassy Dynamics (1987), and Canning and Gardner, “Partially Connected Models of Neural Networks”, in Journal of Physics A (1988). It goes on to describe strong dilution. This predates Hinton's paper.

References

  1. Wan, Li; Zeiler, Matthew; Zhang, Sixin; Le Cun, Yann; Fergus, Rob (2013). "Regularization of Neural Networks using DropConnect". Proceedings of the 30th International Conference on Machine Learning, PMLR 28 (3): 1058–1066. https://proceedings.mlr.press/v28/wan13.html. 
  2. Hinton, Geoffrey E.; Srivastava, Nitish; Krizhevsky, Alex; Sutskever, Ilya; Salakhutdinov, Ruslan R. (2012). "Improving neural networks by preventing co-adaptation of feature detectors". arXiv:1207.0580 [cs.NE].
  3. Hertz, John; Krogh, Anders; Palmer, Richard (1991). Introduction to the Theory of Neural Computation. Redwood City, California: Addison-Wesley Pub. Co. pp. 45–46. ISBN 0-201-51560-1.
  4. "Dropout: A Simple Way to Prevent Neural Networks from Overfitting". http://jmlr.org/papers/v15/srivastava14a.html. Retrieved July 26, 2015. 
  5. Warde-Farley, David; Goodfellow, Ian J.; Courville, Aaron; Bengio, Yoshua (2013-12-20). "An empirical analysis of dropout in piecewise linear networks". arXiv:1312.6197 [stat.ML].
  6. Sompolinsky, H. (1987). "The theory of neural networks: The Hebb rule and beyond". Heidelberg Colloquium on Glassy Dynamics, Lecture Notes in Physics 275. Berlin, Heidelberg: Springer. pp. 485–527. doi:10.1007/bfb0057531. ISBN 978-3-540-17777-7. Bibcode: 1987LNP...275..485S.
  7. Canning, A.; Gardner, E. (1988-08-07). "Partially connected models of neural networks". Journal of Physics A: Mathematical and General 21 (15): 3275–3284. doi:10.1088/0305-4470/21/15/016. ISSN 0305-4470. Bibcode: 1988JPhA...21.3275C.
  8. , Geoffrey E."System and method for addressing overfitting in a neural network" US patent 9406017B2, published 2016-08-02, issued 2016-08-02



