Dropout (Neural Networks)

Dropout is a regularization technique, patented by Google,[1] for reducing overfitting in neural networks by preventing complex co-adaptations on training data. The term "dropout" refers to randomly dropping out units (both hidden and visible), along with their connections, during training.[3][4] Because each training step effectively samples a different "thinned" sub-network, and the single full network used at test time approximates an average over this ensemble, dropout is also a very efficient way of performing model averaging with neural networks.[2]
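
As a minimal sketch of the mechanism (not the implementation of any particular library), the following NumPy function applies the common "inverted" formulation of dropout, in which surviving activations are scaled by 1/(1 − p) during training so that expected activations are unchanged and no rescaling is needed at test time. The function name and the default rate p = 0.5 are illustrative:

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    """Zero each unit of x independently with probability p during training.

    "Inverted" dropout: surviving units are scaled by 1/(1-p) so the
    expected activation is unchanged and inference needs no rescaling.
    """
    if not training or p == 0.0:
        return x                        # dropout is a no-op at test time
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p     # keep each unit with probability 1-p
    return np.where(mask, x / (1.0 - p), 0.0)

# Example: roughly half the activations are zeroed, the rest doubled.
h = np.ones((2, 4))
print(dropout(h, p=0.5))
```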

See also

  • AlexNet
  • Convolutional neural network § Dropout

References

  1. "System and method for addressing overfitting in a neural network" patent
  2. Hinton, Geoffrey E.; Srivastava, Nitish; Krizhevsky, Alex; Sutskever, Ilya; Salakhutdinov, Ruslan R. (2012). "Improving neural networks by preventing co-adaptation of feature detectors". arXiv:1207.0580 [cs.NE].
  3. "Dropout: A Simple Way to Prevent Neural Networks from Overfitting". http://jmlr.org/papers/v15/srivastava14a.html. Retrieved July 26, 2015. 
  4. Warde-Farley, David; Goodfellow, Ian J.; Courville, Aaron; Bengio, Yoshua (2013-12-20). "An empirical analysis of dropout in piecewise linear networks". arXiv:1312.6197 [stat.ML].




