In machine learning, the perceptron (or McCulloch–Pitts neuron) is an algorithm for supervised learning of binary classifiers. A binary classifier is a function which can decide whether or not an input, represented by a vector of numbers, belongs to some specific class.[1] It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector.
The perceptron was invented in 1943 by Warren McCulloch and Walter Pitts.[5] The first hardware implementation was the Mark I Perceptron machine, built in 1957 at the Cornell Aeronautical Laboratory by Frank Rosenblatt,[6] funded by the Information Systems Branch of the United States Office of Naval Research and the Rome Air Development Center. It was first publicly demonstrated on 23 June 1960.[7] The machine was "part of a previously secret four-year NPIC [the US' National Photographic Interpretation Center] effort from 1963 through 1966 to develop this algorithm into a useful tool for photo-interpreters".[8]
Rosenblatt described the details of the perceptron in a 1958 paper.[9] His organization of a perceptron is constructed of three kinds of cells ("units"): AI, AII, and R, which stand for "projection", "association", and "response" respectively.
Rosenblatt's project was funded under Contract Nonr-401(40) "Cognitive Systems Research Program", which lasted from 1959 to 1970,[10] and Contract Nonr-2381(00) "Project PARA" ("PARA" means "Perceiving and Recognition Automata"), which lasted from 1957[6] to 1963.[11]
The perceptron was intended to be a machine, rather than a program, and while its first implementation was in software for the IBM 704, it was subsequently implemented in custom-built hardware as the "Mark I perceptron" with the project name "Project PARA",[12] designed for image recognition. The machine is currently in the Smithsonian National Museum of American History.[13]
The Mark I Perceptron has three layers: an array of 400 photocells arranged in a 20×20 grid, named sensory units (S-units); a layer of 512 association units (A-units); and a layer of 8 response units (R-units).
Rosenblatt called this three-layered perceptron network the alpha-perceptron, to distinguish it from other perceptron models he experimented with.[7]
The S-units are connected to the A-units randomly (according to a table of random numbers) via a plugboard, to "eliminate any particular intentional bias in the perceptron". The connection weights are fixed, not learned. Rosenblatt was adamant about the random connections, as he believed the retina was randomly connected to the visual cortex, and he wanted his perceptron machine to resemble human visual perception.[14]
The A-units are connected to the R-units with adjustable weights encoded in potentiometers; weight updates during learning were performed by electric motors.[2]:193 The hardware details are in an operators' manual.[12]
In a 1958 press conference organized by the US Navy, Rosenblatt made statements about the perceptron that caused a heated controversy among the fledgling AI community; based on Rosenblatt's statements, The New York Times reported the perceptron to be "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence."[15]
Rosenblatt described his experiments with many variants of the Perceptron machine in his book Principles of Neurodynamics (1962). The book is a published version of the 1961 report.[16]
Among the variants are:

- "cross-coupling" (connections between units within the same layer), possibly with closed loops,
- "back-coupling" (connections from units in a later layer to units in an earlier layer),
- four-layer perceptrons in which the last two layers have adjustable weights (and thus proper multilayer perceptrons).
The machine was shipped from Cornell to the Smithsonian in 1967, under a government transfer administered by the Office of Naval Research.[8]
Although the perceptron initially seemed promising, it was quickly proved that perceptrons could not be trained to recognise many classes of patterns. This caused the field of neural network research to stagnate for many years, before it was recognised that a feedforward neural network with two or more layers (also called a multilayer perceptron) had greater processing power than perceptrons with one layer (also called a single-layer perceptron).
Single-layer perceptrons are only capable of learning linearly separable patterns.[17] For a classification task with some step activation function, a single node will have a single line dividing the data points forming the patterns. More nodes can create more dividing lines, but those lines must somehow be combined to form more complex classifications. A second layer of perceptrons, or even linear nodes, is sufficient to solve many otherwise non-separable problems.
In 1969, a famous book entitled Perceptrons by Marvin Minsky and Seymour Papert showed that it was impossible for these classes of network to learn an XOR function. It is often believed (incorrectly) that they also conjectured that a similar result would hold for a multi-layer perceptron network. However, this is not true, as both Minsky and Papert already knew that multi-layer perceptrons were capable of producing an XOR function. (See the page on Perceptrons (book) for more information.) Nevertheless, the often-miscited Minsky/Papert text caused a significant decline in interest and funding of neural network research. It took ten more years until neural network research experienced a resurgence in the 1980s.[17] This text was reprinted in 1987 as "Perceptrons - Expanded Edition" where some errors in the original text are shown and corrected.
Rosenblatt continued working on perceptrons despite diminishing funding. The last attempt was Tobermory, built between 1961 and 1967 for speech recognition.[18] It occupied an entire room.[19] It had 4 layers with 12,000 weights implemented by toroidal magnetic cores. By the time of its completion, simulation on digital computers had become faster than purpose-built perceptron machines.[20] He died in a boating accident in 1971.
The kernel perceptron algorithm was already introduced in 1964 by Aizerman et al.[21] Margin bounds guarantees were given for the Perceptron algorithm in the general non-separable case first by Freund and Schapire (1998),[1] and more recently by Mohri and Rostamizadeh (2013) who extend previous results and give new L1 bounds.[22]
The perceptron is a simplified model of a biological neuron. While the complexity of biological neuron models is often required to fully understand neural behavior, research suggests a perceptron-like linear model can produce some behavior seen in real neurons.[23]
The solution spaces of decision boundaries for all binary functions and learning behaviors have also been studied.[24]
In the modern sense, the perceptron is an algorithm for learning a binary classifier called a threshold function: a function that maps its input [math]\displaystyle{ \mathbf{x} }[/math] (a real-valued vector) to an output value [math]\displaystyle{ f(\mathbf{x}) }[/math] (a single binary value): [math]\displaystyle{ f(\mathbf{x}) = \theta(\mathbf{w} \cdot \mathbf{x} + b) }[/math]
where [math]\displaystyle{ \theta }[/math] is the Heaviside step function, [math]\displaystyle{ \mathbf{w} }[/math] is a vector of real-valued weights, [math]\displaystyle{ \mathbf{w} \cdot \mathbf{x} }[/math] is the dot product [math]\displaystyle{ \sum_{i=1}^m w_i x_i }[/math], where m is the number of inputs to the perceptron, and b is the bias. The bias shifts the decision boundary away from the origin and does not depend on any input value.
Equivalently, since [math]\displaystyle{ \mathbf{w}\cdot \mathbf{x} + b = (\mathbf{w}, b) \cdot (\mathbf{x}, 1) }[/math], we can add the bias term [math]\displaystyle{ b }[/math] as another weight [math]\displaystyle{ \mathbf{w}_{m+1} }[/math] and add a coordinate [math]\displaystyle{ 1 }[/math] to each input [math]\displaystyle{ \mathbf{x} }[/math], and then write it as a linear classifier that passes through the origin: [math]\displaystyle{ f(\mathbf{x}) = \theta(\mathbf{w} \cdot \mathbf{x}) }[/math]
The binary value of [math]\displaystyle{ f(\mathbf{x}) }[/math] (0 or 1) is used to perform binary classification on [math]\displaystyle{ \mathbf{x} }[/math] as either a positive or a negative instance. Spatially, the bias shifts the position (though not the orientation) of the planar decision boundary.
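As a concrete illustration of the two equivalent formulations above, here is a short NumPy sketch (all names here, such as predict, are illustrative additions, and the convention that the unit outputs 0 at exactly zero activation is one common choice):

```python
import numpy as np

def predict(w, b, x):
    """Heaviside threshold unit: returns 1 if w.x + b > 0, else 0."""
    return 1 if np.dot(w, x) + b > 0 else 0

def predict_homogeneous(w_aug, x):
    """Equivalent origin-passing form: the bias is absorbed as an extra
    weight, and a constant coordinate 1 is appended to the input."""
    x_aug = np.append(x, 1.0)
    return 1 if np.dot(w_aug, x_aug) > 0 else 0

w, b = np.array([1.0, -1.0]), 0.5
x = np.array([2.0, 1.0])
assert predict(w, b, x) == predict_homogeneous(np.append(w, b), x)  # both 1
```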
In the context of neural networks, a perceptron is an artificial neuron using the Heaviside step function as the activation function. The perceptron algorithm is also termed the single-layer perceptron, to distinguish it from a multilayer perceptron, which is a misnomer for a more complicated neural network. As a linear classifier, the single-layer perceptron is the simplest feedforward neural network.
From an information theory point of view, a single perceptron with K inputs has a capacity of 2K bits of information.[25] This result is due to Thomas Cover.[26]
Specifically, let [math]\displaystyle{ T(N, K) }[/math] be the number of ways to linearly separate N points in K dimensions; then [math]\displaystyle{ T(N, K)=\left\{\begin{array}{cc} 2^N & K \geq N \\ 2 \sum_{k=0}^{K-1}\left(\begin{array}{c} N-1 \\ k \end{array}\right) & K\lt N \end{array}\right. }[/math] When K is large, [math]\displaystyle{ T(N, K)/2^N }[/math] is very close to one when [math]\displaystyle{ N \leq 2K }[/math], but very close to zero when [math]\displaystyle{ N\gt 2K }[/math]. In words, one perceptron unit can almost certainly memorize a random assignment of binary labels on N points when [math]\displaystyle{ N \leq 2K }[/math], but almost certainly not when [math]\displaystyle{ N\gt 2K }[/math].
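A small Python sketch of this counting function (the helper name T mirrors the notation above; the particular values of N and K are illustrative):

```python
from math import comb

def T(N, K):
    """Cover's count of linearly separable labelings of N points
    in general position in K dimensions."""
    if K >= N:
        return 2 ** N
    return 2 * sum(comb(N - 1, k) for k in range(K))

# Fraction of all 2^N labelings that are separable, for K = 100:
# near 1 below N = 2K, exactly 1/2 at N = 2K, near 0 above.
for N in (150, 200, 250):
    print(N, T(N, 100) / 2 ** N)
```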
When operating on only binary inputs, a perceptron computes a linearly separable Boolean function, or threshold Boolean function. The sequence of numbers of threshold Boolean functions on n inputs is OEIS A000609. The value is known exactly only up to the [math]\displaystyle{ n=9 }[/math] case, but the order of magnitude is known quite precisely: it has upper bound [math]\displaystyle{ 2^{n^2 - n \log_2 n + O(n)} }[/math] and lower bound [math]\displaystyle{ 2^{n^2 - n \log_2 n - O(n)} }[/math].[27]
Any Boolean linear threshold function can be implemented with only integer weights. Furthermore, the number of bits necessary and sufficient for representing a single integer weight parameter is [math]\displaystyle{ \Theta(n \ln n) }[/math].[27]
A single perceptron can learn to classify any half-space. It cannot, however, correctly classify any set of vectors that is not linearly separable, such as the Boolean exclusive-or function (the famous "XOR problem").
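For illustration, a brute-force search over a coarse grid of weights (an assumed toy check, not a proof; the impossibility follows from the geometry of the four points) finds no linear threshold unit that computes XOR:

```python
import itertools
import numpy as np

# Try every (w1, w2, b) on a coarse grid; none reproduces XOR.
xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
grid = np.linspace(-2.0, 2.0, 41)
found = any(
    all((w1 * x1 + w2 * x2 + b > 0) == bool(t) for (x1, x2), t in xor.items())
    for w1, w2, b in itertools.product(grid, repeat=3)
)
print("separating weights found:", found)   # -> False
```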
A perceptron network with one hidden layer can learn to classify any compact subset arbitrarily closely. Similarly, it can also approximate any compactly-supported continuous function arbitrarily closely. This is essentially a special case of the theorems by George Cybenko and Kurt Hornik.
Perceptrons (Minsky and Papert, 1969) studied the kind of perceptron networks necessary to learn various boolean functions.
Consider a perceptron network with [math]\displaystyle{ n }[/math] input units, one hidden layer, and one output, similar to the Mark I Perceptron machine. It computes a Boolean function of type [math]\displaystyle{ f: 2^n \to 2 }[/math]. They call a function conjunctively local of order [math]\displaystyle{ k }[/math] iff there exists a perceptron network computing it such that each unit in the hidden layer connects to at most [math]\displaystyle{ k }[/math] input units.
Theorem. (Theorem 3.1.1): The parity function is conjunctively local of order [math]\displaystyle{ n }[/math].
Theorem. (Section 5.5): The connectedness function is conjunctively local of order [math]\displaystyle{ \Omega(n^{1/2}) }[/math].
Below is an example of a learning algorithm for a single-layer perceptron with a single output unit. For a single-layer perceptron with multiple output units, since the weights of one output unit are completely separate from all the others', the same algorithm can be run for each output unit.
For multilayer perceptrons, where a hidden layer exists, more sophisticated algorithms such as backpropagation must be used. If the activation function or the underlying process being modeled by the perceptron is nonlinear, alternative learning algorithms such as the delta rule can be used as long as the activation function is differentiable. Nonetheless, the learning algorithm described in the steps below will often work, even for multilayer perceptrons with nonlinear activation functions.
When multiple perceptrons are combined in an artificial neural network, each output neuron operates independently of all the others; thus, learning each output can be considered in isolation.
We first define some variables:

- [math]\displaystyle{ r }[/math] is the learning rate of the perceptron. The learning rate is a positive number, usually between 0 and 1.
- [math]\displaystyle{ y = f(\mathbf{z}) }[/math] denotes the output from the perceptron for an input vector [math]\displaystyle{ \mathbf{z} }[/math].
- [math]\displaystyle{ D = \{(\mathbf{x}_1,d_1),\dots,(\mathbf{x}_s,d_s)\} }[/math] is the training set of [math]\displaystyle{ s }[/math] samples, where [math]\displaystyle{ \mathbf{x}_j }[/math] is the [math]\displaystyle{ n }[/math]-dimensional input vector and [math]\displaystyle{ d_j }[/math] is the desired output value of the perceptron for that input.

We show the values of the features as follows:

- [math]\displaystyle{ x_{j,i} }[/math] is the value of the [math]\displaystyle{ i }[/math]th feature of the [math]\displaystyle{ j }[/math]th training input vector.
- [math]\displaystyle{ x_{j,0} = 1 }[/math].

To represent the weights:

- [math]\displaystyle{ w_i }[/math] is the [math]\displaystyle{ i }[/math]th value in the weight vector, to be multiplied by the value of the [math]\displaystyle{ i }[/math]th input feature.
- Because [math]\displaystyle{ x_{j,0} = 1 }[/math], the weight [math]\displaystyle{ w_0 }[/math] is effectively a bias.

To show the time-dependence of [math]\displaystyle{ \mathbf{w} }[/math], we use:

- [math]\displaystyle{ w_i(t) }[/math] is the weight [math]\displaystyle{ i }[/math] at time [math]\displaystyle{ t }[/math].

The algorithm then proceeds as follows:

1. Initialize the weights. Weights may be initialized to 0 or to small random values.
2. For each example [math]\displaystyle{ j }[/math] in the training set [math]\displaystyle{ D }[/math], perform the following steps over the input [math]\displaystyle{ \mathbf{x}_j }[/math] and desired output [math]\displaystyle{ d_j }[/math]:
   a. Calculate the actual output: [math]\displaystyle{ y_j(t) = f[\mathbf{w}(t)\cdot\mathbf{x}_j] }[/math]
   b. Update the weights: [math]\displaystyle{ w_i(t+1) = w_i(t) + r\,(d_j - y_j(t))\,x_{j,i} }[/math] for all features [math]\displaystyle{ 0 \le i \le n }[/math].

The algorithm updates the weights after every training sample in step 2b.
A single perceptron is a linear classifier. It can only reach a stable state if all input vectors are classified correctly. In case the training set D is not linearly separable, i.e. if the positive examples cannot be separated from the negative examples by a hyperplane, then the algorithm would not converge since there is no solution. Hence, if linear separability of the training set is not known a priori, one of the training variants below should be used. Detailed analysis and extensions to the convergence theorem are in Chapter 11 of Perceptrons (1969).
Linear separability is testable in time [math]\displaystyle{ \min(O(n^{d/2}), O(d^{2n}), O(n^{d-1} \ln n)) }[/math], where [math]\displaystyle{ n }[/math] is the number of data points, and [math]\displaystyle{ d }[/math] is the dimension of each point.[28]
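One practical way to run such a test, distinct from the specialized algorithms the citation refers to, is to phrase strict separability as a linear-programming feasibility problem. A sketch using SciPy (names and data are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def linearly_separable(X, y):
    """Feasibility LP: does some (w, b) satisfy y_i (w.x_i + b) >= 1
    for all i?  Labels y must be +/-1; by rescaling, this is equivalent
    to strict linear separability."""
    n, d = X.shape
    # Variables z = (w_1..w_d, b); constraints -y_i (x_i.w + b) <= -1.
    A = -y[:, None] * np.hstack([X, np.ones((n, 1))])
    res = linprog(c=np.zeros(d + 1), A_ub=A, b_ub=-np.ones(n),
                  bounds=[(None, None)] * (d + 1))
    return res.success

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print(linearly_separable(X, np.array([-1, -1, -1, 1])))  # AND: True
print(linearly_separable(X, np.array([-1, 1, 1, -1])))   # XOR: False
```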
If the training set is linearly separable, then the perceptron is guaranteed to converge after making finitely many mistakes.[29] The theorem was proved by Rosenblatt et al.
Perceptron convergence theorem — Given a dataset [math]\displaystyle{ D }[/math], such that [math]\displaystyle{ \max_{(x,y) \in D}\|x\|_2 = R }[/math], and it is linearly separable by some unit vector [math]\displaystyle{ w^* }[/math], with margin [math]\displaystyle{ \gamma }[/math]: [math]\displaystyle{ \gamma := \min_{(x,y) \in D} y(w^*\cdot x ) }[/math]
Then the perceptron 0-1 learning algorithm converges after making at most [math]\displaystyle{ (R/\gamma)^2 }[/math] mistakes, for any learning rate, and any method of sampling from the dataset.
The following simple proof is due to Novikoff (1962). The idea of the proof is that the weight vector is always adjusted by a bounded amount in a direction with which it has a negative dot product, and thus can be bounded above by O(√t), where t is the number of changes to the weight vector. However, it can also be bounded below by O(t) because if there exists an (unknown) satisfactory weight vector, then every change makes progress in this (unknown) direction by a positive amount that depends only on the input vector.
Suppose at step [math]\displaystyle{ t }[/math], the perceptron with weight [math]\displaystyle{ w_t }[/math] makes a mistake on data point [math]\displaystyle{ (x, y) }[/math], then it updates to [math]\displaystyle{ w_{t+1} = w_t + r(y-f_{w_t}(x) ) x }[/math].
If [math]\displaystyle{ y = 0 }[/math], the argument is symmetric, so we omit it.
WLOG, [math]\displaystyle{ y = 1 }[/math], then [math]\displaystyle{ f_{w_t}(x) = 0 }[/math], [math]\displaystyle{ f_{w^*}(x) = 1 }[/math], and [math]\displaystyle{ w_{t+1} = w_t + rx }[/math].
By assumption, we have separation with margin: [math]\displaystyle{ w^* \cdot x \geq \gamma }[/math]. Thus,
[math]\displaystyle{ w^* \cdot w_{t+1} - w^* \cdot w_{t} = w^* \cdot (rx) \geq r\gamma }[/math]
Also [math]\displaystyle{ \|w_{t+1}\|_2^2 - \|w_{t}\|_2^2 = \|w_{t} + rx\|_2^2 - \|w_{t}\|_2^2 = 2r (w_t \cdot x) + r^2 \|x\|_2^2 }[/math] and since the perceptron made a mistake, [math]\displaystyle{ w_t \cdot x \leq 0 }[/math], and so
[math]\displaystyle{ \|w_{t+1}\|_2^2 - \|w_{t}\|_2^2 \leq r^2\|x\|_2^2 \leq r^2R^2 }[/math]
Since we started with [math]\displaystyle{ w_0 = 0 }[/math], after making [math]\displaystyle{ N }[/math] mistakes, [math]\displaystyle{ \|w\|_2 \leq \sqrt{Nr^2R^2} }[/math] but also
[math]\displaystyle{ \|w\|_2 \geq w \cdot w^* \geq Nr\gamma }[/math]
Combining the two, we have [math]\displaystyle{ Nr\gamma \leq \|w\|_2 \leq \sqrt{N}\,rR }[/math], hence [math]\displaystyle{ \sqrt{N} \leq R/\gamma }[/math], that is, [math]\displaystyle{ N \leq (R/\gamma)^2 }[/math]
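A small simulation consistent with this bound (the dataset, margin, and all names here are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, gamma = 5, 0.1
w_star = rng.normal(size=dim)
w_star /= np.linalg.norm(w_star)              # unit separator w*
X = rng.normal(size=(500, dim))
X = X[np.abs(X @ w_star) >= gamma]            # enforce a margin of gamma
y = (X @ w_star > 0).astype(int)
R = np.linalg.norm(X, axis=1).max()

w, mistakes = np.zeros(dim), 0
for _ in range(1000):                         # passes until convergence
    errs = 0
    for x, t in zip(X, y):
        pred = 1 if w @ x > 0 else 0
        if pred != t:
            w += (t - pred) * x               # learning rate r = 1
            mistakes += 1
            errs += 1
    if errs == 0:
        break
print(mistakes, "<=", (R / gamma) ** 2)       # mistake count within the bound
```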
While the perceptron algorithm is guaranteed to converge on some solution in the case of a linearly separable training set, it may still pick any solution and problems may admit many solutions of varying quality.[30] The perceptron of optimal stability, nowadays better known as the linear support-vector machine, was designed to solve this problem (Krauth and Mezard, 1987).[31]
When the dataset is not linearly separable, then there is no way for a single perceptron to converge. However, we still have[32]
Perceptron cycling theorem — If the dataset [math]\displaystyle{ D }[/math] has only finitely many points, then there exists an upper bound [math]\displaystyle{ M }[/math] such that for any starting weight vector [math]\displaystyle{ w_0 }[/math], every weight vector [math]\displaystyle{ w_t }[/math] has norm bounded by [math]\displaystyle{ \|w_t\| \leq \|w_0\|+M }[/math]
This was first proved by Bradley Efron.[33]
Consider a dataset where the [math]\displaystyle{ x }[/math] are drawn from [math]\displaystyle{ \{-1, +1\}^n }[/math], that is, the vertices of an n-dimensional hypercube centered at the origin, and [math]\displaystyle{ y = \theta(x_i) }[/math] for some fixed coordinate [math]\displaystyle{ i }[/math]. That is, all data points with positive [math]\displaystyle{ x_i }[/math] have [math]\displaystyle{ y=1 }[/math], and vice versa. By the perceptron convergence theorem, a perceptron would converge after making at most [math]\displaystyle{ n }[/math] mistakes.
If we were to write a logical program to perform the same task, each positive example shows that one of the coordinates is the right one, and each negative example shows that its complement is a positive example. By collecting all the known positive examples, we eventually eliminate all but one coordinate, at which point the dataset is learned.[34]
This bound is asymptotically tight in the worst case. In the worst case, the first presented example is entirely new and gives [math]\displaystyle{ n }[/math] bits of information, but each subsequent example differs minimally from previous ones and gives 1 bit each. After [math]\displaystyle{ n+1 }[/math] examples, there are [math]\displaystyle{ 2n }[/math] bits of information, which matches the capacity of the perceptron ([math]\displaystyle{ 2n }[/math] bits).[25]
However, it is not tight in terms of expectation if the examples are presented uniformly at random, since the first would give [math]\displaystyle{ n }[/math] bits, the second [math]\displaystyle{ n/2 }[/math] bits, and so on, taking [math]\displaystyle{ O(\ln n) }[/math] examples in total.[34]
The pocket algorithm with ratchet (Gallant, 1990) solves the stability problem of perceptron learning by keeping the best solution seen so far "in its pocket". The pocket algorithm then returns the solution in the pocket, rather than the last solution. It can be used also for non-separable data sets, where the aim is to find a perceptron with a small number of misclassifications. However, these solutions appear purely stochastically and hence the pocket algorithm neither approaches them gradually in the course of learning, nor are they guaranteed to show up within a given number of learning steps.
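A sketch of the idea in Python (the structure below is a simplified illustration and omits the ratchet's run-length bookkeeping; all names are assumed):

```python
import numpy as np

def pocket_train(X, y, steps=2000, seed=0):
    """Pocket-style training: run ordinary perceptron updates, but keep
    ("pocket") the weight vector with the fewest training errors so far."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    errors = lambda v: sum((1 if v @ x > 0 else 0) != t for x, t in zip(X, y))
    w = np.zeros(d)
    best_w, best_err = w.copy(), errors(w)
    for _ in range(steps):
        i = rng.integers(n)                   # visit samples at random
        pred = 1 if w @ X[i] > 0 else 0
        if pred != y[i]:
            w = w + (y[i] - pred) * X[i]      # ordinary perceptron update
            e = errors(w)
            if e < best_err:                  # the pocket only ever improves
                best_w, best_err = w.copy(), e
    return best_w, best_err
```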
The Maxover algorithm (Wendemuth, 1995) is "robust" in the sense that it will converge regardless of (prior) knowledge of linear separability of the data set.[35] In the linearly separable case, it will solve the training problem – if desired, even with optimal stability (maximum margin between the classes). For non-separable data sets, it will return a solution with a small number of misclassifications. In all cases, the algorithm gradually approaches the solution in the course of learning, without memorizing previous states and without stochastic jumps. Convergence is to global optimality for separable data sets and to local optimality for non-separable data sets.
The Voted Perceptron (Freund and Schapire, 1999) is a variant using multiple weighted perceptrons. The algorithm starts a new perceptron every time an example is wrongly classified, initializing the weights vector with the final weights of the last perceptron. Each perceptron is also given another weight corresponding to how many examples it correctly classifies before wrongly classifying one, and at the end the output is a weighted vote over all perceptrons.
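A compact sketch of this scheme, using ±1 labels as in Freund and Schapire's formulation (all names here are illustrative):

```python
import numpy as np

def voted_perceptron_train(X, y, epochs=10):
    """Voted perceptron sketch: each mistake retires the current weights
    (stored with their survival count c) and starts a new perceptron
    initialized from them."""
    n, d = X.shape
    w, c, machines = np.zeros(d), 0, []
    for _ in range(epochs):
        for x, t in zip(X, y):                 # labels t in {-1, +1}
            if t * (w @ x) <= 0:               # mistake: retire current weights
                machines.append((w.copy(), c))
                w, c = w + t * x, 1
            else:
                c += 1                         # weights survive one more example
    machines.append((w, c))
    return machines

def voted_predict(machines, x):
    # Weighted vote of all intermediate perceptrons.
    vote = sum(c * np.sign(w @ x) for w, c in machines)
    return 1 if vote > 0 else -1
```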
In separable problems, perceptron training can also aim at finding the largest separating margin between the classes. The so-called perceptron of optimal stability can be determined by means of iterative training and optimization schemes, such as the Min-Over algorithm (Krauth and Mezard, 1987)[31] or the AdaTron (Anlauf and Biehl, 1989).[36] AdaTron uses the fact that the corresponding quadratic optimization problem is convex. The perceptron of optimal stability, together with the kernel trick, are the conceptual foundations of the support-vector machine.
The [math]\displaystyle{ \alpha }[/math]-perceptron further used a pre-processing layer of fixed random weights, with thresholded output units. This enabled the perceptron to classify analogue patterns, by projecting them into a binary space. In fact, for a projection space of sufficiently high dimension, patterns can become linearly separable.
Another way to solve nonlinear problems without using multiple layers is to use higher order networks (sigma-pi unit). In this type of network, each element in the input vector is extended with each pairwise combination of multiplied inputs (second order). This can be extended to an n-order network.
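For example, extending the inputs with pairwise products makes XOR linearly separable. A short sketch (the weights shown are one hand-picked solution, given for illustration):

```python
import numpy as np
from itertools import combinations

def second_order_features(x):
    """Sigma-pi style expansion: augment the input with all pairwise
    products of its components."""
    pairs = [x[i] * x[j] for i, j in combinations(range(len(x)), 2)]
    return np.concatenate([x, pairs])

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Phi = np.array([second_order_features(x) for x in X])    # columns: x1, x2, x1*x2
# XOR equals x1 + x2 - 2*x1*x2, so w = (1, 1, -2) with bias -0.5 separates it.
w, b = np.array([1.0, 1.0, -2.0]), -0.5
print([(1 if w @ p + b > 0 else 0) for p in Phi])        # [0, 1, 1, 0]
```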
It should be kept in mind, however, that the best classifier is not necessarily that which classifies all the training data perfectly. Indeed, if we had the prior constraint that the data come from equi-variant Gaussian distributions, the linear separation in the input space is optimal, and the nonlinear solution is overfitted.
Other linear classification algorithms include Winnow, support-vector machine, and logistic regression.
Like most other techniques for training linear classifiers, the perceptron generalizes naturally to multiclass classification. Here, the input [math]\displaystyle{ x }[/math] and the output [math]\displaystyle{ y }[/math] are drawn from arbitrary sets. A feature representation function [math]\displaystyle{ f(x,y) }[/math] maps each possible input/output pair to a finite-dimensional real-valued feature vector. As before, the feature vector is multiplied by a weight vector [math]\displaystyle{ w }[/math], but now the resulting score is used to choose among many possible outputs: [math]\displaystyle{ \hat{y} = \operatorname{argmax}_y f(x,y) \cdot w }[/math]
Learning again iterates over the examples, predicting an output for each, leaving the weights unchanged when the predicted output matches the target, and changing them when it does not. The update becomes: [math]\displaystyle{ w_{t+1} = w_t + f(x, y) - f(x,\hat{y}) }[/math]
This multiclass feedback formulation reduces to the original perceptron when [math]\displaystyle{ x }[/math] is a real-valued vector, [math]\displaystyle{ y }[/math] is chosen from [math]\displaystyle{ \{0,1\} }[/math], and [math]\displaystyle{ f(x,y) = y x }[/math].
For certain problems, input/output representations and features can be chosen so that [math]\displaystyle{ \mathrm{argmax}_y f(x,y) \cdot w }[/math] can be found efficiently even though [math]\displaystyle{ y }[/math] is chosen from a very large or even infinite set.
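With the common choice of one weight vector per class (that is, [math]\displaystyle{ f(x,y) }[/math] places [math]\displaystyle{ x }[/math] in the block of [math]\displaystyle{ w }[/math] belonging to class [math]\displaystyle{ y }[/math]), the learning rule reduces to the following sketch (all names here are illustrative):

```python
import numpy as np

def multiclass_perceptron(X, y, n_classes, epochs=20, r=1.0):
    """Multiclass perceptron with one weight vector per class;
    prediction is argmax over class scores w_c . x."""
    n, d = X.shape
    W = np.zeros((n_classes, d))
    for _ in range(epochs):
        for x, t in zip(X, y):
            pred = np.argmax(W @ x)
            if pred != t:
                W[t] += r * x        # promote the target class ...
                W[pred] -= r * x     # ... and demote the wrongly predicted one
    return W
```

This promote/demote update is exactly [math]\displaystyle{ w_{t+1} = w_t + f(x, y) - f(x,\hat{y}) }[/math] under the block feature representation.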
Since 2002, perceptron training has become popular in the field of natural language processing for such tasks as part-of-speech tagging and syntactic parsing (Collins, 2002). It has also been applied to large-scale machine learning problems in a distributed computing setting.[37]
Original source: https://en.wikipedia.org/wiki/Perceptron.