Gaussian adaptation is designed to maximize manufacturing yield when that yield is limited by statistical deviation of the component values of signal processing systems. The process relies on the theorem of Gaussian adaptation, which states that:
if the centre of gravity of a high-dimensional Gaussian distribution coincides with the centre of gravity of the part of the distribution belonging to some region of acceptability in a state of selective equilibrium, then the hitting probability on the region is maximal.
The theorem is valid for all regions of acceptability and all Gaussian distributions. It may be applied by cyclic repetition of random variation and selection, much like natural evolution. In every cycle a sufficiently large number of Gaussian-distributed points is sampled and tested for membership in the region of acceptability. The centre of gravity of the Gaussian is then moved to the centre of gravity of the approved points. The process thereby converges to a state of equilibrium fulfilling the theorem. The solution is always approximate, because the centre of gravity is determined from a limited number of points.
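A minimal NumPy sketch of this cycle is given below. It is our own illustration: the function name and parameters are assumptions, and the moment matrix is kept fixed here, whereas the full algorithm also adapts it.

```python
import numpy as np

def gaussian_adaptation(accept, m, C, n_samples=1000, n_cycles=100, seed=None):
    """Cyclic random variation and selection, as described above.

    accept : membership test for the region of acceptability, x -> bool
    m      : initial centre of gravity (mean) of the Gaussian
    C      : moment (covariance) matrix of the Gaussian (kept fixed here)
    """
    rng = np.random.default_rng(seed)
    m = np.asarray(m, dtype=float)
    for _ in range(n_cycles):
        # Sample a sufficiently large number of Gaussian-distributed points.
        points = rng.multivariate_normal(m, C, size=n_samples)
        # Test every point for membership in the region of acceptability.
        mask = np.array([accept(x) for x in points], dtype=bool)
        approved = points[mask]
        if len(approved) == 0:
            continue  # nothing approved; keep the old centre and resample
        # Move the centre of gravity of the Gaussian to the centre of
        # gravity of the approved points.
        m = approved.mean(axis=0)
    return m
```

For instance, `gaussian_adaptation(lambda x: (x**2).sum() < 1.0, m=[0.8, 0.8], C=0.25 * np.eye(2))` moves the centre of gravity towards the centre of gravity of the samples approved by a unit-disc region.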
Gaussian adaptation was used for the first time in 1969 as a pure optimization algorithm, making the regions of acceptability smaller and smaller. Since 1970 it has been used for both ordinary optimization and manufacturing yield maximization.
In 1972 Gaussian adaptation was tested on a complex signal processing system. The model of the system included 450 components (each with a parameter value), of which 130 were adjustable. Some early tests seemed promising, but then a joker said: “In this project nothing should be left to chance”. This was, of course, a good intention as long as all the stops were pulled out, but it also carried a hidden meaning: the algorithm worked at random, and it should therefore not be used in this project.
After one year of hard work with the deterministic methods available at the time, a passable system was found. But because the parameter values of components vary in the manufacturing process, only 5% of the manufactured systems were able to meet all requirements of the specifications.
Those responsible for the project got nervous and grasped at a straw: the random algorithm. After two months with the random algorithm, 95% of the systems were acceptable, and the system could now be manufactured and sold at a good profit. The example showed that random algorithms may produce an enormous amount of information that is hardly obtainable by other means.
This example was never published, but Kjellström & Taxén (1981) presented an example with 76 parameters, which is also a large number in this context.
Gaussian adaptation has also been compared to the natural evolution of populations of living organisms. In this case the region of acceptability is preferably replaced by a probability function, s(x), where x is an array of quantitative characters (phenotypes) determining the organism. This is possible because the theorem of Gaussian adaptation is valid for any region of acceptability, independent of its structure and extension.
Mean fitness may be defined as
```latex
P(m) = \int s(x)\, N(m - x)\, dx .
```
Here N is the Gaussian probability density function (p.d.f.) of the phenotypes and m is its centre of gravity. Because a Gaussian is the exponential of a negative quadratic form in the phenotypes, it is fairly easily proved that the process maximizes the mean fitness of a large population. The condition for maximal mean fitness is obtained by setting the derivative dP(m)/dm equal to zero. Because the derivative of the exponential is proportional to the exponential itself, the result is proportional to P(m)(m* - m) = 0, where m* is the centre of gravity of the phenotypes of the parents of the offspring. This leads to the theorem of Gaussian adaptation in its simplest form.
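The differentiation step may be spelled out as follows (our own sketch, assuming a Gaussian with moment matrix M):

```latex
\frac{\partial P}{\partial m}
  = \int s(x)\,\frac{\partial}{\partial m} N(m - x)\, dx
  = \int s(x)\, N(m - x)\, M^{-1}(x - m)\, dx
  = M^{-1} P(m)\,\bigl(m^{*} - m\bigr),
\qquad
m^{*} = \frac{1}{P(m)} \int x\, s(x)\, N(m - x)\, dx .
```

Setting the derivative to zero gives m = m*: at selective equilibrium the centre of gravity of the Gaussian coincides with the centre of gravity of the selected phenotypes.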
More generally, if m deviates slightly from its optimal position in any arbitrary direction, then mean fitness will decrease, but it may be recovered if the determinant of the moment matrix is slightly decreased. According to the information theory of Claude Shannon, this means that the average information (entropy, disorder, diversity) is simultaneously maximal, which also means that average information is as important to survival as mean fitness.
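The connection to Shannon's average information can be made explicit with a standard fact (our addition, not stated in the source): the entropy of an n-dimensional Gaussian with moment matrix M is

```latex
H(N) = \tfrac{1}{2}\,\ln\!\bigl[(2\pi e)^{n}\,\det M\bigr],
```

so, for a Gaussian, maximizing disorder amounts to maximizing the determinant of the moment matrix; the trade-off described above is between mean fitness P(m) and det M.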
As long as the ontogenetic program may be seen as a stepwise modified recapitulation of the evolution of a particular individual organism, the central limit theorem states that the sum of contributions from many random steps tends to become Gaussian distributed. A necessary condition for natural evolution to fulfill the theorem of Gaussian adaptation is that it can push the centre of gravity of the Gaussian towards the centre of gravity of the surviving individuals. The Hardy-Weinberg law may accomplish this. In this case the rules of genetic variation, such as crossover, inversion and transposition, may be seen as random number generators for the phenotypes. So, in this sense Gaussian adaptation may be seen as a genetic algorithm.
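As a small numerical illustration of that central-limit step (our own sketch, assuming NumPy; the step distribution is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Each simulated phenotype is the sum of many small, independent,
# non-Gaussian contributions (a stand-in for the stepwise programme
# described above).
n_individuals, n_steps = 100_000, 50
steps = rng.uniform(-1.0, 1.0, size=(n_individuals, n_steps))
phenotypes = steps.sum(axis=1)

# By the central limit theorem the sums are close to Gaussian, with
# mean 0 and standard deviation sqrt(n_steps / 3).
print(phenotypes.mean(), phenotypes.std(), np.sqrt(n_steps / 3))
```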
A remarkable property is that the Gaussian distribution is the most disordered distribution compared to all other distributions having the same moment matrix, and it is connected to several remarkable theorems, such as the central limit theorem, the Hardy-Weinberg law, the entropy law and the theorem of Gaussian adaptation. Together these theorems make it possible to centre a gene pool in an arbitrary region of acceptability so as to maximize mean fitness, which is an extremely difficult problem in a high-dimensional space.
So, we may perhaps ask whether this combination of chaos and mathematical order makes evolution a tool for intelligent design, with the intelligence represented by the mathematical theorems.
Mean fitness may be calculated provided that the distribution of parameters and the structure of the landscape are known. The real landscape is not known, but the figure below shows a fictitious profile (blue) of a landscape along a line (x) in a space spanned by such parameters. The red curve is the mean based on the red bell curve at the bottom of the figure; it is obtained by letting the bell curve slide along the x-axis and calculating the mean at every location. Small peaks and pits are thereby smoothed out. Thus, if evolution starts at A with a relatively small variance (the red bell curve), climbing will take place on the red curve. The process may get stuck for millions of years at B or C, as long as the hollows to the right of these points remain and the mutation rate is too small.
If the mutation rate is sufficiently high, the disorder or variance may increase and the parameter(s) may become distributed like the green bell curve. Climbing will then take place on the green curve, which is even more smoothed out. Because the hollows to the right of B and C have now disappeared, the process may continue up to the peaks at D. But of course the landscape puts a limit on the disorder or variability. Besides, depending on the landscape, the process may become very jerky, and if the ratio between the time spent by the process at a local peak and the time of transition to the next peak is very high, it may look like a punctuated equilibrium, as suggested by Gould (see Ridley).
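The smoothing described above is simply a convolution of the landscape with the bell curve. A minimal sketch (our own, with a fictitious profile; all names are illustrative):

```python
import numpy as np

def mean_fitness_curve(x, profile, sigma):
    """Slide a Gaussian bell curve of width sigma along the x-axis and
    compute the local mean of the landscape profile at every location."""
    smoothed = np.empty_like(profile)
    for i, m in enumerate(x):
        bell = np.exp(-0.5 * ((x - m) / sigma) ** 2)
        bell /= bell.sum()                 # normalise the bell curve
        smoothed[i] = bell @ profile       # local average of the profile
    return smoothed

x = np.linspace(0.0, 10.0, 500)
blue = np.sin(3.0 * x) + 0.5 * x           # fictitious jagged profile
red = mean_fitness_curve(x, blue, 0.3)     # small variance: hollows remain
green = mean_fitness_curve(x, blue, 1.0)   # large variance: smoother curve
```

With the larger sigma the local hollows are averaged away, which is why climbing on the green curve can pass points where the red curve still has local maxima.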