Evolutionary computation is a relatively new field. The term itself was first suggested in 1991 and reflects the fact that many researchers have worked on different methods of simulating different aspects of evolution. These include genetic algorithms, evolution strategies, and evolutionary programming. All of these techniques involve the reproduction, random variation, competition, and selection of contending individuals. These make up the principles of evolution on a computer.
Evolution works as an optimization process. This does not mean that it achieves perfection, but evolution can still solve extremely complex problems with precise solutions. For this reason researchers have sought to derive from it algorithms for solving difficult engineering problems.
Earlier techniques included gradient descent, deterministic hill climbing, and purely random search without heredity. The results of these techniques were unsatisfactory when applied to many optimization problems that natural evolution seemed to solve well.
Truly intelligent machines have been sought for many years. Some define intelligence as the capability of a system to change its behaviors in order to meet its goals. This requires prediction, because the system must anticipate its circumstances and then make the appropriate changes. Instead of attempting to replicate humans through rules or neural connections, a newer alternative is to simulate evolution through predictive evolutionary programming. This was the basis of Fogel's evolutionary programming.
Computer evolution is used not only to solve particular problems but also to capture evolution in simulation. This gives insight into the natural evolutionary process. Its success allows the study of biological systems that are only possibilities of what life might be like, raising the question of what these imagined systems might have in common with naturally evolved life.
Principles of Evolutionary Computation
Evolutionary computation relies heavily on the neo-Darwinian view. This paradigm breaks evolution into separate processes: reproduction, mutation, competition, and selection. Reproduction is accomplished through the transfer of an individual's genetic program to its progeny. Mutation arises from the replication errors that inevitably occur during this information transfer. Competition is the result of an expanding population in an environment with limited space and resources. Selection is the result of the competitive process.
Evolutionary computation is built on characteristics identified by Mayr as a summary of the more salient features of neo-Darwinism. These characteristics are:
(i) The individual is the primary target of selection
(ii) Genetic variation is primarily a chance phenomenon; random processes play a significant role in evolution
(iii) Genotypic variation is largely a product of recombination and only ultimately of mutation
(iv) 'Gradual' evolution may incorporate phenotypic discontinuities
(v) Not all changes are the result of natural selection
(vi) Evolution is not just a change in gene frequency but a change in adaptation and diversity
(vii) Selection is probabilistic
Evolutionary computation covers a wide continuum of areas. These can be broken up into five broad categories, which are not absolute or definitive:
Planning and Design
One of the best-known optimization problems is the traveling salesman problem. A salesman must visit a number of different cities and return home; the goal is to find the order in which to visit the cities that minimizes the total distance traveled.
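As a sketch of how a candidate tour might be scored by such an algorithm, the round-trip distance can serve as the quantity to minimize (the city coordinates and function name below are illustrative, not drawn from any particular implementation):

```python
import math

def tour_length(tour, cities):
    """Total round-trip distance for visiting the cities in the given order."""
    total = 0.0
    for i in range(len(tour)):
        x1, y1 = cities[tour[i]]
        x2, y2 = cities[tour[(i + 1) % len(tour)]]  # wrap around: return home
        total += math.hypot(x2 - x1, y2 - y1)
    return total

cities = [(0, 0), (0, 3), (4, 3), (4, 0)]  # a hypothetical 4-city layout
print(tour_length([0, 1, 2, 3], cities))   # perimeter of the rectangle: 14.0
```

An evolutionary algorithm would treat each permutation of city indices as an individual and favor tours with smaller `tour_length`.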
Evolutionary computation has been applied widely to artificial neural networks, both in their design and in the search for optimal weights. It has also been applied to several engineering applications, especially structure design in both two and three dimensions.
Simulation and Identification
Simulations are often created when researchers are unsure what a system's behavior might be, and evolutionary computation is often applied to very complex problems in scientific fields. Roosen and Meyer used such a strategy when trying to determine the equilibrium of chemically reactive systems; their simulation determined the minimum free enthalpy of the system involved.
Evolutionary computation is also used in physically simulated animation through spacetime constraints. Through evolutionary algorithms the simulation is able to apply physics to a character while allowing the animators to retain control over the character's actions. One major application of this is Natural Motion's Endorphin.
Evolutionary computation takes two approaches to control: on-line and off-line. Off-line refers to the use of evolutionary algorithms to design a controller for a system, while the on-line approach uses evolutionary algorithms as an active control process. Evolutionary computation allows for a controller that can adapt to a system that changes over time. Examples include the control of guidance and navigation systems and balancing a pole on a moving cart. Balancing a pole using evolutionary computation is closely related to controlling the FK joints of a character to allow it to balance.
Classification systems are used by many systems to identify characteristics of the environment so that they may be handled by a control system. Game playing is another application that utilizes classification systems; it is usually applied to simple games but has also been used for serious military games.
Advantages of Evolutionary Computation
Evolutionary algorithms work well with discontinuous, multimodal, and noisy data, and they extend to broader applications than many other methods. They offer a methodological framework that is easy to understand and handle, and that can be used as a black-box method or can incorporate new or old systems for sophistication, specialization, and hybridization. They also work well in situations where constraints and goals change over time.
Genetic algorithms were first proposed by John Holland. They are distinguished from other evolutionary algorithms by three features: (1) the use of bitstrings; (2) the use of proportional selection; and (3) the use of crossover to produce variation. The use of crossover is what primarily makes genetic algorithms distinctive; it is the only one of the three features that has remained unchanged across other implementations of genetic algorithms.
A genetic algorithm begins with a randomly generated population that is evaluated for fitness. Individuals are then selected for "mating" and are copied to the mating buffer. In Holland's original algorithm each individual was given a probability of selection based on how fit it was for the problem, so that better individuals received a higher probability of selection. Next, the individuals selected for mating pass through mutation and crossover to produce offspring. Each implementation of the algorithm decides the rates of mutation and crossover as well as how many offspring are created; in the original implementation only one pair was selected for mating per cycle. Once the offspring are created, the two populations (parents and children) are merged into a new population of the fixed size. There are several variations on how this is accomplished. One is to use all the children and then select randomly from the older population to bring the new population up to its fixed size. Other variations either replace one or two individuals from the old population or replace the population entirely with new offspring.
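The cycle just described, with fitness-proportional selection, one-point crossover, per-bit mutation, and full replacement of the parents, can be sketched as follows. The "count the ones" fitness is a toy stand-in for a real problem, and the operator details are one of many reasonable choices rather than Holland's exact formulation:

```python
import random

def fitness(bits):                      # toy fitness: count of 1-bits
    return sum(bits)

def select(pop):                        # fitness-proportional ("roulette") selection
    total = sum(fitness(b) for b in pop)
    r = random.uniform(0, total)
    acc = 0.0
    for b in pop:
        acc += fitness(b)
        if acc >= r:
            return b
    return pop[-1]

def crossover(a, b):                    # one-point crossover
    p = random.randint(1, len(a) - 1)
    return a[:p] + b[p:], b[:p] + a[p:]

def mutate(bits, rate=0.01):            # flip each bit with small probability
    return [1 - x if random.random() < rate else x for x in bits]

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for gen in range(50):
    children = []
    while len(children) < len(pop):
        c1, c2 = crossover(select(pop), select(pop))
        children += [mutate(c1), mutate(c2)]
    pop = children                      # full replacement of the old population
best = max(pop, key=fitness)
```

After a few dozen generations the best individual should carry far more 1-bits than a random string, illustrating how selection pressure accumulates good features.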
These selections can be biased in several ways: there can be a bias in the selection for mating, or a bias in the selection between the parent and child populations when forming the new population. Most genetic algorithms bias the mating step, much like Holland's original algorithm, though there is still a great amount of variation in how this is done. Usually each member of the population is evaluated and assigned a value based on its fitness; by dividing the fitness of each member by the total fitness of the entire group, one can assign it a probability of selection.
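Computing these selection probabilities is straightforward; the fitness values below are made up for illustration:

```python
import random

fitnesses = [4.0, 1.0, 3.0, 2.0]                 # one value per population member
total = sum(fitnesses)
probabilities = [f / total for f in fitnesses]   # 0.4, 0.1, 0.3, 0.2

# Draw one parent index with probability proportional to its fitness.
parent = random.choices(range(len(fitnesses)), weights=fitnesses, k=1)[0]
```

The fittest member here is twice as likely to be chosen as the average one, but every member retains some chance of mating.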
There is another method for selecting a pair for mating. By drawing a small set of individuals from the population and taking the best individual or pair from that set, the algorithm lets several of the best individuals from each portion of the population mate.
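This scheme is commonly known as tournament selection. A minimal sketch, with an illustrative bitstring population and bit-count fitness:

```python
import random

def tournament_select(population, fitness, k=3):
    """Draw k individuals at random and return the fittest of them."""
    contenders = random.sample(population, k)
    return max(contenders, key=fitness)

random.seed(1)
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(10)]
winner = tournament_select(pop, fitness=sum, k=3)   # sum counts the 1-bits
```

Because each tournament only compares a few individuals, moderately fit members from different parts of the population still win some tournaments, preserving diversity.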
Mutation and Crossover
Evolutionary algorithms use mutation and crossover to create variation. Mutation works by making small random changes to the original solutions. With binary representations this is achieved by randomly changing a 1 or 0 to its counterpart, much like randomly flipping switches off and on. A typical mutation rate is one flip per mutation, which provides randomization without completely changing the outcome.
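The one-flip-per-mutation scheme described above can be sketched as:

```python
import random

def mutate_one_bit(bits):
    """Flip exactly one randomly chosen bit, leaving the rest untouched."""
    i = random.randrange(len(bits))
    flipped = list(bits)
    flipped[i] = 1 - flipped[i]
    return flipped

random.seed(0)
parent = [1, 0, 1, 1, 0, 0, 1, 0]   # an illustrative bitstring
child = mutate_one_bit(parent)
```

Whichever position is chosen, the child differs from the parent in exactly one bit, so the bulk of the solution is preserved.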
When two individuals are extremely fit but for different reasons, the best outcome would be to combine the two. Of course, with genetic algorithms there is no way of knowing which features of an individual account for its good performance. Crossover addresses this by randomly combining features from the population to create offspring. This randomness will sometimes create weak individuals that do not survive long, but it may also create a much stronger individual.
Crossover works by choosing two strings from the parent population and creating two new individuals by swapping random bits between them. Holland's original method was to choose a point along the string and swap all bits after that point; this is known as one-point crossover. Two-point crossover works similarly but chooses two points and swaps the segment between them. The most random method is uniform crossover, which swaps individual bits rather than segments, with both the position and the parent decided at random until the new strings are created.
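The three schemes can be sketched side by side. Using parents of all 0s and all 1s makes it easy to see which positions were swapped; this is a hypothetical illustration, not Holland's code:

```python
import random

def one_point(a, b):
    """Swap all bits after one randomly chosen cut point."""
    p = random.randint(1, len(a) - 1)
    return a[:p] + b[p:], b[:p] + a[p:]

def two_point(a, b):
    """Swap the segment between two randomly chosen cut points."""
    p, q = sorted(random.sample(range(1, len(a)), 2))
    return a[:p] + b[p:q] + a[q:], b[:p] + a[p:q] + b[q:]

def uniform_crossover(a, b):
    """Swap individual bits, each position decided independently at random."""
    c1, c2 = list(a), list(b)
    for i in range(len(a)):
        if random.random() < 0.5:
            c1[i], c2[i] = c2[i], c1[i]
    return c1, c2

random.seed(3)
a, b = [0] * 8, [1] * 8
children = [one_point(a, b), two_point(a, b), uniform_crossover(a, b)]
```

With complementary parents like these, every child position comes from exactly one parent, so the two offspring of each scheme are complements of one another.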