Evolutionary Computation

Evolutionary computation is a relatively new technique. The term itself was first suggested in 1991 and represents the work of many researchers on different methods of simulating aspects of evolution. These include genetic algorithms, evolution strategies and evolutionary programming. All of these techniques involve the reproduction, random variation, competition and selection of contending individuals; together these make up the principles of evolution on a computer.

Evolution works as an optimization process. This does not mean it achieves perfection, but evolution can still solve extremely complex problems with precise solutions. For this reason researchers have sought to capture it in an algorithm for solving difficult engineering problems.

Earlier techniques relied on gradient descent, deterministic hill climbing, and purely random search without heredity. These techniques gave unsatisfactory results on many optimization problems that natural evolution seemed to solve well.

Truly intelligent machines have been sought for many years. Some define intelligence as the capability of a system to change its behavior in order to meet its goals. This requires prediction: the system must anticipate its circumstances and then make the appropriate changes. Instead of attempting to replicate human reasoning through rules or neural connections, an alternative is to simulate evolution through predictive evolutionary programming. This was the basis of Fogel's evolutionary programming.

Computer evolution is being used not only to solve particular problems but also to capture evolution in a simulation. This gives insight into the natural evolutionary process. Such simulations also allow the study of biological systems that are only possibilities of what life might be like, which raises the question of what these imagined systems might have in common with naturally evolved life.


Principles of Evolutionary Computation

Evolutionary Computation relies heavily on neo-Darwinian views. This paradigm breaks evolution into separate processes: reproduction, mutation, competition and selection. Reproduction is accomplished through the transfer of an individual's genetic program to its progeny. Mutation results from the replication errors that inevitably occur during this information transfer. Competition is the result of an expanding population in an environment with limited space and resources. Selection is the result of the competitive process.

Evolutionary computation is built on characteristics defined by Mayr that summarize the more salient features of neo-Darwinism. These characteristics are:
(i) The individual is the primary target of selection
(ii) Genetic variation is primarily a chance phenomenon. Random processes play a significant role in evolution
(iii) Genotypic variation is largely a product of recombination and only ultimately of mutation
(iv) 'Gradual' evolution may incorporate phenotypic discontinuities
(v) Not all changes are the result of natural selection
(vi) Evolution is not just a change in gene frequency but a change in adaptation and diversity
(vii) Selection is probabilistic

Evolutionary computation covers a wide continuum of areas. These can be broken up into five broad categories which are not absolute or definitive:
Planning
Design
Simulation and identification
Control
Classification

Planning

One of the classic problems in optimization is the traveling salesman problem. A salesman must travel to a number of different cities and then return home; the goal is to find the order in which to visit the cities that minimizes the time spent traveling.
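
As a concrete illustration of this kind of planning problem, the sketch below (Python, with made-up city coordinates) encodes a candidate tour as a permutation of city indices, measures its round-trip length, and improves it with a simple swap mutation in a crude keep-the-better loop. It is a minimal sketch of evolutionary search on the problem under those assumptions, not a full genetic algorithm.

import math
import random

# Hypothetical city coordinates (x, y); any list of points would do.
cities = [(0, 0), (3, 1), (6, 2), (7, 6), (2, 5), (1, 8)]

def tour_length(tour):
    """Total round-trip distance of a tour given as a permutation of city indices."""
    total = 0.0
    for i in range(len(tour)):
        x1, y1 = cities[tour[i]]
        x2, y2 = cities[tour[(i + 1) % len(tour)]]   # wrap around to return home
        total += math.hypot(x2 - x1, y2 - y1)
    return total

def swap_mutation(tour):
    """Create a variant of the tour by swapping two randomly chosen cities."""
    child = list(tour)
    i, j = random.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

# A crude evolutionary loop: keep whichever of parent/child is shorter.
best = random.sample(range(len(cities)), len(cities))
for _ in range(2000):
    candidate = swap_mutation(best)
    if tour_length(candidate) < tour_length(best):
        best = candidate
print(best, tour_length(best))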

Design

Evolutionary Computation has been applied extensively to artificial neural networks, both in their design and in the search for optimal weights. It has also been applied to several engineering applications, especially structural design in both its two-dimensional and three-dimensional aspects.

Simulation and Identification

Simulations are often created when researchers are unsure what a system's behavior might be. Evolutionary Computation is often applied to very complex problems in scientific fields. Roosen and Meyer used such a strategy when they were trying to determine the equilibrium of chemically reactive systems; their simulation determined the minimum free enthalpy of the system involved.

Evolutionary computation is used in physically simulated animation through space-time constraints. Through evolutionary algorithms the simulation is able to apply physics to a character while allowing the animators to retain control over the character's actions. One major application of this is Natural Motion’s Endorphin (http://www.naturalmotion.com/endorphin.htm).

Control

Evolutionary Computation has two approaches to control: an on-line approach and an off-line approach. Off-line refers to the use of evolutionary algorithms to create a controller for a system; the on-line approach refers to the use of evolutionary algorithms as an active control process. Evolutionary computation allows for a controller that can adapt to a system that changes over time. Examples include the control of guidance and navigation systems and balancing a pole on a moving cart. Balancing a pole using evolutionary computation is closely related to controlling the FK joints of a character to allow it to balance.

Classification

Classification systems are used to identify characteristics of the environment so that they can be acted on by a control system. Game playing is an application that also utilizes classification systems. It is usually applied to simple games but has also been used for serious military games.


Advantages of Evolutionary Computation

Evolutionary Algorithms work well with discontinuous, multimodal and noisy data. They extend to a broader range of applications than many other systems. Evolutionary Algorithms offer a methodological framework that is easy to understand and handle, and they can be used as a black-box method or incorporated into new or existing systems for sophistication, specialization and hybridization. They also work well in situations where constraints and goals change over time.

Genetic Algorithms

Genetic Algorithms were first proposed by John Holland. They are distinguished from other evolutionary algorithms by three features: (1) the use of bit strings; (2) the use of proportional selection; and (3) the use of crossover to produce variation. The use of crossover is what primarily makes genetic algorithms distinctive; it is the only one of these features that has remained unchanged across other implementations of genetic algorithms.

A genetic algorithm begins with a randomly generated population that is evaluated for fitness. Individuals are then selected for "mating" and copied to the mating buffer. In Holland's original algorithm each individual was given a probability of selection based on its fitness for the problem, so that better individuals receive a higher probability of selection. Next, the individuals selected for mating go through mutation and crossover to produce offspring. Each implementation of the algorithm decides the rates of mutation and crossover as well as how many offspring are created; in the original implementation only one pair was selected for mating per cycle. Once the offspring are created, the two populations (parents and children) are merged to create a new population of the fixed size. There are several variations on how this is accomplished. One is to use all the children and then select randomly from the older population to bring the new population up to its fixed size. Other variations either replace one or two individuals from the old population or completely replace the population with new offspring.
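
A minimal sketch of this generational cycle in Python, assuming a toy fitness function that simply counts the 1-bits in each string; the population size, number of children and other parameter values are illustrative choices, not Holland's original settings. The merge step follows the variation described above: all children are kept and the remaining slots are filled with randomly chosen members of the parent generation.

import random

GENOME_LEN = 20
POP_SIZE = 30
NUM_CHILDREN = 20   # fewer children than the population size (illustrative)

def fitness(bits):
    """Toy objective: number of 1-bits in the string (higher is fitter)."""
    return sum(bits)

def proportional_select(population):
    """Pick one parent with probability proportional to its fitness."""
    total = sum(fitness(ind) for ind in population)
    pick = random.uniform(0, total)
    running = 0.0
    for ind in population:
        running += fitness(ind)
        if running >= pick:
            return ind
    return population[-1]

def crossover(a, b):
    """One-point crossover producing two children."""
    point = random.randint(1, GENOME_LEN - 1)
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(bits):
    """Flip a single randomly chosen bit."""
    child = list(bits)
    i = random.randrange(len(child))
    child[i] = 1 - child[i]
    return child

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for generation in range(50):
    children = []
    while len(children) < NUM_CHILDREN:
        a, b = proportional_select(population), proportional_select(population)
        for child in crossover(a, b):
            children.append(mutate(child))
    # Merge: keep all children and fill the rest of the fixed-size population
    # with randomly chosen survivors from the parent generation.
    survivors = random.sample(population, POP_SIZE - len(children))
    population = children + survivors

print(max(fitness(ind) for ind in population))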

These selections can be biased in many ways. There can be a bias in the selection for mating, or a bias in the selection between the parent and child populations when forming the new population. Most genetic algorithms bias the mating step, much like the original created by Holland, though there is still a great deal of variation in how this is done. Usually each member of the population is evaluated for fitness and assigned a value based on that fitness. By dividing the fitness of each member by the total fitness of the entire group, one can assign it a probability of selection.
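
A short sketch of this fitness-proportional ("roulette wheel") selection, assuming the fitness values have already been computed; the member names and numbers are purely illustrative.

import random

def selection_probabilities(fitnesses):
    """Probability of selection for each member: its fitness over the group total."""
    total = sum(fitnesses)
    return [f / total for f in fitnesses]

members = ["a", "b", "c", "d"]
fitnesses = [1.0, 3.0, 4.0, 2.0]

probs = selection_probabilities(fitnesses)
print(probs)  # [0.1, 0.3, 0.4, 0.2]

# Draw a mating pair using those probabilities.
pair = random.choices(members, weights=probs, k=2)
print(pair)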

There is also another method for selecting a pair for mating. By drawing a small set of individuals from the population and taking the best individual or pair from that set, this tournament approach lets several of the best individuals from each portion of the population mate.
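
A minimal sketch of this tournament selection; the tournament size of three is an illustrative choice.

import random

def tournament_select(members, fitnesses, tournament_size=3):
    """Pick a small random subset and return its fittest member."""
    contenders = random.sample(range(len(members)), tournament_size)
    best = max(contenders, key=lambda i: fitnesses[i])
    return members[best]

members = ["a", "b", "c", "d", "e", "f"]
fitnesses = [1.0, 5.0, 2.0, 8.0, 3.0, 4.0]

# Select two parents by holding two independent tournaments.
parents = (tournament_select(members, fitnesses),
           tournament_select(members, fitnesses))
print(parents)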

Mutation and Crossover

Evolutionary algorithms use mutation and crossover to create variation. Mutation works by making small random changes to existing solutions. With a binary representation this is achieved by randomly changing a 1 to a 0 or vice versa, much like randomly flipping switches on and off. A common mutation rate is one flip per mutation, which introduces randomization without completely changing the outcome.
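
A minimal sketch of this one-flip mutation on a bit string; the parent string shown is arbitrary.

import random

def flip_one_bit(bits):
    """Mutate by flipping exactly one randomly chosen bit, leaving the rest intact."""
    child = list(bits)
    i = random.randrange(len(child))
    child[i] = 1 - child[i]
    return child

parent = [1, 0, 1, 1, 0, 0, 1, 0]
print(flip_one_bit(parent))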

When two individuals are extremely fit but for different reasons, the best outcome would be to combine the two. Of course, with genetic algorithms there is no way of knowing which features of an individual account for its good performance. Crossover addresses this by randomly combining features from the population to create offspring. This randomness will sometimes create weak individuals that do not survive long, but it may also create a much stronger individual.

Crossover works by choosing two strings from the parent population and then creating two new individuals by swapping bits between them. Holland’s original approach was to choose a point along the string and swap all bits after that point; this method is known as one-point crossover. There is also a two-point crossover method that works similarly but chooses two points instead. The most random method is uniform crossover, which swaps individual bits rather than segments, with both the position and the donor parent decided at random until the new string is created.
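
Minimal sketches of the three crossover operators described above, applied to two illustrative parent strings.

import random

def one_point(a, b):
    """Swap all bits after a single randomly chosen cut point."""
    point = random.randint(1, len(a) - 1)
    return a[:point] + b[point:], b[:point] + a[point:]

def two_point(a, b):
    """Swap the segment between two randomly chosen cut points."""
    i, j = sorted(random.sample(range(1, len(a)), 2))
    return a[:i] + b[i:j] + a[j:], b[:i] + a[i:j] + b[j:]

def uniform(a, b):
    """Swap individual bits, choosing the donor parent at random per position."""
    child1, child2 = [], []
    for x, y in zip(a, b):
        if random.random() < 0.5:
            child1.append(x); child2.append(y)
        else:
            child1.append(y); child2.append(x)
    return child1, child2

p1 = [0, 0, 0, 0, 0, 0, 0, 0]
p2 = [1, 1, 1, 1, 1, 1, 1, 1]
print(one_point(p1, p2))
print(two_point(p1, p2))
print(uniform(p1, p2))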

Evolutionary Programming

Evolutionary Programming also attempts to simulate evolution, iteratively generating solutions that are increasingly accurate whether the environment is static or dynamically changing. It closely resembles an optimization technique because it iteratively optimizes the outcome. This method uses the same four components as all evolutionary computation.

The algorithm starts with a randomly created population of possible solutions. The population size of these trial solutions can vary greatly but is generally large. The trial solutions are evaluated for their fitness and then altered through mutation, and the fitness of each offspring is evaluated in turn. There are several methods of maintaining the population: the best offspring are selected to become parents of the next generation until the number that remain equals the original population size, the best solutions are retained via the tournament method discussed earlier, or a proportionally based selection is used.
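
A minimal sketch of this evolutionary programming cycle on a real-valued toy problem, assuming Gaussian perturbation as the mutation operator and keeping the best solutions from the merged parent-and-offspring pool until the original population size is restored; the objective function and parameter values are illustrative, not prescribed by the method.

import random

POP_SIZE = 40
DIMS = 5

def fitness(x):
    """Toy objective: negative squared distance from the origin (higher is better)."""
    return -sum(v * v for v in x)

def mutate(x, sigma=0.1):
    """Perturb every component with small Gaussian noise (mutation is the only operator)."""
    return [v + random.gauss(0.0, sigma) for v in x]

# Randomly created initial population of trial solutions.
population = [[random.uniform(-5, 5) for _ in range(DIMS)] for _ in range(POP_SIZE)]

for generation in range(200):
    offspring = [mutate(parent) for parent in population]
    # Merge parents and offspring, then keep the best until the
    # original population size is restored.
    merged = population + offspring
    population = sorted(merged, key=fitness, reverse=True)[:POP_SIZE]

print(max(fitness(x) for x in population))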