'''Memetic algorithms''' (MAs) represent a recent growing area of research in evolutionary computation. The term MA is now widely used for the synergy of an evolutionary or any population-based approach with separate individual learning or local improvement procedures for problem search. MAs are also often referred to in the literature as Baldwinian EAs, Lamarckian EAs, cultural algorithms or genetic local search.

'''Memetic algorithm''' is also a general name for a broad class of population-based [[Heuristic algorithm|heuristics]]. More specifically, these heuristics are [[Global optimization|optimization techniques]] which seek to integrate as much '''domain-specific knowledge''' as possible. Therefore, memetic algorithms are commonly described as very abstract templates, which must be instantiated for each problem. Some researchers refer to memetic algorithms as '''hybrid genetic algorithms''' or '''Lamarckian genetic algorithms'''; the name is sometimes shortened to the acronym '''MA'''.

==Introduction==

A memetic algorithm uses a population of agents to seek optimal (or at least very good) solutions to a problem, using a given [[fitness function]] which ranks the goodness of the solutions. The agents seek candidate solutions throughout the [[fitness landscape]] (also named the search space), using knowledge about the problem to improve the solutions, and cooperating and competing among themselves. Cooperation means that cooperating agents give rise to new agents which inherit characteristics from them, while competition is achieved by selection pressure over the population of agents. Although this description sounds very much like that of a conventional [[genetic algorithm]], memetic algorithms take a qualitatively different approach.

The term "Universal Darwinism" was coined by Richard Dawkins<ref name=dawkins1989sg>{{cite book
 | author = Dawkins R.
 | title = The Selfish Gene
 | publisher = Oxford University Press
 | year = 1989
}}</ref> in 1983 to provide a unifying framework governing the evolution of any complex system. In particular, "Universal Darwinism" suggests that evolution is not exclusive to biological systems, i.e., it is not confined to the narrow context of genes, but is applicable to any complex system that exhibits the principles of inheritance, variation and selection, thus fulfilling the traits of an evolving system. For example, the new science of memetics represents the mind-universe analogue to genetics in cultural evolution, stretching across the fields of biology, cognition and psychology, and it has attracted significant attention in recent decades. The term "meme" was also introduced and defined by Dawkins<ref name=dawkins1989sg/> in 1989 as "the basic unit of cultural transmission, or imitation", and in the Oxford English Dictionary as "an element of culture that may be considered to be passed on by non-genetic means".

Inspired by both Darwinian principles of natural evolution and Dawkins' notion of a meme, the term "memetic algorithm" (MA) was first introduced by Moscato in his 1989 technical report<ref name=martial_arts/>, where he viewed MA as being close to a form of population-based hybrid genetic algorithm (GA) coupled with an individual learning procedure capable of performing local refinements. The metaphorical parallels, on the one hand, to Darwinian evolution and, on the other hand, between memes and domain-specific (local search) heuristics are captured within memetic algorithms, thus rendering a methodology that balances well between generality and problem specificity. In a more diverse context, memetic algorithms are now used under various names, including hybrid evolutionary algorithms, Baldwinian evolutionary algorithms, Lamarckian evolutionary algorithms, cultural algorithms and genetic local search. In the context of complex optimization, many different instantiations of memetic algorithms have been reported across a wide range of [[#Applications|application domains]], in general converging to high-quality solutions more efficiently than their conventional evolutionary counterparts.

Aspects of memetics dealt with within a computational framework are termed "memetic computing" (MC). With MC, the traits of Universal Darwinism are more appropriately captured. Viewed from this perspective, MA is a more constrained notion of MC: it covers the area of MC concerned with evolutionary algorithms that are married to deterministic refinement techniques for solving optimization problems. The framework of MC thus extends the notion of memes to cover conceptual entities of knowledge-enhanced procedures or representations.

==No Free Lunch Theorem==

[[Image:Effects of NFL Therorem.PNG|right|thumb|400px|<small>Graphical explanation of the consequences of the NFL theorems for [[heuristic algorithm]]s: all heuristics have the same average performance. A given heuristic may outperform another one for a given problem or problem domain, but it will be worse in other ones. Furthermore, the better it is in that domain, the worse it will be in other domains.</small>]]
{{seealso|No free lunch in search and optimization}}

The '''no free lunch''' (NFL) theorem states that, averaging over all possible optimization problems (that is to say, all possible realizations of a search space with an associated fitness function), all optimization algorithms have the same average performance: no algorithm (on average) outperforms the rest. Consequently, if there is a problem or a subset of problems for which an algorithm is better than another, the algorithm will necessarily be worse on some other problems.

===Consequences===

The applicability of the NFL theorem is not as straightforward as it might seem at first glance, but on the whole it carries a clear message: whenever the best possible performance is sought for an optimization problem, one must not rely on a general-purpose search algorithm. Instead, the algorithm must be well suited to the specific problem domain, and the best way to achieve this is to embed domain-specific knowledge into it. The NFL theorem thus gives strong theoretical support to memetic algorithms as a legitimate way of solving computationally hard problems.

==History==

In the early days of [[evolutionary algorithm]]s, these techniques were believed to be a kind of flawless Swiss Army knife for optimization, in general more robust and more efficient than more specialized techniques. For instance, when studying [[genetic algorithm]]s (GAs), researchers sought to improve performance by applying general modifications to them (for example, introducing new schemes for selecting candidate solutions to breed), hoping that good results in performance tests (with specific sets of test problems) could be straightforwardly extrapolated to any possible problem domain. Domain-specific adaptations were thought to make the algorithms less robust, so they were discouraged unless deemed absolutely necessary.

However, by the late 1980s and early 1990s a [[Paradigm shift|shift]] took place, and more and more researchers underscored the importance of incorporating problem knowledge into algorithms in order to enhance GA performance. The term ''hybridization'' was coined to mean embedding problem knowledge in a GA. The '''no free lunch''' theorem (see [[#No Free Lunch Theorem|this section]]) came to sanction this change in mainstream attitudes.

In the seminal 1989 technical report ''On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts''<ref name=martial_arts>{{cite journal
 | last = Moscato
 | first = P.
 | year = 1989
 | title = On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms
 | issue = report 826
 | url = http://www.densis.fee.unicamp.br/~moscato/papers/bigone.ps
}}</ref>, written by Pablo Moscato, the concept of the memetic algorithm was introduced. Broadly speaking, a [[meme]] is a unit of cultural knowledge which is transmitted among individuals (for example, a way of building arches). As memes propagate, individuals can consciously change them in order to improve the results (for example, a slightly improved way of building arches), in opposition to genes, which are changed in a blind way. In other words, the cultural evolution of memes is an example of [[Lamarckian evolution]], in opposition to the more traditional [[Darwinian evolution]] of genes.

Moscato felt that [[cultural evolution]] was a more fitting metaphor for the emerging trend of '''hybridization''' techniques than the purely biological [[genotype]] metaphor of genetic algorithms, since the former actively aims at improving solutions, while the latter wanders blindly over the search space. He therefore coined the term '''memetic''' for a broader class of hybridization techniques explicitly concerned with exploiting all available knowledge about each specific problem. Accordingly, the terminology used for describing memetic algorithms differs slightly from the traditional one used for evolutionary algorithms (for example, candidate solutions are ''agents'' instead of ''individuals'', as ''agent'' suggests an active entity while ''individual'' hints at a passive one).

==The development of MAs==

===Alternative definitions===

The most common approach to designing a memetic algorithm ('''MA''') is to ''hybridize'' (join) a genetic algorithm ('''GA''') with a local search ('''LS''') technique, using LS as an additional step within the GA. Thus, they are sometimes briefly summarized by the equation '''MA = GA + LS'''. Indeed, some researchers have claimed<ref name=ieee_tutorial>{{cite journal
 | author = Krasnogor, N.; Smith, J.
 | year = 2005
 | title = A tutorial for competent memetic algorithms: model, taxonomy and design issues
 | journal = IEEE Transactions on Evolutionary Computation
 | volume = 9
 | issue = 5
 | pages = 474–488
}}</ref> that Moscato's definition, which encompasses any population-based technique, is too loose and should be constrained to the '''MA = GA + LS''' form in order to enable a formal analysis of MA performance and a taxonomy. Notwithstanding, Moscato's definition came first and provides a broader scope, so this article mainly presents his point of view on memetic algorithms. From this point of view, the term '''hybrid genetic algorithm''' can be seen as a [[hyponym]] of memetic algorithm, although in practice common use has rendered both terms synonyms.

===1st generation===

The first generation of MA refers to hybrid algorithms, a marriage between a population-based global search (often in the form of an evolutionary algorithm) and a cultural evolutionary stage. Although this first generation of MA encompasses characteristics of cultural evolution (in the form of local refinement) in the search cycle, it may not qualify as a true evolving system according to Universal Darwinism, since the core principles of inheritance/memetic transmission, variation and selection are missing. This suggests why the term MA stirred up criticism and controversy among researchers when it was first introduced.<ref name=dawkins1989sg/>

Pseudo code:

 '''Procedure''' Memetic Algorithm
 '''Initialize:''' Generate an initial population;
 '''while''' Stopping conditions are not satisfied '''do'''
    ''Evaluate'' all individuals in the population.
    ''Evolve'' a new population using stochastic search operators.
    ''Select'' the subset of individuals, <math>\Omega_{il}</math>, that should undergo the individual improvement procedure.
    '''for''' each individual in <math>\Omega_{il}</math> '''do'''
       ''Perform'' individual learning using meme(s) with frequency or probability of <math>f_{il}</math>, for a period of <math>t_{il}</math>.
       ''Proceed'' with Lamarckian or Baldwinian learning.
    '''end for'''
 '''end while'''

===2nd generation===

Multi-meme<ref name=krasnogor1999cga>{{cite journal
 | author = Krasnogor N.
 | title = Coevolution of genes and memes in memetic algorithms
 | journal = Graduate Student Workshop
 | year = 1999
 | pages = 371
}}</ref>, hyper-heuristic<ref name=kendall2002cfa>{{cite journal
 | author = Kendall G. and Soubeiga E. and Cowling P.
 | title = Choice function and random hyperheuristics
 | journal = 4th Asia-Pacific Conference on Simulated Evolution And Learning (SEAL 2002)
 | pages = 667–671
}}</ref> and meta-Lamarckian MA<ref name=ong2004mll>{{cite journal
 | author = Ong Y. S. and Keane A. J.
 | title = Meta-Lamarckian learning in memetic algorithms
 | journal = IEEE Transactions on Evolutionary Computation
 | year = 2004
 | volume = 8
 | number = 2
 | pages = 99–110
}}</ref> are referred to as second-generation MAs, exhibiting the principles of memetic transmission and selection in their design. In multi-meme MA, the memetic material is encoded as part of the genotype. Subsequently, the decoded meme of each respective individual/chromosome is used to perform a local refinement. The memetic material is then transmitted through a simple inheritance mechanism from parent to offspring. In hyper-heuristic and meta-Lamarckian MA, on the other hand, the pool of candidate memes compete, based on their past merits in generating local improvements, through a reward mechanism that decides which meme is selected for future local refinements. Memes with higher rewards have greater chances of being replicated or copied subsequently. For a review of second-generation MAs, i.e., MAs considering multiple individual learning methods within an evolutionary system, the reader is referred to <ref name=ong2006cam>{{cite journal
 | author = Ong Y. S. and Lim M. H. and Zhu N. and Wong K. W.
 | title = Classification of Adaptive Memetic Algorithms: A Comparative Study
 | journal = IEEE Transactions on Systems, Man, and Cybernetics – Part B
 | year = 2006
 | volume = 36
 | number = 1
 | pages = 141
}}</ref>.

===3rd generation===

Co-evolution<ref name=smith2007cma>{{cite journal
 | author = Smith J. E.
 | title = Coevolving Memetic Algorithms: A Review and Progress Report
 | journal = IEEE Transactions on Systems, Man, and Cybernetics – Part B
 | year = 2007
 | volume = 37
 | number = 1
 | pages = 6–17
}}</ref> and self-generating MAs<ref name=krasnogor2002ttm>{{cite journal
 | author = Krasnogor N. and Gustafson S.
 | title = Toward truly "memetic" memetic algorithms: discussion and proof of concepts
 | journal = Advances in Nature-Inspired Computation: The PPSN VII Workshops. PEDAL (Parallel Emergent and Distributed Architectures Lab), University of Reading
 | year = 2002
}}</ref> may be regarded as third-generation MAs, in which all three principles satisfying the definition of a basic evolving system are considered. In contrast to second-generation MAs, which assume that the pool of memes to be used is known a priori, a rule-based representation of local search is co-adapted alongside candidate solutions within the evolutionary system, thus capturing regularly repeated features or patterns in the problem space.

==Some design notes==

The frequency and intensity of individual learning directly define the degree of evolution (exploration) against individual learning (exploitation) in the MA search, for a given fixed, limited computational budget. Clearly, more intense individual learning provides a greater chance of convergence to local optima, but limits the amount of evolution that may be expended without incurring excessive computational resources. Therefore, care should be taken when setting these two parameters to balance the available computational budget and achieve maximum search performance. When only a portion of the population undergoes learning, the issue of which subset of individuals to improve needs to be considered in order to maximize the utility of the MA search. Last but not least, the individual learning procedure (meme) chosen also favors a particular neighborhood structure, hence the need to decide which meme or memes to use for a given optimization problem.

===How often should individual learning be applied?===

One of the first issues pertinent to memetic algorithm design is how often individual learning should be applied, i.e., the individual learning frequency. In <ref name=hart1994ago>{{cite thesis
 | author = Hart W. E.
 | title = Adaptive Global Optimization with Local Search
 | school = University of California
 | year = 1994
}}</ref>, the effect of individual learning frequency on MA search performance was considered, and various configurations of the individual learning frequency at different stages of the MA search were investigated. Conversely, it was shown in <ref name=ku2000sle>{{cite journal
 | author = Ku K. W. C. and Mak M. W. and Siu W. C.
 | title = A study of the Lamarckian evolution of recurrent neural networks
 | journal = IEEE Transactions on Evolutionary Computation
 | year = 2000
 | volume = 4
 | number = 1
 | pages = 31–42
}}</ref> that it may be worthwhile to apply individual learning to every individual if the computational complexity of the individual learning is relatively low.

===On which solutions should individual learning be used?===

On the issue of selecting appropriate individuals among the EA population that should undergo individual learning, fitness-based and distribution-based strategies were studied for adapting the probability of applying individual learning to the population of chromosomes in continuous parametric search problems, with Land<ref name=land1998eal>{{cite thesis
 | author = Land M. W. S.
 | title = Evolutionary Algorithms with Local Search for Combinatorial Optimization
 | school = University of California
 | year = 1998
}}</ref> extending the work to combinatorial optimization problems. Bambha et al.<ref name=bambha2004sip>{{cite journal
 | author = Bambha N. K. and Bhattacharyya S. S. and Teich J. and Zitzler E.
 | title = Systematic integration of parameterized local search into evolutionary algorithms
 | journal = IEEE Transactions on Evolutionary Computation
 | year = 2004
 | volume = 8
 | number = 2
 | pages = 137–155
}}</ref> introduced a simulated heating technique for systematically integrating parameterized individual learning into evolutionary algorithms to achieve maximum solution quality.

===How long should individual learning be run?===

Individual learning intensity, <math>t_{il}</math>, is the amount of computational budget allocated to an iteration of individual learning, i.e., the maximum computational budget allowable for individual learning to expend on improving a single solution.

===What individual learning method or meme should be used for a particular problem or individual?===

In the context of continuous optimization, individual learning exists in the form of local heuristics or conventional exact enumerative methods<ref name=schwefel1995eao>{{cite book
 | author = Schwefel H. P.
 | title = Evolution and optimum seeking
 | publisher = Wiley, New York
 | year = 1995
}}</ref>. Examples of individual learning strategies include hill climbing, the Simplex method, Newton/quasi-Newton methods, interior point methods, the conjugate gradient method, line search and other local heuristics. Note that most common individual learning methods are deterministic.

In combinatorial optimization, on the other hand, individual learning methods commonly exist in the form of heuristics (which can be deterministic or stochastic) that are tailored to a specific problem of interest. Typical heuristic procedures and schemes include k-gene exchange, edge exchange, first-improvement, and many others.

==Description==

As memetic algorithms must be tailored with domain-specific knowledge for each optimization problem, specific instances are not representative enough for a comprehensive description. Rather, memetic algorithms are usually presented using very general [[pseudocode]] templates. In this section a basic, [[Evolutionary algorithm|EA]]-based template of memetic algorithms is provided; note, however, that other approaches to memetic algorithms may exist, as long as they are population-based.

To make the description thorough, templates for the main algorithm as well as the relevant [[Subroutine|subfunctions]] are provided. After each piece of pseudocode, its main features are detailed.

<div id="memeticalgorithm"/>As population-based techniques, memetic algorithms feature a well-characterized loop at the top level:

 '''process''' Memetic-Algorithm () <big><big>→</big></big> ''Agent'' <nowiki>[]</nowiki>
 '''variables'''
    ''pop'' : ''Agent'' <nowiki>[]</nowiki>
 '''begin'''
    ''pop'' <big><big>←</big></big> [[#generateinitialpopulation|Generate-Initial-Population]] ()
    '''repeat'''
       ''pop'' <big><big>←</big></big> [[#dogeneration|Do-Generation]] (''pop'')
       '''if''' Converged(''pop'') '''then'''
          ''pop'' <big><big>←</big></big> [[#restartpopulation|Restart-Population]](''pop'')
       '''endif'''
    '''until''' MA-Termination-Criterion(''pop'')
    '''return best of''' ''pop''
 '''end'''

*As its name hints, the function <code>[[#generateinitialpopulation|Generate-Initial-Population]]</code> creates an initial population to feed the main loop. The variable ''pop'' is a collection which stores the agents.

*<code>[[#dogeneration|Do-Generation]]</code> is the core of the algorithm, being the most computationally expensive and the key step; it evolves the population one step at a time. It presents most of the opportunities for [[Parallel computing|parallelizing]] the algorithm.

*The function <code>Converged</code> must be instantiated for each specific problem; it tests whether the population has converged. [[Premature convergence]] is a common pitfall of population-based heuristics, in which the whole population converges too early on a solution that is not good enough. It is such a common concern that a very general template like this one is best off taking it into account rather than merely trying to avoid it. Common convergence criteria are low rates of improvement of the solutions, low levels of [[information entropy]] in the population, or closeness of all or most of the agents in the [[fitness landscape]] (for a given [[Metric (mathematics)|metric]] on the search space). Whenever convergence happens, the <code>[[#restartpopulation|Restart-Population]]</code> function is executed to try to mend the issue.

*<code>MA-Termination-Criterion</code> must also be tailored for each problem instance, since deciding whether to stop the algorithm is a very application-specific matter. There are many possible criteria; to name just the most usual:
**limited amounts of execution time or iteration steps of the main loop
**low rates of improvement of solutions (after taking into account the restarts of the population)
**a heuristic combination of the others.

===Setting up populations===

<div id="generateinitialpopulation"/>Before entering the [[#memeticalgorithm|main loop]], the population must be set up:

 '''process''' Generate-Initial-Population () <big><big>→</big></big> ''Agent'' <nowiki>[]</nowiki>
 '''variables'''
    ''pop'' : ''Agent'' <nowiki>[]</nowiki>
    ''j'' : ''Integer''
 '''begin'''
    '''for''' ''j'' <big><big>←</big></big> 1 : ''INITIAL_POP_SIZE'' '''do'''
       ''pop''<nowiki>[</nowiki>''j''<nowiki>]</nowiki> <big><big>←</big></big> Generate-New-Solution()
       ''pop''<nowiki>[</nowiki>''j''<nowiki>]</nowiki> <big><big>←</big></big> [[#memeticimprover|Memetic-Improver]] (''pop''<nowiki>[</nowiki>''j''<nowiki>]</nowiki>)
    '''endfor'''
    '''return''' ''pop''
 '''end'''

*This function is very simple. A population of agents of a given size ''INITIAL_POP_SIZE'' is initialized. The creation of the initial agents is done by repeatedly calling <code>Generate-New-Solution</code>, which is commonly implemented in a purely random way. It might also be accomplished using heuristics, but these heuristics should then also be integrated into <code>[[#memeticimprover|Memetic-Improver]]</code>, which is used in other functions such as <code>[[#restartpopulation|Restart-Population]]</code>.
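The combination of random solution generation with a first-improvement refinement step, as discussed above, can be illustrated with a short sketch. The following Python fragment is a hypothetical toy example: the permutation encoding, the fitness function and all names are assumptions made for illustration, not taken from the cited literature.

```python
import random

def fitness(perm):
    # Toy objective (an assumption for this sketch): number of adjacent
    # pairs already in increasing order; a sorted permutation scores best.
    return sum(1 for a, b in zip(perm, perm[1:]) if a < b)

def first_improvement(perm):
    # First-improvement local search over the swap neighbourhood:
    # accept the first swap that improves fitness, then start over.
    improved = True
    while improved:
        improved = False
        for i in range(len(perm)):
            for j in range(i + 1, len(perm)):
                neighbour = perm[:]
                neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
                if fitness(neighbour) > fitness(perm):
                    perm, improved = neighbour, True
                    break
            if improved:
                break
    return perm

def generate_initial_population(size, n):
    # Generate-New-Solution: a random permutation;
    # Memetic-Improver: the first-improvement heuristic above.
    return [first_improvement(random.sample(range(n), n)) for _ in range(size)]
```

A first-improvement search accepts the first beneficial move it finds instead of scanning the whole neighbourhood for the best one, which usually trades solution quality per step for speed.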

<div id="restartpopulation"/>Another function, very similar to <code>[[#generateinitialpopulation|Generate-Initial-Population]]</code>:

 '''process''' Restart-Population (''pop'' : ''Agent'' <nowiki>[]</nowiki>) <big><big>→</big></big> ''Agent'' <nowiki>[]</nowiki>
 '''variables'''
    ''newpop'' : ''Agent'' <nowiki>[]</nowiki>
    ''j'' : ''Integer''
 '''begin'''
    ''newpop''<nowiki>[</nowiki>1..''NUM_PRESERVED''<nowiki>]</nowiki> <big><big>←</big></big> Select-Candidates (''pop'', ''NUM_PRESERVED'')
    '''for''' ''j'' <big><big>←</big></big> ''NUM_PRESERVED''+1 : ''NEW_POP_SIZE'' '''do'''
       ''newpop''<nowiki>[</nowiki>''j''<nowiki>]</nowiki> <big><big>←</big></big> Generate-New-Solution()
       ''newpop''<nowiki>[</nowiki>''j''<nowiki>]</nowiki> <big><big>←</big></big> [[#memeticimprover|Memetic-Improver]] (''newpop''<nowiki>[</nowiki>''j''<nowiki>]</nowiki>)
    '''endfor'''
    '''return''' ''newpop''
 '''end'''

*In the event of [[premature convergence]], this function tries to correct the situation. A certain number ''NUM_PRESERVED'' of agents from the converged population is maintained; the rest are discarded. Usually ''NUM_PRESERVED'' is determined as a percentage of the converged population, but this is not mandatory.

*The function <code>Select-Candidates</code> elects the survivors. The selection can be done using several criteria or combinations of them:
**maintaining the best agents (ranked by their fitness)
**trying to select a sample as diverse as possible

*The rest of the population is reset in much the same way as in <code>[[#generateinitialpopulation|Generate-Initial-Population]]</code>. This may seem inefficient, since the algorithm may already have learned about the fitness landscape, and new random solutions may look futile at first glance. Two points must be stressed:
**Even if an old solution is created again, letting it cooperate with agents that have already been transformed through many [[#memeticalgorithm|main loop]] iterations can be fruitful, since it may carry useful characteristics that were lost along the way.
**The heuristics applied by <code>[[#memeticimprover|Memetic-Improver]]</code> may have memory, and so be aware of candidate solutions previously evaluated in explored regions of the [[fitness landscape]].

*The population size may vary if deemed necessary, so an arbitrary value ''NEW_POP_SIZE'' is used, although most algorithms maintain a constant population size.
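The restart scheme described in the bullets above can be sketched as follows. This is a minimal illustration assuming permutation-encoded agents, survivors ranked purely by fitness, and an externally supplied improver; all function names are hypothetical.

```python
import random

def restart_population(pop, fitness, num_preserved, new_pop_size, improver):
    # Select-Candidates: here survivors are simply the fittest agents;
    # a diversity-based criterion could be used instead or in addition.
    newpop = sorted(pop, key=fitness, reverse=True)[:num_preserved]
    n = len(pop[0])
    while len(newpop) < new_pop_size:
        sol = random.sample(range(n), n)  # Generate-New-Solution (random)
        newpop.append(improver(sol))      # Memetic-Improver
    return newpop
```

As in the pseudocode, the preserved agents are kept verbatim while the rest of the population is rebuilt from improved random solutions.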

===Memetic optimization===

<div id="dogeneration"/>As mentioned in the explanation of the [[#memeticalgorithm|main loop]], this function is the crux of the algorithm, since it implements the most important part of the main loop. It closely resembles the usual steps found in a [[genetic algorithm]]:

 '''process''' Do-Generation (''pop'' : ''Agent'' <nowiki>[]</nowiki>) <big><big>→</big></big> ''Agent'' <nowiki>[]</nowiki>
 '''variables'''
    ''cooperators'', ''newpop'' : ''Agent'' <nowiki>[]</nowiki>
    ''j'' : ''Integer''
 '''begin'''
    ''cooperators'' <big><big>←</big></big> Select-From-Population(''pop'')
    ''newpop'' <big><big>←</big></big> ''cooperators''
    '''for''' ''j'' <big><big>←</big></big> 1 : ''NUM_OPERATORS'' '''do'''
       ''newpop'' <big><big>←</big></big> Apply-Operator ( ''MEMETIC_OPERATOR''<nowiki>[</nowiki>''j''<nowiki>]</nowiki>, ''newpop'' )
    '''endfor'''
    ''pop'' <big><big>←</big></big> Update-Population (''pop'', ''newpop'')
    '''return''' ''pop''
 '''end'''
|||
*At first, agents compete by using the function <code>Select-From-Population</code>. A fraction of the population with good fitness will be used as cooperators (the ranking is commonly established according to the fitness value of each agent). The competition may be deterministic (only the best agents cooperate) or probabilistic (agents are more likely to cooperate if they have better fitness). Several ranking methods are available; to name just a few: |
|||
**Ranking by '''absolute fitness'''. |
|||
**Ranking by '''normalized fitness''' (fitness divided by the best fitness and scaled to a more convenient range of values). |
|||
**Ranking by '''tournament selection''' (agents are randomly confronted with other agents in paired tournaments; the agent with the best fitness wins, and an agent is ranked according to the number of won tournaments). |
|||
*After the competition phase, agents are modified by applying to them a sequence of memetic operators ''MEMETIC_OPERATOR''<nowiki>[]</nowiki>. It should be taken into account that these operators manipulate the whole population instead of only just one or two agents at a time, as commonly done in traditional descriptions of genetic algorithms. This is motivated by a more abstract concept of operator: it encompasses any possible manipulation of the population. In the context of [[Evolutionary algorithm|EA]]-based memetic algorithms, common operators include: |
|||
**<div id="recombination"/>'''Recombination''': this operator selects as many subsets of agents for the final desired size of the population, and combines agents from each subset into a new agent, returning the set of new agents as the new population. These subsets can be any size. The recombination must be carefully designed so as to keep valuable improvements of the original agents, and to combine them in a sensible way, according to the problem instance being solved. This kind of operator is the main drive of cooperation in the algorithm. |
|||
**'''Mutation''': this sort of operator is intended to be a source of variability in the population, as a means to avoid convergence happening too quickly. It must be carefully designed to not be too disruptive, and to take advantage of problem knowledge when modifying existing solutions. |
|||
**<div id="localsearch"/>[[Image:Neighbourhood concepts.png|right|thumb|<small>In a fitness landscape, neighbourhoods can be defined in several ways. Left: a strict definition of neighbourhood, comprising only the directly reachable neighbours. Right: a looser definition, where the neighbourhood is the set of neighbours reachable by some criterion, such as by a local search operator.</small>]]'''Local search''': this class of operator performs a [[local search (optimization)|local search]] in the fitness landscape for each agent. Indeed, a good reason to talk about ''agents'' instead of ''individuals'' (as is common in genetic algorithms) is that these operators enable them to actively search for good solutions across their neighbourhood in the fitness landscape. Common choices are [[simulated annealing]], [[hill-climbing|hill climbing]] and [[tabu search]] (as can be seen, the term ''local search'' is loosely applied), but these must be complemented with domain-specific techniques, either embedded in them or used as additional operators. For example, in problems related to [[graph coloring|graph colouring]], ''tabu search'' outperforms ''simulated annealing''<ref name=memetic_scheduling>{{cite book
 | author = Cotta, C.; Fernández, A.
 | title = Evolutionary Scheduling
 | series = Studies in Computational Intelligence (Vol. 49)
 | year = 2007
 | publisher = [[Springer]]
 | isbn = 978-3-540-48582-7
 | pages = 1-30
 | chapter = Memetic Algorithms in Planning, Scheduling, and Timetabling
 | chapterurl = http://www.lcc.uma.es/~ccottap/papers/memetic_scheduling.pdf
}}</ref>, so the former should be preferred over the latter in these contexts.
*Finally, the main population must be updated. Since it cannot be arbitrarily large, some agents must be discarded. The function <code>Update-Population</code> takes both the original population and the newly derived agents, and performs this task. Criteria both for selecting the fittest and for preserving diversity are usually taken into account.
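As an illustration only (not part of any canonical formulation), the operators above can be sketched as a single generation step in Python. The cut-point recombination, bit-flip mutation and greedy bit-flip local search, together with a OneMax fitness (number of ones), are toy assumptions chosen to keep the sketch self-contained:

```python
import random

def do_generation(population, fitness, rng):
    """One Do-Generation step: recombine, mutate, locally improve,
    then update the population (all operator choices illustrative)."""
    # Recombination: combine pairs of agents via a random cut point.
    offspring = []
    for _ in range(len(population)):
        a, b = rng.sample(population, 2)
        cut = rng.randrange(1, len(a))
        offspring.append(a[:cut] + b[cut:])
    # Mutation: flip one random bit of each new agent.
    for child in offspring:
        child[rng.randrange(len(child))] ^= 1
    # Local search: greedy single-bit hill climbing on each agent.
    for child in offspring:
        for i in range(len(child)):
            trial = child[:i] + [child[i] ^ 1] + child[i + 1:]
            if fitness(trial) > fitness(child):
                child[:] = trial
    # Update-Population: keep the fittest among parents and offspring.
    merged = population + offspring
    merged.sort(key=fitness, reverse=True)
    return merged[:len(population)]

rng = random.Random(0)
population = [[rng.randrange(2) for _ in range(8)] for _ in range(6)]
population = do_generation(population, sum, rng)
```

On this toy landscape the greedy local search drives every offspring to the optimum within a single generation; on realistic problems the operators would embed domain-specific knowledge as described above.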
<div id="memeticimprover"/>Finally, the <code>Memetic-Improver</code> function is a memetic-based local search:

 '''process''' Memetic-Improver (''agent'' : ''Agent'') <big><big>→</big></big> ''Agent''
 '''variables'''
    ''newagent'' : ''Agent''
 '''begin'''
    '''repeat'''
       ''newagent'' <big><big>←</big></big> Apply-Operator ( ''MEMETIC_IMPROVER_OPERATOR'', ''agent'' )
       '''if''' Better-Fitness(''newagent'', ''agent'')
          ''agent'' <big><big>←</big></big> ''newagent''
       '''endif'''
    '''until''' MI-Termination_Condition(''agent'')
    '''return''' ''agent''
 '''end'''
*This function can use any memetic operator based on [[#localsearch|local search]]. It is used as a sort of nursery, raising the fitness of newly created agents up to a minimum standard before they are released to compete and cooperate in the main step (<code>[[#dogeneration|Do-Generation]]</code>) of the algorithm. For the sake of execution speed, it is usually a greedy local hill-climber (that is to say, the operator ''MEMETIC_IMPROVER_OPERATOR'' is applied repeatedly, but changes are only accepted when they lead to better neighbour solutions).

*The finishing criterion coded in <code>MI-Termination_Condition</code> can be:

**a fixed number of iterations

**a slow rate of improvement

**reaching a local optimum

**any combination of the above
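A minimal Python sketch of such a nursery improver follows; the bit-flip operator and the OneMax fitness (number of ones) are illustrative assumptions, and the termination condition is here simply a fixed iteration count:

```python
import random

def memetic_improver(agent, fitness, operator, rng, max_iters=100):
    """Greedy hill-climbing improver: apply the operator repeatedly,
    accepting a candidate only when it has strictly better fitness."""
    for _ in range(max_iters):  # fixed-iteration termination condition
        candidate = operator(agent, rng)
        if fitness(candidate) > fitness(agent):
            agent = candidate
    return agent

def flip_one_bit(agent, rng):
    """Illustrative MEMETIC_IMPROVER_OPERATOR: flip one random bit."""
    i = rng.randrange(len(agent))
    return agent[:i] + [1 - agent[i]] + agent[i + 1:]

rng = random.Random(1)
raw_agent = [0, 0, 1, 0, 1, 0, 0, 0]
nursed = memetic_improver(raw_agent, sum, flip_one_bit, rng)
```

Because changes are accepted only when they strictly improve fitness, the nursed agent is never worse than the raw agent it started from.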
==Design notes==

Given the ad hoc nature of memetic algorithms and the inescapable [[#No Free Lunch Theorem|NFL theorem]], no general design can be put forward as the best choice for every problem. In other words, the sole golden rule is that '''there is no golden rule'''. Nevertheless, there exists a wealth of general guidelines and do's and don'ts.

===Codification and representation===

Broadly speaking, fitness landscapes are shaped by the chosen problem representation or representations. '''Representation''' should not be confused with '''codification''': the former is the way heuristics rank and manipulate candidate solutions, while the latter is merely the way solutions are stored in memory. Clever representations are the key to success. Usually, for each ''(problem, heuristic)'' pair, one or a few best-suited representations exist. Since many different heuristics are likely to be used in a memetic algorithm, care must be taken when combining representations for several heuristics, or when switching between them.

Of course, using a codification poorly matched to the desired representation degrades performance, since the algorithm will be hampered by the conceptual distance between the data stored in memory and the operations performed on them. But representation is far more critical, and it should drive the design process. This rule of thumb stands in utter contrast with the codification-driven design observed among early practitioners of [[evolutionary algorithms]].
===Shaping fitness landscapes===

[[Image:fitness-landscape-cartoon.png|right|thumb|200px|<small>A fitness landscape sketch. The arrows indicate the possible routes of an agent on the landscape, and the points A and C are local optima. The red ball is an agent moving on the landscape. Of course, this is an oversimplification: real-life fitness landscapes are multi-dimensional. Global optima may be very difficult to find. Illustration by C.O. Wilke, 2001.</small>]]

Representations are generally structured as [[tuple]]s of elements. Each representation of the problem induces a given [[fitness landscape]] in which the algorithm will search. If a solution can be transformed into another one by the heuristics, the two are said to be neighbours. If a solution has fitness similar to that of its neighbours, the neighbourhood is said to be '''smooth'''; otherwise, it is said to be '''rugged'''.

As a general rule, the smoother and simpler the landscape, the easier it is to find good (even optimal) solutions; however, this advice is too vague on its own. Several guidelines can drive the design of the landscape:

*minimizing '''fitness variance''': for a given neighbourhood, fitness variance is the variance of the fitness levels of the neighbours. It can be defined for wider regions of the landscape as the average local variance of the neighbourhoods. A fitness function with high variance is a very bad guide for the search process, since it makes the heuristics harder to apply and severely degrades the performance of the algorithm.

*maximizing '''fitness correlation''': like the previous characteristic, this is the correlation of fitness levels between neighbours. In highly correlated areas, good solutions are likely to have good neighbours, and the heuristics will easily climb to the best solutions of the neighbourhood.

*minimizing '''epistasis''': in the biological sciences, [[epistasis]] refers to the interaction of several genes in defining a trait of the individual. An analogous notion applies to evolutionary algorithms: if genes are identified with single elements of the tuples (recall that the tuples are representations of solutions), epistasis denotes [[Nonlinear system|non-linear]] interactions between the elements of the tuples. It is a common source of fitness variance, makes the landscape very rugged and complex, and is highly undesirable.

Many problems have a naturally well-defined fitness function, while others do not. For example, each candidate solution to a [[boolean satisfiability problem|SAT]] problem carries only two possible outcomes: satisfiable or unsatisfiable. Search heuristics are unable to tackle such simplistic landscapes, since a fitness function which only returns two values gives no information on how to guide the search. For such cases, a conjecture on the goodness of the solutions must be used as the fitness function. This guess is commonly an estimate of the distance between the solution being evaluated and the closest valid solution.
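One common surrogate of this kind counts satisfied clauses instead of returning the bare satisfiable/unsatisfiable verdict. A sketch, using an illustrative literal encoding (positive integers for variables, negative for their negations) that is an assumption of this example rather than of the text:

```python
def clauses_satisfied(assignment, clauses):
    """Surrogate SAT fitness: the number of satisfied clauses.

    `assignment` maps variable index -> bool; each clause is a list
    of literals, where literal k denotes variable abs(k), negated
    when k is negative.
    """
    count = 0
    for clause in clauses:
        # a clause is satisfied when at least one literal is true
        if any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            count += 1
    return count

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
clauses = [[1, -2], [2, 3], [-1, -3]]
fit = clauses_satisfied({1: True, 2: True, 3: False}, clauses)
```

Unlike the two-valued verdict, this count rises gradually as candidate solutions approach satisfiability, giving the search heuristics a usable gradient.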
===''Tradeoffs'' and related issues===

As has been shown, there are many issues, both general and specific, to take into account when designing a memetic algorithm. Each consideration affects the performance of the algorithm in a specific way, and trying to improve performance by tuning only one of them usually worsens overall performance, since the other considerations are neglected. The conflict can be one of design (a design optimal for one characteristic can hamper another) or of resource allocation (given finite amounts of memory and [[Central processing unit|CPU]] time, the algorithm must apportion them among the heuristics). The designer must assign resources to each one, keeping in mind that, beyond some point, allocating more resources to one at the expense of the others will not significantly improve overall performance (see [[Amdahl's law]]). Thus, for each set of paired characteristics, a trade-off must be found. Unfortunately, fine-tuning trade-offs is a difficult problem, and there are no general guidelines. Several common trade-off scenarios are described below:

*'''Exploration''' vs. '''exploitation''': Recalling the fitness landscape metaphor, memetic algorithms use a population of agents searching for good solutions over the landscape. Exploration refers to probing new regions of the landscape, in the hope of finding either better solutions or regions in which the search process is easier. Exploitation, on the other hand, means thoroughly examining a given region of the landscape in order to find its local optima. In memetic algorithms, exploration is commonly achieved by the population of agents, while each agent exploits a given region or regions of the landscape. Some constraints must be met:

**In order to achieve effective exploration, the population must have a minimal size and must run for a minimal number of iterations.

**Each agent needs a minimal amount of time to run its local search operators with a reasonable chance of success, that is to say, of finding the neighbourhood's best solutions. In this regard, neighbourhoods are defined more loosely than in the [[#Shaping fitness landscapes|previous section]]: instead of strictly encompassing only the neighbours of a candidate solution, they comprise wider regions of the landscape around it (see [[#localsearch|this picture]]).

*'''Restarting''' the population vs. high '''mutability''': as explained [[#memeticalgorithm|before]], memetic algorithms must overcome the issue of [[premature convergence]]. This is usually prevented by two means, both of them termed ''disruptive'': restarting the population, and keeping mutability rates high. High mutability can stem from random mutation operators, possibly in conjunction with blind (and/or random) recombination operators. While both approaches can be used together, they implement fundamentally different ways of preventing convergence. Using a metaphor borrowed from [[simulated annealing]], the variability rate can be seen as a ''mutability temperature''. Restarts amount to long periods of low temperature, punctuated by short bursts of very high temperature. Disruptive mutations, on the other hand, keep the temperature relatively high throughout the process. Both restarting frequencies and mutation rates can be adjusted dynamically, seeking the best ''temperature profile'' for each problem.
[[Image:6n-graf.svg|right|thumb|200px|<small>Instead of a single, monolithic population, the agents may be arranged in several small subpopulations, allowing exchange of agents between certain pairs of subpopulations. The connectivity should be determined heuristically, on problem-specific grounds.</small>]]
*'''Population arrangement''': Population-based algorithms have traditionally been improved by tuning the heuristics, but the population arrangement remained essentially the same for decades: any member of the population was able to interact with any other. Nevertheless, several different population structures have been tested in recent years, whose common trait is to distribute the agents among several subpopulations<ref name=parallel_ga>{{cite paper
 | author = Schwehm, M.
 | title = Parallel population models for genetic algorithms
 | publisher = Universität Erlangen-Nürnberg
 | date = 1996
 | url = http://citeseer.ist.psu.edu/schwehm96parallel.html
 | accessdate = 2008-02-17}}</ref>. In many instances, structured populations allow for specialization in each subpopulation; that is to say, several qualitatively different, good ways to construct solutions to the problem may arise. This feature sometimes amounts to a more effective search of the fitness landscape. Several points must be taken into account when using several subpopulations:

**'''Connectivity''': a key feature of structured algorithms is agent exchange between subpopulations. In this way, good, novel solutions can be introduced into the subpopulations (this is similar to a gentle restart of the population, but with new and improved agents). In order to decide which subpopulations can exchange agents, the subpopulations may be arranged in an n-dimensional lattice, a fully connected graph or a less structured one.

**'''Immigration policy''': Agent exchange between subpopulations must be carefully planned. An agent should not be sent to a subpopulation if its fitness is very high with respect to the average fitness of the receiving subpopulation, because premature convergence would be the most likely outcome: the agent's brood, with such high fitness, would obliterate the endemic agents and become the sole strain in that subpopulation. A related point is the exchange rate of agents between subpopulations, which must be tuned to prevent both stagnation of the search process and premature convergence. Further considerations include, for example, employing different memetic operators (or different subsets of them) in each subpopulation, so that each operator can do its best without interference from others that might disrupt its heuristic strategy.
**'''Granularity''': There may be a few subpopulations of moderate or even large sizes, or many tiny subpopulations of very few agents, or any proportion within this range. In the extreme case of single-agent subpopulations arranged in a regular lattice (commonly 2-D or 3-D, and square), the algorithm is called '''cellular'''<ref name=cellular_ga>{{cite book
 | author = Whitley, L.
 | title = Applications of Modern Heuristic Methods
 | year = 1995
 | publisher = Wiley
 | isbn = 978-0471962809
 | pages = 55-67
 | chapter = A review of models for simple genetic algorithms and cellular genetic algorithms
 | chapterurl = http://citeseer.ist.psu.edu/whitley95review.html
}}</ref>, as it can be seen as a cross between a memetic/genetic algorithm and a [[cellular automaton]].
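The connectivity and immigration-policy considerations above can be sketched for a ring of subpopulations; the acceptance threshold and the "replace the worst resident" rule are illustrative assumptions, not a canonical policy:

```python
def migrate_ring(islands, fitness, threshold=2.0):
    """Move a copy of each island's best agent to the next island in a
    ring, rejecting immigrants whose fitness exceeds the receiver's
    mean fitness by more than a factor of `threshold` (to avoid the
    premature convergence discussed above)."""
    # pick migrants first, so later replacements do not affect choices
    migrants = [max(isl, key=fitness) for isl in islands]
    for i, migrant in enumerate(migrants):
        dest = islands[(i + 1) % len(islands)]
        mean = sum(fitness(a) for a in dest) / len(dest)
        if fitness(migrant) <= threshold * mean:
            # immigrant accepted: it replaces the worst resident
            worst = min(range(len(dest)), key=lambda j: fitness(dest[j]))
            dest[worst] = list(migrant)
    return islands

# two islands of bit-string agents, fitness = number of ones
islands = [[[1, 1, 0], [0, 0, 0]], [[0, 1, 0], [1, 0, 0]]]
migrate_ring(islands, sum)
```

Tightening `threshold` makes the policy more conservative, slowing the spread of dominant strains between subpopulations.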
===Recombination operators===

[[Image:SinglePointCrossover.png|right|thumb|200px|<small>The simplest crossover for classical [[genetic algorithm]]s. Individuals are strings of bits blindly manipulated. A random position is selected, and information from each parent is exchanged in order to create two new agents, without regard for possible disruptions of fitness levels caused by the careless operation. In contrast, memetic recombination operators are required to be careful in order not to be disruptive.</small>]]

Classical [[Crossover (genetic algorithm)|recombination operators]] have traditionally been blind: they mix traits from the parents (that is to say, the agents whose candidate solutions are being merged into a hopefully better one) without caring about their disruption, i.e., nullifying the contribution of a trait by breaking it up or combining it with incompatible traits. In sharp contrast, [[#recombination|memetic recombination operators]] are usually designed to avoid disruptive effects. This desirable property is commonly achieved by using domain-specific knowledge to select the '''relevant''' information from each parent. This issue is closely related to the chosen [[#Codification and representation|representation]] of the solutions, which should be designed to allow an effective assessment of the goodness of traits or subsets of traits. For instance, in the [[travelling salesman problem]], a memetic operator would be careful not to sever the parents' paths in the middle of optimal sub-paths, but to preserve them as much as possible, and would try to paste together the contributions of the parents without adding unsound sub-paths.
Formally speaking, a '''memetic recombination operator''' can exhibit several properties. These are not black-and-white features, but characteristics which each operator can exhibit to a certain extent. Nor are they good properties ''per se''; rather, they are a means of classifying operators, and points to take into account when driving their design:

*'''respect''': a ''respectful'' operator detects the traits which are common to all parents, and includes them in the new agent's candidate solution.

*'''assortment''': a ''properly assorting'' operator can generate descendants from any possible combination of compatible traits from the parents. This property contributes to a better exploration of the search space.

*'''transmission''': a ''transmitting'' operator merges traits from the parents without inserting any new information, that is to say, every trait seen in the new agent comes from some parent. Of course, a transmitting operator has more limited opportunities to use domain-specific knowledge, so it should not be used unless this knowledge is embodied in the algorithm in some other way.

As can easily be seen, these three properties concern the selection of parental traits to be transmitted to the new agent. When actively adding domain-specific knowledge, the selection of non-parental features to be added becomes an important issue too; it is essential whenever merging traits in a straightforward way is unsound. Two general strategies can be used:

*'''locally-optimal completion''': the new agent is generated without worrying about unsound merging decisions; afterwards, a local or greedy improver is applied to fix it up.

*'''globally-optimal completion''': the possible combinations of traits are enumerated (implicitly or explicitly) and the best one is selected.
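A toy sketch of a respectful, transmitting operator with locally-optimal completion, on bit strings with an assumed OneMax fitness (number of ones); restricting the completion pass to positions where the parents disagree preserves respect, and since flipping a disagreeing bit merely selects the other parent's value, transmission is preserved too:

```python
import random

def respectful_recombine(p1, p2, fitness, rng):
    """Respectful + transmitting recombination for bit strings:
    common bits are copied verbatim; disagreeing positions are filled
    from a random parent, then greedily improved one bit at a time
    (locally-optimal completion on non-common positions only)."""
    common = [a == b for a, b in zip(p1, p2)]
    child = [a if c else rng.choice((a, b))
             for a, b, c in zip(p1, p2, common)]
    for i in range(len(child)):
        if common[i]:
            continue  # never touch inherited common traits (respect)
        trial = child[:i] + [child[i] ^ 1] + child[i + 1:]
        if fitness(trial) > fitness(child):
            child = trial
    return child

child = respectful_recombine([1, 0, 1, 0], [1, 1, 0, 0],
                             sum, random.Random(0))
```

Here the completion pass makes the result independent of the random parental choices: each disagreeing position ends up with the fitter of the two parental values.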
==Applications== |
Memetic algorithms are the subject of intense scientific research (a [[scientific journal]] devoted to [[#memeticcomputingjournal|their research]] is going to be launched) and have been successfully applied to a multitude of real-world problems. Although many people employ techniques closely related to memetic algorithms, alternative names such as ''hybrid genetic algorithms'' are also employed. Furthermore, many people term their memetic techniques as ''genetic algorithms''. The widespread use of this misnomer hampers the assessment of the total amount of applications. |
Researchers have used memetic algorithms to tackle many classical [[NP (complexity)|NP]] problems<ref name=handbook_memetic>{{cite book
 | author = Moscato, P.; Cotta, C.
 | title = Handbook of Metaheuristics
 | series = Operations Research & Management Science (Vol. 57)
 | year = 2005
 | publisher = [[Springer]]
 | isbn = 978-1-4020-7263-5
 | pages = 105-144
 | chapter = A Gentle Introduction to Memetic Algorithms
 | chapterurl = http://www.lcc.uma.es/~ccottap/papers/handbook03memetic.pdf
}}</ref>. To cite some of them: [[graph partition|graph partitioning]], [[knapsack problem|multidimensional knapsack]], the [[travelling salesman problem]], the [[quadratic assignment problem]], the [[set cover problem]], [[Graph_coloring#Algorithmic_aspects|minimal graph colouring]], the [[Independent set problem|maximum independent set problem]], [[bin packing problem|bin packing]] and the [[Generalized Assignment Problem|generalized assignment problem]].
More recent applications include (but are not limited to): training of [[artificial neural network]]s<ref name=trainig_ANN>{{cite conference
 | author = Ichimura, T.; Kuriyama, Y.
 | title = Learning of neural networks with parallel hybrid GA using a royal road function
 | pages = 811-820
 | chapter = Clustering Gene Expression Profiles with Memetic Algorithms
 | chapterurl = http://www.springerlink.com/content/q2d7d5k1ne65k2p9
}}</ref>.
==Recent Activities in Memetic Algorithms==

* IEEE Workshop on Memetic Algorithms (WOMA 2009). Program Chairs: Jim Smith, University of the West of England, U.K.; Yew-Soon Ong, Nanyang Technological University, Singapore; Steven Gustafson, University of Nottingham, U.K.; Meng Hiot Lim, Nanyang Technological University, Singapore; Natalio Krasnogor, University of Nottingham, U.K.
* [http://www.springer.com/journal/12293 Memetic Computing Journal], first issue scheduled for January 2009.
* [http://www.wcci2008.org/ 2008 IEEE World Congress on Computational Intelligence (WCCI 2008)], Hong Kong, [http://users.jyu.fi/~neferran/MA2008/MA2008.htm Special Session on Memetic Algorithms].
* [http://www.ntu.edu.sg/home/asysong/SC/Special-Issue-MA.htm Special Issue on 'Emerging Trends in Soft Computing - Memetic Algorithm'], Soft Computing Journal, completed and in press, 2008.
* [http://www.ntu.edu.sg/home/asysong/ETTC/ETTC%20Task%20Force%20-%20Memetic%20Computing.htm IEEE Computational Intelligence Society Emergent Technologies Task Force on Memetic Computing]
* [http://cec2007.nus.edu.sg/ IEEE Congress on Evolutionary Computation (CEC 2007)], Singapore, [http://ntu-cg.ntu.edu.sg/ysong/MA-SS/MA.htm Special Session on Memetic Algorithms].
* [http://www.esi-topics.com/erf/2007/august07-Ong_Keane.html 'Memetic Computing'] listed by Thomson Scientific's Essential Science Indicators as an Emerging Front Research Area.
* [http://ieeexplore.ieee.org/Xplore/login.jsp?url=/iel5/3477/4067063/04067075.pdf?tp=&isnumber=&arnumber=4067075 Special Issue on Memetic Algorithms], IEEE Transactions on Systems, Man and Cybernetics - Part B, Vol. 37, No. 1, February 2007.

==Related books and scientific journals==

*''Recent Advances in Memetic Algorithms'', Hart, W.; Krasnogor, N.; Smith, J. (Eds.), 2005, [[Springer]] (ISBN 978-3-540-22904-9) [http://www.springer.com/engineering/book/978-3-540-22904-9]
*''Special Issue on Memetic Algorithms'', IEEE Transactions on Systems, Man and Cybernetics - Part B, Vol. 37, No. 1, Ong, Y.S.; Krasnogor, N.; Ishibuchi, H. (Eds.), February 2007.
*''[[IEEE Transactions on Evolutionary Computation]]'', [[IEEE]] publication (ISSN 1089-778X) [http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?puNumber=4235]
*''Journal of Evolutionary Computation'', [[MIT Press]] (ISSN 1063-6560) [http://www.mitpressjournals.org/loi/evco]
*''IEEE Transactions on Systems, Man and Cybernetics - Part B'', [[IEEE]] publication (ISSN 1083-4419) [http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=3477]
*<div id="memeticcomputingjournal"/>As of 2008, ''Memetic Computing'' is a forthcoming [[scientific journal]], to be published by [[Springer-Verlag]] (first issue expected January 2009) (ISSN 1865-9292) [http://www.springer.com/engineering/journal/12293]
==External links==

* [http://www.springeronline.com/sgw/cda/frontpage/0,11855,5-40356-72-34233226-0,00.html Recent Advances in Memetic Algorithms], Series: Studies in Fuzziness and Soft Computing, Vol. 166, ISBN 978-3-540-22904-9, 2005.
* [http://www.mitpressjournals.org/doi/abs/10.1162/1063656041775009?prevSearch=allfield%3A%28memetic+algorithm%29 Special Issue on Memetic Algorithms], Evolutionary Computation, Fall 2004, Vol. 12, No. 3: v-vi.
*[http://www.ing.unlp.edu.ar/cetad/mos/memetic_home.html Memetic algorithms' home page] by Pablo Moscato (author of the seminal paper on memetic algorithms<ref name=martial_arts/>)

==Further reading==

This article has been mainly inspired by ''A Gentle Introduction to Memetic Algorithms'', Moscato, P.; Cotta, C. (2005).<ref name=handbook_memetic/>
==References== |
[[Category:Evolutionary algorithms]] |
[[de:Memetischer Algorithmus]] |
Revision as of 06:50, 22 May 2008
Memetic algorithms (MAs) represent one of the recent growing areas of research in evolutionary computation. The term MA is now widely used to denote the synergy of an evolutionary or any population-based approach with separate individual learning or local improvement procedures for problem search. MAs are often also referred to in the literature as Baldwinian EAs, Lamarckian EAs, cultural algorithms or genetic local search.
Introduction
The theory of “Universal Darwinism” was coined by Richard Dawkins [1] in 1983 to provide a unifying framework governing the evolution of any complex system. In particular, “Universal Darwinism” suggests that evolution is not exclusive to biological systems, i.e., it is not confined to the narrow context of genes, but is applicable to any complex system that exhibits the principles of inheritance, variation and selection, thus fulfilling the traits of an evolving system. For example, the new science of memetics represents the mind-universe analogue to genetics in cultural evolution, stretching across the fields of biology, cognition and psychology, and it has attracted significant attention in recent decades. The term “meme” was also introduced and defined by Dawkins [1] in 1989 as “the basic unit of cultural transmission, or imitation”, and in the Oxford English Dictionary as “an element of culture that may be considered to be passed on by non-genetic means”.
Inspired by both Darwinian principles of natural evolution and Dawkins’ notion of a meme, the term “Memetic Algorithm” (MA) was first introduced by Moscato in his technical report [2] in 1989, where he viewed MA as being close to a form of population-based hybrid genetic algorithm (GA) coupled with an individual learning procedure capable of performing local refinements. The metaphorical parallels, on the one hand, to Darwinian evolution and, on the other hand, between memes and domain-specific (local search) heuristics are captured within memetic algorithms, thus rendering a methodology that balances well between generality and problem specificity. In a more diverse context, the term memetic algorithm is now used under various names, including Hybrid Evolutionary Algorithm, Baldwinian Evolutionary Algorithm, Lamarckian Evolutionary Algorithm, Cultural Algorithm and Genetic Local Search. In the context of complex optimization, many different instantiations of memetic algorithms have been reported across a wide range of application domains, in general converging to high-quality solutions more efficiently than their conventional evolutionary counterparts.
Aspects of memetics dealt with in a computational framework are termed “Memetic Computing” (MC). With MC, the traits of Universal Darwinism are more appropriately captured. Viewed from this perspective, MA is a more constrained notion of MC. More specifically, MA covers one area of MC, namely the area of evolutionary algorithms married to other deterministic refinement techniques for solving optimization problems. The framework of MC thus extends the notion of memes to cover conceptual entities of knowledge-enhanced procedures or representations.
The development of MAs
1st generation:
The first generation of MA refers to hybrid algorithms, a marriage between a population-based global search (often in the form of an evolutionary algorithm) and a cultural evolutionary stage. Although this first generation of MA encompasses characteristics of cultural evolution (in the form of local refinement) in the search cycle, it may not qualify as a true evolving system according to Universal Darwinism, since all the core principles of inheritance/memetic transmission, variation and selection are missing. This suggests why the term MA stirred up criticism and controversy among researchers when it was first introduced in [1].
Pseudo code:
 Procedure Memetic Algorithm
    Initialize: Generate an initial population;
    while Stopping conditions are not satisfied do
       Evaluate all individuals in the population.
       Evolve a new population using stochastic search operators.
       Select the subset of individuals that should undergo the individual improvement procedure.
       for each individual in the subset do
          Perform individual learning using meme(s) with frequency or probability of fil, for a period of til.
          Proceed with Lamarckian or Baldwinian learning.
       end for
    end while
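For illustration, this pseudocode can be rendered as a Python sketch. The specific operators (binary tournament selection, uniform crossover, a single greedy learning pass) and all parameter values are assumptions of this example; the fitness used in the demonstration is the OneMax toy problem (maximise the number of ones):

```python
import random

def memetic_algorithm(fitness, length, pop_size=10, generations=20,
                      learn_prob=0.5, seed=0):
    """First-generation MA sketch: a GA loop with probabilistic
    Lamarckian individual learning (illustrative parameter choices)."""
    rng = random.Random(seed)
    pop = [[rng.randrange(2) for _ in range(length)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            # stochastic search operators: binary tournament selection,
            # uniform crossover, point mutation
            a = max(rng.sample(pop, 2), key=fitness)
            b = max(rng.sample(pop, 2), key=fitness)
            child = [rng.choice(pair) for pair in zip(a, b)]
            if rng.random() < 0.2:
                child[rng.randrange(length)] ^= 1
            # individual learning with probability learn_prob: one greedy
            # pass, written back into the genotype (Lamarckian learning)
            if rng.random() < learn_prob:
                for i in range(length):
                    trial = child.copy()
                    trial[i] ^= 1
                    if fitness(trial) > fitness(child):
                        child = trial
            new_pop.append(child)
        pop = new_pop
        best = max(pop + [best], key=fitness)
    return best

best = memetic_algorithm(sum, length=12)  # OneMax toy fitness
```

A Baldwinian variant would use the learned fitness only for selection, leaving the genotype itself unchanged.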
2nd generation:
Multi-meme [3], hyper-heuristic [4] and meta-Lamarckian MA [5] are referred to as second-generation MA, exhibiting the principles of memetic transmission and selection in their design. In multi-meme MA, the memetic material is encoded as part of the genotype; subsequently, the decoded meme of each respective individual/chromosome is used to perform a local refinement. The memetic material is then transmitted through a simple inheritance mechanism from parent to offspring. In hyper-heuristic and meta-Lamarckian MA, on the other hand, a pool of candidate memes competes, based on past merit in generating local improvements, through a reward mechanism that decides which meme is selected for future local refinements. Memes with higher rewards have greater chances of being replicated or copied. For a review of second-generation MA, i.e., MA considering multiple individual learning methods within an evolutionary system, the reader is referred to [6].
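The reward mechanism just described can be sketched as reward-proportionate selection over a pool of memes; the meme names, the epsilon-exploration term and all numbers below are illustrative assumptions, not taken from the cited works:

```python
import random

def select_meme(rewards, rng, eps=0.2):
    """Reward-proportionate meme selection: memes with higher
    accumulated rewards are replicated more often; with probability
    eps (or when no meme has yet earned any reward) a meme is picked
    uniformly at random to keep exploring the pool."""
    memes = list(rewards)
    if rng.random() < eps or not any(rewards.values()):
        return rng.choice(memes)
    total = sum(rewards.values())
    r = rng.uniform(0, total)
    acc = 0.0
    for meme in memes:
        acc += rewards[meme]
        if r <= acc:
            return meme
    return memes[-1]  # floating-point fallback

# hypothetical memes with accumulated rewards from past refinements
rewards = {"hill_climb": 5.0, "tabu": 1.0, "swap": 0.0}
rng = random.Random(3)
picks = [select_meme(rewards, rng) for _ in range(200)]
```

In a full second-generation MA, the reward table would be updated after each local refinement according to the improvement the chosen meme produced.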
3rd generation:
Co-evolution [7] and self-generating MAs [8] may be regarded as third-generation MA, in which all three principles satisfying the definition of a basic evolving system have been considered. In contrast to second-generation MA, which assumes the pool of memes to be known a priori, here a rule-based representation of local search is co-adapted alongside the candidate solutions within the evolutionary system, thus capturing regularly repeated features or patterns in the problem space.
Some design notes
The frequency and intensity of individual learning directly define the balance of evolution (exploration) against individual learning (exploitation) in the MA search, for a given fixed, limited computational budget. Clearly, more intense individual learning provides a greater chance of convergence to the local optima, but limits the amount of evolution that may be expended without incurring excessive computational resources. Therefore, care should be taken when setting these two parameters so as to balance the available computational budget and achieve maximum search performance. When only a portion of the population undergoes learning, the issue of which subset of individuals to improve needs to be considered in order to maximize the utility of the MA search. Last but not least, each individual learning procedure/meme favours a different neighborhood structure, hence the need to decide which meme or memes to use for the given optimization problem at hand.
===How often should individual learning be applied?===
One of the first issues pertinent to memetic algorithm design is how often individual learning should be applied, i.e., the individual learning frequency. In [9], the effect of individual learning frequency on MA search performance was considered, and various configurations of the frequency at different stages of the MA search were investigated. Conversely, it was shown in [10] that it may be worthwhile to apply individual learning to every individual if the computational complexity of the individual learning is relatively low.
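The frequency can be modelled as a per-individual probability of undergoing learning each generation. The sketch below is a minimal illustration under assumed names (`p_il`, a one-max objective, a bit-flip climber); none of these come from the cited studies:

```python
import random

def one_max(bits):
    """Count of 1-bits; the toy maximization objective."""
    return sum(bits)

def bit_flip_hill_climb(bits, trials=5):
    """Greedy single-bit-flip local search."""
    best = bits[:]
    for _ in range(trials):
        i = random.randrange(len(best))
        cand = best[:]
        cand[i] ^= 1
        if one_max(cand) > one_max(best):
            best = cand
    return best

def apply_learning(population, p_il):
    """Each individual undergoes local learning with probability p_il."""
    return [bit_flip_hill_climb(ind) if random.random() < p_il else ind
            for ind in population]
```

Setting `p_il = 1.0` corresponds to learning on every individual, as advocated in [10] when learning is cheap; smaller values trade learning for more evolution under the same budget.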
===On which solutions should individual learning be used?===
On the issue of selecting appropriate individuals among the EA population for individual learning, fitness-based and distribution-based strategies were studied for adapting the probability of applying individual learning to the population of chromosomes in continuous parametric search problems, with Land [11] extending the work to combinatorial optimization problems. Bambha et al. [12] introduced a simulated heating technique for systematically integrating parameterized individual learning into evolutionary algorithms to achieve maximum solution quality.
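A simple fitness-based strategy in this spirit is to grant the learning step only to the k fittest individuals; this sketch is a generic illustration, not the specific adaptive schemes of [11] or [12]:

```python
def select_for_learning(population, fitness, k):
    """Return the indices of the k fittest individuals (higher is better).

    Only these individuals would then receive the individual-learning
    step; the rest evolve without local refinement.
    """
    ranked = sorted(range(len(population)),
                    key=lambda i: fitness[i], reverse=True)
    return ranked[:k]
```

Distribution-based variants would instead pick individuals so that learning effort is spread across distinct regions of the search space rather than concentrated on the current best.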
===How long should individual learning be run?===
Individual learning intensity is the amount of computational budget allocated to an iteration of individual learning, i.e., the maximum computational budget that individual learning may expend on improving a single solution.
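In practice the intensity is often enforced as a cap on objective evaluations per refinement, as in this minimal sketch (objective and step size are illustrative assumptions):

```python
import random

def hill_climb_with_budget(x, objective, budget):
    """Greedy perturbation search capped at `budget` objective evaluations.

    The `budget` argument plays the role of the individual learning
    intensity: the local search stops once it is exhausted.
    """
    best, best_f = x, objective(x)
    evals = 1  # the initial evaluation counts against the budget
    while evals < budget:
        cand = [v + random.uniform(-0.1, 0.1) for v in best]
        f = objective(cand)
        evals += 1
        if f < best_f:  # keep only improving moves
            best, best_f = cand, f
    return best, best_f, evals
```

A higher budget deepens exploitation of each solution; the total MA budget then limits how many generations of evolution remain.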
===What individual learning method or meme should be used for a particular problem or individual?===
In the context of continuous optimization, individual learning exists in the form of local heuristics or conventional exact enumerative methods [13]. Examples of individual learning strategies include hill climbing, the Simplex method, Newton/quasi-Newton methods, interior point methods, the conjugate gradient method, line search and other local heuristics. Note that most common individual learning methods are deterministic.
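A deterministic refiner in the spirit of the gradient-based methods listed above can be sketched as fixed-step gradient descent; the objective, gradient and step size here are assumptions chosen for illustration:

```python
def grad_sphere(x):
    """Gradient of the sphere function f(x) = sum(x_i^2)."""
    return [2.0 * v for v in x]

def gradient_descent(x, grad, step=0.1, iters=50):
    """Deterministic local refinement: repeated fixed-step descent."""
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x
```

Being deterministic, such a refiner always maps a given starting solution to the same refined solution, unlike the stochastic heuristics common in combinatorial settings.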
In combinatorial optimization, on the other hand, individual learning methods commonly exist in the form of heuristics (which can be deterministic or stochastic) tailored to the specific problem of interest. Typical heuristic procedures and schemes include k-gene exchange, edge exchange, first-improvement, and many others.
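Two of the schemes just named, edge exchange and first-improvement, combine naturally in a 2-opt-style move on a tour. The sketch below and its tiny instance are illustrative only:

```python
import math

def tour_length(tour, coords):
    """Total Euclidean length of a closed tour over city coordinates."""
    return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def first_improvement_2opt(tour, coords):
    """Reverse the first segment whose reversal shortens the tour.

    First-improvement: the search accepts the first improving edge
    exchange it finds rather than scanning for the best one.
    """
    n = len(tour)
    base = tour_length(tour, coords)
    for i in range(n - 1):
        for j in range(i + 2, n):
            cand = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
            if tour_length(cand, coords) < base:
                return cand  # accept the first improving move
    return tour  # local optimum with respect to this neighborhood
```

Inside an MA, such a move would be applied repeatedly (subject to the intensity budget) to each selected offspring before it re-enters the population.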
==Applications==
Memetic algorithms are the subject of intense scientific research (a scientific journal devoted to them is about to be launched) and have been successfully applied to a multitude of real-world problems. Although many people employ techniques closely related to memetic algorithms, alternative names such as hybrid genetic algorithms are also used. Furthermore, many people describe their memetic techniques simply as genetic algorithms; the widespread use of this misnomer hampers the assessment of the total number of applications.
Researchers have used memetic algorithms to tackle many classical NP-hard problems, among them graph partitioning, the multidimensional knapsack problem, the travelling salesman problem, the quadratic assignment problem, the set cover problem, minimal graph colouring, the maximum independent set problem, the bin packing problem and the generalized assignment problem.
More recent applications include (but are not limited to): training of artificial neural networks[14], pattern recognition[15], robotic motion planning[16], beam orientation[17], circuit design[18], electric service restoration[19], medical expert systems[20], single machine scheduling[21], automatic timetabling (notably, the timetable for the NHL [22]), manpower scheduling[23], maintenance scheduling (for example, of an electric distribution network[24]), VLSI design[25] and clustering of gene expression profiles[26].
==Recent Activities in Memetic Algorithms==
- IEEE Workshop on Memetic Algorithms (WOMA 2009). Program Chairs: Jim Smith, University of the West of England, U.K.; Yew-Soon Ong, Nanyang Technological University, Singapore; Steven Gustafson, University of Nottingham, U.K.; Meng Hiot Lim, Nanyang Technological University, Singapore; Natalio Krasnogor, University of Nottingham, U.K.
- Memetic Computing Journal, first issue is scheduled for January 2009.
- 2008 IEEE World Congress on Computational Intelligence (WCCI 2008), Hong Kong, Special Session on Memetic Algorithms.
- Special Issue on 'Emerging Trends in Soft Computing - Memetic Algorithm', Soft Computing Journal, Completed & In Press, 2008.
- IEEE Computational Intelligence Society Emergent Technologies Task Force on Memetic Computing
- IEEE Congress on Evolutionary Computation (CEC 2007), Singapore, Special Session on Memetic Algorithms.
- 'Memetic Computing' identified by Thomson Scientific's Essential Science Indicators as an Emerging Front Research Area.
- Special Issue on Memetic Algorithms, IEEE Transactions on Systems, Man and Cybernetics - Part B, Vol. 37, No. 1, February 2007.
- Recent Advances in Memetic Algorithms, Series: Studies in Fuzziness and Soft Computing, Vol. 166, ISBN 978-3-540-22904-9, 2005.
- Special Issue on Memetic Algorithms, Evolutionary Computation Fall 2004, Vol. 12, No. 3: v-vi.
==References==
1. Dawkins, R. (1989). The Selfish Gene. Oxford University Press.
2. Moscato, P. (1989). "On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms". Caltech Concurrent Computation Program, Report 826.
3. Krasnogor, N. (1999). "Coevolution of genes and memes in memetic algorithms". Graduate Student Workshop: 371.
4. Kendall, G.; Soubeiga, E.; Cowling, P. (2002). "Choice function and random hyperheuristics". 4th Asia-Pacific Conference on Simulated Evolution and Learning (SEAL 2002): 667–671.
5. Ong, Y. S.; Keane, A. J. (2004). "Meta-Lamarckian learning in memetic algorithms". IEEE Transactions on Evolutionary Computation 8 (2): 99–110.
6. Ong, Y. S.; Lim, M. H.; Zhu, N.; Wong, K. W. (2006). "Classification of adaptive memetic algorithms: a comparative study". IEEE Transactions on Systems, Man, and Cybernetics, Part B 36 (1): 141.
7. Smith, J. E. (2007). "Coevolving memetic algorithms: a review and progress report". IEEE Transactions on Systems, Man, and Cybernetics, Part B 37 (1): 6–17.
8. Krasnogor, N.; Gustafson, S. (2002). "Toward truly 'memetic' memetic algorithms: discussion and proof of concepts". Advances in Nature-Inspired Computation: The PPSN VII Workshops. PEDAL (Parallel Emergent and Distributed Architectures Lab), University of Reading.
9. Hart, W. E. (1994). Adaptive Global Optimization with Local Search. PhD thesis.
10. Ku, K. W. C.; Mak, M. W.; Siu, W. C. (2000). "A study of the Lamarckian evolution of recurrent neural networks". IEEE Transactions on Evolutionary Computation 4 (1): 31–42.
11. Land, M. W. S. (1998). Evolutionary Algorithms with Local Search for Combinatorial Optimization. PhD thesis.
12. Bambha, N. K.; Bhattacharyya, S. S.; Teich, J.; Zitzler, E. (2004). "Systematic integration of parameterized local search into evolutionary algorithms". IEEE Transactions on Evolutionary Computation 8 (2): 137–155.
13. Schwefel, H. P. (1995). Evolution and Optimum Seeking. Wiley, New York.
14. Ichimura, T.; Kuriyama, Y. (1998). "Learning of neural networks with parallel hybrid GA using a royal road function". IEEE International Joint Conference on Neural Networks, Vol. 2, New York, NY, pp. 1131–1136.
15. Aguilar, J.; Colmenares, A. (1998). "Resolution of pattern recognition problems using a hybrid genetic/random neural network learning algorithm". Pattern Analysis and Applications 1 (1): 52–61.
16. Ridao, M.; Riquelme, J.; Camacho, E.; Toro, M. (1998). "An evolutionary and local search algorithm for planning two manipulators motion". Lecture Notes in Computer Science 1416, Springer-Verlag: 105–114.
17. Haas, O.; Burnham, K.; Mills, J. (1998). "Optimization of beam orientation in radiotherapy using planar geometry". Physics in Medicine and Biology 43 (8): 2179–2193.
18. Harris, S.; Ifeachor, E. (1998). "Automatic design of frequency sampling filters by hybrid genetic algorithm techniques". IEEE Transactions on Signal Processing 46 (12): 3304–3314.
19. Augugliaro, A.; Dusonchet, L.; Riva-Sanseverino, E. (1998). "Service restoration in compensated distribution networks using a hybrid genetic algorithm". Electric Power Systems Research 46 (1): 59–66.
20. Wehrens, R.; Lucasius, C.; Buydens, L.; Kateman, G. (1993). "HIPS, a hybrid self-adapting expert system for nuclear magnetic resonance spectrum interpretation using genetic algorithms". Analytica Chimica Acta 277 (2): 313–324.
21. França, P.; Mendes, A.; Moscato, P. (1999). "Memetic algorithms to minimize tardiness on a single machine with sequence-dependent setup times". Proceedings of the 5th International Conference of the Decision Sciences Institute, Athens, Greece, pp. 1708–1710.
22. Costa, D. (1995). "An evolutionary tabu search algorithm and the NHL scheduling problem". INFOR 33: 161–178.
23. Aickelin, U. (1998). "Nurse rostering with genetic algorithms". Proceedings of the Young Operational Research Conference 1998, Guildford, UK.
24. Burke, E.; Smith, A. (1999). "A memetic algorithm to schedule planned maintenance for the national grid". Journal of Experimental Algorithmics (4): 1–13.
25. Areibi, S.; Yang, Z. (2004). "Effective memetic algorithms for VLSI design automation = genetic algorithms + local search + multi-level clustering". Evolutionary Computation 12 (3), MIT Press: 327–353.
26. Merz, P.; Zell, A. (2002). "Clustering gene expression profiles with memetic algorithms". Parallel Problem Solving from Nature — PPSN VII, Springer, pp. 811–820. doi:10.1007/3-540-45712-7_78.