Memetic algorithm (often abbreviated MA) is a general name for a broad class of population-based heuristics. More specifically, these heuristics are optimization techniques which seek to integrate as much domain-specific knowledge as possible. Because of this, memetic algorithms are commonly described as very abstract templates, which must be instantiated for each problem. Some researchers also refer to memetic algorithms as hybrid genetic algorithms or Lamarckian genetic algorithms.
Overview
A memetic algorithm uses a population of agents to seek optimal (or at least very good) solutions to a problem, using a given fitness function which ranks the goodness of the solutions. The agents seek candidate solutions throughout the fitness landscape (also called the search space), using knowledge about the problem to improve the solutions, and cooperating and competing among themselves. Cooperation means that cooperating agents give rise to new agents which share characteristics of their parents, while competition is achieved by selection pressure over the population of agents. Although this description sounds very much like that of a conventional genetic algorithm, memetic algorithms take a qualitatively different approach.
No Free Lunch Theorem
Graphical explanation of the consequences of the NFL theorems for heuristic algorithms: averaged over all problems, all heuristics perform equally well. A given heuristic may outperform another on a given problem or problem domain, but it will then be worse on others; indeed, the better it is in one domain, the worse it must be elsewhere.
The no free lunch (NFL) theorem states that, averaged over all possible optimization problems (that is, all possible realizations of a search space with an associated fitness function), all optimization algorithms have the same performance: no algorithm outperforms the rest on average. Consequently, if there is a problem or a subset of problems for which one algorithm is better than another, the former will necessarily be worse on some other problems.
Consequences
The applicability of the NFL theorem is not as straightforward as it would seem at first glance, but on the whole it carries a clear message: whenever the best possible performance is sought for an optimization problem, one must not rely on a general-purpose search algorithm. Instead, the algorithm must be well suited to the specific problem domain, and the best way to achieve this is to embed domain-specific knowledge into it. The NFL theorem thus gives strong theoretical support to memetic algorithms as a legitimate way of solving computationally hard problems.
History
In the early days of evolutionary algorithms, they were believed to be a kind of flawless Swiss Army knife for optimization, in general more robust and more efficient than more specialized techniques. For instance, when studying genetic algorithms (GAs), researchers sought to improve performance by applying general modifications to them (for example, introducing new schemes for selecting candidate solutions to breed), hoping that good results in performance tests (on specific sets of test problems) could be straightforwardly extrapolated to any possible problem domain. Domain-specific adaptations were thought to make the algorithms less robust, so they were discouraged unless deemed absolutely necessary.
However, by the late 1980s and early 1990s a shift took place, and more and more researchers underscored the importance of incorporating problem knowledge into algorithms in order to enhance GA performance. The term hybridization was coined for embedding problem knowledge in a GA. The no free lunch theorem (see above) later came to sanction this change in mainstream attitudes.
In the seminal paper On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts,[1] Pablo Moscato introduced the concept of the memetic algorithm. Broadly speaking, a meme is a unit of cultural knowledge transmitted among individuals (for example, a way of building arches). As memes propagate, individuals can consciously change them in order to improve the results (for example, a slightly improved way of building arches), in contrast to genes, which change blindly. In other words, the cultural evolution of memes is an example of Lamarckian evolution, as opposed to the more traditional Darwinian evolution of genes.
Moscato felt that cultural evolution was a more fitting metaphor for the emerging trend of hybridization techniques than the more biological metaphor of genetic algorithms, since the former actively aims at improving solutions, while the latter wanders blindly over the search space. So he coined the term memetic for a broader class of hybridization techniques explicitly concerned with exploiting all available knowledge about each specific problem. The terminology used for describing memetic algorithms therefore differs slightly from the traditional one used for evolutionary algorithms (for example, candidate solutions are called agents instead of individuals, as agent suggests an active entity while individual suggests a passive one).
Alternative definitions
The most common approach to designing a memetic algorithm (MA) is to hybridize (join) a genetic algorithm (GA) with a local search (LS) technique, using LS as an additional step within the GA. Thus, they are sometimes summarized by the equation MA = GA + LS. Indeed, some researchers have claimed[2] that Moscato's definition, which encompasses any population-based technique, is too loose and should be constrained to the MA = GA + LS form in order to enable a formal analysis of MA performance and taxonomy. Nevertheless, Moscato's definition came first and provides a broader scope, so this article mainly presents his point of view on memetic algorithms. From this point of view, the term hybrid genetic algorithm can be seen as a hyponym for memetic algorithm, although in practice common use has rendered both terms synonymous.
Description
As memetic algorithms must be tailored with domain-specific knowledge for each optimization problem, specific instances are not representative enough for a comprehensive description. Rather, memetic algorithms are usually presented as very general pseudocode templates. In this section, a basic, EA-based template for memetic algorithms is provided; note, however, that other approaches to memetic algorithms exist, as long as they are population-based.
To complete the description, templates for the main algorithm as well as the relevant subfunctions are provided. After each piece of pseudocode, its main features are detailed.
Main loop
As population-based techniques, memetic algorithms feature a well-characterized loop at the top level:
process Memetic-Algorithm () → Agent []
  variables
    pop : Agent []
  begin
    pop ← Generate-Initial-Population ()
    repeat
      pop ← Do-Generation (pop)
      if Converged (pop) then
        pop ← Restart-Population (pop)
      endif
    until MA-Termination-Criterion (pop)
    return best of pop
  end
As its name suggests, the function Generate-Initial-Population creates an initial population to feed the main loop. The variable pop is a collection storing the agents.
Do-Generation is the core of the algorithm: it is the key and most computationally expensive step, evolving the population one generation at a time. It also presents most of the opportunities for parallelizing the algorithm.
The function Converged must be instantiated for each specific problem; it tests whether the population has converged. Premature convergence is a common pitfall of population-based heuristics, in which the whole population converges too early on a mediocre solution. It is such a common concern that, at least in a template as general as this one, it is best to plan for it rather than merely try to avoid it. Common convergence criteria are low rates of improvement in solutions, low levels of information entropy in the population, or closeness of all or most agents in the fitness landscape (for a given metric on the search space). Whenever convergence is detected, the Restart-Population function is executed to try to mend the issue.
MA-Termination-Criterion must also be tailored for each problem instance, since deciding when to stop the algorithm is a very application-specific matter. There are many possible criteria; the most usual are:
limited amounts of execution time or iteration steps of the main loop
low rates of improvement in solutions (after taking into account any restarts of the population)
a heuristic combination of the above.
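The template above can be made concrete. Below is a minimal runnable Python sketch of the same loop, maximizing a toy "one-max" fitness over bit strings; every helper mirrors a pseudocode function from the text, but the concrete bodies (the generation step, the convergence test, the population sizes) are illustrative assumptions, not part of the general template.

```python
import random

INITIAL_POP_SIZE = 20
N_BITS = 16

def fitness(agent):
    return sum(agent)                      # toy fitness: count of ones

def generate_new_solution():
    return [random.randint(0, 1) for _ in range(N_BITS)]

def generate_initial_population():
    return [generate_new_solution() for _ in range(INITIAL_POP_SIZE)]

def converged(pop):
    return len({tuple(a) for a in pop}) == 1   # all agents identical

def restart_population(pop):
    best = max(pop, key=fitness)           # preserve the best agent, reset the rest
    return [best] + [generate_new_solution() for _ in range(len(pop) - 1)]

def do_generation(pop):
    # toy generation step: recombine the two fittest agents, mutate one bit,
    # and replace the worst agent with the result
    a, b = sorted(pop, key=fitness, reverse=True)[:2]
    cut = random.randrange(1, N_BITS)
    child = a[:cut] + b[cut:]
    child[random.randrange(N_BITS)] ^= 1
    worst = min(range(len(pop)), key=lambda i: fitness(pop[i]))
    return pop[:worst] + [child] + pop[worst + 1:]

def memetic_algorithm(max_iters=500):
    pop = generate_initial_population()
    for _ in range(max_iters):             # MA-Termination-Criterion: iteration cap
        pop = do_generation(pop)
        if converged(pop):
            pop = restart_population(pop)
    return max(pop, key=fitness)           # return best of pop

random.seed(1)
best = memetic_algorithm()
```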
Setting up populations
Before entering the main loop, the population must be set up:
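One possible template, following the conventions of the main loop above; the call to Memetic-Improver is an assumption here, reflecting its nursery role described later in this section:

```
process Generate-Initial-Population () → Agent []
  variables
    pop : Agent []
  begin
    for i ← 1 to INITIAL_POP_SIZE do
      pop[i] ← Memetic-Improver (Generate-New-Solution ())
    endfor
    return pop
  end
```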
This function is very simple: a population of agents of a given size INITIAL_POP_SIZE is initialized. The initial agents are created by repeatedly calling Generate-New-Solution, which is commonly implemented in a purely random way. It could instead use heuristics, but such heuristics should then also be integrated into Memetic-Improver, which is used by other functions such as Restart-Population.
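The restart step can be templated in the same style; the sketch below is a plausible reconstruction consistent with the description that follows, with NUM_PRESERVED and NEW_POP_SIZE as in the text:

```
process Restart-Population (pop : Agent []) → Agent []
  variables
    newpop : Agent []
  begin
    newpop ← Select-Candidates (pop, NUM_PRESERVED)
    for i ← NUM_PRESERVED + 1 to NEW_POP_SIZE do
      newpop[i] ← Memetic-Improver (Generate-New-Solution ())
    endfor
    return newpop
  end
```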
In the event of premature convergence, this function tries to correct the situation. A certain number NUM_PRESERVED of agents from the converged population are kept; the rest are discarded. Usually NUM_PRESERVED is determined as a percentage of the converged population size, but this is not mandatory.
The function Select-Candidates selects the survivors. The selection can use one of several criteria, or a combination of them:
maintaining the best agents (ranked by their fitness)
trying to select a sample as diverse as possible
The rest of the population is reset much as in Generate-Initial-Population. This may seem inefficient: the algorithm may already have learned about the fitness landscape, so creating new random solutions may seem futile at first glance. Two points must be stressed:
Even if an old solution is created again, letting it cooperate with agents that have already been transformed through many main-loop iterations can be fruitful, since it may carry useful characteristics that were lost early on.
The heuristics applied by Memetic-Improver may have memory, and so be aware of candidate solutions previously evaluated in explored regions of the fitness landscape.
The population size may vary if deemed necessary, so an arbitrary value NEW_POP_SIZE is used here, although most algorithms maintain a constant population size.
Memetic optimization
As noted in the explanation of the main loop, this function is the crux of the algorithm, since it implements its most important part. It closely resembles the usual steps of a genetic algorithm:
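A plausible template for this step, consistent with the competition, operator and update phases described below (the Apply-Operator mechanism for applying each memetic operator to the whole population is an assumption):

```
process Do-Generation (pop : Agent []) → Agent []
  variables
    breeders, newpop : Agent []
  begin
    breeders ← Select-From-Population (pop)
    newpop ← breeders
    for op in MEMETIC_OPERATOR [] do
      newpop ← Apply-Operator (op, newpop)
    endfor
    return Update-Population (pop, newpop)
  end
```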
At first, agents compete via the function Select-From-Population: a fraction of the population with good fitness is chosen as cooperators (the ranking is commonly established according to each agent's fitness value). The competition may be deterministic (only the best agents cooperate) or probabilistic (fitter agents are more likely to cooperate). Several ranking methods are available; to name just a few:
Ranking by absolute fitness.
Ranking by normalized fitness (fitness divided by the best fitness and scaled to a more convenient range of values).
Ranking by tournament selection (agents are randomly confronted with other agents in paired tournaments; the agent with the better fitness wins each tournament, and an agent is ranked according to the number of tournaments won).
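As an illustration, tournament ranking can be sketched in a few lines of Python; the bit-string agents and the fitness function are toy assumptions:

```python
import random

def fitness(agent):
    return sum(agent)  # toy fitness: count of ones

def tournament_rank(pop, rounds=50, rng=random):
    """Rank agents by the number of paired tournaments they win."""
    wins = [0] * len(pop)
    for _ in range(rounds):
        i, j = rng.sample(range(len(pop)), 2)   # random pairing
        if fitness(pop[i]) >= fitness(pop[j]):
            wins[i] += 1
        else:
            wins[j] += 1
    # indices of agents, best-ranked first
    return sorted(range(len(pop)), key=lambda k: wins[k], reverse=True)

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(6)]
ranking = tournament_rank(pop)
```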
After the competition phase, agents are modified by applying to them a sequence of memetic operators MEMETIC_OPERATOR[]. Note that these operators manipulate the whole population rather than just one or two agents at a time, as is common in traditional descriptions of genetic algorithms. This is motivated by a more abstract concept of operator, encompassing any possible manipulation of the population. In the context of EA-based memetic algorithms, common operators include:
Recombination: this operator selects as many subsets of agents as the desired final population size, and combines the agents of each subset into a new agent, returning the set of new agents as the new population. The subsets can be of any size. Recombination must be carefully designed to preserve valuable improvements from the original agents, and to combine them sensibly, according to the problem instance being solved. This kind of operator is the main driver of cooperation in the algorithm.
Mutation: this sort of operator is intended to be a source of variability in the population, as a means to avoid convergence happening too quickly. It must be carefully designed to not be too disruptive, and to take advantage of problem knowledge when modifying existing solutions.
In a fitness landscape, neighbourhoods can be defined in several ways. Left: a strict definition, where the neighbourhood comprises only the directly reachable neighbours. Right: a looser definition, where the neighbourhood is the set of neighbours reachable by some criterion, such as a local search operator.
Local search: this class of operator performs a local search in the fitness landscape for each agent. Indeed, a good reason to speak of agents instead of individuals (as is common in genetic algorithms) is that these operators enable them to actively search for good solutions across their neighbourhood in the fitness landscape. Common choices are simulated annealing, hill climbing and tabu search (as can be seen, the term local search is loosely applied), but these must be complemented with domain-specific techniques, either embedded in them or used as additional operators. For example, in problems related to graph colouring, tabu search outperforms simulated annealing,[3] so the former should be preferred over the latter in these contexts.
Finally, the main population must be updated. Since it cannot be arbitrarily large, some agents must be discarded. The function Update-Population takes both the original population and the newly derived agents, and performs this task. Criteria for both selecting the fittest and preserving diversity are usually taken into account.
In turn, the Memetic-Improver function is a memetic-based local search:
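A plausible template, consistent with the greedy hill-climbing behaviour described below (the Apply-Operator mechanism is an assumption):

```
process Memetic-Improver (agent : Agent) → Agent
  variables
    new : Agent
  begin
    repeat
      new ← Apply-Operator (MEMETIC_IMPROVER_OPERATOR, agent)
      if new is better than agent then
        agent ← new
      endif
    until MI-Termination-Condition (agent)
    return agent
  end
```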
This function can use any memetic operator based on local search. It is used as a sort of nursery, raising the fitness of newly created agents to a minimum standard before they are released to compete and cooperate in the main step (Do-Generation) of the algorithm. For the sake of execution speed, it is usually a greedy local hill-climber (that is to say, the operator MEMETIC_IMPROVER_OPERATOR is applied repeatedly, but only changes to better neighbouring solutions are accepted).
The finishing criterion coded in MI-Termination-Condition can be:
a fixed number of iterations
a slow rate of improvement
reaching the local optimum
any combination of the above
Design notes
Given the ad hoc nature of memetic algorithms and the inescapable NFL theorem, no general design approach can be provided as the best choice for every problem. In other words, the sole golden rule is that there is no golden rule. Nevertheless, there exists a wealth of general guidelines and do's and don'ts.
Codification and representation
Broadly speaking, fitness landscapes are shaped by the chosen problem representation or representations. Representation should not be confused with codification: the former is the way heuristics rank and manipulate candidate solutions, while the latter is merely the way the solutions are stored in memory. Clever representations are the key to success. Usually, for each (problem, heuristic) pair, one or a few best-suited representations exist. Since many different heuristics are likely to be used in a memetic algorithm, care must be taken when combining representations for several heuristics, or when switching between them.
Of course, using a codification poorly related to the desired representation degrades performance, since the algorithm will be hampered by the conceptual distance between the data stored in memory and the operations performed on them. But representation is far more critical, and it should drive the design process. This rule of thumb stands in utter contrast to the codification-driven design observed among early practitioners of evolutionary algorithms.
Shaping fitness landscapes
A fitness landscape sketch. The arrows indicate the possible routes of an agent on the landscape, and the points A and C are local optima. The red ball is an agent moving on the landscape. Of course, this is an oversimplification: real-life fitness landscapes are multi-dimensional. Global optima may be very difficult to find. Illustration by C.O. Wilke, 2001.
Representations are generally structured as tuples of elements. Each representation of the problem induces a fitness landscape in which the algorithm will search. If a solution can be transformed into another by the heuristics, the two are said to be neighbours. If a solution has fitness similar to its neighbours, the neighbourhood is said to be smooth; otherwise, it is said to be rugged.
As a general rule, the smoother and simpler the landscape, the easier it is to find good (even optimal) solutions; however, this advice is too vague. There are several guidelines to drive the design of the landscape:
minimizing fitness variance: for a given neighbourhood, fitness variance is the variance of the fitness levels of its neighbours. It can be defined for wider regions of the landscape as the average of the local variances of their neighbourhoods. A fitness function with high variance is a very poor guide for the search process, since it makes the heuristics harder to apply and severely decreases the performance of the algorithm.
maximizing fitness correlation: likewise to the previous one, this characteristic is the correlation of fitness levels between neighbours. In highly correlated areas, good solutions are likely to have good neighbours, and the heuristics will easily climb to the best solutions of the neighbourhood.
minimizing epistasis: in genetics, epistasis refers to the interaction between several genes in defining an individual's traits. In evolutionary computation, if individual genes are equated with single elements of the tuples (recall that tuples are representations of solutions), epistasis means non-linear interaction between the elements of the tuples. It is a common source of fitness variance, makes the landscape very rugged and complex, and is highly undesirable.
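These quantities can be estimated empirically. The sketch below compares the average neighbourhood fitness variance of a smooth landscape (one-max) and a deliberately rugged one, over one-bit-flip neighbourhoods; both fitness functions and the sampling scheme are toy assumptions:

```python
import random
from statistics import mean, pvariance

def neighbours(agent):
    """All solutions reachable by flipping one bit (a strict neighbourhood)."""
    for i in range(len(agent)):
        yield agent[:i] + [1 - agent[i]] + agent[i + 1:]

def smooth_fitness(agent):
    return sum(agent)                     # one-max: neighbours differ by exactly 1

def rugged_fitness(agent):
    return hash(tuple(agent)) % 100       # arbitrary-looking fitness: high variance

def avg_neighbourhood_variance(fitness, samples):
    # average, over sampled solutions, of the variance of neighbour fitnesses
    return mean(pvariance([fitness(n) for n in neighbours(a)]) for a in samples)

random.seed(0)
samples = [[random.randint(0, 1) for _ in range(12)] for _ in range(30)]
smooth_var = avg_neighbourhood_variance(smooth_fitness, samples)
rugged_var = avg_neighbourhood_variance(rugged_fitness, samples)
```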
Many problems have a naturally well-defined fitness function, while others do not. For example, each candidate solution to a SAT problem carries only two possible outcomes: satisfiable or unsatisfiable. Search heuristics are unable to tackle such uninformative landscapes, since a fitness function which returns only two values gives no guidance for the search. In such cases, an estimate of the goodness of solutions must be used as the fitness function. This guess is commonly an estimate of the distance between the evaluated solution and the closest valid solution.
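For satisfiability, one common surrogate of this kind is to count satisfied clauses, in the style of MAX-SAT. A minimal sketch, where the clause encoding (signed integers for literals) and the example formula are illustrative assumptions:

```python
def clause_count_fitness(assignment, clauses):
    """assignment: dict mapping variable number -> bool.
    clauses: list of clauses; each clause is a list of signed ints,
    where literal v means variable v and -v means its negation."""
    satisfied = 0
    for clause in clauses:
        # a clause is satisfied if any of its literals is true
        if any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            satisfied += 1
    return satisfied

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
clauses = [[1, -2], [2, 3], [-1, -3]]
f = clause_count_fitness({1: True, 2: True, 3: True}, clauses)   # 2 of 3 clauses
```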
Tradeoffs and related issues
As has been shown, there are many issues, both general and specific, to take into account when designing a memetic algorithm. Each consideration affects the performance of the algorithm in a specific way, and trying to improve performance by tuning only one of them usually worsens it overall, as the other considerations are neglected. The conflict can be one of design (a design optimal for one characteristic can hamper another) or of resource allocation (given finite amounts of memory and CPU time, the algorithm must apportion them among the heuristics). The designer must assign resources to each one, keeping in mind that, beyond some point, allocating more resources to one at the expense of the others will not significantly improve overall performance (see Amdahl's law). Thus, for each set of paired characteristics, a trade-off must be found. Unfortunately, fine-tuning these trade-offs is a difficult problem, and there are no general guidelines. Several common trade-off scenarios are described below:
Exploration vs. exploitation: recalling the fitness landscape metaphor, memetic algorithms use a population of agents searching for good solutions over the landscape. Exploration refers to probing new regions of the landscape, in the hope of finding either better solutions or regions in which the search is easier. Exploitation, on the other hand, means thoroughly searching a given region of the landscape in order to find its local optima. In memetic algorithms, exploration is commonly achieved by the population of agents as a whole, while each agent exploits a given region or regions of the landscape. Some constraints must be met:
In order to achieve an effective exploration, the population must have a minimal size and must run for a minimal number of iterations.
Each agent needs a minimal amount of time for running its local search operators with a reasonable chance of success, that is to say, of finding the neighbourhood's best solutions. In this regard, neighbourhoods are defined more loosely than in the previous section: instead of strictly comprising only the neighbours of a candidate solution, they comprise wider regions of the landscape around it (see the neighbourhood figure above).
Restarting the population vs. high mutability: as explained before, memetic algorithms must overcome the issue of premature convergence. It is usually prevented by two means, both of them termed disruptive: restarting the population, and keeping high mutability rates. These rates can come from random mutation operators, possibly combined with blind (and/or random) recombination operators. While both approaches can be used together, they prevent convergence in fundamentally different ways. Using a metaphor borrowed from simulated annealing, let temperature stand for the variability rate: restarts amount to long periods of low temperature punctuated by short bursts of very high temperature, whereas disruptive mutations keep the temperature relatively high throughout the whole process. Both restart frequency and mutation rates can be adjusted dynamically, seeking the best temperature profile for each problem.
Instead of a single, monolithic population, the agents may be arranged in several small subpopulations, allowing exchange of agents between certain pairs of subpopulations. The connectivity should be determined heuristically, on sound grounds.
Population arrangement: population-based algorithms have traditionally been improved by tuning the heuristics, while the population arrangement has remained essentially the same for decades: any member of the population can interact with any other. Nevertheless, several different population structures have been tested in recent years, whose common trait is to distribute the agents among several subpopulations.[4] In many instances, structured populations allow for specialization within each subpopulation; that is to say, several qualitatively different, good ways of constructing solutions to the problem may arise. This sometimes amounts to a more effective search of the fitness landscape. Several points must be taken into account when using several subpopulations:
Connectivity: a key feature of structured algorithms is agent exchange between subpopulations. This way, good, novel solutions can be introduced into the subpopulations (similar to a gentle restart of the population, but with new and improved agents). To decide which subpopulations may exchange agents, the subpopulations can be arranged in an n-dimensional lattice, a fully connected graph, or a less structured one.
Immigration policy: agent exchange between subpopulations must be carefully planned. An agent should not be sent to a subpopulation if its fitness is very high relative to the average fitness of the receiving subpopulation, because premature convergence would be the most likely outcome: the agent's brood, with such high fitness, would obliterate the endemic agents and become the sole strain in that subpopulation. A related point is the exchange rate of agents between subpopulations, which must be tuned to prevent both stagnation of the search and premature convergence. Further considerations include, for example, employing different memetic operators (or different subsets of them) in each subpopulation, so that each operator can do its best without interference from the others, which might disrupt its heuristic strategies.
Granularity: there may be a few subpopulations of moderate or even large size, or many tiny subpopulations of very few agents, or any proportion in between. In the extreme case of single-agent subpopulations arranged in a regular lattice (commonly a square 2-D or 3-D one), the algorithm is called cellular,[5] as it can be seen as a cross between a memetic/genetic algorithm and a cellular automaton.
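A minimal Python sketch of such a structured population, assuming a ring connectivity, a toy one-max fitness, and illustrative evolution and migration policies (all parameter values are arbitrary):

```python
import random

def fitness(agent):
    return sum(agent)  # toy fitness: count of ones

def evolve_step(island, rng):
    # toy local evolution: mutate a copy of a random agent,
    # and replace the island's worst agent if the mutant is better
    mutant = rng.choice(island)[:]
    mutant[rng.randrange(len(mutant))] ^= 1
    worst = min(range(len(island)), key=lambda j: fitness(island[j]))
    if fitness(mutant) > fitness(island[worst]):
        island[worst] = mutant

def migrate(islands):
    # ring connectivity: each island sends a copy of its best agent
    # to the next island, replacing that island's worst agent
    bests = [max(isl, key=fitness)[:] for isl in islands]
    for i, isl in enumerate(islands):
        worst = min(range(len(isl)), key=lambda j: fitness(isl[j]))
        isl[worst] = bests[i - 1]

rng = random.Random(0)
islands = [[[rng.randint(0, 1) for _ in range(10)] for _ in range(5)]
           for _ in range(4)]
for gen in range(200):
    for isl in islands:
        evolve_step(isl, rng)
    if gen % 25 == 24:       # periodic, low-rate migration
        migrate(islands)
best = max((a for isl in islands for a in isl), key=fitness)
```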
Recombination operators
Simplest crossover for classical genetic algorithms: individuals are strings of bits, blindly manipulated. A random position is selected, and information from each parent is exchanged to create two new agents, without regard for the disruptions of fitness the careless operation may cause. In contrast, memetic recombination operators are required to take care not to be disruptive.
Classical recombination operators have traditionally been blind. This means mixing up traits from the parents (that is to say, the agents whose candidate solutions are being merged into a hopefully better candidate) without caring about their disruption, that is, nullifying the contribution of a trait by breaking it or by combining it with incompatible traits. In sharp contrast, memetic recombination operators are usually designed to avoid disruptive effects. This desirable property is commonly accomplished by using domain-specific knowledge to select the relevant information from each parent. The issue is closely related to the chosen representation of the solutions, which should be designed to allow an effective assessment of the goodness of traits or subsets of traits. For instance, in the travelling salesman problem, a memetic operator would take care not to sever the parents' paths in the middle of optimal sub-paths, but would maintain them as much as possible, and would try to paste together the parents' contributions without adding unsound sub-paths.
Formally speaking, a memetic recombination operator can exhibit several properties. These properties are not black-and-white features, but rather characteristics which each operator can exhibit to a certain extent. These are not good properties per se, but a means of classifying operators, and points to take into account when driving their design:
respect: a respectful operator detects the traits which are common to all parents, and includes them in the new agent's candidate solution.
assortment: a properly assorting operator can generate descendants from any possible combination of compatible parental traits. This property contributes to a better exploration of the search space.
transmission: a transmitting operator merges traits from the parents without inserting any new information; that is to say, every trait seen in the new agent comes from one of the parents. A transmitting operator thus has more limited opportunities to use domain-specific knowledge, so it should not be used unless this knowledge is embodied in the algorithm in some other way.
As can easily be seen, these three properties concern the selection of the parental traits to be transmitted to the new agent. When actively adding domain-specific knowledge, the selection of non-parental traits to be added becomes an important issue too; it is indispensable whenever merging traits in a straightforward way is unsound. Two general strategies can be used:
locally-optimal completion: the new agent is generated without worrying about unsound merging decisions; thereafter, a local or greedy improver is applied to fix it up.
globally-optimal completion: possible combinations of traits are enumerated (implicitly or explicitly) and the best one is selected.
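The respect and transmission properties can be illustrated on bit strings (a toy representation; the operator below is an illustrative sketch that keeps the traits common to both parents and fills each remaining position from one parent or the other):

```python
import random

def respectful_recombination(p1, p2, rng=random):
    """Copy traits shared by both parents; choose the rest from either parent.
    Every trait in the child comes from a parent (transmission), and all
    common traits are preserved (respect)."""
    child = []
    for a, b in zip(p1, p2):
        child.append(a if a == b else rng.choice((a, b)))
    return child

random.seed(0)
p1 = [1, 1, 0, 0, 1, 0]
p2 = [1, 0, 0, 1, 1, 1]
child = respectful_recombination(p1, p2)
```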
Applications
Memetic algorithms are the subject of intense scientific research (a scientific journal devoted to them is about to be launched) and have been successfully applied to a multitude of real-world problems. Although many people employ techniques closely related to memetic algorithms, alternative names such as hybrid genetic algorithms are also in use. Furthermore, many people describe their memetic techniques simply as genetic algorithms; the widespread use of this misnomer hampers the assessment of the total number of applications.
Recent Advances in Memetic Algorithms, Hart, W.; Krasnogor, N.; Smith, J. (Eds.), Springer, 2005 (ISBN 978-3-540-22904-9)
Special Issue on Memetic Algorithms, IEEE Transactions on Systems, Man and Cybernetics - Part B, Vol. 37, No. 1, Ong, Y.S.; Krasnogor, N.; Ishibuchi, H. (Eds.), February 2007.
Evolutionary Computation, MIT Press (ISSN 1063-6560)
IEEE Transactions on Systems, Man and Cybernetics - Part B, IEEE publication (ISSN 1083-4419)
As of 2008, Memetic Computing is a forthcoming scientific journal, to be published by Springer-Verlag (first issue expected January 2009) (ISSN 1865-9292)
^ Whitley, L. (1995). "A review of models for simple genetic algorithms and cellular genetic algorithms". In Applications of Modern Heuristic Methods. Wiley. pp. 55–67. ISBN 978-0471962809.
^ Moscato, P.; Cotta, C. (2005). "A Gentle Introduction to Memetic Algorithms". In Handbook of Metaheuristics. International Series in Operations Research & Management Science, Vol. 57. Springer. pp. 105–144. ISBN 978-1-4020-7263-5.
^ Ichimura, T.; Kuriyama, Y. (1998). "Learning of neural networks with parallel hybrid GA using a royal road function". In IEEE International Joint Conference on Neural Networks, Vol. 2. New York, NY. pp. 1131–1136.
^ Aguilar, J.; Colmenares, A. (1998). "Resolution of pattern recognition problems using a hybrid genetic/random neural network learning algorithm". Pattern Analysis and Applications 1 (1): 52–61.
^ Ridao, M.; Riquelme, J.; Camacho, E.; Toro, M. (1998). "An evolutionary and local search algorithm for planning two manipulators motion". Lecture Notes in Computer Science 1416. Springer-Verlag: 105–114.
^ Haas, O.; Burnham, K.; Mills, J. (1998). "Optimization of beam orientation in radiotherapy using planar geometry". Physics in Medicine and Biology 43 (8): 2179–2193.
^ Harris, S.; Ifeachor, E. (1998). "Automatic design of frequency sampling filters by hybrid genetic algorithm techniques". IEEE Transactions on Signal Processing 46 (12): 3304–3314.
^ Augugliaro, A.; Dusonchet, L.; Riva-Sanseverino, E. (1998). "Service restoration in compensated distribution networks using a hybrid genetic algorithm". Electric Power Systems Research 46 (1): 59–66.
^ Wehrens, R.; Lucasius, C.; Buydens, L.; Kateman, G. (1993). "HIPS, a hybrid self-adapting expert system for nuclear magnetic resonance spectrum interpretation using genetic algorithms". Analytica Chimica Acta 277 (2): 313–324.
^ França, P.; Mendes, A.; Moscato, P. (1999). "Memetic algorithms to minimize tardiness on a single machine with sequence-dependent setup times". In Proceedings of the 5th International Conference of the Decision Sciences Institute. Athens, Greece. pp. 1708–1710.
^ Costa, D. (1995). "An evolutionary tabu search algorithm and the NHL scheduling problem". INFOR 33: 161–178.
^ Aickelin, U. (1998). "Nurse rostering with genetic algorithms". In Proceedings of the Young Operational Research Conference 1998. Guildford, UK.
^ Burke, E.; Smith, A. (1999). "A memetic algorithm to schedule planned maintenance for the national grid". Journal of Experimental Algorithmics (4): 1–13.
^ Areibi, S.; Yang, Z. (2004). "Effective memetic algorithms for VLSI design automation = genetic algorithms + local search + multi-level clustering". Evolutionary Computation 12 (3). MIT Press: 327–353.
^ Merz, P.; Zell, A. (2002). "Clustering gene expression profiles with memetic algorithms". In Parallel Problem Solving from Nature — PPSN VII. Springer. pp. 811–820. doi:10.1007/3-540-45712-7_78.