Manipulation of Convergence in Evolutionary Systems
  • Gearoid Murphy, Conor Ryan
A Simple Powerful Constraint for Genetic Programming
This paper demonstrates the ability of Hereditary Repulsion to perform well on a range of diverse problem domains and traces the source of this high quality performance to a pleasingly simple constraint at the heart of the HR algorithm.
Improving Module Identification and Use in Grammatical Evolution
Grammar Augmentation through Module Encapsulation (GAME) was tested on seven problems from three different domains; it significantly improved performance on three problems and showed no harmful effects on any problem.
An Exploration of Generalization and Overfitting in Genetic Programming: Standard and Geometric Semantic Approaches
This dissertation explores the task of computational learning and the related concepts of generalization and overfitting in the context of Genetic Programming (GP), a computational method inspired by natural evolution that combines a set of primitive functions and terminals with few constraints on the structure of the evolved models.
Genetic Programming Theory and Practice XIII
These contributions, written by the foremost international researchers and practitioners of Genetic Programming (GP), explore the synergy between theoretical and empirical results on real-world problems.
Bayesian Inference to Sustain Evolvability in Genetic Programming
This paper proposes a new framework, Recurrent Bayesian Genetic Programming (rbGP), to sustain steady convergence in Genetic Programming (GP) and effectively improve its ability to find superior solutions that generalise well.
Genetic Programming Theory and Practice VIII
Large-scale, real-world applications of GP to a variety of problem domains are presented through in-depth accounts of the latest and most significant results in GP.
Age-fitness pareto optimization
A multi-objective method for avoiding premature convergence in evolutionary algorithms, demonstrating a three-fold performance improvement over comparable methods on the symbolic regression problem.
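The selection rule behind age-fitness Pareto optimization can be sketched as follows. This is a minimal illustration under assumed conventions (each individual carries a genotypic age alongside its fitness, and survivors are the Pareto front of low age and high fitness), not the authors' implementation:

```python
# Minimal sketch of age-fitness Pareto survival selection (illustrative only).
# Each individual is an (age, fitness) pair; lower age and higher fitness
# are both preferred, so old-but-mediocre individuals are culled while
# young newcomers survive even with low fitness.

def dominates(a, b):
    """True if individual a Pareto-dominates b (no older, no less fit, not identical)."""
    age_a, fit_a = a
    age_b, fit_b = b
    return age_a <= age_b and fit_a >= fit_b and (age_a, fit_a) != (age_b, fit_b)

def pareto_front(population):
    """Individuals not dominated by any other survive to the next generation."""
    return [p for p in population if not any(dominates(q, p) for q in population)]

pop = [(1, 0.9), (5, 0.95), (3, 0.4), (2, 0.9)]
print(pareto_front(pop))  # → [(1, 0.9), (5, 0.95)]
```

Note how (2, 0.9) is eliminated: the younger individual (1, 0.9) matches its fitness at a lower age, which is exactly the pressure that keeps fresh genetic material alive.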
Targeted Data Collection using ParetoGP Adaptive Design-of-Experiments & Modeling
The approach presented involves developing model ensembles from a baseline data set that can be used to identify potential optima in the system response and identify regions of parameter space where the predictions are suspect (since in these regions the constituent models diverge).
Exploiting the path of least resistance in evolution
This work examines the behaviour of HR on the difficult Parity 5 problem using a population size of only 24 individuals and dramatically improves the consistency of the algorithm, resulting in a 70% success rate with the same small population.
Automated detection of nodules in the CT lung images using multi-modal genetic algorithm
A multi-modal genetic algorithm, augmented by an island model cooperating with a speciation module, is used to identify lung nodules in chest CT images; results show that the scheme can be efficiently applied to detect isolated or attached circular regions in the images.
Undirected Training of Run Transferable Libraries
A problem that can deceive the system into converging to a sub-optimal set of functions is introduced, and it is demonstrated that a much simpler, truly evolutionary, update strategy doesn't suffer from this problem, and exhibits far better optimization properties than the original strategy.
Genetic Programming II: Automatic Discovery of Reusable Programs
  • J. Koza
  • Computer Science
    Complex adaptive systems
  • 1994
Learning and lineage selection in genetic algorithms
  • G. Braught
  • Biology, Psychology
    Proceedings. IEEE SoutheastCon, 2005.
  • 2005
An investigation of the effects of individual learning on the evolution of one such trait, self-adaptive mutation rates, finds that the efficacy of the learning mechanism employed has a significant effect on the number of generations required for self-adaptation to evolve.
Genetic algorithm with age structure and its application to self-organizing manufacturing system
  • N. Kubota, T. Fukuda, F. Arai, K. Shimojima
  • Business
    ETFA '94: 1994 IEEE Symposium on Emerging Technologies and Factory Automation (SEIKEN Symposium), Novel Disciplines for the Next Century, Proceedings
  • 1994
The genetic algorithm has recently demonstrated its effectiveness on optimization problems, but it has two major weaknesses: premature local convergence and bias caused by genetic drift.
The hierarchical fair competition (HFC) model for parallel evolutionary algorithms
  • Jianjun Hu, E. Goodman
  • Computer Science, Biology
    Proceedings of the 2002 Congress on Evolutionary Computation. CEC'02 (Cat. No.02TH8600)
  • 2002
The HFC model for evolutionary computation is inspired by the stratified competition often seen in society and biology; its balanced exploration and exploitation, which avoids premature convergence, is demonstrated on a genetic programming example.
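The stratified-competition idea can be sketched with a few lines of code. The thresholds below are hypothetical; the point is only the mechanism, where each individual migrates to the highest fitness level whose admission threshold it meets, so strong individuals never compete directly against newcomers:

```python
# Sketch of HFC-style admission (illustrative thresholds, not the paper's values).
# Subpopulations are stratified by fitness; an individual is admitted to the
# highest level whose threshold its fitness satisfies.

THRESHOLDS = [0.0, 0.25, 0.5, 0.75]  # admission threshold per level, ascending

def admit(fitness):
    """Index of the highest fitness level this individual may enter."""
    return max(i for i, t in enumerate(THRESHOLDS) if fitness >= t)

print(admit(0.3))  # → 1
print(admit(0.8))  # → 3
```

Because level 0 has threshold 0.0, freshly generated random individuals always have somewhere to go, which is what sustains exploration throughout the run.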
Speciation as automatic categorical modularization
An evolutionary learning system is presented which follows this speciation-based approach to automatically create a repertoire of specialist strategies for a game-playing system, relieving the human effort of deciding how to divide and specialize.
Adaptation in natural and artificial systems
The founding work in the area of adaptation and modification, which aims to mimic biological optimization, and of some non-GA branches of AI.
ALPS: the age-layered population structure for reducing the problem of premature convergence
Analysis of the search behavior of ALPS finds that the offspring of individuals randomly generated mid-way through a run are able to move the population out of mediocre local optima to better parts of the fitness landscape.
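The age-layering mechanism behind ALPS can be sketched as follows. The age limits and helper names here are assumed for illustration, not Hornby's exact scheme: individuals compete only within the first layer whose age limit admits them, and the bottom layer is periodically refilled with randomly generated newcomers of age zero.

```python
# Sketch of ALPS-style age layering (illustrative limits and structure).

AGE_LIMITS = [3, 6, 12, float("inf")]  # max age admitted to each layer

def layer_for(age):
    """Index of the first age layer whose limit admits this age."""
    return next(i for i, limit in enumerate(AGE_LIMITS) if age <= limit)

def restart_bottom_layer(layers, size, new_individual):
    """Refill layer 0 with random newcomers (age 0) -- the mechanism that
    keeps injecting fresh genetic material throughout the run."""
    layers[0] = [new_individual() for _ in range(size)]
    return layers

print(layer_for(2))   # → 0
print(layer_for(4))   # → 1
print(layer_for(100)) # → 3
```

Because old individuals age out of the lower layers, the random mid-run newcomers mentioned in the abstract are never crushed by long-evolved competitors before their lineages have had time to improve.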
A Sequential Niche Technique for Multimodal Function Optimization
An algorithm based on a traditional genetic algorithm that iterates the GA, using knowledge gained during one iteration to avoid re-searching, on subsequent iterations, regions of problem space where solutions have already been found.
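The "avoid re-searching" step is typically realized with a derating function. The sketch below is one minimal, one-dimensional illustration (linear derating with an assumed niche radius), not the paper's exact formulation: fitness is suppressed near previously found optima so later GA iterations are pushed toward unexplored niches.

```python
# Sketch of sequential-niche fitness derating (illustrative, 1-D).
# Once an optimum is recorded, fitness within `radius` of it is scaled
# down, reaching zero at the optimum itself.

def derated_fitness(raw_fitness, x, found_solutions, radius=0.5):
    """Apply a linear derating penalty for each found optimum near x."""
    f = raw_fitness(x)
    for s in found_solutions:
        d = abs(x - s)
        if d < radius:
            f *= d / radius  # 0 at a found optimum, 1 at the niche edge
    return f

flat = lambda x: 1.0
print(derated_fitness(flat, 0.0, [0.0]))  # → 0.0 (sits on a found optimum)
print(derated_fitness(flat, 1.0, [0.0]))  # → 1.0 (outside the niche radius)
```

Each GA iteration then runs on the derated landscape, records its best individual as a new found solution, and the next iteration is repelled from it.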