This book bridges the widening gap between two crucial constituents of computational intelligence: the rapidly advancing technologies of machine learning in the digital information age, and the relatively slow-moving field of general-purpose search and optimization algorithms. With this in mind, the book offers a data-driven view of optimization through the framework of memetic computation (MC). The authors provide a summary of the complete timeline of research activities in MC – beginning with the initiation of memes as local search heuristics hybridized with evolutionary algorithms, to their modern interpretation as computationally encoded building blocks of problem-solving knowledge that can be learned from one task and adaptively transmitted to another. In light of recent research advances, the authors emphasize the further development of MC as a simultaneous problem learning and optimization paradigm with the potential to showcase human-like problem-solving prowess; that is, by equipping optimization engines to acquire increasing levels of intelligence over time through embedded memes learned independently or via interactions. In other words, the adaptive utilization of available knowledge memes makes it possible for optimization engines to tailor custom search behaviors on the fly – thereby paving the way to general-purpose problem-solving ability (or artificial general intelligence). In this regard, the book explores some of the latest concepts from the optimization literature, including the sequential transfer of knowledge across problems, multitasking, and large-scale (high-dimensional) search, systematically discussing associated algorithmic developments that align with the general theme of memetics. The presented ideas are intended to be accessible to a wide audience of scientific researchers, engineers, students, and optimization practitioners who are familiar with the commonly used terminologies of evolutionary computation.
A full appreciation of the mathematical formalizations and algorithmic contributions requires an elementary background in probability, statistics, and the concepts of machine learning. Prior knowledge of surrogate-assisted/Bayesian optimization techniques is useful, but not essential.
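The classical interpretation of memes described above – local search heuristics hybridized with an evolutionary algorithm – can be illustrated with a minimal sketch. The function `memetic_minimize` and all its parameters are illustrative choices, not the book's algorithms: the "meme" here is a simple Gaussian hill climber that refines each offspring (Lamarckian learning).

```python
import random

def memetic_minimize(f, dim, pop_size=20, generations=50, seed=0):
    """Minimal memetic algorithm: an evolutionary loop whose offspring
    are refined by a local-search 'meme' (a simple hill climber)."""
    rng = random.Random(seed)

    def hill_climb(x, steps=10, step_size=0.1):
        # The 'meme': a local refinement heuristic applied to offspring.
        best, best_f = x, f(x)
        for _ in range(steps):
            cand = [xi + rng.gauss(0, step_size) for xi in best]
            cf = f(cand)
            if cf < best_f:
                best, best_f = cand, cf
        return best

    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f)
        parents = pop[: pop_size // 2]          # truncation selection
        offspring = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            child = [(ai + bi) / 2 + rng.gauss(0, 0.3) for ai, bi in zip(a, b)]
            offspring.append(hill_climb(child))  # Lamarckian refinement
        pop = parents + offspring
    return min(pop, key=f)

# Usage: minimize the sphere function in 3 dimensions.
sphere = lambda x: sum(v * v for v in x)
best = memetic_minimize(sphere, dim=3)
```

The defining feature is the interleaving of global evolutionary variation with local refinement; removing the `hill_climb` call reduces this to a plain evolutionary algorithm.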
This book compiles recent advances in evolutionary algorithms for dynamic and uncertain environments within a unified framework. The book is motivated by the fact that some degree of uncertainty is inevitable in characterizing any realistic engineering system. The discussion covers representative methods for addressing the major sources of uncertainty in evolutionary computation, including handling noisy fitness functions, using approximate fitness functions, searching for robust solutions, and tracking moving optima.
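Of the uncertainty sources listed above, noisy fitness functions admit the simplest classical remedy: explicit averaging, i.e., re-evaluating each candidate several times and selecting on the sample mean. This is a generic illustration of that idea, not a method from the book; the function name `resampled_fitness` and the toy quadratic are illustrative assumptions.

```python
import random

def resampled_fitness(noisy_f, x, n_samples=20):
    """Explicit averaging: evaluate a noisy fitness function several
    times and return the sample mean. Averaging n samples reduces the
    variance of the estimate by a factor of n."""
    return sum(noisy_f(x) for _ in range(n_samples)) / n_samples

rng = random.Random(1)
true_f = lambda x: x * x                       # underlying objective
noisy_f = lambda x: true_f(x) + rng.gauss(0, 1.0)  # observed fitness

# A single noisy evaluation can easily mislead selection; the resampled
# estimate is much closer to the true value true_f(2.0) = 4.0.
est = resampled_fitness(noisy_f, 2.0, n_samples=100)
```

The trade-off, as the noise-handling literature stresses, is evaluation cost: each averaged evaluation consumes `n_samples` fitness calls that could otherwise fund a larger population or more generations.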
A remarkable facet of the human brain is its ability to manage multiple tasks with apparent simultaneity. Knowledge learned from one task can then be used to enhance problem-solving in other related tasks. In machine learning, the idea of leveraging relevant information across related tasks as inductive biases to enhance learning performance has attracted significant interest. In contrast, attempts to emulate the human brain’s ability to generalize in optimization – particularly in population-based evolutionary algorithms – have received little attention to date. Recently, a novel evolutionary search paradigm, Evolutionary Multi-Task (EMT) optimization, has been proposed in the realm of evolutionary computation. In contrast to traditional evolutionary searches, which solve a single task in a single run, an evolutionary multi-tasking algorithm conducts searches concurrently on multiple search spaces corresponding to different tasks or optimization problems, each possessing a unique function landscape. By exploiting the latent synergies among distinct problems, the superior search performance of EMT optimization in terms of solution quality and convergence speed has been demonstrated on a variety of continuous, discrete, and hybrid (mixture of continuous and discrete) tasks. This book discusses the foundations and methodologies of developing evolutionary multi-tasking algorithms for complex optimization, including in domains characterized by factors such as multiple objectives of interest, high-dimensional search spaces, and NP-hardness.
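The core mechanism described above – a single population searching several task landscapes at once, with occasional cross-task mating acting as implicit knowledge transfer – can be sketched in a multifactorial style. This is a simplified illustration under assumed choices (unified search space `[0,1]^dim`, mean crossover, a random-mating probability `rmp`, per-task truncation selection), not the algorithms developed in the book.

```python
import random

def emt_minimize(tasks, dim, pop_size=40, generations=60, rmp=0.3, seed=0):
    """Minimal evolutionary multi-tasking sketch: one population over a
    unified space [0,1]^dim; each individual specializes on one task
    (its skill factor), and cross-task crossover with probability `rmp`
    lets genetic material transfer between tasks."""
    rng = random.Random(seed)
    K = len(tasks)
    # Each individual is (genome, skill_factor); skills start balanced.
    pop = [([rng.random() for _ in range(dim)], i % K) for i in range(pop_size)]

    def fitness(ind):
        x, k = ind
        return tasks[k](x)  # evaluated only on the assigned task

    for _ in range(generations):
        offspring = []
        for _ in range(pop_size):
            (xa, ka), (xb, kb) = rng.sample(pop, 2)
            if ka == kb or rng.random() < rmp:
                # Crossover; when ka != kb this is cross-task transfer.
                child = [(a + b) / 2 for a, b in zip(xa, xb)]
            else:
                child = list(xa)
            child = [min(1.0, max(0.0, g + rng.gauss(0, 0.05))) for g in child]
            offspring.append((child, rng.choice([ka, kb])))
        # Truncation selection within each skill group, so no task dies out.
        merged = pop + offspring
        pop = []
        for k in range(K):
            group = sorted((ind for ind in merged if ind[1] == k), key=fitness)
            pop.extend(group[: pop_size // K])
    return [min((ind for ind in pop if ind[1] == k), key=fitness)[0]
            for k in range(K)]

# Usage: two related tasks with nearby optima (0.5 and 0.6 per gene),
# so cross-task transfer is helpful rather than harmful.
task_a = lambda x: sum((v - 0.5) ** 2 for v in x)
task_b = lambda x: sum((v - 0.6) ** 2 for v in x)
best_a, best_b = emt_minimize([task_a, task_b], dim=3)
```

Setting `rmp=0` disables cross-task mating and reduces the sketch to two independent single-task searches, which is the natural baseline for measuring the benefit of transfer.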