About Me

I'm Joel, and I'm a postdoctoral researcher at the University of Texas at Austin. My research goal is to write algorithms inspired by natural evolution that continually produce a wild diversity of interesting and complex things.

A possible step along the way is the novelty search algorithm, which was the topic of my dissertation. The main idea is that the algorithm keeps trying to create things that are different from what it's seen in the past.
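
To make the idea concrete, below is a minimal Python sketch of the novelty search loop. It is only illustrative: the toy behavior characterization and parameter values are my own assumptions here, though the general scheme (novelty as mean distance to the k nearest neighbors among the population plus an archive of past behaviors) follows the papers below. This is not the reference implementation; see the Software section for that.

    # Minimal novelty-search sketch (illustrative only).
    # An individual's novelty is its mean distance to its k nearest neighbors
    # among the current population plus an archive of past behaviors.
    import random

    def behavior(genome):
        # Toy behavior characterization; real domains use e.g. a robot's
        # final position in a maze.
        rng = random.Random(sum(genome))
        return (sum(g + rng.uniform(-0.1, 0.1) for g in genome),
                sum(g * g for g in genome))

    def novelty(b, others, k=5):
        dists = sorted(((b[0] - o[0])**2 + (b[1] - o[1])**2) ** 0.5 for o in others)
        return sum(dists[:k]) / k  # includes self at distance 0 (a simplification)

    def evolve(pop_size=50, gens=30, archive_threshold=1.0):
        pop = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(pop_size)]
        archive = []
        for _ in range(gens):
            behaviors = [behavior(g) for g in pop]
            scores = [novelty(b, behaviors + archive) for b in behaviors]
            # Remember sufficiently novel behaviors so they are not rediscovered.
            archive += [b for b, s in zip(behaviors, scores) if s > archive_threshold]
            # Select for novelty alone; no objective fitness anywhere.
            ranked = [g for _, g in sorted(zip(scores, pop), reverse=True)]
            parents = ranked[:pop_size // 2]
            pop = [[w + random.gauss(0, 0.1) for w in random.choice(parents)]
                   for _ in range(pop_size)]
        return archive

    print("behaviors archived:", len(evolve()))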

I'm always interested in possible collaborations and happy to field questions, so please feel free to contact me.

Publications



2015

Enhancing Divergent Search through Extinction Events
Joel Lehman and Risto Miikkulainen
In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2015)

A challenge in evolutionary computation is to create representations as evolvable as those in natural evolution. This paper hypothesizes that extinction events, i.e. mass extinctions, can significantly increase evolvability, but only when combined with a divergent search algorithm, i.e. a search driven towards diversity (instead of optimality). Extinctions amplify diversity-generation by creating unpredictable evolutionary bottlenecks. Persisting through multiple such bottlenecks is more likely for lineages that diversify across many niches, resulting in indirect selection pressure for the capacity to evolve. This hypothesis is tested through experiments in two evolutionary robotics domains. The results show that combining extinction events with divergent search increases evolvability, while combining them with convergent search offers no similar benefit. The conclusion is that extinction events may provide a simple and effective mechanism to enhance performance of divergent search algorithms.
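
A rough sketch of how an extinction event plugs into an evolutionary loop is given below. The survivor fraction, period, and genome model are illustrative assumptions, not the paper's settings, and the divergent-selection step is left as a placeholder.

    # Sketch: periodic mass extinctions inside an evolutionary loop.
    # All parameters here are illustrative, not the paper's settings.
    import random

    def mutate(genome, sigma=0.1):
        return [g + random.gauss(0, sigma) for g in genome]

    def run(pop_size=100, gens=200, period=50, survivor_fraction=0.1):
        pop = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(pop_size)]
        for gen in range(1, gens + 1):
            if gen % period == 0:
                # Extinction event: an unpredictable bottleneck in which only a
                # small random set of lineages persists, then repopulates.
                survivors = random.sample(pop, max(1, int(survivor_fraction * pop_size)))
                pop = [mutate(random.choice(survivors)) for _ in range(pop_size)]
            else:
                # Placeholder for one generation of divergent (novelty-driven)
                # selection and reproduction.
                pop = [mutate(g) for g in pop]
        return pop

Per the paper's hypothesis, lineages that diversify across many niches are the ones most likely to persist through several such bottlenecks.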

General Video Game Playing as a Benchmark for Human-Competitive AI
Joel Lehman and Risto Miikkulainen
In: AAAI-15 Workshop on Beyond the Turing Test

Text-based conversational Turing test benchmarks for artificial intelligence have significant limitations: It is possible to do well by only faking intelligence, thereby subverting the test's intent. An ideal replacement test would facilitate and focus AI research, be easy to implement and automate, and ensure that human-competitive performance implied a powerful and general AI. The idea in this paper is that general video game playing is one such promising candidate for an improved human-level AI benchmark. To pass such a test, a computer program must succeed in efficiently completing an unannounced and diverse suite of video games, interacting with the game only through simulated versions of the same information streams available to a human player; the test is easy to automate but difficult to exploit, and can stress nearly all aspects of human intelligence through strategic sampling of the vast library of existing video games. In this way, general video game playing may provide the basis for a simple but effective benchmark competition for human-level AI.

Petalz: Search-based Procedural Content Generation for the Casual Gamer
Sebastian Risi, Joel Lehman, David D'Ambrosio, Ryan Hall, and Kenneth Stanley

The impact of game content on the player experience is potentially more critical in casual games than in competitive games because of the diminished role of strategic or tactical diversions. Interestingly, until now procedural content generation (PCG) has nevertheless been investigated almost exclusively in the context of competitive, skills-based gaming. This paper therefore opens a new direction for PCG by placing it at the center of an entirely casual flower-breeding game called Petalz. That way, the behavior of players and their reactions to different game mechanics in a casual environment driven by PCG can be investigated. In particular, players in Petalz can (1) trade their discoveries in a global marketplace, (2) respond to an incentive system that rewards diversity, and (3) generate real-world three-dimensional replicas of their evolved flowers. With over 1,900 registered online users and 38,646 unique evolved flowers, Petalz showcases the potential for PCG to enable these kinds of casual game mechanics, thus paving the way for continued innovation with PCG in casual gaming.

2014

An Anarchy of Methods: Current Trends in How Intelligence is Abstracted in AI
Joel Lehman, Jeff Clune, and Sebastian Risi
In: IEEE Intelligent Systems

Artificial intelligence (AI) is a sprawling field encompassing a diversity of approaches to machine intelligence and disparate perspectives on how intelligence should be viewed. Because researchers often engage only within their own specialized area of AI, there are many interesting broad questions about AI as a whole that often go unanswered. How should intelligence be abstracted in AI research? Which subfields, techniques, and abstractions are most promising? Why do researchers bet their careers on the particular abstractions and techniques of their chosen subfield of AI? Should AI research be "bio-inspired" and remain faithful to the process that produced intelligence (evolution) or the biological substrate that enables it (networks of neurons)? Discussing these big-picture questions motivated us to organize an AAAI Fall Symposium, which gathered participants across AI subfields to present and debate their views. This article distills the resulting insights.

Investigating Biological Assumptions through Radical Reimplementation
Joel Lehman and Kenneth O. Stanley
In: Artificial Life

An important goal in both artificial life and biology is uncovering the most general principles underlying life, which might catalyze both our understanding of life and engineering life-like machines. While many such general principles have been hypothesized, conclusively testing them is difficult because life on Earth provides only a singular example from which to infer. To circumvent this limitation, this paper formalizes an approach called radical reimplementation. The idea is to investigate an abstract biological hypothesis by intentionally reimplementing its main principles to diverge maximally from existing natural examples. If the reimplementation successfully exhibits properties resembling biology it may better support the underlying hypothesis than an alternative example inspired more directly by nature. The approach thereby provides a principled alternative to a common tradition of defending and minimizing deviations from nature in artificial life. This work reviews examples that can be interpreted through the lens of radical reimplementation to yield potential insights into biology despite having purposefully unnatural experimental setups. In this way, radical reimplementation can help renew the relevance of computational systems for investigating biological theory and can act as a practical philosophical tool to help separate the fundamental features of terrestrial biology from the epiphenomenal.

Grasping Novel Objects with a Dexterous Robotic Hand through Neuroevolution
Pei-Chi Huang, Joel Lehman, Aloysius K. Mok, Risto Miikkulainen, and Luis Sentis
In: IEEE Symposium Series on Computational Intelligence

Robotic grasping of a target object without advance knowledge of its three-dimensional model is a challenging problem. Many studies indicate that robot learning from demonstration (LfD) is a promising way to improve grasping performance, but complete automation of the grasping task in unforeseen circumstances remains difficult. As an alternative to LfD, this paper leverages limited human supervision to achieve robotic grasping of unknown objects in unforeseen circumstances. The technical question is what form of human supervision best minimizes the effort of the human supervisor. The approach here applies a human-supplied bounding box to focus the robot's visual processing on the target object, thereby lessening the dimensionality of the robot's computer vision processing. After the human supervisor defines the bounding box through the man-machine interface, the rest of the grasping task is automated through a vision-based feature-extraction approach where the dexterous hand learns to grasp objects without relying on pre-computed object models through the NEAT neuroevolution algorithm. Given only low-level sensing data from a commercial depth sensor (Kinect), our approach evolves neural networks to identify appropriate hand positions and orientations for grasping novel objects. Further, the machine learning results from simulation have been validated by transferring the training results to a physical robot called Dreamer made by the Meka Robotics company. The results demonstrate that grasping novel objects through exploiting neuroevolution from simulation to reality is possible.

Overcoming Deception in Evolution of Cognitive Behaviors
Joel Lehman and Risto Miikkulainen
In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2014)

When scaling neuroevolution to complex behaviors, cognitive capabilities such as learning, communication, and memory become increasingly important. However, successfully evolving such cognitive abilities remains difficult. This paper argues that a main cause for such difficulty is deception, i.e. evolution converges to a behavior unrelated to the desired solution. More specifically, cognitive behaviors often require accumulating neural structure that provides no immediate fitness benefit, and evolution often thus converges to non-cognitive solutions. To investigate this hypothesis, a common evolutionary robotics T-Maze domain is adapted in three separate ways to require agents to communicate, remember, and learn. Indicative of deception, evolution driven by objective-based fitness often converges upon simple non-cognitive behaviors. In contrast, evolution driven to explore novel behaviors, i.e. novelty search, often evolves the desired cognitive behaviors. The conclusion is that open-ended methods of evolution may better recognize and reward the stepping stones that are necessary for cognitive behavior to emerge.

Automatically Categorizing Procedurally Generated Content for Collecting Games
Sebastian Risi, Joel Lehman, David B. D'Ambrosio, and Kenneth O. Stanley
In: Proceedings of the Workshop on Procedural Content Generation in Games (PCG) at the 9th International Conference on the Foundations of Digital Games (FDG-2014)

A potentially promising application for procedural content generation (PCG) is collecting games, i.e. games in which the player strives to collect as many classes of artifacts as possible from a diverse set. However, the challenge for PCG in collecting games is that procedurally generated content on its own does not fall into a predefined set of classes, leaving no concrete quantifiable measure of progress for players to follow. The main idea in this paper is to remedy this shortcoming by feeding a sample of such content into a self-organizing map (SOM) that then in effect generates as many categories as there are nodes in the SOM. Once thereby organized, any new content discovered by a player can be categorized simply by identifying the node most activated after its presentation. This approach is tested in this paper in the Petalz video game, where 80 categories for user-bred flowers are generated by a SOM, allowing players to track their progress in discovering all the "species" that are now explicitly identified. The hope is that this idea will inspire more researchers in PCG to investigate applications to collecting games in the future.
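
For readers curious how the SOM categorization might look in code, here is a small self-contained sketch with NumPy. The flower descriptors are random stand-ins, the training details (learning-rate and neighborhood schedules) are generic SOM conventions rather than the paper's exact configuration, and the 8x10 grid is chosen only so that the map yields the 80 categories mentioned above.

    # Sketch: categorizing procedurally generated content with a
    # self-organizing map (SOM). Training details are generic SOM
    # conventions, not the paper's exact configuration.
    import numpy as np

    def train_som(data, rows=8, cols=10, iters=2000, lr0=0.5, sigma0=3.0):
        rng = np.random.default_rng(0)
        weights = rng.uniform(data.min(), data.max(), (rows, cols, data.shape[1]))
        grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                    indexing="ij"), axis=-1)
        for t in range(iters):
            x = data[rng.integers(len(data))]
            # Best-matching unit (BMU): the node closest to the sample.
            bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)),
                                   (rows, cols))
            frac = t / iters
            lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
            d2 = ((grid - np.array(bmu)) ** 2).sum(-1)
            h = np.exp(-d2 / (2 * sigma ** 2))[..., None]  # neighborhood kernel
            weights += lr * h * (x - weights)              # pull nodes toward x
        return weights

    def categorize(x, weights):
        # A new artifact's category is simply the index of its BMU.
        return np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)),
                                weights.shape[:2])

    flowers = np.random.default_rng(1).normal(size=(500, 6))  # stand-in descriptors
    som = train_som(flowers)        # 8x10 grid -> 80 categories, as in the paper
    print("category:", categorize(flowers[0], som))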

2013

A Neuroevolution Approach to General Atari Game Playing
Matthew Hausknecht, Joel Lehman, Risto Miikkulainen, and Peter Stone
In: IEEE Transactions on Computational Intelligence and AI in Games

This article addresses the challenge of learning to play many different video games with little domain-specific knowledge. Specifically, it introduces a neuro-evolution approach to general Atari 2600 game playing. Four neuro-evolution algorithms were paired with three different state representations and evaluated on a set of 61 Atari games. The neuro-evolution agents represent different points along the spectrum of algorithmic sophistication - including weight evolution on topologically fixed neural networks (Conventional Neuro-evolution), Covariance Matrix Adaptation Evolution Strategy (CMA-ES), evolution of network topology and weights (NEAT), and indirect network encoding (HyperNEAT). State representations include an object representation of the game screen, the raw pixels of the game screen, and seeded noise (a comparative baseline). Results indicate that direct-encoding methods work best on compact state representations while indirect-encoding methods (i.e. HyperNEAT) allow scaling to higher-dimensional representations (i.e. the raw game screen). Previous approaches based on temporal-difference learning had trouble dealing with the large state spaces and sparse reward gradients often found in Atari games. Neuro-evolution ameliorates these problems and evolved policies achieve state-of-the-art results, even surpassing human high scores on three games. These results suggest that neuro-evolution is a promising approach to general video game playing.

Exploring Biological Intelligence through Artificial Intelligence and Radical Reimplementation
Joel Lehman and Kenneth O. Stanley
In: Proceedings of the 2013 AAAI Fall Symposium on How Should Intelligence be Abstracted in AI Research

An important goal in artificial intelligence and biology is to uncover general principles that underlie intelligence. While artificial intelligence algorithms need not relate to biology, they might provide a synthetic means to investigate biological intelligence in particular. Importantly, a more complete understanding of such biological intelligence would profoundly impact society. Thus, to explore biological hypotheses some AI researchers take direct inspiration from biology. However, nature's implementations of intelligence may present only one facet of its deeper principles, complicating the search for general hypotheses. This complication motivates the approach in this paper, called radical reimplementation, whereby biological insight can result from purposefully unnatural experiments. The main idea is that biological hypotheses about intelligence can be investigated by reimplementing their main principles intentionally to explicitly and maximally diverge from existing natural examples. If such a reimplementation successfully exhibits properties similar to those seen in biology it may better isolate the underlying hypothesis than an example implemented more directly in nature's spirit. Two examples of applying radical reimplementation are reviewed, yielding potential insights into biological intelligence despite including purposefully unnatural underlying mechanisms. In this way, radical reimplementation provides a principled methodology for intentionally artificial investigations to nonetheless achieve biological relevance.

Boosting Interactive Evolution using Human Computation Markets
Joel Lehman and Risto Miikkulainen
In: Proceedings of the Conference on the Theory and Practice of Natural Computing

Interactive evolution, i.e. leveraging human input for selection in an evolutionary algorithm, is effective when an appropriate fitness function is hard to quantify yet solution quality is easily recognizable by humans. However, single-user applications of interactive evolution are limited by user fatigue: Humans become bored with monotonous evaluations. This paper explores the potential for bypassing such fatigue by directly purchasing human input from human computation markets. Experiments evolving aesthetic images show that purchased human input can be leveraged more economically when evolution is first seeded by optimizing a purely-computational aesthetic measure. Further experiments in the same domain validate a system feature, demonstrating how human computation can help guide interactive evolution system design. Finally, experiments in an image composition domain show the approach's potential to make interactive evolution scalable even in tasks that are not inherently enjoyable. The conclusion is that human computation markets make it possible to apply a powerful form of selection pressure mechanically in evolutionary algorithms.

Leveraging Human Computation Markets for Interactive Evolution
Joel Lehman and Risto Miikkulainen
In: Machine Learning Meets Crowdsourcing Workshop at ICML 2013

Leveraging human input for selection in an evolutionary algorithm, i.e. interactive evolution, is effective when an appropriate domain fitness function is hard to quantify, but where solution quality is easily recognizable by humans. However, single-user applications of interactive evolution are limited by user fatigue: Humans become bored with monotonous fitness evaluations. This paper shows that user fatigue can potentially be bypassed through human computation markets that enable directly paying for human input. Experiments evolving images show that purchased human input can be leveraged more economically when evolution is seeded with products from a purely-computational aesthetic measure. Further experiments in the same domain validate a system feature, demonstrating how human computation can help guide interactive evolution system design. Finally, experiments in an image composition domain show how the approach can facilitate large-scale interactive evolution in tasks that are not inherently enjoyable. In this way, combining human computation markets with interactive evolution facilitates mechanical application of a powerful form of selection pressure.

Neuroevolution
Joel Lehman and Risto Miikkulainen
In: Scholarpedia

Neuroevolution is a machine learning technique that applies evolutionary algorithms to construct artificial neural networks, taking inspiration from the evolution of biological nervous systems in nature. Compared to other neural network learning methods, neuroevolution is highly general; it allows learning without explicit targets, with only sparse feedback, and with arbitrary neural models and network structures. Neuroevolution is an effective approach to solving reinforcement learning problems, and is most commonly applied in evolutionary robotics and artificial life.
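
The simplest instantiation of this idea, evolving only the weights of a fixed-topology network, fits in a few lines of Python; the sketch below does this for XOR. It is a toy under assumed settings (network size, mutation scale, truncation selection), and methods like NEAT that also evolve topology are considerably more involved.

    # Sketch: the simplest form of neuroevolution, evolving the weights of a
    # fixed-topology network against a fitness function (here, XOR accuracy).
    import math, random

    CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

    def forward(w, x):
        # 2-2-1 network; w holds 9 weights (6 hidden incl. biases, 3 output).
        h0 = math.tanh(w[0]*x[0] + w[1]*x[1] + w[2])
        h1 = math.tanh(w[3]*x[0] + w[4]*x[1] + w[5])
        return math.tanh(w[6]*h0 + w[7]*h1 + w[8])

    def fitness(w):
        return -sum((forward(w, x) - y) ** 2 for x, y in CASES)

    def neuroevolve(pop_size=100, gens=300):
        pop = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:pop_size // 5]          # truncation selection
            pop = parents + [[w + random.gauss(0, 0.3)
                              for w in random.choice(parents)]
                             for _ in range(pop_size - len(parents))]
        return max(pop, key=fitness)

    best = neuroevolve()
    print([round(forward(best, x), 2) for x, _ in CASES])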

Evolvability is Inevitable: Increasing Evolvability Without the Pressure to Adapt
Joel Lehman and Kenneth O. Stanley
In: PLoS ONE

Why evolvability appears to have increased over evolutionary time is an important unresolved biological question. Unlike most candidate explanations, this paper proposes that increasing evolvability can result without any pressure to adapt. The insight is that if evolvability is heritable, then an unbiased drifting process across genotypes can still create a distribution of phenotypes biased towards evolvability, because evolvable organisms diffuse more quickly through the space of possible phenotypes. Furthermore, because phenotypic divergence often correlates with founding niches, niche founders may on average be more evolvable, which through population growth provides a genotypic bias towards evolvability. Interestingly, the combination of these two mechanisms can lead to increasing evolvability without any pressure to out-compete other organisms, as demonstrated through experiments with a series of simulated models. Thus rather than from pressure to adapt, evolvability may inevitably result from any drift through genotypic space combined with evolution's passive tendency to accumulate niches.
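
The core argument can be illustrated with a toy simulation, sketched below: individuals carry a heritable mutation size ("evolvability"), parents are chosen entirely at random (pure drift), and yet the most phenotypically diverged individuals end up more evolvable than average. The model details are my own illustrative assumptions, not the paper's simulated models.

    # Sketch: drift with heritable evolvability and no selection pressure.
    # Model details are illustrative, not the paper's exact setup.
    import random

    def drift(pop_size=200, gens=300):
        pop = [(0.0, 0.5)] * pop_size          # (phenotype, evolvability)
        for _ in range(gens):
            # Unbiased drift: parents are chosen at random, no fitness anywhere.
            pop = [(p + random.gauss(0, e),                # step size ~ evolvability
                    max(0.01, e + random.gauss(0, 0.02)))  # evolvability is heritable
                   for p, e in (random.choice(pop) for _ in range(pop_size))]
        return pop

    pop = sorted(drift(), key=lambda ind: abs(ind[0]))   # by phenotypic divergence
    mean = lambda xs: sum(xs) / len(xs)
    print("evolvability, all         :", round(mean([e for _, e in pop]), 3))
    print("evolvability, most diverged:", round(mean([e for _, e in pop[-20:]]), 3))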

Encouraging Reactivity to Create Robust Machines
Joel Lehman, Sebastian Risi, David D'Ambrosio, and Kenneth O. Stanley
In: Adaptive Behavior

The robustness of animal behavior is unmatched by current machines, which often falter when exposed to unforeseen conditions. While animals are notably reactive to changes in their environment, machines often follow finely-tuned yet inflexible plans. Thus instead of the traditional approach of training such machines over many different unpredictable scenarios in detailed simulations (the most intuitive approach to inducing robustness), this work proposes to train machines to be reactive to their environment. The idea is that robustness may result not from detailed internal models or finely-tuned control policies but from cautious exploratory behavior. Supporting this hypothesis, robots trained to navigate mazes with a reactive disposition prove more robust, both in simulated tests and when embodied in real robots, than those trained over many trials yet not rewarded for reactive behavior. The conclusion is that robustness may require neither an accurate model nor finely calibrated behavior.

Effective Diversity Maintenance in Deceptive Domains
Joel Lehman, Kenneth O. Stanley, and Risto Miikkulainen
In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2013)

Diversity maintenance techniques in evolutionary computation are designed to mitigate the problem of deceptive local optima by encouraging exploration. However, as problems become more difficult, the heuristic of fitness may become increasingly uninformative. Thus, simply encouraging genotypic diversity may do little to increase the likelihood of evolving a solution. In such cases, diversity needs to be directed towards potentially useful structures. A representative example of such a search process is novelty search, which builds diversity by rewarding behavioral novelty. In this paper the effectiveness of fitness, novelty, and diversity maintenance objectives is compared in two evolutionary robotics domains. In a biped locomotion domain, genotypic diversity maintenance helps evolve biped control policies that travel farther before falling. However, the best method is to optimize a fitness objective and a behavioral novelty objective together. In the more deceptive maze navigation domain, diversity maintenance is ineffective while a novelty objective still increases performance. The conclusion is that while genotypic diversity maintenance works in well-posed domains, a method more directed by phenotypic information, like novelty search, is necessary for highly deceptive ones.

2012

Dissertation: Evolution through the Search for Novelty

I present a new approach to evolutionary search called novelty search, wherein only behavioral novelty is rewarded, thereby abstracting evolution as a search for novel forms. This new approach contrasts with the traditional approach of rewarding progress towards the objective through an objective function. Although they are designed to light a path to the objective, objective functions can instead deceive search into converging to dead ends called local optima. As a significant problem in evolutionary computation, deception has inspired many techniques designed to mitigate it. However, nearly all such methods are still ultimately susceptible to deceptive local optima because they still measure progress with respect to the objective, which this dissertation will show is often a broken compass. Furthermore, although novelty search completely abandons the objective, it counterintuitively often outperforms methods that search directly for the objective in deceptive tasks and can induce evolutionary dynamics closer in spirit to natural evolution. The main contributions are to (1) introduce novelty search, an example of an effective search method that is not guided by actively measuring or encouraging objective progress; (2) validate novelty search by applying it to biped locomotion; (3) demonstrate novelty search's benefits for evolvability (i.e. the ability of an organism to further evolve) in a variety of domains; (4) introduce an extension of novelty search called minimal criteria novelty search that brings a new abstraction of natural evolution to evolutionary computation (i.e. evolution as a search for many ways of meeting the minimal criteria of life); (5) present a second extension of novelty search called novelty search with local competition that abstracts evolution instead as a process driven towards diversity with competition playing a subservient role; and (6) evolve a diversity of functional virtual creatures in a single run as a culminating application of novelty search with local competition. Overall these contributions establish novelty search as an important new research direction for the field of evolutionary computation.

Combining Search-based Procedural Content Generation and Social Gaming in the Petalz Video Game
Sebastian Risi, Joel Lehman, David B. D'Ambrosio, Ryan Hall, and Kenneth O. Stanley
In: Proceedings of the Artificial Intelligence and Interactive Digital Entertainment Conference (AIIDE 2012)

Search-based procedural content generation methods allow video games to introduce new content continually, thereby engaging the player for a longer time while reducing the burden on developers. However, games so far have not explored the potential economic value of unique evolved artifacts. Building on this insight, this paper presents for the first time a Facebook game called Petalz in which players can share flowers they breed themselves with other players through a global marketplace. In particular, the market in this social game allows players to set the price of their evolved aesthetically-pleasing flowers in virtual currency. Furthermore, the transaction in which one player buys seeds from another creates a new social element that links the players in the transaction. The combination of unique user-generated content and social gaming in Petalz facilitates meaningful collaboration between users, positively influences the dynamics of the game, and opens new possibilities in digital entertainment.

Multirobot Behavior Synchronization through Direct Neural Network Communication
David B. D'Ambrosio, Skyler Goodell, Joel Lehman, Sebastian Risi, and Kenneth O. Stanley
In: Proceedings of the 5th International Conference on Intelligent Robotics and Applications (ICIRA-2012)

Many important real-world problems, such as patrol or search and rescue, could benefit from the ability to train teams of robots to coordinate. One major challenge to achieving such coordination is determining the best way for robots on such teams to communicate with each other. Typical approaches employ hand-designed communication schemes that often require significant effort to engineer. In contrast, this paper presents a new communication scheme called the hive brain, in which the neural network controller of each robot is directly connected to internal nodes of other robots and the weights of these connections are evolved. In this way, the robots can evolve their own internal language to speak directly brain-to-brain. This approach is tested in a multirobot patrol synchronization domain where it produces robot controllers that synchronize through communication alone in both simulation and real robots, and that are robust to perturbation and changes in team size.
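
A sketch of the wiring idea is given below: part of each robot's hidden state is fed into the other robots' networks through a separate, evolved weight matrix. The shapes and the update rule are illustrative assumptions; only the general brain-to-brain scheme comes from the paper.

    # Sketch: "hive brain" wiring. Each robot's recurrent hidden state also
    # receives the other robots' hidden activations through evolved weights.
    # Shapes and the update rule are illustrative assumptions.
    import numpy as np

    def hive_step(states, W_in, W_rec, W_comm, inputs):
        # states: (n_robots, n_hidden); inputs: (n_robots, n_in).
        return np.tanh(inputs @ W_in + states @ W_rec
                       # brain-to-brain term: the summed hidden activations of
                       # the *other* robots, weighted by the evolved W_comm
                       + (states.sum(0) - states) @ W_comm)

    rng = np.random.default_rng(0)
    n_robots, n_in, n_hidden = 3, 4, 5
    W_in = rng.normal(size=(n_in, n_hidden))
    W_rec = rng.normal(size=(n_hidden, n_hidden))
    W_comm = rng.normal(size=(n_hidden, n_hidden))   # these weights are evolved
    states = np.zeros((n_robots, n_hidden))
    states = hive_step(states, W_in, W_rec, W_comm, rng.normal(size=(n_robots, n_in)))
    print(states.shape)   # (3, 5)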

On the Benefits of Divergent Search for Evolved Representations
Joel Lehman, Sebastian Risi, and Kenneth O. Stanley
In: Proceedings of the EvoNet 2012 Workshop at ALIFE XIII

Evolved representations in evolutionary computation are often fragile, which can impede representation-dependent mechanisms such as self-adaptation. In contrast, evolved representations in nature are robust, evolvable, and creatively exploit available representational features. This paper provides evidence that this disparity may partially result from a key difference between natural evolution and most evolutionary algorithms: Natural evolution has no overarching objective. That is, nature tends to continually accumulate novel forms without any final goal, while most evolutionary algorithms eventually converge to a point in the search space that locally maximizes the fitness function. The problem is that individuals that maximize fitness do not need good representations because a representation's future potential is not reflected by its current fitness. In contrast, search methods without explicit objectives that are consequently divergent may implicitly reward lineages that continually diverge, thereby indirectly selecting for evolvable representations that are better able to diverge further. This paper reviews a range of past results that support such a hypothesis from a method called novelty search, which explicitly rewards novelty, i.e. behaviors that diverge from previously encountered behaviors. In many experiments, novelty search demonstrates significant representational advantages over traditional fitness-based search, such as evolving more compact solutions, uncovering more evolvable representations, and more fully exploiting representational features. The conclusion is that divergent evolutionary algorithms like novelty search may exert selection pressure towards higher quality representations than traditional convergent approaches to search.

Beyond Open-endedness: Quantifying Impressiveness
Joel Lehman and Kenneth O. Stanley
In: Proceedings of Artificial Life Thirteen (ALIFE XIII)

This paper seeks to illuminate and quantify a feature of natural evolution that correlates to our sense of its intuitive greatness: Natural evolution evolves impressive artifacts. Within artificial life, abstractions aiming to capture what makes natural evolution so powerful often focus on the idea of open-endedness, which relates to boundless diversity, complexity, or adaptation. However, creative systems that have passed tests of open-endedness raise the possibility that open-endedness does not always correlate to impressiveness in artificial life simulations. In other words, while natural evolution is both open-ended and demonstrates a drive towards evolving impressive artifacts, it may be a mistake to assume the two properties are always linked. Thus to begin to investigate impressiveness independently in artificial systems, a novel definition is proposed: Impressive artifacts readily exhibit significant design effort. That is, the difficulty of creating them is easy to recognize. Two heuristics, rarity and re-creation effort, are derived from this definition and applied to the products of an open-ended image evolution system. An important result is that the heuristics intuitively separate different reward schemes and provide evidence for why each evolved picture is or is not impressive. The conclusion is that impressiveness may help to distinguish open-ended systems and their products, and potentially untangles an aspect of natural evolution's mystique that is masked by its co-occurrence with open-endedness.

Rewarding Reactivity to Evolve Robust Controllers without Multiple Trials or Noise
Joel Lehman, Sebastian Risi, David B. D'Ambrosio, and Kenneth O. Stanley
In: Proceedings of Artificial Life Thirteen (ALIFE XIII)

Behaviors evolved in simulation are often not robust to variations of their original training environment. Thus often researchers must train explicitly to encourage such robustness. Traditional methods of training for robustness typically apply multiple non-deterministic evaluations with carefully modeled noisy distributions for sensors and effectors. In practice, such training is often computationally expensive and requires crafting accurate models. Taking inspiration from nature, where animals react appropriately to encountered stimuli, this paper introduces a measure called reactivity, i.e. the tendency to seek and react to changes in environmental input, that is applicable in single deterministic trials and can encourage robustness without exposure to noise. The measure is tested in four different maze navigation tasks, where training with reactivity proves more robust than training without noise, and equally or more robust than training with noise when testing with moderate noise levels. In this way, the results demonstrate the counterintuitive fact that sometimes training with no exposure to noise at all can evolve individuals significantly more robust to noise than by explicitly training with noise. The conclusion is that training for reactivity may often be a computationally more efficient means to encouraging robustness in evolved behaviors.
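
The paper's exact formulation of the reactivity measure is not reproduced here, but a plausible stand-in, sketched below, scores how strongly motor output changes whenever sensory input changes within a single deterministic trial.

    # Sketch: a reactivity-style score over one deterministic trial. This is
    # a plausible stand-in, not the paper's exact formulation: it rewards
    # motor output that responds when sensory input changes.
    def reactivity(sensor_log, motor_log, eps=1e-6):
        # sensor_log, motor_log: per-timestep readings of equal length.
        score, steps = 0.0, 0
        for t in range(1, len(sensor_log)):
            ds = abs(sensor_log[t] - sensor_log[t - 1])
            dm = abs(motor_log[t] - motor_log[t - 1])
            if ds > eps:                 # input changed: did the output respond?
                score += min(dm / ds, 1.0)
                steps += 1
        return score / steps if steps else 0.0

    # A sluggish controller scores near 0; a responsive one scores higher.
    print(reactivity([0, 0, 1, 1, 2], [0, 0, 0, 0, 0]))   # unreactive -> 0.0
    print(reactivity([0, 0, 1, 1, 2], [0, 0, 1, 1, 2]))   # reactive   -> 1.0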

2011

Task switching in multirobot learning through indirect encoding
David B. D'Ambrosio, Joel Lehman, Sebastian Risi, and Kenneth O. Stanley
In: International Conference on Intelligent Robots and Systems (IROS)

Multirobot domains are a challenge for learning algorithms because they require robots to learn to cooperate to achieve a common goal. The challenge only becomes greater when robots must perform heterogeneous tasks to reach that goal. Multiagent HyperNEAT is a neuroevolutionary method (i.e. a method that evolves neural networks) that has proven successful in several cooperative multiagent domains by exploiting the concept of policy geometry, which means the policies of team members are learned as a function of how they relate to each other based on canonical starting positions. This paper extends the multiagent HyperNEAT algorithm by introducing situational policy geometry, which allows each agent to encode multiple policies that can be switched depending on the agent's state. This concept is demonstrated both in simulation and in real Khepera III robots in a patrol and return task, where robots must cooperate to cover an area and return home when called. Robot teams trained with situational policy geometry are compared to teams trained without it, and are shown to more consistently find solutions that also transfer to the real world.

Novelty Search and the Problem with Objectives
Joel Lehman and Kenneth O. Stanley
In: Genetic Programming in Theory and Practice IX (GPTP 2011)

By synthesizing a growing body of work in search processes that are not driven by explicit objectives, this paper advances the hypothesis that there is a fundamental problem with the dominant paradigm of objective-based search in evolutionary computation and genetic programming: Most ambitious objectives do not illuminate a path to themselves. That is, the gradient of improvement induced by ambitious objectives tends to lead not to the objective itself but instead to dead-end local optima. Indirectly supporting this hypothesis, great discoveries often are not the result of objective-driven search. For example, the major inspiration for both evolutionary computation and genetic programming, natural evolution, innovates through an open-ended process that lacks a final objective. Similarly, large-scale cultural evolutionary processes, such as the evolution of technology, mathematics, and art, lack a unified fixed goal. In addition, direct evidence for this hypothesis is presented from a recently-introduced search algorithm called novelty search. Though ignorant of the ultimate objective of search, in many instances novelty search has counter-intuitively outperformed searching directly for the objective, including on a wide variety of randomly-generated problems introduced in an experiment in this chapter. Thus a new understanding is beginning to emerge that suggests that searching for a fixed objective, which is the reigning paradigm in evolutionary computation and even machine learning as a whole, may ultimately limit what can be achieved. Yet the liberating implication of this hypothesis argued in this paper is that by embracing search processes that are not driven by explicit objectives, the breadth and depth of what is reachable through evolutionary methods such as genetic programming may be greatly expanded.

Evolving a Diversity of Virtual Creatures through Novelty Search and Local Competition
Joel Lehman and Kenneth O. Stanley
In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2011)

An ambitious challenge in artificial life is to craft an evolutionary process that discovers a wide diversity of well-adapted virtual creatures within a single run. Unlike in nature, evolving creatures in virtual worlds tend to converge to a single morphology because selection therein greedily rewards the morphology that is easiest to exploit. However, novelty search, a technique that explicitly rewards diverging, can potentially mitigate such convergence. Thus in this paper an existing creature evolution platform is extended with multi-objective search that balances drives for both novelty and performance. However, there are different ways to combine performance-driven search and novelty search. The suggested approach is to provide evolution with both a novelty objective that encourages diverse morphologies and a local competition objective that rewards individuals outperforming those most similar in morphology. The results in an experiment evolving locomoting virtual creatures show that novelty search with local competition discovers more functional morphological diversity within a single run than models with global competition, which are more predisposed to converge. The conclusions are that novelty search with local competition may complement recent advances in evolving virtual creatures and may in general be a principled approach to combining novelty search with pressure to achieve.
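
The two objectives can be sketched as follows. Each individual is a (morphology, performance) pair; its novelty is the average morphological distance to its k nearest neighbors, and its local-competition score counts how many of those same neighbors it outperforms. The data layout and values below are illustrative; in the paper these two objectives drive a multi-objective evolutionary algorithm.

    # Sketch: the two objectives in novelty search with local competition.
    # Individuals are (morphology_vector, performance) pairs; a real system
    # feeds these objectives to a multi-objective optimizer.
    def k_nearest(i, pop, k=10):
        def d(a, b):
            return sum((x - y) ** 2 for x, y in zip(a[0], b[0])) ** 0.5
        return sorted((p for j, p in enumerate(pop) if j != i),
                      key=lambda p: d(pop[i], p))[:k]

    def objectives(i, pop, k=10):
        neigh = k_nearest(i, pop, k)
        novelty = sum(sum((x - y) ** 2 for x, y in zip(pop[i][0], n[0])) ** 0.5
                      for n in neigh) / k
        # Local competition: how many morphologically similar neighbors does
        # this individual outperform? Competition is local, not global.
        local_comp = sum(1 for n in neigh if pop[i][1] > n[1])
        return novelty, local_comp

    pop = [((0.0, 0.1), 2.0), ((0.2, 0.0), 1.0), ((5.0, 5.0), 0.5), ((0.1, 0.1), 3.0)]
    print(objectives(0, pop, k=2))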

Improving Evolvability through Novelty Search and Self-adaptation
Joel Lehman and Kenneth O. Stanley
In: 2011 IEEE Congress on Evolutionary Computation (CEC)

A challenge for current evolutionary algorithms is to yield highly evolvable representations like those in nature. Such evolvability in natural evolution is encouraged through selection: Lineages better at molding to new niches are less susceptible to extinction. Similar selection pressure is not generally present in evolutionary algorithms; however, the first hypothesis in this paper is that novelty search, a recent evolutionary technique, also selects for evolvability because it rewards lineages able to continually radiate new behaviors. Results in experiments in a maze-navigation domain in this paper support that novelty search finds more evolvable representations than regular fitness-based search. However, though novelty search outperforms fitness-based search in a second biped locomotion experiment, it proves no more evolvable than fitness-based search because delicately balanced behaviors are more fragile in that domain. The second hypothesis is that such fragility can be mitigated through self-adaptation, whereby genomes influence their own reproduction. Further experiments in fragile domains with novelty search and self-adaptation indeed demonstrate increased evolvability, while, interestingly, adding self-adaptation to fitness-based search decreases evolvability. Thus, selecting for novelty may often facilitate evolvability when representations are not overly fragile; furthermore, achieving the potential of self-adaptation may often critically depend upon the reward scheme driving evolution.

Abandoning Objectives: Evolution through the Search for Novelty Alone
Joel Lehman and Kenneth O. Stanley
In: Evolutionary Computation

In evolutionary computation, the fitness function normally measures progress towards an objective in the search space, effectively acting as an objective function. Through deception, such objective functions may actually prevent the objective from being reached. While methods exist to mitigate deception, they leave the underlying pathology untreated: Objective functions themselves may actively misdirect search towards dead ends. This paper proposes an approach to circumventing deception that also yields a new perspective on open-ended evolution: Instead of either explicitly seeking an objective or modeling natural evolution to capture open-endedness, the idea is to simply search for behavioral novelty. Even in an objective-based problem, such novelty search ignores the objective. Because many points in the search space collapse to a single behavior, the search for novelty is often feasible. Furthermore, because there are only so many simple behaviors, the search for novelty leads to increasing complexity. By decoupling open-ended search from artificial life worlds, the search for novelty is applicable to real world problems. Counterintuitively, in the maze navigation and biped walking tasks in this paper, novelty search significantly outperforms objective-based search, suggesting the strange conclusion that some problems are best solved by methods that ignore the objective. The main lesson is the inherent limitation of the objective-based paradigm and the unexploited opportunity to guide search through other means.

2010

Revising the Evolutionary Computation Abstraction: Minimal Criteria Novelty Search
Joel Lehman and Kenneth O. Stanley
In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2010)

Though based on abstractions of nature, current evolutionary algorithms and artificial life models lack the drive to complexity characteristic of natural evolution. Thus this paper argues that the prevalent fitness-pressure-based abstraction does not capture how natural evolution discovers complexity. Alternatively, this paper proposes that natural evolution can be abstracted as a process that discovers many ways to express the same functionality. That is, all successful organisms must meet the same minimal criteria of survival and reproduction. This abstraction leads to the key idea in this paper: Searching for novel ways of meeting the same minimal criteria, which is an accelerated model of this new abstraction, may be an effective search algorithm. Thus the existing novelty search method, which rewards any new behavior, is extended to enforce minimal criteria. Such minimal criteria novelty search prunes the space of viable behaviors and may often be more efficient than the search for novelty alone. In fact, when compared to the raw search for novelty and traditional fitness-based search in the two maze navigation experiments in this paper, minimal criteria novelty search evolves solutions more consistently. It is possible that refining the evolutionary computation abstraction in this way may lead to solving more ambitious problems and evolving more complex artificial organisms.
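
The extension itself is tiny, as the sketch below shows: an individual that fails the minimal criteria simply receives zero novelty, pruning non-viable behaviors from the search. The criterion and helper functions in the usage example are hypothetical.

    # Sketch: minimal criteria novelty search (MCNS) scoring.
    def mcns_score(individual, behaviors_seen, meets_minimal_criteria, novelty_fn):
        if not meets_minimal_criteria(individual):
            return 0.0                       # non-viable: no reward at all
        return novelty_fn(individual, behaviors_seen)

    # Hypothetical usage: a maze agent is viable only if it moved at all.
    viable = lambda ind: ind["dist_traveled"] > 0.0
    nov = lambda ind, seen: min(abs(ind["behavior"] - b) for b in seen)
    print(mcns_score({"dist_traveled": 4.2, "behavior": 7.5}, [1.0, 5.0],
                     viable, nov))           # -> 2.5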

Efficiently Evolving Programs through the Search for Novelty
Joel Lehman and Kenneth O. Stanley
In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2010)

A significant challenge in genetic programming is premature convergence to local optima, which often prevents evolution from solving problems. This paper introduces to genetic programming a method that originated in neuroevolution (i.e. the evolution of artificial neural networks) that circumvents the problem of deceptive local optima. The main idea is to search only for behavioral novelty instead of for higher fitness values. Although such novelty search abandons following the gradient of the fitness function, if such gradients are deceptive they may actually occlude paths through the search space towards the objective. Because there are only so many ways to behave, the search for behavioral novelty is often computationally feasible and differs significantly from random search. Counterintuitively, in both a deceptive maze navigation task and the artificial ant benchmark task, genetic programming with novelty search, which ignores the objective, outperforms traditional genetic programming that directly searches for optimal behavior. Additionally, novelty search evolves smaller program trees in every variation of the test domains. Novelty search thus appears less susceptible to bloat, another significant problem in genetic programming. The conclusion is that novelty search is a viable new tool for efficiently solving some deceptive problems in genetic programming.

Evolving the Placement and Density of Neurons in the HyperNEAT Substrate
Sebastian Risi, Joel Lehman, and Kenneth O. Stanley
In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2010)

The Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) approach demonstrated that the pattern of weights across the connectivity of an artificial neural network (ANN) can be generated as a function of its geometry, thereby allowing large ANNs to be evolved for high-dimensional problems. Yet it left to the user the question of where hidden nodes should be placed in a geometry that is potentially infinitely dense. To relieve the user from this decision, this paper introduces an extension called evolvable-substrate HyperNEAT (ES-HyperNEAT) that determines the placement and density of the hidden nodes based on a quadtree-like decomposition of the hypercube of weights and a novel insight about the relationship between connectivity and node placement. The idea is that the representation in HyperNEAT that encodes the pattern of connectivity across the ANN contains implicit information on where the nodes should be placed and can therefore be exploited to avoid the need to evolve explicit placement. In this paper, as a proof of concept, ES-HyperNEAT discovers working placements of hidden nodes for a simple navigation domain on its own, thereby eliminating the need to configure the HyperNEAT substrate by hand and suggesting the potential power of the new approach.
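
The decomposition idea can be sketched as a short recursion, given below: a region of the substrate is subdivided while the weight pattern within it still varies, so hidden-node density follows the information in the pattern. The variance test and constants are simplified stand-ins for the published method, and the pattern function is a toy in place of an evolved CPPN.

    # Sketch: quadtree-style substrate decomposition in the spirit of
    # ES-HyperNEAT. The subdivision test is a simplified stand-in.
    import math

    def decompose(cppn, x, y, size, depth=0, max_depth=5, var_threshold=0.03):
        half = size / 2
        quads = [(x - half/2, y - half/2), (x + half/2, y - half/2),
                 (x - half/2, y + half/2), (x + half/2, y + half/2)]
        vals = [cppn(qx, qy) for qx, qy in quads]
        mean = sum(vals) / 4
        variance = sum((v - mean) ** 2 for v in vals) / 4
        if depth >= max_depth or variance < var_threshold:
            return [(x, y)]                  # place one hidden node here
        nodes = []
        for qx, qy in quads:                 # pattern still varies: subdivide
            nodes += decompose(cppn, qx, qy, half, depth + 1,
                               max_depth, var_threshold)
        return nodes

    pattern = lambda x, y: math.sin(5 * x) * math.cos(5 * y)  # stand-in CPPN
    print(len(decompose(pattern, 0.0, 0.0, 2.0)))  # more nodes where it varies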

Evolving Policy Geometry for Scalable Multiagent Learning
David B. D'Ambrosio, Joel Lehman, Sebastian Risi, and Kenneth O. Stanley
In: Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS)

A major challenge for traditional approaches to multiagent learning is to train teams that easily scale to include additional agents. The problem is that such approaches typically encode each agent's policy separately. Such separation means that computational complexity explodes as the number of agents in the team increases, and also leads to the problem of reinvention: Skills that should be shared among agents must be rediscovered separately for each agent. To address this problem, this paper presents an alternative evolutionary approach to multiagent learning called multiagent HyperNEAT that encodes the team as a pattern of related policies rather than as a set of individual agents. To capture this pattern, a policy geometry is introduced to describe the relationship between each agent's policy and its canonical geometric position within the team. Because policy geometry can encode variations of a shared skill across all of the policies it represents, the problem of reinvention is avoided. Furthermore, because the policy geometry of a particular team can be sampled at any resolution, it acts as a heuristic for generating policies for teams of any size, producing a powerful new capability for multiagent learning. In this paper, multiagent HyperNEAT is tested in predator-prey and room-clearing domains. In both domains the results are effective teams that can be successfully scaled to larger team sizes without any further training.
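
The scaling property follows from the encoding, as the sketch below illustrates: because each agent's policy is generated from its normalized position in the team, a team of any size is just the same function sampled at a different resolution. The geometry function here is a hypothetical stand-in for the evolved one.

    # Sketch: sampling a policy geometry at different team sizes. One learned
    # function maps an agent's canonical team position to its policy
    # parameters; the function below is a hypothetical stand-in.
    import math

    def policy_geometry(pos):
        # pos in [0, 1]: the agent's normalized position in the team line-up.
        # Returns (hypothetical) parameters that vary smoothly across the team.
        return [math.sin(math.pi * pos), math.cos(math.pi * pos), pos - 0.5]

    def team_policies(n_agents):
        return [policy_geometry(i / max(1, n_agents - 1)) for i in range(n_agents)]

    print(len(team_policies(5)), len(team_policies(50)))  # same geometry, any size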

2008

Exploiting Open-Endedness to Solve Problems Through the Search for Novelty
Joel Lehman and Kenneth O. Stanley
In: Proceedings of the Eleventh International Conference on Artificial Life (ALIFE XI)

This paper establishes a link between the challenge of solving highly ambitious problems in machine learning and the goal of reproducing the dynamics of open-ended evolution in artificial life. A major problem with the objective function in machine learning is that through deception it may actually prevent the objective from being reached. In a similar way, selection in evolution may sometimes act to discourage increasing complexity. This paper proposes a single idea that both overcomes the obstacle of deception and suggests a simple new approach to open-ended evolution: Instead of either explicitly seeking an objective or modeling a domain to capture the open-endedness of natural evolution, the idea is to simply search for novelty. Even in an objective-based problem, such novelty search ignores the objective and searches for behavioral novelty. Yet because many points in the search space collapse to the same point in behavior space, it turns out that the search for novelty is computationally feasible. Furthermore, because there are only so many simple behaviors, the search for novelty leads to increasing complexity. In fact, on the way up the ladder of complexity, the search is likely to encounter at least one solution. In this way, by decoupling the idea of open-ended search from only artificial life worlds, the raw search for novelty can be applied to real world problems. Counterintuitively, in the deceptive maze navigation task in this paper, novelty search significantly outperforms objective-based search, suggesting a surprising new approach to machine learning.

Software

Novelty Search C++
An implementation of novelty search in C++, built on top of Ken Stanley's C++ implementation of the NEAT neuroevolution algorithm.


This page was last updated on Wednesday, May 13, 2015 at 00:12:04.