Robots that can adapt like animals

The video shows two different robots that can adapt to a wide variety of injuries in under two minutes. A six-legged robot adapts to keep walking even if two of its legs are broken, and a robotic arm learns how to correctly place an object even with several broken motors.

This video accompanies the following paper(s):

Overview of the Evolving Artificial Intelligence Lab at the University of Wyoming

A brief overview of the Evolving Artificial Intelligence Lab at the University of Wyoming, directed by Jeff Clune. The video summarizes some of the reasons we are interested in the field of Evolutionary Robotics, which seeks to ultimately evolve artificially intelligent robots that can rival natural animals in intelligence and agility.

Evolving Gaits for Legged Robots: Neural Networks with Geometric Patterns Perform Better

Neural networks evolved to produce gaits for legged robots. The use of the HyperNEAT generative encoding produces geometric patterns (regularities) in the neural wiring of the evolved brains, which improves performance by producing coordinated, regular leg movements.

Designing artificial neural networks (ANNs) and gaits for robots by hand is a difficult, time-consuming task for engineers, which makes these problems well suited to evolutionary algorithms (also known as genetic algorithms). Generative encodings (also known as indirect or developmental encodings) perform better than direct encodings by producing neural regularities that result in behavioral regularities.
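
To make the contrast concrete, the sketch below (a toy Python illustration, not the HyperNEAT code used in the paper; the sine-based pattern generator and all names are invented) shows a direct encoding that stores one independent gene per connection weight next to a generative encoding that computes every weight from the geometric positions of the neurons it connects, so a few genes yield coordinated, regular wiring.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 4, 4  # e.g. four sensors driving four leg motors

# Direct encoding: one independent gene per connection weight.
direct_genome = rng.normal(size=(n_in, n_out))

def direct_weights(genome):
    return genome  # the genome simply *is* the weight matrix

# Generative encoding (CPPN-flavoured toy): each weight is a function of the
# geometric positions of the two neurons it connects, so mutating a handful
# of genes changes many weights in a coordinated, regular way.
generative_genome = rng.normal(size=3)  # [frequency, phase, gain]

def generative_weights(genome, n_in, n_out):
    freq, phase, gain = genome
    x_in = np.linspace(-1, 1, n_in)    # neuron positions on the substrate
    x_out = np.linspace(-1, 1, n_out)
    # Smooth dependence on geometry produces repeated, symmetric patterns.
    return gain * np.sin(freq * (x_in[:, None] - x_out[None, :]) + phase)

print(direct_weights(direct_genome).round(2))
print(generative_weights(generative_genome, n_in, n_out).round(2))
```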

This video accompanies the following paper(s):

Evolving Artificial Neural Networks That Are Both Modular and Regular

This video accompanies the following paper(s):

Why does modularity evolve? The evolutionary origins of modularity

Engineered and evolved systems are organized into modules (e.g. organs or car parts), yet why modularity evolves remains one of biology's most important open questions. This paper shows for the first time that modularity evolves not because it speeds up adaptation, as the leading theory holds, but because it saves on "wiring costs". Connections in biological networks have costs (e.g. building and maintaining them), and modular networks use fewer connections. These results help explain the ubiquitous modularity in biological networks, such as genetic modules and the neural modules in our brains, and will help scientists evolve smarter artificial intelligence. Interestingly, the modular networks that evolve do adapt faster, meaning that faster adaptation is a consequence of modularity rather than the reason it evolves.
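
The connection-cost pressure can be illustrated with a minimal sketch (assumptions: a made-up task score, networks represented as weight matrices, and a plain Pareto filter instead of the PNSGA setup used in the paper): networks are scored on performance and on wiring cost, and only non-dominated networks survive.

```python
import numpy as np

rng = np.random.default_rng(1)

def performance(weights):
    # Placeholder task score; invented for this sketch.
    return -np.abs(weights.sum() - 1.0)

def connection_cost(weights):
    # Number of non-zero connections: the "wiring cost" being minimized.
    return np.count_nonzero(weights)

def dominates(a, b):
    # a dominates b if it is no worse on both objectives and better on one.
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Random sparse networks standing in for an evolving population.
population = [rng.normal(size=(8, 8)) * (rng.random((8, 8)) < 0.3)
              for _ in range(20)]
scores = [(performance(w), -connection_cost(w)) for w in population]

# Keep the Pareto front: networks that are both high-performing and cheaply
# wired survive, which is the pressure that favours sparse, modular wiring.
front = [w for w, s in zip(population, scores)
         if not any(dominates(t, s) for t in scores)]
print(len(front), "non-dominated networks")
```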

This video accompanies the following paper(s):

Evolving Soft Robots with Multiple Materials (muscle, bone, etc.)

Here we evolve the bodies of soft robots made of multiple materials (muscle, bone, & support tissue) to move quickly. Evolution produces a diverse array of fun, wacky, interesting, but ultimately functional soft robots. Enjoy!

This video accompanies the following paper(s):

Deep Neural Networks are Easily Fooled

This video accompanies the following paper(s):

EndlessForms.com - Design objects with evolution and 3D print them!

On http://EndlessForms.com objects are evolved in the same way that plants and animals are bred. You pick the ones you like and they become the parents of the next generation of objects. As in biological evolution, the offspring look similar, but not identical, to their parents, allowing you to explore different designs. Under the hood, there is an evolutionary process in which the genomes of parents are mutated and crossed over to produce new offspring objects. Additionally, the objects are grown from their genomes similar to how a single fertilized egg grows into a jaguar, hawk, or human. This grounding in developmental biology enables the evolution of complex, natural-looking forms. For more info visit: http://endlessforms.com/about_the_technology.
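
The breeding loop behind this kind of interactive evolution can be sketched roughly as follows (a toy illustration only: EndlessForms evolves CPPN genomes that are grown into 3D shapes, whereas the flat list-of-numbers genome, mutation rate, and population size below are invented for brevity).

```python
import random

random.seed(0)
GENOME_LEN = 16  # hypothetical genome length for this sketch

def mutate(genome, rate=0.1):
    # Each gene has a small chance of receiving a small random change.
    return [g + random.gauss(0, 0.2) if random.random() < rate else g
            for g in genome]

def crossover(mom, dad):
    # One-point crossover between two parent genomes.
    cut = random.randrange(1, len(mom))
    return mom[:cut] + dad[cut:]

def next_generation(selected_parents, pop_size=15):
    """Breed a new generation from the objects the user clicked on."""
    children = []
    while len(children) < pop_size:
        mom = random.choice(selected_parents)
        dad = random.choice(selected_parents)
        children.append(mutate(crossover(mom, dad)))
    return children

population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
              for _ in range(15)]
# On the website the user picks parents by eye; here we pick two at random.
chosen = random.sample(population, 2)
population = next_generation(chosen)
print(len(population), "offspring genomes")
```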

This video accompanies the following paper(s):

Evolved Electrophysiological Soft Robots

The research field of evolutionary robotics abstracts some of the major themes in biological evolution (heritable traits, genetic variation, and competition for scarce resources) as tools to allow computers to generate new and interesting virtual creatures. One of the recent themes in this field is a move towards more embodied robots (those that produce interesting behavior through the design of their bodies as well as their brains). Here, we build on previous work evolving soft robots to demonstrate the low-level embodiment of electrical signals passing information through muscle tissue. Through this work we attempt to bridge the divide between embodied cognition and abstracted artificial neural networks.

This video accompanies the following paper(s):

Novelty Search Creates Robots with General Skills for Exploration

This video accompanies the following paper(s):

PPGN: Sampling within a single class of Junco

This video accompanies the following paper(s):

PPGN: Sampling between 10 classes

This video accompanies the following paper(s):

Deep Learning Overview & Visualizing What Deep Neural Networks Learn

This video accompanies the following paper(s):

Supp. Mat. Aligning Modularity

Supplementary material to: Does Aligning Phenotypic and Genotypic Modularity Improve the Evolution of Neural Networks? This video shows the 5 best champions from 30 runs on the robotics task for the treatments with direct selection for both genotypic and phenotypic modularity (PMOD+GMOD). Champions were selected after 5000 generations of evolution.

This video accompanies the following paper(s):

Convergent Learning: Do different neural networks learn the same representations?

This video accompanies the following paper(s):

Understanding Neural Networks Through Deep Visualization

This video accompanies the following paper(s):

Talk at the Santa Fe Institute: Two Projects in BioInspired AI. Evolving regular, modular, hierarchical neural networks, and robot damage recovery.

This video accompanies the following paper(s):

Neural Modularity Helps Organisms Evolve to Learn New Skills without Forgetting Old Skills

This video accompanies the following paper(s):

Evolving Regular, Modular Neural Networks

I (Jeff Clune) summarize my research into evolving modular, regular neural networks, which are digital models of brains. The property of regularity is produced by using HyperNEAT, a generative encoding based on concepts from developmental biology. The property of modularity arises because we add a cost for connections between neurons in the network. Evolving structurally organized neural networks, including those that are regular and modular, is a necessary step in our long-term quest of evolving computational intelligence that rivals or surpasses human intelligence.

This video accompanies the following paper(s):

PPGN: Sampling within a single class of Triumph Arch

This video accompanies the following paper(s):

Non-Adaptive Evolvability

This video accompanies the following paper(s):

Evolving artificial neural networks with generative encodings inspired by developmental biology

In this dissertation I (Jeff Clune) investigate the difference between generative encodings and direct encodings for evolutionary algorithms. Generative encodings are inspired by developmental biology and were designed, in part, to increase the regularity of synthetically evolved phenotypes. Regularity is an important design principle in both natural organisms and engineered designs. The majority of this dissertation focuses on how the property of regularity enables a generative encoding to outperform direct encoding controls, and whether a bias towards regularity also hurts the performance of the generative encoding on some problems. I also report on whether researchers can bias the types of regularities produced by a generative encoding to accommodate user preferences. Finally, I study the degree to which a generative encoding produces another important design principle, modularity.

Several previous studies have shown that generative encodings outperform direct encodings on highly regular problems. However, prior to this dissertation, it was not known how generative encodings compare to direct encodings on problems with different levels of regularity. On three different problems, I show that a generative encoding can exploit intermediate amounts of problem regularity, which enabled the generative encoding to increasingly outperform direct encoding controls as problem regularity increased. This performance gap emerged because the generative encoding produced regular artificial neural networks (ANNs) that produced regular behaviors. The ANNs evolved with the generative encoding contained a diverse array of complicated, regular neural wiring patterns, whereas the ANNs produced by a direct encoding control were irregular.

I also document that the bias towards regularity can hurt a generative encoding on problems that have some amount of irregularity. I propose a new algorithm, called HybrID, wherein a generative encoding produces regular patterns and a direct encoding modifies those patterns to provide fitness-enhancing irregularities. HybrID outperformed a generative encoding alone on three problems for nearly all levels of regularity, which raises the question of whether generative encodings may ultimately excel not as stand-alone algorithms, but by being hybridized with a further process of irregular refinement.
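
A rough sketch of the HybrID idea, under the simplifying assumptions that the phenotype is a vector of parameters, that a one-gene stand-in plays the role of the generative encoding, and that the switch to direct refinement happens at a fixed point (the toy fitness function below is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 16  # number of phenotypic parameters (e.g. ANN weights)

def fitness(phenotype):
    # Hypothetical problem: a mostly regular target with a few irregular entries.
    target = np.full(N, 0.5)
    target[[3, 11]] = -0.2
    return -np.sum((phenotype - target) ** 2)

def generative_phenotype(genome):
    # Tiny stand-in for a generative encoding: one gene sets a repeated value.
    return np.full(N, genome[0])

# Phase 1: evolve the compact generative genome (captures the regular part).
gen_genome = np.array([0.0])
for _ in range(200):
    child = gen_genome + rng.normal(0, 0.05, size=1)
    if fitness(generative_phenotype(child)) > fitness(generative_phenotype(gen_genome)):
        gen_genome = child

# Phase 2: switch to a direct encoding seeded with the regular pattern and
# let per-parameter mutations add the fitness-enhancing irregularities.
direct = generative_phenotype(gen_genome).copy()
for _ in range(500):
    child = direct.copy()
    child[rng.integers(N)] += rng.normal(0, 0.05)
    if fitness(child) > fitness(direct):
        direct = child

print(round(fitness(generative_phenotype(gen_genome)), 3), round(fitness(direct), 3))
```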

The results described so far document that a generative encoding can produce regular solutions. I then show that, at least for the generative encoding in this case study, it is possible to influence the types of regularities produced, which allows domain knowledge and preferences to be injected into the algorithm. I also investigated whether the generative encoding can produce modular solutions. I present the first documented case of this generative encoding producing a modular phenotype on a simple problem. However, the generative encoding's inability to create modularity on harder problems where modularity would have been beneficial suggests that more work is needed to increase the likelihood that this encoding produces modular ANNs in response to challenging, decomposable problems.

Overall, this dissertation paints a more complete picture of generative encodings than prior studies. Initially, it demonstrates that, by producing regular ANNs and behaviors, generative encodings increasingly outcompete direct encodings as problem regularity increases. It next documents that a bias towards regularity can harm the performance of generative encodings when problems contain irregularities. The HybrID algorithm suggests a path forward, however, by revealing that a refinement process that fine-tunes the regular patterns produced by a generative encoding can boost performance by accounting for problem irregularities. Finally, the dissertation shows that the generative encoding studied can produce modular networks on simple problems, but may struggle to do so on harder problems. The general conclusion that can be drawn from this work is that generative encodings can produce some of the properties seen in complex, natural organisms, and will likely be an important part of our long-term goal of synthetically evolving phenotypes that approach the capability, intelligence, and complexity of their natural rivals.

This video accompanies the following paper(s):

Automated Generation of Environments to Test the General Learning Capabilities of AI Agents

This video accompanies the following paper(s):

Evolving Gaits for Physical Robots Directly in Hardware with the HyperNEAT Generative Encoding

Some of the gaits evolved by the HyperNEAT algorithm.

This video accompanies the following paper(s):

Robot Harlem Shake

Simulated robots evolved to do the Harlem Shake. A student in the Evolutionary Robotics course at the University of Wyoming made a robot Harlem Shake video for fun after completing one of the homework assignments on evolving simulated robots. Another student then came up with this hilarious version. Enjoy!

Aracna: An Open-Source Quadruped Robotic Platform

Aracna is a new quadruped robot platform that requires non-intuitive motor commands to locomote, and thus provides an interesting challenge for gait-learning algorithms, such as those frequently developed in the Evolutionary Computation and Artificial Life communities. Aracna is an open-source hardware project composed of off-the-shelf and 3D-printed parts, enabling other research teams to modify its design according to their scientific needs.

This video accompanies the following paper(s):

3D Printing the Aracna robot

This video shows the Aracna robot being 3D printed at the Cornell Creative Machines Lab. Aracna is a completely open-source robotic platform. Download the plans and print your own!

This video accompanies the following paper(s):

Multi-Objective Landscape Exploration (MOLE) algorithm. Explore all areas of your fitness landscape

Here we introduce an algorithm to compute phenotype-fitness maps as a way to understand the relationship between phenotypic dimensions and fitness. The central idea is to explicitly select for fit organisms in all areas of a phenotype landscape, where the axes of that landscape are defined by phenotypic dimensions of interest. To produce such maps, we introduce the Multi-Objective Landscape Exploration (MOLE) algorithm, which is a multi-objective evolutionary algorithm, specifically NSGA-II (Deb, 2001), with two objectives: (1) searching for new organisms that are far from solutions already generated, with distance measured in a Cartesian space defined by the key dimensions, and (2) generating highly fit organisms. With MOLE, scientists can see how fitness changes as a function of various phenotypic dimensions (Figure 1). This combination of a fitness objective and an archive-based exploration objective is similar to "novelty-based multi-objectivization" (Mouret, 2011; Lehman and Stanley, 2011), but is used to generate phenotype-fitness maps instead of producing highly fit solutions.
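
The two MOLE objectives can be sketched as follows (a simplified illustration with an invented phenotype descriptor and fitness function; the actual selection step in MOLE is performed by NSGA-II, which is not reimplemented here):

```python
import numpy as np

rng = np.random.default_rng(3)

def phenotype_descriptor(genome):
    # The phenotypic dimensions of interest: the axes of the map (toy example).
    return np.array([np.tanh(genome[:4].sum()), np.tanh(genome[4:].sum())])

def task_fitness(genome):
    return -float(np.sum(genome ** 2))  # placeholder fitness function

def mole_objectives(genome, archive):
    """Objective 1: distance to the nearest already-generated solution
    (pushes the search into unexplored regions of the phenotype landscape).
    Objective 2: fitness. MOLE hands both objectives to NSGA-II; here we
    only compute and report them."""
    d = phenotype_descriptor(genome)
    novelty = min(np.linalg.norm(d - a) for a in archive) if archive else float("inf")
    return novelty, task_fitness(genome)

archive = [phenotype_descriptor(rng.normal(size=8))]  # seed with one solution
for _ in range(20):
    candidate = rng.normal(size=8)
    novelty, fit = mole_objectives(candidate, archive)
    archive.append(phenotype_descriptor(candidate))  # every evaluated point fills the map
    print(f"novelty={novelty:.2f}  fitness={fit:.2f}")
```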

Evolving Modular Networks: Video for "The Evolutionary Origins of Modularity"

This video accompanies the following paper(s):

Encouraging Creative Thinking in Robots: The Creative Thinking Approach

This video accompanies the following paper(s):

Talk summarizing "Encouraging creative thinking in robots improves their ability to solve challenging problems"

Talk given by Jingyu Li at the 2014 GECCO Conference in Vancouver, British Columbia.

This video accompanies the following paper(s):

Talk summarizing "Novelty Search Creates Robots with General Skills for Exploration"

Talk summarizing the paper Novelty Search Creates Robots with General Skills for Exploration. Talk given by Roby Velez at the 2014 GECCO Conference in Vancouver, British Columbia.

This video accompanies the following paper(s):

Talk summarizing "Evolving Neural Networks That Are Both Modular and Regular: HyperNEAT Plus the Connection Cost Technique"

Talk given by Joost Huizinga at the 2014 GECCO Conference in Vancouver, British Columbia.

This video accompanies the following paper(s):
