Encouraging Creativity and Curiosity in Robots

Our work shows that we can make smarter robots by encouraging them to think differently about their environment and to try new things. In the Creative Thinking Approach, we reward robots for having "ideas" they have never had before, meaning that they express novel activation patterns in the neurons of their simulated brains. A different algorithm we call Curiosity Search instead rewards agents for "doing something new," in the hope that expressing a myriad of behaviors will allow them to discover skills that are useful for solving a task. Our research shows that both algorithms can speed up the evolution and learning of artificial agents that solve challenging problems.
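The core idea of rewarding agents for "doing something new" can be sketched as an intra-life exploration bonus: the agent earns extra reward the first time it reaches each coarse region of the environment during an episode, and nothing for revisits. The sketch below is only illustrative, assuming a simple 2-D position discretized into grid cells; the function and parameter names (`make_exploration_bonus`, `cell_size`, `bonus`) are hypothetical, not the paper's implementation.

```python
def make_exploration_bonus(cell_size=10.0, bonus=1.0):
    """Return a function that pays a one-time bonus per newly visited grid cell.

    A fresh `visited` set is created per episode, so novelty is measured
    within a single lifetime (intra-life), not across the whole run.
    """
    visited = set()

    def bonus_fn(x, y):
        # Discretize the continuous position into a coarse grid cell.
        cell = (int(x // cell_size), int(y // cell_size))
        if cell in visited:
            return 0.0        # already explored: no extra reward
        visited.add(cell)
        return bonus          # first visit: pay the exploration bonus

    return bonus_fn

# Example episode: two visits to cell (0, 0), then one to cell (1, 0).
bonus_fn = make_exploration_bonus()
print(bonus_fn(3.0, 4.0))   # new cell -> 1.0
print(bonus_fn(7.0, 2.0))   # same cell -> 0.0
print(bonus_fn(15.0, 4.0))  # new cell -> 1.0
```

This bonus would typically be added to the task reward at each timestep, nudging the agent to cover more of the environment while still pursuing the task.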


Videos

Deep Curiosity Search Example Agents: Seaquest (Best)

Our best agent produced by Curiosity Search on the Atari game Seaquest. This agent achieves approximately 132,000 points, surpassing many other Deep RL algorithms and vastly exceeding what an average human can achieve!

Deep Curiosity Search Example Agents: Seaquest

A typical agent produced by Curiosity Search on the Atari game Seaquest. This agent achieves approximately 3400 points, which is double that of some popular Deep RL algorithms like DQN and A2C, although not as good as Rainbow or Ape-X.

Deep Curiosity Search Example Agents: Montezuma's Revenge (Best)

Our best agent produced by Curiosity Search on the very challenging Atari game Montezuma's Revenge. This agent achieves 6600 points and explores many rooms, whereas most popular Deep RL algorithms like DQN, A2C, and Rainbow struggle to even pick up the first key!

Deep Curiosity Search Example Agents: Montezuma's Revenge

A typical agent produced by Curiosity Search on the very challenging Atari game Montezuma's Revenge. This agent achieves 3500 points and explores many rooms, whereas most popular Deep RL algorithms like DQN, A2C, and Rainbow struggle to even pick up the first key!

Encouraging Creative Thinking in Robots: The Creative Thinking Approach


Talk summarizing "Encouraging creative thinking in robots improves their ability to solve challenging problems"

Talk given by Jingyu Li at the 2014 GECCO Conference in Vancouver, British Columbia.



Publications