Neuromodulation improves the evolution of forward models
Many animals predict the outcomes of their actions using internal models. Such ``forward models'' enable an animal to rapidly simulate many candidate actions without performing them, and thus to choose an appropriate one. Robots would similarly benefit from forward models. However, such models must change over time to account for changes in the environment or the body, such as injury; forward models must therefore be not only accurate, but also adaptable. Neural networks can learn complex functions with high accuracy, making them suitable candidates for building robot forward models. Most neural networks, however, are static: once training ends, their weights remain fixed, so the network cannot adapt if something about the world or the body changes. Plastic neural networks instead change their connections over time via local learning rules (e.g. the Hebbian rule) and can thus cope with unforeseen changes. A more complex, yet still biologically inspired, technique is neuromodulation, which can change per-connection learning rates in different contexts. In this paper, we test the hypothesis that neuromodulation improves the evolution of forward models because it can heighten learning after drastic changes such as injury. We compare forward models evolved with neuromodulation to those evolved with static neural networks and with Hebbian plastic neural networks. The results show that forward models evolved with neuromodulation adapt to changes significantly better than the controls. Our findings suggest that neuromodulation is an effective tool for giving robots (and artificial intelligence agents more generally) more adaptable, effective forward models.
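To make the distinction between Hebbian plasticity and neuromodulation concrete, the following is a minimal sketch (not the paper's implementation; the function names, the learning rate `eta`, and the scalar modulatory signal `m` are illustrative assumptions). A plain Hebbian rule updates weights in proportion to correlated pre- and post-synaptic activity; a neuromodulated rule additionally scales that update by a context-dependent signal, so learning can be suppressed in familiar conditions and heightened after a drastic change such as injury.

```python
import numpy as np

def hebbian_update(w, pre, post, eta=0.01):
    # Plain Hebbian rule: weight change proportional to the
    # correlation of pre- and post-synaptic activity.
    return w + eta * np.outer(post, pre)

def neuromodulated_update(w, pre, post, m, eta=0.01):
    # Neuromodulation (illustrative form): a context-dependent
    # signal m scales the effective learning rate. With m near 0
    # the weights are effectively frozen; a large m (e.g. driven
    # by high prediction error after injury) heightens learning.
    return w + m * eta * np.outer(post, pre)

# Toy usage: the modulatory signal gates plasticity.
w = np.zeros((2, 3))
pre = np.array([1.0, 0.5, 0.0])
post = np.array([0.2, 1.0])
w_frozen = neuromodulated_update(w, pre, post, m=0.0)   # no change
w_boosted = neuromodulated_update(w, pre, post, m=5.0)  # amplified update
```

In practice (e.g. in evolved networks), `m` would itself be the output of evolved modulatory neurons rather than a hand-set constant, and it may vary per connection rather than globally.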
Proceedings of the Genetic and Evolutionary Computation Conference