Investigating whether HyperNEAT produces modular neural networks

Author(s): 
Clune J
Beckmann BE
McKinley PK
Ofria C
Year: 
2010
Abstract: 

HyperNEAT represents a class of neuroevolutionary algorithms that captures some of the power of natural development with a computationally efficient, high-level abstraction of development. This class of algorithms is intended to provide many of the desirable properties produced in biological phenotypes by natural developmental processes, such as regularity, modularity, and hierarchy. While it has been previously shown that HyperNEAT produces regular artificial neural network (ANN) phenotypes, in this paper we investigate the open question of whether HyperNEAT can produce modular ANNs. We conducted this investigation on problems where modularity should be beneficial, and found that HyperNEAT failed to generate modular ANNs. We then imposed modularity on HyperNEAT’s phenotypes and its performance improved, demonstrating that modularity increases performance on these problems. We next tested two techniques to encourage modularity in HyperNEAT, but did not observe an increase in either modularity or performance. Finally, we conducted tests on a simpler problem that requires modularity and found that HyperNEAT was able to rapidly produce modular solutions that solved the problem. We therefore present the first documented case of HyperNEAT producing a modular phenotype, but our inability to encourage modularity on harder problems where it would have been beneficial suggests that more work is needed to increase the likelihood that HyperNEAT and similar algorithms produce modular ANNs in response to challenging, decomposable problems.
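Two of the operations the abstract refers to, measuring how modular an evolved ANN is and imposing modularity on a phenotype, can be sketched in a few lines. This is a minimal illustration, assuming the network is available as a weighted adjacency matrix and using Newman's modularity Q with greedy community detection from networkx as a stand-in; the paper's own measure and intervention may differ.

```python
# Sketch: scoring and imposing structural modularity on an ANN phenotype.
# Assumptions: weighted adjacency-matrix representation; connectivity
# treated as undirected for simplicity; Newman's Q stands in for the
# paper's actual modularity measure.
import networkx as nx
from networkx.algorithms import community

def modularity_score(adjacency):
    """Return Newman's modularity Q for the best greedy partition
    (higher = more modular)."""
    G = nx.Graph()
    n = len(adjacency)
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            w = abs(adjacency[i][j])  # connection strength; sign ignored
            if w > 0:
                G.add_edge(i, j, weight=w)
    parts = community.greedy_modularity_communities(G, weight="weight")
    return community.modularity(G, parts, weight="weight")

def impose_modularity(adjacency, module_of):
    """Impose modularity by zeroing every connection whose endpoints
    lie in different (predefined) modules."""
    n = len(adjacency)
    return [[adjacency[i][j] if module_of[i] == module_of[j] else 0.0
             for j in range(n)]
            for i in range(n)]
```

Under this measure, a network of two densely connected clusters joined by a single weak link scores high, while a uniformly connected network scores near zero.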


Evolving artificial neural networks with generative encodings inspired by developmental biology

In this dissertation I (Jeff Clune) investigate the difference between generative encodings and direct encodings for evolutionary algorithms. Generative encodings are inspired by developmental biology and were designed, in part, to increase the regularity of synthetically evolved phenotypes. Regularity is an important design principle in both natural organisms and engineered designs. The majority of this dissertation focuses on how the property of regularity enables a generative encoding to outperform direct encoding controls, and whether a bias towards regularity also hurts the performance of the generative encoding on some problems. I also report on whether researchers can bias the types of regularities produced by a generative encoding to accommodate user preferences. Finally, I study the degree to which a generative encoding produces another important design principle, modularity.

Several previous studies have shown that generative encodings outperform direct encodings on highly regular problems. However, prior to this dissertation, it was not known how generative encodings compare to direct encodings on problems with different levels of regularity. On three different problems, I show that a generative encoding can exploit intermediate amounts of problem regularity, enabling it to increasingly outperform direct encoding controls as problem regularity increased. This performance gap emerged because the generative encoding produced regular artificial neural networks (ANNs) that in turn produced regular behaviors. The ANNs evolved with the generative encoding contained a diverse array of complicated, regular neural wiring patterns, whereas the ANNs produced by a direct encoding control were irregular.
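To make the encoding contrast concrete, here is a minimal, hypothetical sketch (not the dissertation's implementation): a direct encoding stores every connection weight in the genome, whereas a HyperNEAT-style generative encoding derives each weight by querying a small evolved function, a compositional pattern producing network (CPPN), with the coordinates of the two neurons it connects, so smooth genomes yield regular wiring. The fixed sine/Gaussian `toy_cppn` below is an illustrative stand-in for an evolved CPPN.

```python
# Illustrative contrast between a direct encoding and a HyperNEAT-style
# generative encoding for an ANN weight matrix. The fixed toy_cppn is an
# assumption standing in for an evolved network of function nodes.
import math

def direct_weights(genome, n):
    """Direct encoding: the genome simply *is* the flattened n x n
    weight matrix, so every weight mutates independently."""
    return [[genome[i * n + j] for j in range(n)] for i in range(n)]

def toy_cppn(x1, y1, x2, y2):
    """A toy CPPN mapping the coordinates of two neurons to the weight
    of the connection between them."""
    d = math.hypot(x2 - x1, y2 - y1)
    return math.sin(3.0 * (x1 + x2)) * math.exp(-d * d)

def generative_weights(n):
    """Generative encoding: weights are computed from substrate
    geometry, so the wiring pattern is regular by construction."""
    coords = [(i / max(n - 1, 1), 0.0) for i in range(n)]  # 1-D substrate
    return [[toy_cppn(*coords[i], *coords[j]) for j in range(n)]
            for i in range(n)]
```

Because nearby neurons receive similar CPPN outputs, mutations to the generative genome change the whole pattern coherently, which is the source of the regularity the paragraph above describes.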

I also document that the bias towards regularity can hurt a generative encoding on problems that have some amount of irregularity. I propose a new algorithm, called HybrID, wherein a generative encoding produces regular patterns and a direct encoding modifies those patterns to provide fitness-enhancing irregularities. HybrID outperformed a generative encoding alone on three problems for nearly all levels of regularity, which raises the question of whether generative encodings may ultimately excel not as stand-alone algorithms, but by being hybridized with a further process of irregular refinement.
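A minimal sketch of the HybrID idea under one simple realization: the generative encoding supplies the regular base pattern, and a directly encoded offset matrix, mutated weight by weight, supplies the irregular refinements. The additive combination and Gaussian mutation below are assumptions for illustration; the dissertation's HybrID variants (e.g., switching from the generative to the direct encoding partway through a run) may differ.

```python
# Sketch of a HybrID-style phenotype: a regular base pattern from a
# generative encoding, plus directly encoded, irregular offsets.
# The additive combination and Gaussian mutation are assumptions.
import random

def hybrid_phenotype(base_weights, offsets):
    """Final weight = regular generated weight + direct irregular offset."""
    return [[b + o for b, o in zip(brow, orow)]
            for brow, orow in zip(base_weights, offsets)]

def mutate_offsets(offsets, rate=0.05, sigma=0.1):
    """Direct-encoding mutation: perturb individual offsets independently,
    which is what lets the hybrid express fitness-enhancing irregularity."""
    return [[o + random.gauss(0.0, sigma) if random.random() < rate else o
             for o in row]
            for row in offsets]
```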

The results described so far document that a generative encoding can produce regular solutions. I then show that, at least for the generative encoding in this case study, it is possible to influence the types of regularities produced, which allows domain knowledge and preferences to be injected into the algorithm. I also investigated whether the generative encoding can produce modular solutions. I present the first documented case of this generative encoding producing a modular phenotype on a simple problem. However, the generative encoding's inability to create modularity on harder problems where modularity would have been beneficial suggests that more work is needed to increase the likelihood that this encoding produces modular ANNs in response to challenging, decomposable problems.
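The claim that the types of regularities produced can be influenced has a concrete mechanical reading in HyperNEAT-style systems: the experimenter chooses which geometric inputs the weight-generating function sees. As a hypothetical illustration (not the dissertation's mechanism), feeding the CPPN absolute x-coordinates biases every generated wiring pattern toward left-right symmetry.

```python
# Sketch: injecting a user preference (left-right symmetry) into a
# generative encoding by transforming the coordinates the CPPN sees.
# Illustrative assumption only; cppn can be any coordinate-to-weight
# function, e.g. the toy_cppn from the earlier sketch.
def symmetric_query(cppn, x1, y1, x2, y2):
    """Query with |x| so that connections mirrored across the y-axis
    receive identical weights, making symmetry a built-in regularity."""
    return cppn(abs(x1), y1, abs(x2), y2)
```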

Overall, this dissertation paints a more complete picture of generative encodings than prior studies. First, it demonstrates that, by producing regular ANNs and behaviors, generative encodings increasingly outcompete direct encodings as problem regularity increases. It next documents that a bias towards regularity can harm the performance of generative encodings when problems contain irregularities. The HybrID algorithm suggests a path forward, however, by revealing that a refinement process that fine-tunes the regular patterns produced by a generative encoding can boost performance by accounting for problem irregularities. Finally, the dissertation shows that the generative encoding studied can produce modular networks on simple problems, but may struggle to do so on harder problems. The general conclusion that can be drawn from this work is that generative encodings can produce some of the properties seen in complex, natural organisms, and will likely be an important part of our long-term goal of synthetically evolving phenotypes that approach the capability, intelligence, and complexity of their natural rivals.

Pub. Info: 
Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '10), pp. 635–642
BibTeX: 

@inproceedings{Clune:2010:IWH:1830483.1830598,
author = {Clune, Jeff and Beckmann, Benjamin E. and McKinley, Philip K. and Ofria, Charles},
title = {Investigating Whether {HyperNEAT} Produces Modular Neural Networks},
booktitle = {Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation},
series = {GECCO '10},
year = {2010},
isbn = {978-1-4503-0072-8},
location = {Portland, Oregon, USA},
pages = {635--642},
numpages = {8},
url = {http://doi.acm.org/10.1145/1830483.1830598},
doi = {10.1145/1830483.1830598},
acmid = {1830598},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {artificial neural networks, developmental encodings, generative encodings, hyperneat, indirect encodings, modularity, neat, neuroevolution},
}