Identifying core functional networks and functional modules within artificial neural networks via subsets regression

Roby Velez
Jeff Clune
As the power and capabilities of Artificial Neural Networks (ANNs) grow, so do their size and complexity. To both decipher and improve ANNs, we need better tools for understanding their inner workings. To that end, we introduce an algorithm called Subsets Regression on network Connectivity (SRC). SRC prunes away unimportant nodes and connections in ANNs, revealing a core functional network (CFN) that is simpler and thus easier to analyze. SRC can also identify functional modules within an ANN. We demonstrate SRC’s capabilities on both directly and indirectly encoded ANNs evolved to solve a modular problem. In many cases where evolution produces a highly entangled, non-modular ANN, SRC reveals a CFN hidden within the network that is actually sparse and modular. That finding will substantially impact the sizable and ongoing research into the evolution of modularity and will encourage researchers to revisit previous results on that topic. We also show that the SRC algorithm estimates the modularity Q-Score of a network more accurately than state-of-the-art approaches. Overall, SRC enables us to greatly simplify ANNs in order to better understand and improve them, and reveals that they often contain hidden modular structures.
Pub. Info: Proceedings of the Genetic and Evolutionary Computation Conference