Deep Learning
We investigate deep learning, a way to train deep neural networks (neural networks with many layers) to solve complicated tasks. Deep neural networks can transcribe spoken words to text, translate between languages, and recognize objects in images. Although they have recently been shown to perform impressive feats, they remain mostly black boxes whose inner workings we do not understand. Our research focuses on shedding light on these black boxes to understand what they learn and why they perform so well. See below for papers describing how we do that.
Publications
- (2018) Machine learning to classify animal species in camera trap images: Applications in ecology. Methods in Ecology and Evolution 2018: 1-6. (pdf)
- (2018) Deep curiosity search: Intra-life exploration improves performance on challenging deep reinforcement learning problems. NIPS Deep Reinforcement Learning Workshop. (pdf) (poster)
- (2018) Safe mutations for deep and recurrent neural networks through output gradients. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO). (pdf)
- (2018) Improving exploration in evolution strategies for deep reinforcement learning via a population of novelty-seeking agents. Advances in Neural Information Processing Systems (NIPS) 31 (20% acceptance rate). (pdf)
- (2018) ES is more than just a traditional finite-difference approximator. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO). (pdf)
- (2018) Differentiable plasticity: Training plastic neural networks with backpropagation. ICML. (pdf)
- (2018) Deep neuroevolution: Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning. NIPS Deep Reinforcement Learning Workshop. (pdf)
- (2018) Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning. Proceedings of the National Academy of Sciences, Vol. 115, No. 25. (pdf) (html) (featured on the cover)
- (2017) On the relationship between the OpenAI evolution strategy and stochastic gradient descent. arXiv 1712.06564. (pdf)
- (2017) Plug & play generative networks: Conditional iterative generation of images in latent space. In Computer Vision and Pattern Recognition (CVPR '17), IEEE, 2017. Spotlight oral presentation (~10% acceptance rate). (pdf)
- (2016) Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. Advances in Neural Information Processing Systems (NIPS) 29 (23% acceptance rate). (pdf)
- (2016) Creative Generation of 3D Objects with Deep Learning and Innovation Engines. Proceedings of the International Conference on Computational Creativity. (pdf)
- (2016) Multifaceted feature visualization: Uncovering the different types of features learned by each neuron in deep neural networks. Visualization for Deep Learning workshop. International Conference on Machine Learning. Oral presentation and Winner: Best workshop paper. (pdf)
- (2016) Convergent Learning: Do different neural networks learn the same representations? International Conference on Learning Representations (ICLR '16). Oral presentation (5.7% acceptance rate). (pdf)
- (2015) Understanding neural networks through deep visualization. ICML Deep Learning workshop. (pdf) (more information)
- (2015) Innovation Engines: Automated creativity and improved stochastic optimization via deep learning. Proceedings of the Genetic and Evolutionary Computation Conference. Best Paper Award (3% acceptance rate). (pdf)
- (2015) Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Computer Vision and Pattern Recognition (CVPR '15), IEEE, 2015. Community Top Paper Award. Oral presentation (3% acceptance rate). (pdf) (more information)
- (2014) How transferable are features in deep neural networks? Advances in Neural Information Processing Systems (NIPS) 27. Pages 3320-3328. Oral presentation (1% acceptance rate). (pdf)