Deep Learning
We investigate deep learning, a way to train deep neural networks (neural networks with many layers) to solve complicated tasks. Deep neural networks can transcribe spoken words to text, translate between languages, and recognize objects in images. While they have recently achieved impressive feats, they remain largely black boxes whose inner workings we do not understand. Our research focuses on shedding light on these black boxes to understand what they learn and how they perform so well. See the papers below for how we do that.
Publications
- (2016) Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space. arXiv:1612.00005. (pdf)
- (2016) Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. Advances in Neural Information Processing Systems (NIPS) 29 (23% acceptance rate). (pdf)
- (2016) Creative Generation of 3D Objects with Deep Learning and Innovation Engines. Proceedings of the International Conference on Computational Creativity. (pdf)
- (2016) Multifaceted Feature Visualization: Uncovering the different types of features learned by each neuron in deep neural networks. Visualization for Deep Learning workshop. International Conference on Machine Learning. Oral presentation and Winner: Best workshop paper. (pdf)
- (2016) Convergent Learning: Do different neural networks learn the same representations? International Conference on Learning Representations (ICLR '16). Oral presentation (5.7% acceptance rate). (pdf)
- (2015) Understanding neural networks through deep visualization. ICML Deep Learning workshop. (pdf) (more information)
- (2015) Innovation Engines: Automated creativity and improved stochastic optimization via Deep Learning. Proceedings of the Genetic and Evolutionary Computation Conference. Best Paper Award (3% acceptance rate). (pdf)
- (2015) Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. Computer Vision and Pattern Recognition (CVPR '15), IEEE. Community Top Paper Award. Oral presentation (3% acceptance rate). (pdf) (more information)
- (2014) How transferable are features in deep neural networks? Advances in Neural Information Processing Systems (NIPS) 27. Pages 3320-3328. Oral presentation (1% acceptance rate). (pdf)