Jimmy Ba: Interpretable and Scalable Deep Learning

Speaker: Jimmy Ba, University of Toronto

Date: Friday, February 24, 2017

Time: 11:00 AM to 12:00 PM Note: all times are in the Eastern Time Zone

Public: Yes

Location: Seminar Room G449 (Patil/Kiva)

Host: Josh Tenenbaum, BCS/CSAIL, MIT

Contact: Jiajun Wu, jiajunwu@csail.mit.edu

Reminders to: seminars@csail.mit.edu

Reminder Subject: TALK: Jimmy Ba: Interpretable and Scalable Deep Learning

Deep learning has transformed how we solve many core artificial intelligence tasks, including object recognition, speech processing, and machine translation, by using “black box” large-scale neural networks trained for weeks or months. But to build a reliable, scalable, and practical intelligent system reaching human-level performance, we need to address two of the most fundamental challenges in deep learning: interpretability and efficient learning. In this talk, I will discuss my work confronting these challenges. I will first introduce a broad new class of visual attention models that can automatically discover human-like gazing patterns, along with a new learning algorithm for these models derived by connecting reinforcement learning with approximate probabilistic inference. I will show that these attention-based models add a degree of interpretability to current “black box” deep learning approaches. I will then present a novel optimization algorithm that leverages distributed computing to significantly shorten the training time of state-of-the-art large-scale neural networks with tens of millions of parameters. Finally, I will discuss how these advances in attention-based models and optimization algorithms can be applied to domains such as computer vision, natural language processing, reinforcement learning, and computational biology.
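To make the attention idea concrete: a soft visual attention model scores candidate image locations against a query vector, normalizes the scores with a softmax, and reads out a “glimpse” as the attention-weighted average of the location features. The weights expose where the model is looking, which is the source of the interpretability mentioned above. The sketch below is a minimal generic illustration of soft attention, not the speaker's specific model; the feature vectors and query are invented toy data.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(query, locations):
    """Soft attention: score each location feature vector against the
    query (dot product), softmax the scores into weights, and return
    the weights plus the weighted-average glimpse vector."""
    scores = [sum(q * f for q, f in zip(query, feats)) for feats in locations]
    weights = softmax(scores)
    dim = len(query)
    glimpse = [sum(w * feats[i] for w, feats in zip(weights, locations))
               for i in range(dim)]
    return weights, glimpse

# Toy example: three candidate locations, query aligned with the first.
query = [1.0, 0.0]
locations = [[5.0, 0.0], [0.0, 5.0], [1.0, 1.0]]
weights, glimpse = attend(query, locations)
```

Inspecting `weights` shows the model attends almost entirely to the first location, since its features align with the query; hard (stochastic) attention models instead sample a single location from these weights, which is where the reinforcement-learning view of training comes in.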

Bio: Jimmy Ba is a PhD candidate at the University of Toronto, supervised by Professor Geoffrey Hinton. He previously received a BASc (2011) and an MASc (2013) from the University of Toronto under Ruslan Salakhutdinov and Brendan Frey. He is a recipient of the Facebook Graduate Fellowship. His primary research interests are in machine learning, numerical optimization, and neural networks. During his PhD, he developed novel visual attention models that are more expressive and interpretable than standard convolutional neural networks. He is broadly interested in questions related to sample-efficient deep learning, reinforcement learning, and Bayesian statistics.

This event is not part of a series.

Created by Jiajun Wu on Friday, February 17, 2017 at 2:52 PM.