Quantifying Interpretability of Deep Learning in Visual Recognition

Speaker: Bolei Zhou, CSAIL MIT

Date: Thursday, July 06, 2017

Time: 3:00 PM to 4:00 PM. Note: all times are in the Eastern Time Zone.

Public: Yes

Location: 32-D463 (Star)

Host: David Bau, CSAIL MIT

Contact: Bolei Zhou, bzhou@csail.mit.edu

Relevant URL: http://netdissect.csail.mit.edu/

Reminders to: seminar@csail.mit.edu, vision-meeting@csail.mit.edu

Reminder Subject: TALK: Quantifying Interpretability of Deep Learning in Visual Recognition

We propose a general framework called Network Dissection for quantifying the interpretability of latent representations of deep convolutional neural networks (CNNs) by evaluating the alignment between individual hidden units and a set of semantic concepts. Given any CNN model, the proposed method draws on a broad data set of visual concepts to score the semantics of hidden units at each intermediate convolutional layer. Units with semantics are assigned labels across a range of objects, parts, scenes, textures, materials, and colors. We use the proposed method to test the hypothesis that the interpretability of units is equivalent to that of random linear combinations of units, then we apply our method to compare the latent representations of various networks trained to solve different supervised and self-supervised tasks. We further analyze the effect of training iterations, compare networks trained with different initializations, examine the impact of network depth and width, and measure the effect of dropout and batch normalization on the interpretability of deep visual representations. We demonstrate that the proposed method can shed light on characteristics of CNN models and training methods that go beyond measurements of their discriminative power. The project page is at http://netdissect.csail.mit.edu.
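At its core, the alignment between a hidden unit and a concept can be scored as the intersection-over-union (IoU) between the unit's binarized activation map and a concept's segmentation mask. The sketch below is a minimal illustration of that scoring idea, not the full Network Dissection pipeline: the fixed `threshold` argument is a simplification (the actual method selects a per-unit threshold from an activation quantile over the whole data set), and `unit_concept_iou` is a hypothetical helper name.

```python
import numpy as np

def unit_concept_iou(activation, concept_mask, threshold):
    """Score alignment between one unit's activation map and a binary
    concept segmentation mask via intersection-over-union (IoU)."""
    unit_mask = activation > threshold  # binarize the activation map
    inter = np.logical_and(unit_mask, concept_mask).sum()
    union = np.logical_or(unit_mask, concept_mask).sum()
    return inter / union if union > 0 else 0.0

# Toy example: a 4x4 activation map whose high responses coincide
# with a concept mask covering the top-left 2x2 region.
act = np.array([[0.9, 0.8, 0.1, 0.0],
                [0.7, 0.6, 0.0, 0.0],
                [0.0, 0.0, 0.0, 0.0],
                [0.0, 0.0, 0.0, 0.0]])
concept = np.zeros((4, 4), dtype=bool)
concept[:2, :2] = True
print(unit_concept_iou(act, concept, threshold=0.5))  # -> 1.0 (perfect overlap)
```

A unit is then labeled with the concept whose IoU exceeds a chosen cutoff, and the number of such labeled units gives one quantitative measure of a layer's interpretability.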

Bolei Zhou is a fifth-year Ph.D. candidate in the Computer Science and Artificial Intelligence Laboratory at MIT, working with Prof. Antonio Torralba. His research is on computer vision and machine learning, with a particular interest in visual scene understanding and network interpretability. He is a recipient of the Facebook Fellowship, the Microsoft Research Asia Fellowship, and the MIT Greater China Fellowship. More details about his research are at his homepage: http://people.csail.mit.edu/bzhou/.

See other events that are part of the Vision Seminar Series 2017.

Created by Bolei Zhou on Tuesday, July 04, 2017 at 2:17 PM.