THESIS DEFENSE: Interpretable Neural Models for Natural Language Processing
Speaker: Tao Lei, MIT CSAIL
Date: Thursday, January 19, 2017
Time: 4:00 PM to 5:00 PM (Eastern Time)
Public: Yes
Location: 32-G449 (Stata Center - Patil/Kiva Conference Room)
Host: Regina Barzilay, MIT CSAIL
Contact: Marcia G. Davidson, 617-253-3049, marcia@csail.mit.edu
Speaker URL: None
Speaker Photo: None
Reminders to: seminars@csail.mit.edu, rbg@csail.mit.edu
Reminder Subject: TALK: THESIS DEFENSE: Interpretable Neural Models for Natural Language Processing
The success of neural network models often comes at a cost in interpretability. This thesis addresses the problem by providing justifications for the model's structure and predictions.
In the first part of this thesis, we present a class of sequence operations for text processing. The proposed component generalizes convolution operations and gated aggregations. As justification, we relate this component to string kernels, i.e., functions measuring the similarity between sequences, and demonstrate how it encodes an efficient kernel-computation algorithm into its structure. The proposed model achieves state-of-the-art or competitive results compared to alternative architectures (such as LSTMs and CNNs) across several NLP applications.
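The kernel connection sketched above can be illustrated with a toy example. In the classic subsequence-kernel dynamic program, n-gram features over a sequence are accumulated in a single left-to-right pass using a gated, recurrence-like update, which is the structural link to gated sequence models. The scalar setting, the names `xs` and `lam`, and the restriction to 1- and 2-gram features below are simplifications for illustration, not the thesis's actual component (which operates on learned vector representations):

```python
def subsequence_features(xs, lam):
    """Accumulate 1-gram and (gapped) 2-gram features in one pass.

    c1 sums single elements; c2 sums products x_i * x_j over all pairs
    i < j, with each pair discounted by `lam` per step of gap between
    i and j (lam = 1.0 means no decay).  Both updates are simple gated
    recurrences, mirroring how a string kernel can be computed
    incrementally rather than by enumerating subsequences.
    """
    c1, c2 = 0.0, 0.0
    for x in xs:
        # Simultaneous update: c2 uses the previous c1 (pre-decay).
        c1, c2 = lam * c1 + x, lam * c2 + c1 * x
    return c1, c2


# With no decay, c2 recovers the plain sum over all ordered pairs:
# for [1, 2, 3] that is 1*2 + 1*3 + 2*3 = 11.
print(subsequence_features([1.0, 2.0, 3.0], 1.0))
```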
In the second part, we learn rationales behind the model's predictions by extracting input pieces as supporting evidence. Rationales are tailored to be short and coherent, yet sufficient for making the same prediction. Our approach combines two modular components, a generator and an encoder, which are trained to operate well together. The generator specifies a distribution over text fragments as candidate rationales, and these are passed through the encoder for prediction. Rationales are never given during training; instead, the model is regularized by the desiderata for rationales. We demonstrate the effectiveness of this learning framework in applications such as multi-aspect sentiment analysis. Our method achieves performance above 90% when evaluated against manually annotated rationales.
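The generator/encoder pipeline above can be sketched with a deliberately simplified stand-in: a hand-coded lexicon plays the role of both modules, and a sparsity penalty stands in for the "short and coherent" regularizer. Everything here (the `LEXICON`, the heuristic selection rule, the sign-based prediction) is hypothetical scaffolding; in the thesis both modules are neural networks trained jointly, with no rationale supervision:

```python
# Hypothetical sentiment lexicon used only to make the sketch runnable.
LEXICON = {"great": 1.0, "superb": 1.0, "terrible": -1.0, "bland": -1.0}

def generator(tokens):
    """Return a binary mask selecting a candidate rationale.

    Here a fixed heuristic (keep lexicon words); in the thesis the
    generator defines a learned distribution over text fragments.
    """
    return [tok in LEXICON for tok in tokens]

def encoder(tokens, mask):
    """Predict from the rationale alone: unmasked tokens are dropped."""
    total = sum(LEXICON.get(tok, 0.0)
                for tok, keep in zip(tokens, mask) if keep)
    return "positive" if total >= 0 else "negative"

def sparsity_penalty(mask):
    """Fraction of tokens selected; a stand-in for the brevity regularizer."""
    return sum(mask) / len(mask)


tokens = "the soup was bland but the service was superb and great".split()
mask = generator(tokens)
print(encoder(tokens, mask), round(sparsity_penalty(mask), 3))
```

The key design point carried over from the thesis is that the encoder sees only the generator's selection, so a rationale must be sufficient for the prediction, while the penalty pushes it to stay short.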
Thesis Advisor: Regina Barzilay
Thesis Committee: Jim Glass and Tommi Jaakkola
Created by Marcia G. Davidson on Wednesday, January 11, 2017 at 11:20 AM.