Recursive Deep Learning for Modeling Compositional Meaning in Language

Speaker: Richard Socher, Stanford University

Date: Thursday, April 03, 2014

Time: 4:00 PM to 5:00 PM (all times are in the Eastern Time Zone)

Refreshments: 3:45 PM

Public: Yes

Location: 32-G449

Host: Regina Barzilay and Tommi Jaakkola, CSAIL

Contact: Francis Doughty, 253-4602, francisd@csail.mit.edu

Reminders to: seminars@csail.mit.edu

Reminder Subject: TALK: Richard Socher: Recursive Deep Learning for Modeling Compositional Meaning in Language

Great progress has been made in natural language processing thanks to
many different algorithms, each often specific to one application.
Most learning algorithms force language into simplified representations
such as bag-of-words or fixed-size windows, or they require
human-designed features. I will introduce three models based on
recursive neural networks that can learn linguistically plausible
representations of language. These methods jointly learn compositional
features and grammatical sentence structure for parsing or phrase-level
sentiment prediction. They can also be used to represent the visual
meaning of a sentence, which makes it possible to find images based on
query sentences or to describe images with richer descriptions than
single object names.
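
For a concrete picture, the core operation behind such recursive models
is a single composition function applied repeatedly over a parse tree.
The snippet below is a minimal, illustrative sketch of that idea in
Python/NumPy; the dimensions, weight values, and names are hypothetical
assumptions for exposition, not the speaker's exact model.

    import numpy as np

    d = 4                                        # toy embedding dimension
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(d, 2 * d))   # shared composition matrix (random, illustrative)
    b = np.zeros(d)                              # bias vector

    def compose(left, right):
        # Merge two child phrase vectors into one parent vector; applying
        # this bottom-up over a parse tree yields a vector for the phrase.
        children = np.concatenate([left, right])
        return np.tanh(W @ children + b)

    # e.g. ("eating" + "spaghetti") composed with a "with a spoon" vector
    v_eating, v_spaghetti, v_with_spoon = rng.normal(size=(3, d))
    phrase_vec = compose(compose(v_eating, v_spaghetti), v_with_spoon)
    print(phrase_vec.shape)   # (4,)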

Besides achieving state-of-the-art performance, the models capture
interesting phenomena in language such as compositionality. For
instance, people easily see that the "with" phrase in "eating spaghetti
with a spoon" specifies a way of eating, whereas in "eating spaghetti
with some pesto" it specifies the dish. I show that my model resolves
these prepositional-attachment ambiguities well thanks to its
distributed representations. In sentiment analysis, a new tensor-based
recursive model learns different types of high-level negation and how
they can change the meaning of longer phrases with many positive words.
It also learns that when contrastive conjunctions such as "but" are
used, the sentiment of the phrases following them usually dominates.
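
The tensor-based variant differs from the plain composition above in
that the two child vectors also interact multiplicatively through a
third-order tensor, which gives the model extra capacity for effects
like negation. The sketch below illustrates one such composition step;
again the dimensions, parameter values, and names are assumptions made
for illustration only.

    import numpy as np

    d = 4
    rng = np.random.default_rng(1)
    W = rng.normal(scale=0.1, size=(d, 2 * d))         # standard composition matrix
    V = rng.normal(scale=0.1, size=(d, 2 * d, 2 * d))  # one 2d x 2d tensor slice per output unit

    def compose_tensor(left, right):
        c = np.concatenate([left, right])              # stacked children, shape (2d,)
        bilinear = np.einsum('i,kij,j->k', c, V, c)    # c^T V[k] c for each slice k
        return np.tanh(bilinear + W @ c)

    # toy example: composing "not" with "good"
    v_not, v_good = rng.normal(size=(2, d))
    print(compose_tensor(v_not, v_good))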

Bio:
Richard Socher is a PhD student at Stanford working with Chris Manning
and Andrew Ng. His research interests are machine learning for NLP and
vision. He is interested in developing new deep learning models that
learn useful features, capture compositional structure in multiple
modalities, and perform well across different tasks.
He was awarded the 2011 Yahoo! Key Scientific Challenges Award,
the Distinguished Application Paper Award at ICML 2011, a Microsoft
Research PhD Fellowship in 2012, and a 2013 "Magic Grant" from the
Brown Institute for Media Innovation.

See other events that are part of the CS Special Seminar Series 2014.

Created by Francis Doughty on Thursday, February 06, 2014 at 1:54 PM.