Modular Neural Architectures for Grounded Language Learning

Speaker: Jacob Andreas, UC Berkeley

Date: Wednesday, February 01, 2017

Time: 2:00 PM to 3:00 PM (all times are in the Eastern Time Zone)

Public: Yes

Location: 32-D463 (Stata Center - Star Conference Room)

Host: Regina Barzilay, MIT CSAIL

Contact: Marcia G. Davidson, 617-253-3049, marcia@csail.mit.edu

Reminders to: seminars@csail.mit.edu, rbg@csail.mit.edu

Reminder Subject: TALK: Modular Neural Architectures for Grounded Language Learning

Language understanding depends on two abilities: an ability to translate between natural language utterances and representations of meaning, and an ability to relate these meaning representations to the world. In the natural language processing literature, these tasks are respectively known as "semantic parsing" and "grounding", and have been treated as essentially independent problems. In this talk, I will present a family of neural architectures for jointly learning to ground language in the world and reason about it compositionally.

I will begin by describing a model that uses syntactic information to dynamically construct neural networks from composable primitives. The resulting composed networks can be used to achieve state-of-the-art results on a variety of grounded question answering tasks. Next, I will present a model for contextual referring expression generation, in which contrastive behavior results from a combination of learned semantics and inference-driven pragmatics. This model is again backed by modular neural components---in this case elementary listener and speaker representations. If time permits, I will conclude by discussing recent work on using language-like "sketches" to learn modular policy representations for interactive environments.
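To make the composition idea concrete, here is a minimal sketch (not the speaker's implementation) of how a question-specific network might be assembled from reusable neural modules according to a parse-derived layout. The module names (Find, And, Describe), the layout encoding, and all dimensions are illustrative assumptions, written in PyTorch.

```python
import torch
import torch.nn as nn


class Find(nn.Module):
    """Produce an attention map over image regions for one query word."""
    def __init__(self, feat_dim, vocab_size, embed_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.score = nn.Linear(feat_dim + embed_dim, 1)

    def forward(self, features, word_idx):
        # features: (num_regions, feat_dim); word_idx: 0-dim LongTensor
        w = self.embed(word_idx).expand(features.size(0), -1)
        scores = self.score(torch.cat([features, w], dim=1)).squeeze(1)
        return torch.softmax(scores, dim=0)


class And(nn.Module):
    """Intersect two attention maps (elementwise minimum)."""
    def forward(self, att1, att2):
        return torch.min(att1, att2)


class Describe(nn.Module):
    """Map an attention-weighted feature vector to answer logits."""
    def __init__(self, feat_dim, num_answers):
        super().__init__()
        self.out = nn.Linear(feat_dim, num_answers)

    def forward(self, features, att):
        return self.out(att @ features)  # attention-weighted pooling


def compose(layout, modules, features):
    """Recursively assemble and run a network from a layout tree."""
    op, *args = layout
    if op == "find":
        return modules["find"](features, torch.tensor(args[0]))
    if op == "and":
        return modules["and"](compose(args[0], modules, features),
                              compose(args[1], modules, features))
    if op == "describe":
        return modules["describe"](features, compose(args[0], modules, features))
    raise ValueError(f"unknown module: {op}")


# Hypothetical usage for a question such as "what is the cat on the mat?"
feat_dim, vocab_size, num_answers = 128, 1000, 10
modules = {"find": Find(feat_dim, vocab_size),
           "and": And(),
           "describe": Describe(feat_dim, num_answers)}
features = torch.randn(20, feat_dim)                      # 20 image-region features
layout = ("describe", ("and", ("find", 3), ("find", 7)))  # 3, 7: word indices for "cat", "mat"
logits = compose(layout, modules, features)               # shape: (num_answers,)
```

In this toy setup the layout tree stands in for the syntactic analysis, and the same Find, And, and Describe parameters are shared across all questions, so the composable primitives are learned jointly while the composed structure changes from question to question.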

This event is not part of a series.

Created by Marcia G. Davidson on Tuesday, January 24, 2017 at 5:00 PM.