Neural Network Language Models as Psycholinguistic Subjects: The Case of Filler-Gap Dependencies

Speaker: Ethan Wilcox, Harvard Linguistics

Date: Thursday, May 16, 2019

Time: 5:00 PM to 6:00 PM

Public: Yes

Location: 46-3189

Contact: Yen-Ling Kuo, ylkuo@csail.mit.edu


Reminders to: seminars@lists.csail.mit.edu, comp-lang@mit.edu

Reminder Subject: TALK: [CompLang] Neural Network Language Models as Psycholinguistic Subjects: The Case of Filler-Gap Dependencies

Recurrent Neural Networks (RNNs) are one type of neural model that has achieved state-of-the-art performance on a variety of natural language tasks, including translation and language modeling (used, for example, in text prediction). However, the nature of the representations that these 'black boxes' learn is poorly understood, raising issues of accountability and controllability for NLP systems. In this talk, I will argue that one way to assess what these networks are learning is to treat them like subjects in a psycholinguistic experiment. By feeding them hand-crafted sentences that reveal the models' underlying knowledge of language, I will demonstrate that they are able to learn the filler-gap dependency and are even sensitive to the hierarchical constraints implicated in the dependency. Next, I turn to "island effects": structural configurations that block the filler-gap dependency and have been theorized to be unlearnable. I demonstrate that RNNs are able to learn some of the "island" constraints and even recover some of their pre-island gap expectation. These experiments show that statistical models trained on linear sequences of words are able to learn some fine-grained syntactic rules; however, their behavior remains un-humanlike in many cases.
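The probing paradigm sketched above is typically implemented by measuring a language model's word-by-word surprisal (negative log probability) on hand-crafted minimal pairs: if the network has learned the filler-gap dependency, a gap should be much less surprising when a wh-filler is present. The sketch below is a rough, hypothetical illustration of that measurement in PyTorch, not the speaker's code or experimental materials; the tiny, randomly initialized LSTM and toy vocabulary are placeholders, and a model trained on a large corpus would be needed for meaningful numbers.

```python
# Minimal sketch (assumptions throughout, not the speaker's code) of the
# surprisal-based probing paradigm: feed a language model minimal-pair
# sentences and compare its surprisal, -log2 P(word | context), at the
# critical region. The LSTM below is randomly initialized for illustration
# only; a trained model would be needed for meaningful results.

import math
import torch
import torch.nn as nn

# Toy vocabulary covering the example sentences (an assumption of this sketch).
VOCAB = ["<s>", "i", "know", "what", "that", "the", "lion", "devoured",
         "at", "sunrise", ".", "<unk>"]
W2I = {w: i for i, w in enumerate(VOCAB)}

class TinyLSTMLM(nn.Module):
    """A minimal word-level LSTM language model."""
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, ids):
        h, _ = self.lstm(self.embed(ids))
        return self.out(h)  # logits over the next word at each position

def surprisals(model, sentence):
    """Per-word surprisal (in bits) of each word given its left context."""
    words = ["<s>"] + sentence.lower().split()
    ids = torch.tensor([[W2I.get(w, W2I["<unk>"]) for w in words]])
    with torch.no_grad():
        logprobs = torch.log_softmax(model(ids), dim=-1)
    # Surprisal of the word at position t is -log2 P(word_t | words_<t).
    return [(w, -logprobs[0, t - 1, ids[0, t]].item() / math.log(2))
            for t, w in enumerate(words) if t > 0]

model = TinyLSTMLM(len(VOCAB))
# Filler-gap minimal pair: the wh-filler "what" licenses a gap after
# "devoured"; without a filler, a trained model should find the gap surprising.
for s in ["I know what the lion devoured at sunrise .",
          "I know that the lion devoured at sunrise ."]:
    print(s)
    for word, bits in surprisals(model, s):
        print(f"  {word:10s} {bits:5.2f} bits")
```

A trained model that has learned the dependency would typically assign lower surprisal to the post-verbal material in the first sentence than in the second; comparing surprisal across such minimal pairs is the core of the paradigm described in the abstract.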

-----

About CompLang:
CompLang is a student-run discussion group on language and computation. The aim of the group is to bring together the language community at MIT and nearby, learn about each other's research, and foster cross-laboratory collaborations. The broad topic of the meetings is using computational models to study scientific questions about language. We will discuss work from computational linguistics, psycholinguistics, cognitive science, natural language processing and formal linguistics. Please visit http://complang.mit.edu for future events.

Research Areas:
AI & Machine Learning

This event is not part of a series.

Created by Yen-Ling Kuo on Monday, May 13, 2019 at 12:41 PM.