Why natural language is the right vehicle for complex reasoning

Speaker: Greg Durrett, UT Austin

Date: Friday, April 01, 2022

Time: 12:00 PM to 1:00 PM. Note: all times are in the Eastern Time Zone

Public: Yes

Location: 32-D407

Event Type: Seminar

Room Description: 32-D407

Host: Jacob Andreas, MIT

Contact: Jacob Andreas, jda@csail.mit.edu

Relevant URL:

Speaker URL: https://www.cs.utexas.edu/~gdurrett/

Speaker Photo:
None

Reminders to: seminars@csail.mit.edu

Reminder Subject: TALK: Greg Durrett: Why natural language is the right vehicle for complex reasoning

Abstract: Despite their widespread success, end-to-end transformer models consistently fall short in settings involving complex reasoning. Transformers trained on question answering (QA) tasks that seemingly require multiple steps of reasoning often achieve high performance by taking "reasoning shortcuts." We still do not have models that robustly combine many pieces of information in a logically consistent way. In this talk, I argue that a very attractive solution to this problem is within our grasp: doing multi-step reasoning directly in natural language. Text is flexible and expressive, capturing all of the semantics we need to represent intermediate states of a reasoning process. Working with text allows us to interface with knowledge in pre-trained models and in resources like Wikipedia. And finally, text is easily interpretable and auditable by users. I will describe two pieces of work that manipulate language to do inference. First, transformation of question-answer pairs and evidence sentences allows us to seamlessly move between QA and natural language inference (NLI) settings, advancing both calibration of QA models and capabilities of NLI systems. Second, we show how synthetically-constructed data can allow us to build a deduction engine in natural language, which is a powerful building block for putting together natural language "proofs" of claims. Finally, I will discuss our recent work in diverse text generation using lattices and explore how this can further improve generative reasoning.
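As a rough illustration of the QA-to-NLI transformation mentioned in the abstract, the sketch below rewrites a question-answer pair as a declarative hypothesis that an off-the-shelf NLI model could then check against evidence. The template rules here are hypothetical and far simpler than the system presented in the talk.

    # Illustrative sketch only: the template rules below are hypothetical,
    # not the transformation system described in the abstract.

    def qa_to_hypothesis(question: str, answer: str) -> str:
        """Rewrite a (question, answer) pair as a declarative NLI hypothesis."""
        q = question.rstrip("?").strip()
        ql = q.lower()
        if ql.startswith("who "):
            return f"{answer} {q[len('who '):]}."
        if ql.startswith("what is "):
            return f"{q[len('what is '):]} is {answer}."
        return f"{q}: {answer}."  # fallback for question types without a rule

    # The (premise, hypothesis) pair can be scored by any NLI model; the
    # entailment probability then doubles as a QA confidence estimate.
    premise = "Hamlet is a tragedy written by William Shakespeare around 1600."
    print(qa_to_hypothesis("Who wrote Hamlet?", "William Shakespeare"))
    # -> "William Shakespeare wrote Hamlet."

Framing QA this way is what lets a single NLI model both verify answers against evidence and provide calibrated confidence scores, as the abstract describes.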

Bio: Greg Durrett is an assistant professor of Computer Science at UT Austin. His current research focuses on making natural language processing systems more interpretable, controllable, and generalizable, spanning application domains including question answering, textual reasoning, summarization, and information extraction. His work is funded by a 2022 NSF CAREER award and other grants from sponsors including the NSF, DARPA, Salesforce, and Amazon. He completed his Ph.D. at UC Berkeley in 2016, where he was advised by Dan Klein, and from 2016 to 2017 he was a research scientist at Semantic Machines.

Research Areas:
AI & Machine Learning

Impact Areas:

This event is not part of a series.

Created by Jacob Andreas on Monday, March 28, 2022 at 9:04 AM.