Thesis Defense: Self-Supervised Learning for Speech Processing

Speaker: Yu-An Chung, CSAIL MIT

Date: Thursday, April 14, 2022

Time: 3:00 PM to 4:00 PM (Note: all times are in the Eastern Time Zone)

Public: Yes

Location: G449

Event Type: Thesis Defense

Room Description: G449

Host: Jim Glass, CSAIL MIT

Contact: Yu-An Chung, andyyuan@mit.edu

Relevant URL: Zoom: https://mit.zoom.us/j/8129643608?pwd=aTNnR3k4WjFJd3RMUjUwOXhkR1dpQT09 (Password: 727274)

Speaker URL: None

Speaker Photo:
None

Reminders to: andyyuan@mit.edu, seminars@csail.mit.edu, glass@csail.mit.edu

Reminder Subject: TALK: Thesis Defense: Self-Supervised Learning for Speech Processing

Thesis Supervisor(s): James Glass, Jacob Andreas, Phillip Isola

Abstract:
Deep neural networks trained with supervised learning algorithms on large amounts of labeled speech data have achieved remarkable performance on a variety of spoken language processing applications, often defining the state of the art on the corresponding leaderboards. However, the reliance of these systems on large amounts of annotated speech poses a scalability bottleneck for continued advances in state-of-the-art performance, and an even more fundamental barrier to deploying deep neural networks in speech domains where labeled data are intrinsically rare, costly, or time-consuming to collect.

In contrast to annotated speech, untranscribed audio is often much cheaper to accumulate. In this thesis, we explore the use of self-supervised learning, a learning paradigm in which the learning target is generated from the input itself, to leverage such easily scalable resources and improve the performance of spoken language technology. Specifically, we propose two self-supervised algorithms, one based on the idea of “future prediction” and the other based on the idea of “predicting the masked from the unmasked,” for learning contextualized speech representations from unlabeled speech data. We show that our self-supervised algorithms are capable of learning representations that transform high-level properties of speech signals, such as their phonetic content and speaker characteristics, into a more accessible form than traditional acoustic features, and we demonstrate their effectiveness in improving the performance of deep neural networks on a wide range of speech processing tasks. In addition to presenting new learning algorithms, we also provide extensive analysis aimed at understanding the properties of the learned self-supervised representations, as well as identifying the design factors that distinguish one self-supervised model from another.
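For readers unfamiliar with the two objectives mentioned above, the following is a minimal, illustrative sketch of what "future prediction" and "predicting the masked from the unmasked" can look like as training losses over sequences of acoustic frames. It is not the exact formulation from the thesis; the encoder, feature dimension, and hyperparameters here are placeholder assumptions.

```python
import torch
import torch.nn as nn

FEAT_DIM = 80  # e.g., log Mel filterbank coefficients per frame (assumption)

class ContextEncoder(nn.Module):
    """Hypothetical contextual encoder over a sequence of acoustic frames.
    A real masked-prediction model would typically use a bidirectional or
    Transformer encoder; a unidirectional GRU is used here only to keep
    the sketch short."""
    def __init__(self, dim=FEAT_DIM, hidden=512):
        super().__init__()
        self.rnn = nn.GRU(dim, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, dim)

    def forward(self, x):          # x: (batch, time, FEAT_DIM)
        h, _ = self.rnn(x)
        return self.proj(h)        # predicted frames, same shape as x

def future_prediction_loss(model, x, shift=3):
    """'Future prediction': from each frame's representation, predict the
    frame `shift` steps ahead in the same utterance."""
    pred = model(x[:, :-shift])    # predictions computed from a prefix
    target = x[:, shift:]          # the frames `shift` steps later
    return nn.functional.l1_loss(pred, target)

def masked_prediction_loss(model, x, mask_prob=0.15):
    """'Predicting the masked from the unmasked': zero out random frames
    and reconstruct them from the surrounding unmasked context."""
    mask = torch.rand(x.shape[:2], device=x.device) < mask_prob
    corrupted = x.masked_fill(mask.unsqueeze(-1), 0.0)
    pred = model(corrupted)
    return nn.functional.l1_loss(pred[mask], x[mask])

# Usage sketch:
# x = torch.randn(4, 200, FEAT_DIM)           # a batch of feature sequences
# loss = future_prediction_loss(ContextEncoder(), x)
```

In both cases the encoder is trained purely from unlabeled audio, and its intermediate representations can then be reused as input features for downstream speech tasks.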

Research Areas:
AI & Machine Learning

Impact Areas:

This event is not part of a series.

Created by Yu-An Chung on Friday, April 01, 2022 at 2:47 PM.