Ask Your Distribution Shift if Pre-Training is Right for You

Speaker: Benjamin Cohen-Wang, CSAIL MIT

Date: Thursday, February 22, 2024

Time: 5:00 PM to 5:30 PM. Note: all times are in the Eastern Time Zone.

Public: Yes

Event Type: Seminar

Room Description: Room 32-G449 (Patil/Kiva Seminar Room)

Host: Thien Le, CSAIL MIT

Contact: Thien Le, thienle@csail.mit.edu

Reminders to: mitml@mit.edu, lids-seminars@mit.edu, seminars@csail.mit.edu

Reminder Subject: TALK: Ask Your Distribution Shift if Pre-Training is Right for You

Abstract: Pre-training is a widely used approach to develop models that are robust to distribution shifts. However, in practice, its effectiveness varies: fine-tuning a pre-trained model improves robustness significantly in some cases but not at all in others (compared to training from scratch). In this work, we seek to characterize the failure modes that pre-training can and cannot address. In particular, we focus on two possible failure modes of models under distribution shift: poor extrapolation (e.g., they cannot generalize to a different domain) and biases in the training data (e.g., they rely on spurious features). Our study suggests that, as a rule of thumb, pre-training can help mitigate poor extrapolation but not dataset biases. After providing theoretical motivation and empirical evidence for this finding, we explore two of its implications for developing robust models: (1) pre-training and interventions designed to prevent exploiting biases have complementary robustness benefits, and (2) fine-tuning on a (very) small, non-diverse but de-biased dataset can result in significantly more robust models than fine-tuning on a large and diverse but biased dataset.
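
As a rough illustrative sketch (not part of the talk materials), the two setups compared in the abstract can be framed as follows; the architecture (ResNet-50), the ten-class downstream task, and the training hyperparameters are assumptions chosen purely for illustration.

import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10  # assumed number of classes in the downstream task

# Setup 1: start from weights pre-trained on a large, diverse corpus (ImageNet here)
# and fine-tune on the downstream data.
finetuned = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
finetuned.fc = nn.Linear(finetuned.fc.in_features, NUM_CLASSES)

# Setup 2: train the identical architecture from scratch on the downstream data alone.
scratch = models.resnet50(weights=None)
scratch.fc = nn.Linear(scratch.fc.in_features, NUM_CLASSES)

def train(model, loader, epochs=5, lr=1e-3):
    # Identical training loop for both setups, so any robustness gap under
    # distribution shift is attributable to pre-training.
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

Under the abstract's rule of thumb, the fine-tuned model should fare better when the shift stems from poor extrapolation (e.g., a new domain), while neither setup by itself addresses biases in the fine-tuning data; that is where de-biasing the (possibly small) fine-tuning set becomes useful.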

Speaker bio: Ben is a second-year PhD student at MIT, where he is advised by Aleksander Madry. He is interested in developing machine learning models that can be safely deployed, with a focus on robustness to distribution shifts. Lately, he has been working on understanding how to harness large-scale pre-training (e.g., CLIP, GPT) to develop robust task-specific models.

Research Areas:
AI & Machine Learning

Impact Areas:
Big Data

See other events that are part of the ML Tea series.

Created by Thien Le on Tuesday, February 20, 2024 at 12:30 PM.