Explorations in robust optimization of deep networks for adversarial examples: provable defenses, threat models, and overfitting
Eric Wong, Carnegie Mellon University (CMU)
Date: Tuesday, February 18, 2020
Time: 1:00 PM to 2:00 PM (all times are in the Eastern Time Zone)
Location: Seminar Room G575
Event Type: Seminar
Host: Aleksander Madry, MIT
Speaker URL: https://www.cs.cmu.edu/~ericwong/
While deep networks have driven major leaps in raw performance across many applications, they are also known to be quite brittle to targeted data perturbations, so-called adversarial examples, which pose a serious risk in safety- and security-critical applications where reliability and robustness are essential.
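To make the brittleness concrete, here is a minimal sketch of a fast-gradient-sign-style attack on a toy logistic-regression model. All weights and data are made up for illustration; this is not the speaker's code, only a hedged example of how a small, bounded perturbation can raise a model's loss.

```python
import numpy as np

# Toy FGSM-style attack: one gradient-sign step inside an
# L-infinity ball around the input. Hypothetical model and data.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(x, y, w, b):
    # Binary cross-entropy of a single example under the toy model.
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def fgsm(x, y, w, b, eps):
    # Perturb x one step in the sign of the input gradient,
    # staying inside an L-infinity ball of radius eps.
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w          # d(loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w, b = rng.normal(size=4), 0.1
x, y = rng.normal(size=4), 1.0

x_adv = fgsm(x, y, w, b, eps=0.1)
clean, adv = bce_loss(x, y, w, b), bce_loss(x_adv, y, w, b)
# For this linear-in-the-input model, the bounded perturbation
# strictly increases the loss on the example.
```

The same idea scales to deep networks, where the input gradient comes from backpropagation rather than a closed form.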
In this talk, we discuss a number of approaches for mitigating the effect of adversarial examples, each offering a different degree and type of robustness. We first discuss provable defenses, which can guarantee that no adversarial example exists within an L-p-bounded region. Next, we study alternative threat models for adversarial examples, such as the Wasserstein threat model and the union of multiple threat models. Finally, we present some unexpected findings on the robust learning problem, showing that weak adversaries can be sufficient for training and that overfitting is a dominant phenomenon in adversarially robust training.
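As a rough illustration of the certification idea behind provable defenses, the sketch below propagates an L-infinity ball through a single linear layer via interval bounds (one simple flavor of certified defense, not necessarily the method presented in the talk). The weights are invented; the point is the logic: if the true class's worst-case lower logit beats every other class's best-case upper logit, no adversarial example exists in the ball.

```python
import numpy as np

# Interval bound propagation through one linear layer:
# bound every logit reachable from inputs in [x - eps, x + eps].
# Toy weights; illustrative sketch only.

def linear_bounds(W, b, x, eps):
    # Center of the output interval, plus its radius: each output's
    # worst-case deviation is |W| times the input radius.
    center = W @ x + b
    radius = np.abs(W) @ np.full_like(x, eps)
    return center - radius, center + radius

W = np.array([[1.0, -2.0], [0.5, 1.5]])
b = np.array([0.0, -0.2])
x = np.array([0.3, -0.1])

lo, hi = linear_bounds(W, b, x, eps=0.05)
true_class = int(np.argmax(W @ x + b))
# Certified robust iff the true logit's lower bound dominates every
# other logit's upper bound over the whole perturbation ball.
certified = all(lo[true_class] > hi[j]
                for j in range(len(b)) if j != true_class)
```

Tighter relaxations (e.g. convex outer bounds) follow the same template but shrink the gap between the bounds and the true reachable set.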
Created by Aleksander Madry on Thursday, February 13, 2020 at 8:52 AM.