Sample-efficient proper PAC learning with approximate differential privacy

Speaker: Noah Golowich, MIT

Date: Wednesday, March 03, 2021

Time: 4:00 PM to 5:00 PM (all times are in the Eastern Time Zone)

Public: Yes

Location: https://mit.zoom.us/j/97637459251

Event Type: Seminar

Host: Quanquan Liu, MIT

Contact: Quanquan Liu, quanquan@csail.mit.edu

Relevant URL: https://mit.zoom.us/j/97637459251

Speaker URL: https://noahgol.github.io/

Reminders to: theory-seminars@csail.mit.edu, seminars@csail.mit.edu

Reminder Subject: TALK: Sample-efficient proper PAC learning with approximate differential privacy

An exciting recent development in the theory of differentially private machine learning is the connection between private learning and online learning, and in particular the result that a binary hypothesis class is privately learnable if and only if it is online learnable (i.e., has finite Littlestone dimension).

In this talk I will discuss our work strengthening various aspects of the result that online learning implies private learning: first, we show that the sample complexity of properly learning a class of Littlestone dimension d with approximate differential privacy is Õ(d^6), ignoring privacy and accuracy parameters. This result answers a question of Bun et al. (2020) by improving upon their upper bound of 2^O(d) on the sample complexity. Prior to our work, finiteness of the sample complexity for privately learning a class of finite Littlestone dimension was only known for improper private learners, and the fact that our learner is proper answers another question of Bun et al., which was also asked by Bousquet et al. (2020).

Using machinery developed by Bousquet et al., we also show that the sample complexity of sanitizing a binary hypothesis class is at most polynomial in its Littlestone dimension and dual Littlestone dimension. This implies that a class is sanitizable if and only if it has finite Littlestone dimension. An important ingredient of our proofs is a new property of binary hypothesis classes that we call irreducibility, which may be of independent interest.
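To make the Littlestone dimension concrete for readers unfamiliar with it, here is a small illustrative sketch (not part of the talk) that computes it for a finite binary hypothesis class by brute-force recursion over mistake trees. The function name `ldim` and the tuple encoding of hypotheses are conventions chosen here for illustration.

```python
def ldim(H):
    """Littlestone dimension of a finite binary hypothesis class.

    Each hypothesis in H is a tuple of {0,1} labels over a fixed finite
    ordering of the domain (index i corresponds to domain point x_i).
    Recursion: Ldim(H) is the maximum, over domain points x_i that split H,
    of 1 + min(Ldim of the h(x_i)=0 restriction, Ldim of the h(x_i)=1
    restriction), i.e. the depth of the deepest mistake tree shattered by H.
    A class with at most one hypothesis has Ldim 0.
    """
    H = list(H)
    if len(H) <= 1:
        return 0
    best = 0
    for i in range(len(H[0])):
        H0 = [h for h in H if h[i] == 0]
        H1 = [h for h in H if h[i] == 1]
        if H0 and H1:  # x_i splits H, so it can label a mistake-tree node
            best = max(best, 1 + min(ldim(H0), ldim(H1)))
    return best

# Thresholds over the 3-point domain {0,1,2}: h_t(x) = 1 iff x >= t.
thresholds = [tuple(1 if x >= t else 0 for x in range(3)) for t in range(4)]
print(ldim(thresholds))  # -> 2
```

The adversary in the mistake tree effectively binary-searches among the four thresholds, so the tree has depth 2; classes like thresholds over an infinite ordered domain have unbounded Littlestone dimension, which is why the private-learnability characterization above is stated in terms of this quantity.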

Joint work with Badih Ghazi, Ravi Kumar, and Pasin Manurangsi.

Research Areas:
Algorithms & Theory, AI & Machine Learning

Impact Areas:
Big Data, Cybersecurity

See other events that are part of the Algorithms and Complexity Seminar 2021.

Created by Quanquan Liu on Tuesday, March 02, 2021 at 1:07 AM.