Tinkering with Double-Greedy

Speaker: Josh Wang, Google Research

Date: Wednesday, May 22, 2019

Time: 4:00 PM to 5:00 PM

Public: Yes

Location: Conference Room G575

Event Type: Seminar

Host: Govind Ramnarayan, Quanquan Liu, Sitan Chen

Contact: Quanquan Liu, quanquan@csail.mit.edu

Speaker URL: https://sites.google.com/site/joshw0/

Reminders to: theory-seminars@csail.mit.edu, seminars@csail.mit.edu, mitml@mit.edu

Reminder Subject: TALK: Tinkering with Double-Greedy

Submodular functions can be used to model a wide array of important problems from fields such as theoretical computer science, economics, and machine learning. One of the most basic problems in submodular optimization is to maximize a submodular function f. When the function is not necessarily monotone, this problem is known to be NP-hard to approximate better than a factor of 1/2 [Feige, Mirrokni, Vondrak '11; Dobzinski and Vondrak '12]. This lower bound was met by the landmark Double-Greedy algorithm of [Buchbinder, Feldman, Naor, Schwartz '12]. In this talk, I review this embarrassingly simple algorithm and then present two extensions to different settings. The first extension is to online no-regret learning, where we receive a submodular function each round and want to do well compared to the best fixed point in hindsight. The second extension concerns the weaker generalization of submodularity to the continuous setting, where we are not guaranteed that the function is coordinate-wise concave. Both extensions are powered by a game-theoretic view of the Double-Greedy algorithm.
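As background for the talk, the deterministic Double-Greedy algorithm of Buchbinder, Feldman, Naor, and Schwartz can be sketched in a few lines: it maintains two sets X ⊆ Y, and for each element either adds it to X or removes it from Y, depending on which marginal gain is larger. The sketch below (the helper names and the toy cut-function example are illustrative, not from the talk) shows the deterministic 1/3-approximation variant; the 1/2-approximation version randomizes the add/remove choice with probabilities proportional to the positive parts of the two marginals.

```python
def double_greedy(f, ground):
    """Deterministic Double-Greedy for unconstrained (non-monotone)
    submodular maximization; f maps a set to a real value."""
    X, Y = set(), set(ground)
    for e in ground:
        a = f(X | {e}) - f(X)   # marginal gain of adding e to X
        b = f(Y - {e}) - f(Y)   # marginal gain of removing e from Y
        if a >= b:
            X.add(e)            # commit e to the solution
        else:
            Y.discard(e)        # discard e permanently
    return X                    # X == Y at this point

# Toy example: the cut function of a 4-cycle is submodular and
# non-monotone; the maximum cut {0, 2} has value 4.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
cut = lambda S: sum((u in S) != (v in S) for u, v in edges)
best = double_greedy(cut, range(4))
```

On this instance the sketch happens to recover the optimal cut {0, 2}; in general the deterministic variant only guarantees a 1/3 fraction of the optimum.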

Joint work with Tim Roughgarden and Rad Niazadeh.

Research Areas:
Algorithms & Theory, AI & Machine Learning

See other events that are part of the Algorithms & Complexity Seminars 2018-2019.

Created by Quanquan Liu on Thursday, May 09, 2019 at 5:07 PM.