Tinkering with Double-Greedy
Google Research
Date: Wednesday, May 22, 2019
Time: 4:00 PM to 5:00 PM
Location: Conference Room G575
Event Type: Seminar
Hosts: Govind Ramnarayan, Quanquan Liu, Sitan Chen
Contact: Quanquan Liu, email@example.com
Speaker URL: https://sites.google.com/site/joshw0/
TALK: Tinkering with Double-Greedy
Submodular functions can be used to model a wide array of important problems from fields such as theoretical computer science, economics, and machine learning. One of the most basic problems in submodular optimization is to maximize a non-negative submodular function f. When the function is not necessarily monotone, this problem is known to be NP-hard to approximate better than a factor of 1/2 [Feige, Mirrokni, Vondrak '11; Dobzinski and Vondrak '12]. This lower bound is matched by the landmark Double-Greedy algorithm of [Buchbinder, Feldman, Naor, Schwartz '12]. In this talk, I will review this embarrassingly simple algorithm and then present two extensions to different settings. The first extension is to online no-regret learning, where we receive a submodular function each round and want to do well compared to the best fixed point in hindsight. The second extension concerns the weaker generalization of submodularity to the continuous setting, where we are not guaranteed that the function is coordinate-wise concave. Both extensions are powered by a game-theoretic view of the Double-Greedy algorithm.
Joint work with Tim Roughgarden and Rad Niazadeh.
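For readers unfamiliar with the algorithm the abstract refers to, here is a minimal sketch of the randomized Double-Greedy of [Buchbinder, Feldman, Naor, Schwartz '12], applied to a toy cut function (a standard example of a non-monotone submodular function); the function and variable names are illustrative, not from the talk:

```python
import random

def double_greedy(f, ground_set, rng=random.Random(0)):
    """Randomized Double-Greedy sketch (Buchbinder-Feldman-Naor-Schwartz '12).

    Maintains two solutions X <= Y that start at the empty set and the full
    ground set; each element is either added to X or removed from Y, chosen
    randomly in proportion to the (clipped) marginal gains. This achieves a
    1/2-approximation in expectation for non-negative, not necessarily
    monotone submodular f.
    """
    X, Y = set(), set(ground_set)
    for i in ground_set:
        a = f(X | {i}) - f(X)  # marginal gain of adding i to X
        b = f(Y - {i}) - f(Y)  # marginal gain of removing i from Y
        a_pos, b_pos = max(a, 0.0), max(b, 0.0)
        # Add i with probability a_pos / (a_pos + b_pos); add if both are 0.
        if a_pos + b_pos == 0 or rng.random() < a_pos / (a_pos + b_pos):
            X.add(i)
        else:
            Y.discard(i)
    return X  # X == Y once every element has been decided

# Toy non-monotone submodular function: the cut function of a 4-cycle.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
cut = lambda S: sum(1 for u, v in edges if (u in S) != (v in S))
S = double_greedy(cut, range(4))
```

With the fixed seed above, the sketch happens to return the optimal cut {1, 3} of the 4-cycle; in general only the expected 1/2-approximation guarantee holds.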
Algorithms & Theory, AI & Machine Learning
Created by Quanquan Liu on Thursday, May 09, 2019 at 5:07 PM.