BEGIN:VCALENDAR
VERSION:2.0
PRODID:icalendar-ruby
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
DTSTART:20160313T030000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:EDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20161106T010000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:EST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20210928T205138Z
UID:2761fa15-47a9-4cfe-aa32-12a4716ea307
DTSTART;TZID=America/New_York:20160630T150000
DTEND;TZID=America/New_York:20160630T160000
CREATED:20160623T071815Z
DESCRIPTION:Abstract:\n\nWe consider online learning algorithms that guaran
 tee worst-case regret rates in adversarial environments (so they can be de
 ployed safely and will perform robustly)\, yet adapt optimally to favorabl
 e stochastic environments (so they will perform well in a variety of setti
 ngs of practical importance). We quantify the friendliness of stochastic e
 nvironments by means of the well-known Bernstein (a.k.a. generalized Tsyba
 kov margin) condition. For two recent algorithms (Squint for the Hedge set
 ting and MetaGrad for online convex optimization) we show that the particu
 lar form of their data-dependent individual-sequence regret guarantees imp
 lies that they adapt automatically to the Bernstein parameters of the stoc
 hastic environment. We prove that these algorithms attain fast rates in th
 eir respective settings both in expectation and with high probability.\n
LAST-MODIFIED:20160623T071815Z
SUMMARY:Combining Adversarial Guarantees and Stochastic Fast Rates in Onlin
 e Learning
URL:https://calendar.csail.mit.edu/events/173206
END:VEVENT
END:VCALENDAR