Real-Time Robust Planning and Learning Stabilizable Models for Control

Speaker: Sumeet Singh, Stanford University

Date: Wednesday, May 08, 2019

Time: 3:00 PM to 4:00 PM

Public: Yes

Event Type: Seminar

Host: John Leonard

Contact: John J. Leonard, jleonard@csail.mit.edu

Relevant URL: https://web.stanford.edu/~ssingh19/

When it works, model-based Reinforcement Learning (RL) typically offers major improvements in sample efficiency over model-free techniques such as Policy Gradients, which do not explicitly estimate the underlying dynamical system. Yet, all too often, when standard supervised learning is applied to model complex dynamics, the resulting controllers do not perform on par with model-free RL methods in the limit of increasing sample size, due to compounding errors across long time horizons. In this talk, I will present novel algorithmic tools leveraging Lyapunov-based analysis and semi-infinite convex programming to derive a control-theoretic regularizer for dynamics fitting, rooted in the notion of trajectory stabilizability. In the first part of the talk, I will illustrate how to leverage these control-theoretic tools within a unified framework for synthesizing robust trajectory tracking controllers for complex underactuated nonlinear systems with analytical bounded-input-bounded-output disturbance rejection properties. Integrating these controllers within traditional (open-loop) motion planning algorithms allows us to reason about the closed-loop effects of uncertainty from disturbances and to generate certifiably safe trajectories. In the second part of the talk, the control-theoretic tools will be used to devise a semi-supervised algorithm for dynamics learning that yields models jointly balancing regression performance and stabilizability, ultimately producing planned trajectories that are notably easier for the robot to track. I will conclude with a brief discussion of some open questions pertaining to both robust planning and model learning.
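
To give a concrete flavor of the second part, below is a minimal, illustrative sketch (not the speaker's implementation) of dynamics fitting with a stabilizability-flavored regularizer: a standard regression loss on state-derivative data plus a penalty encouraging the symmetric part of the model Jacobian, projected onto directions orthogonal to the input matrix, to be negative definite. The control-affine model structure, the hand-picked features, the identity metric, and the names f_theta, stabilizability_penalty, and mu are all assumptions made here for illustration; the work described in the abstract instead uses semi-infinite convex programming and richer stabilizability conditions.

```python
# Illustrative sketch only: regression loss + a crude stabilizability-style
# regularizer (identity metric), standing in for the richer conditions in the talk.
import jax
import jax.numpy as jnp

n, m = 2, 1                        # state and input dimensions (toy example)
B = jnp.array([[0.0], [1.0]])      # known input matrix (assumption)

def f_theta(theta, x):
    """Linear-in-parameters drift model: f(x) = W @ phi(x), with hand-picked features."""
    W = theta.reshape(n, -1)
    phi = jnp.concatenate([x, jnp.tanh(x)])
    return W @ phi

def regression_loss(theta, X, Xdot, U):
    """Mean squared error of the predicted state derivatives."""
    pred = jax.vmap(lambda x, u: f_theta(theta, x) + B @ u)(X, U)
    return jnp.mean(jnp.sum((pred - Xdot) ** 2, axis=-1))

def stabilizability_penalty(theta, X, margin=0.1):
    """Penalize positive eigenvalues of the symmetric part of df/dx projected
    onto the subspace annihilated by B^T (a simplified surrogate condition)."""
    B_perp = jnp.array([[1.0, 0.0]])  # rows spanning null(B^T) for the toy B above
    def per_point(x):
        J = jax.jacobian(lambda z: f_theta(theta, z))(x)
        S = B_perp @ (J + J.T) @ B_perp.T / 2.0
        lam_max = jnp.max(jnp.linalg.eigvalsh(S))
        return jnp.maximum(lam_max + margin, 0.0) ** 2
    return jnp.mean(jax.vmap(per_point)(X))

def total_loss(theta, X, Xdot, U, mu=1.0):
    """Regression fit traded off against the stabilizability-style penalty."""
    return regression_loss(theta, X, Xdot, U) + mu * stabilizability_penalty(theta, X)

# Toy data from a damped pendulum-like system, then a few gradient steps.
X = jax.random.normal(jax.random.PRNGKey(0), (64, n))
U = jax.random.normal(jax.random.PRNGKey(1), (64, m))
Xdot = jax.vmap(lambda x, u: jnp.array([x[1], -jnp.sin(x[0]) - 0.5 * x[1]]) + B @ u)(X, U)

theta = 0.01 * jax.random.normal(jax.random.PRNGKey(2), (2 * n * n,))
grad_fn = jax.jit(jax.grad(total_loss))
for _ in range(200):
    theta = theta - 1e-2 * grad_fn(theta, X, Xdot, U)
print("training loss:", total_loss(theta, X, Xdot, U))
```

The weight mu controls the trade-off the abstract alludes to: mu = 0 recovers plain supervised dynamics fitting, while larger values bias the learned model toward dynamics that are easier to stabilize along trajectories.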

Sumeet Singh is a Ph.D. candidate in the Autonomous Systems Lab in the Aeronautics and Astronautics Department at Stanford University. He received a B.Eng. in Mechanical Engineering and a Diploma of Music (Performance) from the University of Melbourne in 2012, and an M.Sc. in Aeronautics and Astronautics from Stanford University in 2015. Prior to joining Stanford, Sumeet worked in the Berkeley Micromechanical Analysis and Design lab at the University of California, Berkeley in 2011 and in the Aeromechanics Branch at NASA Ames in 2013. Sumeet's research interests include (1) robust motion planning for constrained nonlinear systems, (2) risk-sensitive inference and decision-making with humans in the loop, and (3) design of verifiable learning architectures for safety-critical applications. Sumeet is the recipient of the Stanford Graduate Fellowship (2013-2016), the most prestigious Stanford fellowship awarded to incoming graduate students, and the Qualcomm Innovation Fellowship (2018).

This event is not part of a series.

Created by John J. Leonard on Tuesday, May 07, 2019 at 1:51 PM.