Vision, Touch and Motion: On the Value of Multimodal Data in Robot Manipulation and How to Leverage it

Speaker: Jeannette Bohg, Dept. of Computer Science, Stanford University

Date: Tuesday, December 11, 2018

Time: 3:00 PM to 4:00 PM

Public: Yes

Location: Seminar Room G882 (Hewlett Room)

Event Type: Seminar

Room Description:

Host: Josh Tenenbaum, BCS/CSAIL, MIT

Contact: Sholei Croom, croom@mit.edu

Relevant URL:

Speaker URL: None

Speaker Photo: None

Reminders to: seminars@csail.mit.edu, robotics-related@mit.edu

Reminder Subject: TALK: Jeannette Bohg - Vision, Touch and Motion: On the Value of Multimodal Data in Robot Manipulation and How to Leverage it

Recent approaches in robotics follow the insight that perception is facilitated by physical interaction with the environment. First, interaction creates a rich sensory signal that would otherwise not be present. Second, knowledge of the regularities in the combined space of sensory data and action parameters facilitates the prediction and interpretation of this signal.

In this talk, I will focus on what this rich sensory signal may consist of and how it can be leveraged for better perception and manipulation. I will start with our recent work that exploits RGB, depth, and motion to perform instance segmentation of an unknown number of simultaneously moving objects. The underlying model estimates dense, per-pixel scene flow, which is then clustered in motion trajectory space. We show that this approach outperforms the state of the art in both scene flow estimation and multi-object segmentation.
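For intuition, the following is a minimal sketch of the second stage of such a pipeline: grouping pixels into object instances by clustering their motion trajectories. This is not the speaker's implementation; the array shapes, the choice of DBSCAN (so the number of moving objects need not be known in advance), and all parameter values are assumptions made for illustration.

    # Illustrative sketch: given dense per-pixel scene flow over several frames,
    # cluster pixels by their motion trajectories to obtain instance labels.
    import numpy as np
    from sklearn.cluster import DBSCAN

    def segment_by_motion(scene_flow, eps=0.05, min_samples=50):
        """Cluster pixels by their motion trajectories.

        scene_flow: array of shape (T, H, W, 3) -- per-pixel 3D flow for T frames.
        Returns an (H, W) label image; -1 marks pixels not assigned to any object.
        """
        T, H, W, _ = scene_flow.shape
        # Each pixel's trajectory is its concatenated flow across time: (H*W, T*3).
        trajectories = scene_flow.transpose(1, 2, 0, 3).reshape(H * W, T * 3)
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(trajectories)
        return labels.reshape(H, W)

    # Toy usage with random flow standing in for a network's dense scene flow estimate.
    if __name__ == "__main__":
        flow = np.random.randn(4, 60, 80, 3).astype(np.float32)
        seg = segment_by_motion(flow)
        print("instance labels found:", np.unique(seg))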

Furthermore, I will present our recent work on fusing vision and touch for contact-rich manipulation tasks. It is non-trivial to manually design a robot controller that combines modalities with such different characteristics. While deep reinforcement learning has shown success in learning control policies for high-dimensional inputs, these algorithms are generally impractical to deploy on real robots due to their sample complexity. We use self-supervision to learn a compact, multimodal representation of visual and haptic sensory inputs, which can then be used to improve the sample efficiency of policy learning. I will present experiments on a peg-insertion task in which the learned policy generalizes over different geometries, configurations, and clearances while remaining robust to external perturbations.
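The sketch below illustrates the general idea of fusing the two modalities into a compact latent that a policy could consume instead of raw pixels and force readings. It is not the speaker's architecture or training objective: the input sizes, layer widths, and the self-supervised signal used here (predicting whether an image and a force reading come from the same time step) are assumptions for the example.

    # Illustrative sketch: separate encoders for vision and haptics, a fused
    # latent z, and a self-supervised head trained without manual labels.
    import torch
    import torch.nn as nn

    class MultimodalEncoder(nn.Module):
        def __init__(self, latent_dim=128):
            super().__init__()
            self.vision = nn.Sequential(            # RGB encoder (3x128x128 assumed)
                nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(32 * 4 * 4, latent_dim),
            )
            self.haptics = nn.Sequential(           # 6-D force/torque encoder
                nn.Linear(6, 64), nn.ReLU(),
                nn.Linear(64, latent_dim),
            )
            self.fuse = nn.Linear(2 * latent_dim, latent_dim)
            self.pair_head = nn.Linear(latent_dim, 1)   # aligned in time or not?

        def forward(self, image, force):
            z = self.fuse(torch.cat([self.vision(image), self.haptics(force)], dim=-1))
            return z, self.pair_head(z)

    # Toy usage: one gradient step on the alignment objective.
    if __name__ == "__main__":
        enc = MultimodalEncoder()
        img, ft = torch.randn(8, 3, 128, 128), torch.randn(8, 6)
        aligned = torch.randint(0, 2, (8, 1)).float()   # 1 = same time step
        z, logit = enc(img, ft)
        loss = nn.functional.binary_cross_entropy_with_logits(logit, aligned)
        loss.backward()
        print("latent shape:", tuple(z.shape), "loss:", float(loss))

A downstream reinforcement learning policy would then take the compact latent z as its observation, which is the sense in which such a representation can improve sample efficiency.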

Bio:
Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University. Her research focuses on perception and learning for autonomous robotic manipulation and grasping. She is specifically interested in developing methods that are goal-directed, real-time, and multi-modal, such that they can provide meaningful feedback for execution and learning. Jeannette was a group leader at the MPI for Intelligent Systems until September 2017. Before joining the Autonomous Motion lab of MPI-IS in January 2012, Jeannette was a PhD student at the Computer Vision and Active Perception lab (CVAP) at KTH in Stockholm. Her thesis on multi-modal scene understanding for robotic grasping was supervised by Prof. Danica Kragic. She studied at Chalmers in Gothenburg and at the Technical University of Dresden, where she received her Master in Art and Technology and her Diploma in Computer Science, respectively.

Research Areas:
AI & Machine Learning, Graphics & Vision, Robotics

Impact Areas:

This event is not part of a series.

Created by Jiajun Wu on Tuesday, November 27, 2018 at 2:17 PM.