THESIS DEFENSE: Representations for Intelligent Navigation in Unfamiliar Environments

Speaker: Gregory J. Stein, MIT, EECS, CSAIL

Date: Friday, February 28, 2020

Time: 3:00 PM to 4:30 PM (all times are in the Eastern Time Zone)

Public: Yes

Location: Seminar Room D463 (Star)

Event Type: Thesis Defense

Contact: Gregory J. Stein, gjstein@mit.edu

In this thesis, we focus primarily on the problem of autonomous navigation in complex, unknown environments. For example, consider an embodied agent tasked with traveling to an unseen goal in minimum time. In general, effective navigation requires that the agent explicitly reason about portions of the environment that it has not yet seen, yet the world is intractably complex, making exhaustive enumeration of all possible environment configurations impossible. Instead, we imbue an agent with the ability to make tractable predictions under this uncertainty by changing the way it represents its surroundings and the actions it uses to define navigation. The way an agent chooses to represent the world around it is fundamental to its ability to interact with it effectively. The work presented in this thesis is centered on the development of new representations that enable embodied agents to better understand the impact of their actions, so that they may plan quickly and intelligently in a dynamic and uncertain world.

This thesis has three primary contributions. First, we introduce Learned Subgoal Planning, a decision-making paradigm that leverages high-level actions to factor the planning task, thereby enabling tractable predictions about unknown space via supervised learning and efficient computation of expected cost. Second, we apply recent progress in image-to-image translation to the task of domain adaptation for image data, allowing an agent to transfer knowledge acquired in simulation to the real world. Finally, we introduce a learned pseudosensor, with an accompanying probabilistic sensor model, that estimates sparse structure in view of the agent from monocular images. By fusing these estimates during exploration, we enable the agent to build maps of unfamiliar environments from monocular images that are suitable for high-level planning with topological constraints.
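
To make the first contribution concrete, the short Python sketch below shows one way an expected-cost computation over a small set of high-level subgoal actions might look, given learned per-subgoal estimates: the probability the goal is reachable through a subgoal, the expected cost if it is, and the cost wasted if it is not. The class, field names, and numbers are illustrative assumptions rather than the thesis implementation; travel between subgoals is collapsed into a single per-subgoal cost, and the brute-force search over orderings is used only for clarity at this small scale.

    from dataclasses import dataclass
    from itertools import permutations

    FAILURE_PENALTY = 1e6  # large finite cost charged if every subgoal proves infeasible


    @dataclass
    class Subgoal:
        """One high-level action: 'navigate toward this boundary of known space'."""
        name: str
        prob_feasible: float     # learned estimate: P(goal reachable via this subgoal)
        success_cost: float      # learned expected cost beyond the subgoal if it pans out
        exploration_cost: float  # learned expected cost wasted if it is a dead end
        travel_cost: float       # known cost of traveling from the agent to the subgoal


    def expected_cost(ordering):
        """Expected cost of trying subgoals in the given order; on failure the
        agent pays the exploration cost and falls back to the next subgoal."""
        if not ordering:
            return FAILURE_PENALTY
        first, rest = ordering[0], ordering[1:]
        cost_if_success = first.success_cost
        cost_if_failure = first.exploration_cost + expected_cost(rest)
        return first.travel_cost + (
            first.prob_feasible * cost_if_success
            + (1 - first.prob_feasible) * cost_if_failure)


    # Illustrative numbers only.
    subgoals = [
        Subgoal("hallway", prob_feasible=0.8, success_cost=12.0,
                exploration_cost=4.0, travel_cost=3.0),
        Subgoal("doorway", prob_feasible=0.4, success_cost=6.0,
                exploration_cost=2.0, travel_cost=5.0),
    ]

    # Brute force over orderings, purely for clarity at this small scale.
    best = min(permutations(subgoals), key=expected_cost)
    print("best ordering:", [s.name for s in best],
          "expected cost:", expected_cost(best))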

Research Areas:
AI & Machine Learning, Robotics

This event is not part of a series.

Created by Gregory J. Stein on Monday, February 24, 2020 at 2:29 PM.