Unsupervised Semantic Perception, Summarization, and Exploration for Robots in Unstructured Environments
Yogesh Girdhar, McGill University
Date: Thursday, January 30, 2014
Time: 11:00 AM to 12:00 PM Note: all times are in the Eastern Time Zone
Host: Nick Roy
TALK: Unsupervised Semantic Perception, Summarization, and Exploration for Robots in Unstructured Environments
This work explores several challenges involved in building robotic exploration and monitoring systems, and I will describe contributions on three fronts:
First, I will describe ROST, a real-time online spatiotemporal topic modeling framework that can be used to perceive the world at a higher level of abstraction, in real time, and with no prior training. If we are given an observation model of the various semantic entities (topics) that compose the world (such as sand, coral, rocks, and fish), then it is easy to describe the current scene in terms of these entities using this model; likewise, if we are given a labeling of the world in terms of these entities, then it is easy to compute the observation model for each individual entity. The challenge comes from doing these two tasks together, unsupervised, and with no prior information about the world.
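The chicken-and-egg alternation described above (labels from a model, a model from labels) can be illustrated with a toy EM loop for a mixture of multinomials over bag-of-words scenes. This is only an illustrative sketch of the general idea, not the ROST algorithm itself, which is a more sophisticated online spatiotemporal model; all names here are hypothetical.

```python
import math
import random

def em_topics(docs, n_topics, vocab_size, iters=50, seed=0):
    """Toy EM for a mixture of multinomials, illustrating the
    alternation in the abstract: given an observation model
    (per-topic word distributions), label the scenes; given the
    labels, re-estimate the observation model.
    Illustrative only -- not the ROST algorithm."""
    rng = random.Random(seed)
    # Random initial observation model: phi[k][w] = P(word w | topic k).
    phi = [[rng.random() + 0.1 for _ in range(vocab_size)]
           for _ in range(n_topics)]
    phi = [[p / sum(row) for p in row] for row in phi]
    labels = []
    for _ in range(iters):
        # "Labeling" step: soft-assign each scene given the current model.
        gammas = []
        for doc in docs:
            logp = [sum(math.log(phi[k][w]) for w in doc)
                    for k in range(n_topics)]
            m = max(logp)
            weights = [math.exp(lp - m) for lp in logp]
            z = sum(weights)
            gammas.append([wt / z for wt in weights])
        # "Model" step: re-estimate word distributions from the labels.
        counts = [[0.01] * vocab_size for _ in range(n_topics)]  # smoothing
        for doc, gamma in zip(docs, gammas):
            for w in doc:
                for k in range(n_topics):
                    counts[k][w] += gamma[k]
        phi = [[c / sum(row) for c in row] for row in counts]
        labels = [max(range(n_topics), key=lambda k: g[k]) for g in gammas]
    return phi, labels

# Usage on toy "scenes" over a 4-word vocabulary:
docs = [[0, 0, 1]] * 3 + [[2, 3, 3]] * 3
phi, labels = em_topics(docs, n_topics=2, vocab_size=4)
```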
Second, I will describe the idea of extremum summaries, which are useful for summarizing observation data from a robot. These summaries aim to capture interesting observations and can be used to inform a remote operator of any surprising observations over a limited communication bandwidth. Although computation of an optimal summary is shown to be NP-hard, the proposed approximate algorithms bound the worst-case behavior of the results while running in polynomial time.
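One classical formalization of picking a small representative set of observations is the NP-hard k-center problem, for which greedy farthest-point selection gives a provable 2-approximation in polynomial time. The sketch below illustrates that flavor of worst-case-bounded approximation; it is not necessarily the paper's exact summary algorithm, and the function names are hypothetical.

```python
def greedy_summary(observations, k, dist):
    """Greedy farthest-point selection: repeatedly add the observation
    farthest from the current summary, so that in the end every
    observation is close to some summary item (the classic
    2-approximation for k-center)."""
    summary = [observations[0]]
    while len(summary) < k:
        # Pick the observation farthest from the current summary.
        far = max(observations,
                  key=lambda o: min(dist(o, s) for s in summary))
        summary.append(far)
    return summary

# Usage: summarize 1-D "observations" with 3 representatives.
points = [0, 1, 10, 11, 20]
summary = greedy_summary(points, 3, lambda a, b: abs(a - b))
```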
Third, I will describe an information-gathering robotic exploration technique that biases the path of the robot towards locations with high information content in topic space. Topic models learned from such paths are much better at distinguishing between different terrains, and correlate well with hand-labeled data when compared with space-filling or random paths. I will present empirical results and give a video demonstration of the technique on the Aqua underwater robot.
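A minimal way to bias a path toward high information content in topic space is to score candidate waypoints by the entropy of their predicted topic distribution and move toward the highest-scoring one. This is a sketch of the general idea under that assumption, not the talk's actual planner; the candidate format is hypothetical.

```python
import math

def topic_entropy(p):
    """Shannon entropy (nats) of a topic distribution -- a simple
    proxy for information content in topic space."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def next_waypoint(candidates):
    """Pick the candidate location whose predicted topic distribution
    has the highest entropy. Each candidate is a (name, topic_dist)
    pair; this greedy rule stands in for a full exploration planner."""
    return max(candidates, key=lambda c: topic_entropy(c[1]))

# Usage: location "B" looks more informative than "A".
candidates = [("A", [0.9, 0.1]), ("B", [0.5, 0.5])]
goal = next_waypoint(candidates)
```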
Yogesh Girdhar received his B.S. and M.S. degrees in Computer Science from Rensselaer Polytechnic Institute, Troy, NY, where he received the Paul A. McLoin Prize for the most outstanding academic achievement in Computer Science. He is currently finishing his PhD in the School of Computer Science at McGill University, Montreal, Canada. Yogesh's current research interest is in building robots that can assist in the exploration of unknown environments.
Created by Nick Roy at Friday, January 24, 2014 at 2:52 PM.