Bayesian RL for Multiagent Systems under State Uncertainty
University of Amsterdam
Date: Wednesday, April 02, 2014
Time: 3:00 PM to 4:00 PM (all times are in the Eastern Time Zone)
Location: D463 (Star)
Host: Leslie Kaelbling
Contact: Christopher Amato, email@example.com
Speaker URL: None
Sequential decision making in multiagent systems (MASs) is a challenging problem, especially when the agents are uncertain about the true state of the environment. The problem becomes even more complex when the agents lack an accurate model of the environment and/or of the other agents they are interacting with. In such cases, the agents must learn during execution. While the field of multiagent reinforcement learning (MARL) focuses on learning in MASs, few approaches address settings with state uncertainty, and even fewer offer principled methods for balancing the exploitation of learned knowledge against exploratory actions that gain new information.
In this talk I will cover two approaches to Bayesian MARL for settings with state uncertainty that aim to fill this gap by transforming the learning problem into a planning problem. The solution of this planning problem specifies behavior that optimally trades off exploration and exploitation. I will discuss a novel scalable approach based on sample-based planning and factored value functions that exploits structure present in many multiagent settings. I will also briefly describe a model from the perspective of a single agent that has uncertainty about both the environment and the behavior of the agents it must interact with. Our results show that we can provide high-quality solutions to these realistic problems even with a large amount of initial model uncertainty.
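To give a flavor of the idea (this is an illustrative sketch, not the speaker's method or models), the transformation of learning into planning can be seen in a miniature single-agent case: a two-armed Bernoulli bandit where the agent maintains a Beta posterior over each arm's success probability and plans directly over belief states. The resulting policy optimally trades off exploration and exploitation for the given prior and horizon; all names and parameters below are hypothetical.

```python
# Illustrative sketch: Bayesian RL as planning over belief states, for a
# 2-armed Bernoulli bandit. The belief is a Beta(a, b) posterior per arm;
# exact value iteration over reachable beliefs yields the Bayes-optimal
# exploration/exploitation trade-off. (Toy example, not the talk's models.)
from functools import lru_cache


@lru_cache(maxsize=None)
def value(belief, t):
    """Max expected reward over t remaining pulls, from Beta-count belief.

    `belief` is a tuple of (successes, failures) pseudo-count pairs, one
    pair per arm, so it is hashable and cacheable.
    """
    if t == 0:
        return 0.0
    return max(q_value(belief, arm, t) for arm in range(len(belief)))


def q_value(belief, arm, t):
    """Expected reward of pulling `arm` now, then acting optimally."""
    a, b = belief[arm]
    p = a / (a + b)  # posterior mean of the arm's success probability
    # Each outcome updates the pulled arm's counts; recurse on both branches.
    succ = tuple((a + 1, b) if i == arm else ab for i, ab in enumerate(belief))
    fail = tuple((a, b + 1) if i == arm else ab for i, ab in enumerate(belief))
    return p * (1.0 + value(succ, t - 1)) + (1 - p) * value(fail, t - 1)


def bayes_optimal_arm(belief, t):
    """Arm with the highest belief-space Q-value (ties break to lower index)."""
    return max(range(len(belief)), key=lambda arm: q_value(belief, arm, t))


if __name__ == "__main__":
    # Arm 0 is well known with mean 0.6; arm 1 is uncertain with mean 0.5.
    # Planning over beliefs weighs the value of information about arm 1.
    belief = ((6, 4), (1, 1))
    print(bayes_optimal_arm(belief, 10))
```

The point of the sketch is that exploration is not bolted on with a heuristic bonus: it emerges from planning in the belief space itself. The talk's contribution, as described above, is making this kind of Bayes-adaptive planning tractable in multiagent settings via sampling and factored value functions.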
Created by Christopher Amato on Tuesday, April 01, 2014 at 9:50 AM.