Modeling Human Communication Dynamics
Speaker:
Louis-Philippe Morency
Date: Monday, November 18, 2013
Time: 4:00 PM to 5:00 PM (all times are Eastern Time Zone)
Refreshments: 3:45 PM
Public: Yes
Location: 32-D463
Event Type:
Room Description:
Host: Randall Davis
Contact: Nira Manokharan, 617-253-5977, nira@csail.mit.edu
Speaker URL: None
Speaker Photo:
None
Reminders to:
seminars@csail.mit.edu, csail-related@lists.csail.mit.edu
Reminder Subject:
TALK: Modeling Human Communication Dynamics
Abstract:
Human face-to-face communication is a little like a dance, in that participants continuously adjust their behaviors based on verbal and nonverbal cues from the social context. Today's computers and interactive devices still lack many of the human-like abilities needed to hold fluid and natural interactions. Leveraging recent advances in machine learning, audio-visual signal processing and computational linguistics, my research focuses on creating human-computer interaction (HCI) technologies able to analyze, recognize and predict subtle human communicative behaviors in a social context. I formalize this new research endeavor with a Human Communication Dynamics framework, addressing four key computational challenges: behavioral dynamics, multimodal dynamics, interpersonal dynamics and societal dynamics. Central to this research effort is the introduction of new probabilistic models able to learn the temporal and fine-grained latent dependencies across behaviors, modalities and interlocutors. In this talk, I will present some of our recent achievements in modeling multiple aspects of human communication dynamics, motivated by applications in healthcare (depression, PTSD, suicide, autism), education (learning analytics), business (negotiation, interpersonal skills) and social multimedia (opinion mining, social influence).
Bio:
Louis-Philippe Morency is a Research Assistant Professor in the Department of Computer Science at the University of Southern California (USC) and leads the Multimodal Communication and Machine Learning Laboratory (MultiComp Lab) at the USC Institute for Creative Technologies. He received his Ph.D. and master's degrees from the MIT Computer Science and Artificial Intelligence Laboratory. In 2008, Dr. Morency was selected as one of "AI's 10 to Watch" by IEEE Intelligent Systems. He has received seven best paper awards at ACM- and IEEE-sponsored conferences for his work on context-based gesture recognition, multimodal probabilistic fusion and computational models of human communication dynamics. For the past two years, Dr. Morency has been leading a DARPA-funded multi-institution effort that created SimSensei, an interactive virtual human platform for healthcare decision support, and MultiSense, a multimodal perception library designed to objectively quantify behavioral indicators of psychological distress.
Research Areas:
Impact Areas:
Created by Niranjala Manokharan on Tuesday, October 29, 2013 at 8:39 AM.