Real-time, Interactive Machine Learning for Music Composition and Performance

Speaker: Rebecca Fiebrink, Princeton

Date: Friday, November 04, 2011

Time: 1:00 PM to 2:00 PM

Refreshments: 2:45 PM

Public: Yes

Location: Patil/Kiva Seminar Room (G449)

Host: Rob Miller, MIT CSAIL

Contact: Katrina Panovich, 630-853-8164, kp@mit.edu

Relevant URL: http://groups.csail.mit.edu/uid/seminar.shtml

Reminders to: seminars@csail.mit.edu, hci-seminar@csail.mit.edu, chi-labs@csail.mit.edu, msgs@media.mit.edu

Reminder Subject: TALK: Real-time, Interactive Machine Learning for Music Composition and Performance

Abstract:
Supervised learning offers a useful set of computational tools for many problems in computer music composition and performance. Through the use of training examples, these algorithms offer composers and instrument builders a means to implicitly specify the relationship between low-level, human-generated control signals (such as the outputs of gesturally-manipulated sensor interfaces, or audio captured by a microphone) and the desired computer response (such as a change in the synthesis or structural parameters of dynamically-generated digital audio). However, previously existing software tools have not adequately enabled musicians to employ supervised learning in their work. In my recent research, I have focused on building better tools for these users by supporting more appropriate and comprehensive end-user interactions with the supervised learning process.
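The mapping described above — from low-level control signals to synthesis parameters, specified implicitly through training examples rather than explicit rules — can be sketched with a toy nearest-neighbour regressor. This is a hypothetical illustration, not the Wekinator's actual implementation; the sensor vectors and the "loudness" parameter here are invented for the example.

```python
# Minimal sketch of mapping-by-example: store demonstrated
# (sensor vector, synthesis parameter) pairs, then answer new
# sensor readings with the parameter of the nearest stored example.

def train(examples):
    """Store (sensor_vector, parameter) training pairs."""
    return list(examples)

def predict(model, sensor):
    """Return the parameter of the closest stored sensor vector."""
    def sq_dist(pair):
        vec, _ = pair
        return sum((a - b) ** 2 for a, b in zip(vec, sensor))
    return min(model, key=sq_dist)[1]

# Two demonstrated gestures: a "low" pose mapped to silence (0.0)
# and a "high" pose mapped to full loudness (1.0).
model = train([((0.1, 0.2), 0.0), ((0.9, 0.8), 1.0)])
print(predict(model, (0.85, 0.9)))  # closest to the second example -> 1.0
```

In a real system the predicted parameter would drive a synthesis engine in real time, and the model would typically be a more capable learner (e.g. a neural network or k-nearest neighbours with k > 1), but the interaction loop — demonstrate examples, train, evaluate by playing — is the same.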

In this talk, I will provide a brief introduction to interactive computer music and the use of supervised learning in this field. I will show a live musical demo of the software that I have created for interactively applying standard supervised learning algorithms to music and other real-time problem domains. This software, called the Wekinator, supports a hands-on approach to generating training examples by real-time demonstration, as well as interactive, real-time evaluation of the trained models.

In the rest of the talk, I will present my research and collaborations with composers and students applying the Wekinator to their work. This has included a participatory design process with practicing composers, pedagogical use with undergraduate students building interactive music systems, the design of a gesture recognition system for a sensor-augmented cello bow, and case studies with composers who have used the Wekinator in publicly-performed musical works. I will discuss some highlights of my findings, such as how interactions with the Wekinator supported users in accomplishing their goals, how the Wekinator "trained" its users to become better machine learning practitioners and to become more aware of their own actions, and how interactive supervised learning functioned as a tool for supporting creativity and an embodied approach to design.

Research Areas:

Impact Areas:

See other events that are part of the HCI Seminar Series 2011/2012.

Created by Linda L. Julien on Wednesday, June 19, 2013 at 6:24 AM.