Efficient Deep Learning: From Theory to Practice
Speaker: Lucas Liebenwein, MIT CSAIL
Date: Monday, August 23, 2021
Time: 3:00 PM to 4:30 PM Note: all times are in the Eastern Time Zone
Location: https://mit.zoom.us/j/98186493200?pwd=Z0hTZVMyWWFVUTl3ckd2cHpHRFg5Zz09 or 32-G449 (Patil/Kiva)
Event Type: Thesis Defense
Host: Daniela Rus, MIT CSAIL
Contact: Lucas Liebenwein, email@example.com
Speaker URL: http://www.mit.edu/~lucasl/
TALK: Efficient Deep Learning: From Theory to Practice (Thesis Defense, Lucas Liebenwein)
Modern machine learning often relies on deep neural networks that are prohibitively expensive in terms of their memory and computational footprint. This in turn significantly limits the range of potential applications wherever we face non-negligible resource constraints, e.g., real-time data processing, embedded devices, and robotics. In this thesis, we develop theoretically-grounded algorithms to reduce the size and inference cost of modern large-scale neural networks. By taking a theoretical approach from first principles, we intend to understand and analytically describe the performance-size trade-offs of modern deep networks, i.e., their generalization properties, and to leverage such insights to devise practical algorithms for obtaining efficient neural network models via pruning or compression. Beyond theoretical aspects and the inference-time efficiency of neural networks, we study how compression can yield novel insights into the design and training of neural networks. We investigate the practical aspects of the generalization properties of pruned neural networks beyond simple metrics such as test accuracy. We then show how in certain applications pruned neural networks can improve the training and hence generalization of networks.
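As a rough illustration of what pruning means in this context (this is a generic unstructured magnitude-pruning sketch, not the specific algorithms developed in the thesis), one can zero out the smallest-magnitude fraction of a weight tensor; the `magnitude_prune` helper below is a hypothetical example:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the `sparsity` fraction of weights with smallest magnitude.

    A minimal, generic sketch of unstructured magnitude pruning; real
    pruning pipelines typically also fine-tune the network afterwards.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune half of a small weight matrix
W = np.array([[0.1, -2.0, 0.3],
              [1.5, -0.05, 0.8]])
W_pruned = magnitude_prune(W, 0.5)  # keeps -2.0, 1.5, 0.8; zeros the rest
```

Theoretically-grounded approaches, as pursued in the thesis, replace such a purely heuristic magnitude criterion with importance scores that come with provable guarantees on the resulting accuracy-size trade-off.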
Thesis supervisor: Daniela Rus
Thesis committee: Michael Carbin, Song Han
Algorithms & Theory, AI & Machine Learning
Created by Lucas Liebenwein on Friday, July 23, 2021 at 11:39 AM.