EECS Special Seminar: Towards Deeper Understandings of Deep Learning
Speaker: Yuanzhi Li, Stanford University
Date: Monday, February 11, 2019
Time: 4:00 PM to 5:00 PM
Location: 32-G449 Patil/Kiva
Event Type: Seminar
Host: Prof. Tommi Jaakkola, MIT-CSAIL
Contact: Mary McDavitt, 617-253-9620, email@example.com
Speaker URL: None
TALK: Towards Deeper Understandings of Deep Learning
Learning through highly complicated and non-convex systems plays an important role in machine learning. Recently, a vast body of empirical work has demonstrated the success of these methods, especially in deep learning. However, the formal study of the principles behind them is much less developed.
This talk will cover a few recent results towards developing such principles. First, we focus on the principle of "over-parameterization". We show that for neural networks such as CNNs, ResNets, and RNNs, given sufficient over-parameterization, algorithms such as stochastic gradient descent (SGD) provably find a global optimum on the training data set. Moreover, the solution also generalizes to the test data set as long as the training labels are realizable by certain teacher networks.
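The over-parameterization phenomenon can be illustrated in a much simpler setting than the neural networks the talk studies. The following is a hypothetical toy sketch, not the speaker's construction: a linear model with far more random features than training samples, where plain gradient descent drives the training loss to (near) zero. All names and constants here are illustrative choices.

```python
import numpy as np

# Toy illustration (assumption: linear random-features model, not the
# neural networks from the talk). With p parameters >> n samples, the
# model can interpolate the training data, and gradient descent on the
# squared loss provably converges to a global optimum (zero loss).
rng = np.random.default_rng(0)
n, p = 20, 200                      # heavily over-parameterized: p >> n
X = rng.standard_normal((n, p)) / np.sqrt(p)
y = rng.standard_normal(n)

w = np.zeros(p)
lr = 1.0
for _ in range(2000):
    residual = X @ w - y            # prediction error on the training set
    w -= lr * X.T @ residual / n    # gradient step on mean squared loss

train_loss = np.mean((X @ w - y) ** 2)
print(f"training loss after GD: {train_loss:.2e}")
```

In this regime the loss is convex, so convergence is unsurprising; the results described in the talk concern the much harder non-convex case, where over-parameterization plays an analogous role.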
The second result will cover the principle of "being noisy". We show that, for certain data sets, the neural network found by SGD with a large learning rate (i.e., step size) at the beginning, followed by a learning-rate decay, generalizes better than the one found by SGD with a small learning rate, even when both reach the same training loss.
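The "large learning rate, then decay" regime described above corresponds to a standard step-decay schedule. A minimal sketch, assuming a single decay point (the function name and constants are illustrative, not from the talk):

```python
def step_decay_lr(step, base_lr=0.1, decay_factor=0.1, decay_at=1000):
    """Hold a large learning rate for the first `decay_at` steps,
    then shrink it by `decay_factor` (hypothetical schedule)."""
    return base_lr if step < decay_at else base_lr * decay_factor

# Early training uses the large, "noisy" step size; later training the
# small one. The talk contrasts this with running the small learning
# rate from the start.
print(step_decay_lr(0))      # large phase
print(step_decay_lr(1500))   # decayed phase
```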
Bio: Yuanzhi Li is a postdoctoral researcher in the Computer Science Department at Stanford University. Previously, he obtained his Ph.D. at Princeton (2014-2018), advised by Sanjeev Arora. His research interests include topics in deep learning, non-convex optimization, algorithms, and online learning.
Created by Mary McDavitt at Monday, February 04, 2019 at 11:27 AM.