IFT 6169: Theoretical principles for deep learning

THIS CLASS IS NOT OFFERED IN WINTER 2024

Description

Research in deep learning produces state-of-the-art results on a number of machine learning tasks. Most of those advances are driven by intuition and massive exploration through trial and error. As a result, theory is currently lagging behind practice. The ML community does not fully understand why the best methods work.

A symptom of this lack of understanding is that deep learning methods largely lack guarantees and interpretability, two necessary properties for mission-critical applications. More importantly, a solid theoretical foundation can guide the design of a new generation of efficient methods, without the need for blind trial-and-error exploration.

In this class we will go over a number of recent publications that attempt to shed light on these questions. Before discussing the new results in each paper, we will first introduce the necessary fundamental tools from optimization, statistics, information theory and statistical mechanics. The purpose of this class is to get students engaged with new research in the area. To that end, the majority of the credit will be given for a class project report and presentation on a relevant topic.

Prerequisites: This is a graduate class for students who want to engage in theory-driven deep learning research. We will introduce the necessary theoretical tools along the way.

People

Instructor: Ioannis Mitliagkas

TAs: Jose Gallego, Motahareh Sohrabi

Class info

Winter 2023 semester:

Location (Hybrid):

First class: Thursday, January 12th, 2023.

Office hours: 11:15am-12:00pm on Thursdays, right after class.

Registering and auditing

If you want to audit the class, please fill out this form. You will get access to the course material, but you will not be able to submit work or be graded.

Communication

We will use the class’s Studium to make announcements. We will use a Slack workspace for discussion. Please keep an eye out for an invitation to the Slack. We will use Gradescope for quizzes and report submissions. If you have something personal/sensitive to discuss, feel free to email me or the TAs. Starting your email subject with ‘IFT6169:’ will ensure that your email is not miscategorized.

Evaluation

Use this LaTeX template for scribing.

Tentative topics (to be updated as we go along)

Extensive class bibliography

You can find a large number of (mostly recent) research papers related to the class’s objectives here. You can use them for your in-class paper presentations and projects.

Schedule

For the first half of the semester we will be closely following the previous iteration of the class.

January 12th Class introduction [slides, old quiz]



Crash course in optimization

January 18th Basics of convex analysis and gradient descent [scribed notes]

Reading:

Convex analysis basics from ‘Convex Optimization’ by Boyd and Vandenberghe ([5] under References):

Convergence proofs: from Chapter 3 of [1] (‘Convex Optimization…’ by S.Bubeck under References)

January 19th [continuation of previous lecture]
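For concreteness, here is a minimal Python sketch of the basic gradient descent update on a toy quadratic. The objective, step size and iteration count are illustrative choices only and are not taken from the lecture notes.

```python
import numpy as np

# Toy quadratic f(x) = 0.5 x^T A x - b^T x with a symmetric positive definite A.
# All constants below are illustrative choices, not values from the course.
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, -2.0])

def grad(x):
    # Gradient of the quadratic: A x - b.
    return A @ x - b

x = np.zeros(2)
eta = 0.1                      # conservative constant step size
for _ in range(200):
    x = x - eta * grad(x)      # gradient descent update

print("gradient descent iterate:", x)
print("exact minimizer:         ", np.linalg.solve(A, b))
```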

January 25th The different rates of gradient descent: from Lipschitz to strongly convex [scribed notes]

Reading:

Convergence proofs from Chapter 3 of [1] (‘Convex Optimization…’ by S.Bubeck under References)
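For reference, the three gradient descent rates covered in this part of the course can be summarized as below; the statements follow the standard textbook form (Chapter 3 of [1]) and the exact constants may differ slightly from the scribed notes.

```latex
% Standard gradient-descent rates (textbook form, cf. Chapter 3 of [1]).
% L-Lipschitz convex f, projected subgradient, step size \eta = R/(L\sqrt{T}):
\[ f(\bar{x}_T) - f(x^*) \;\le\; \frac{RL}{\sqrt{T}} \]
% \beta-smooth convex f, step size \eta = 1/\beta:
\[ f(x_T) - f(x^*) \;\le\; \frac{2\beta\,\|x_1 - x^*\|^2}{T - 1} \]
% \beta-smooth, \alpha-strongly convex f, \eta = 1/\beta, \kappa = \beta/\alpha:
\[ \|x_{T+1} - x^*\|^2 \;\le\; e^{-T/\kappa}\,\|x_1 - x^*\|^2 \]
```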

January 26th Black box models and lower bounds [scribed notes]

Reading:

February 1st Accelerated methods: Polyak’s momentum (the heavy ball method) [scribed notes] [slides]

Reading:
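As a concrete illustration of Polyak's momentum, here is a minimal Python sketch of the heavy-ball update on a toy quadratic; the step size and momentum coefficient are illustrative values, not the tuned constants derived in class.

```python
import numpy as np

# Heavy-ball (Polyak momentum) update on the toy quadratic f(x) = 0.5 x^T A x - b^T x.
# As in the parametrization noted under reference 8, the learning rate multiplies
# only the gradient term. eta and beta below are illustrative values.
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, -2.0])
grad = lambda x: A @ x - b

x = np.zeros(2)
x_prev = x.copy()
eta, beta = 0.1, 0.9
for _ in range(300):
    # x_{t+1} = x_t - eta * grad(x_t) + beta * (x_t - x_{t-1})
    x, x_prev = x - eta * grad(x) + beta * (x - x_prev), x

print("heavy-ball iterate:", x)
print("exact minimizer:   ", np.linalg.solve(A, b))
```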

February 2nd Nesterov’s Accelerated Gradient, Stochastic gradient descent [scribed notes]

Reading:
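Below is a minimal sketch of Nesterov's accelerated gradient in its constant-momentum form for smooth, strongly convex objectives, with momentum coefficient (sqrt(kappa)-1)/(sqrt(kappa)+1) as in [6]; the toy problem instance is an arbitrary choice.

```python
import numpy as np

# Nesterov's accelerated gradient (constant-momentum form for strongly convex
# objectives, cf. [6]) on a toy quadratic; the problem instance is illustrative.
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, -2.0])
grad = lambda x: A @ x - b

eigs = np.linalg.eigvalsh(A)
L, mu = eigs.max(), eigs.min()              # smoothness / strong convexity constants
beta = (np.sqrt(L / mu) - 1) / (np.sqrt(L / mu) + 1)

x = np.zeros(2)
y = np.zeros(2)
for _ in range(100):
    x_next = y - (1.0 / L) * grad(y)        # gradient step at the look-ahead point
    y = x_next + beta * (x_next - x)        # extrapolation (momentum) step
    x = x_next

print("NAG iterate:    ", x)
print("exact minimizer:", np.linalg.solve(A, b))
```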


Crash course in statistical learning theory

February 8th Stochastic gradient descent [scribed notes]
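For concreteness, here is a minimal sketch of mini-batch stochastic gradient descent on a least-squares problem; the synthetic data, batch size and constant step size are illustrative choices.

```python
import numpy as np

# Mini-batch SGD on least squares: min_w (1/2n) ||X w - y||^2.
# Data, batch size and step size are illustrative choices.
rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)
batch, eta = 32, 0.01
for _ in range(2000):
    idx = rng.integers(0, n, size=batch)              # sample a mini-batch
    g = X[idx].T @ (X[idx] @ w - y[idx]) / batch      # stochastic gradient estimate
    w -= eta * g                                      # SGD step

print("distance to w_true:", np.linalg.norm(w - w_true))
```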

February 9th Elements of statistical learning theory [scribed notes]

Recommended reading: Section 2 (if you need the intro) and Section 5 of [4].

Required reading: Sections 3, 4 and 6 of [4].

February 15th Elements of statistical learning theory [continuation of previous lecture]
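As a reference point for this part of the course, the basic uniform-convergence bound for a finite hypothesis class (Hoeffding's inequality plus a union bound, as in [4]) reads as follows; the constants are the usual textbook ones.

```latex
% Finite hypothesis class \mathcal{H}, loss bounded in [0,1], i.i.d. sample S of size m.
% With probability at least 1 - \delta, simultaneously for all h \in \mathcal{H}:
\[
  L_{\mathcal{D}}(h) \;\le\; L_S(h) + \sqrt{\frac{\ln|\mathcal{H}| + \ln(2/\delta)}{2m}} .
\]
```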

February 16th PAC-Bayes bounds [scribed notes]

Required reading: [12]

Recommended reading: Section 31 of [4]

Recommended reading (harder): Section 6 of [2]
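For reference, McAllester-style PAC-Bayes bounds discussed in this lecture have the following general shape; the exact logarithmic term varies across versions (see [12]), so treat the constants as indicative.

```latex
% PAC-Bayes (McAllester-style) bound: P is a prior fixed before seeing the data,
% Q is any posterior, losses lie in [0,1], and m is the sample size.
% With probability at least 1 - \delta over the sample, simultaneously for all Q:
\[
  \mathbb{E}_{h \sim Q}\!\left[ L_{\mathcal{D}}(h) \right]
  \;\le\;
  \mathbb{E}_{h \sim Q}\!\left[ L_S(h) \right]
  + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln(m/\delta)}{2(m-1)}} .
\]
```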

February 22nd Probably Approximately Correct Constrained Learning

Reading:

February 23rd MIDTERM EXAM


Winter break

March 1st Winter break (no class)

March 2nd Winter break (no class)



March 8th Stability and generalization [scribed notes]

Required reading:

Optional reading: Proofs that are not covered in the scribed notes.
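The central statement of this lecture can be summarized as follows (uniform stability implies generalization in expectation, as in [13] and [14]); only the in-expectation version is shown here.

```latex
% If a (possibly randomized) algorithm A is \epsilon-uniformly stable, then its
% expected generalization gap is controlled by \epsilon (cf. [13], [14]):
\[
  \Big|\, \mathbb{E}_{S}\!\left[ L_{\mathcal{D}}(A(S)) - L_S(A(S)) \right] \Big| \;\le\; \epsilon .
\]
```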


Seminar part of class

March 9th Weighted Sums of Random Kitchen Sinks: Replacing minimization with randomization in learning [scribed notes]
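To make the idea of [23] concrete, here is a minimal sketch of random Fourier features followed by a linear fit of the output weights; the bandwidth, feature count and ridge parameter are illustrative choices.

```python
import numpy as np

# "Random kitchen sinks" sketch (cf. [23]): draw random Fourier features once,
# then fit only the linear output weights (here by ridge regression).
# Bandwidth, number of features D and regularization lam are illustrative.
rng = np.random.default_rng(0)
n, d, D = 500, 2, 200
X = rng.uniform(-3, 3, size=(n, d))
y = np.sin(X[:, 0]) * np.cos(X[:, 1])              # toy regression target

W = rng.normal(size=(d, D))                        # random frequencies (kept fixed)
b = rng.uniform(0, 2 * np.pi, size=D)              # random phases (kept fixed)
Phi = np.sqrt(2.0 / D) * np.cos(X @ W + b)         # random feature map

lam = 1e-3
alpha = np.linalg.solve(Phi.T @ Phi + lam * np.eye(D), Phi.T @ y)
print("training MSE:", np.mean((Phi @ alpha - y) ** 2))
```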

March 15th Surprising results on the generalization error of neural networks

March 16th Where is the Bayes risk hiding?

March 22nd Student presentations

March 23rd Student presentations

March 29th Student presentations

March 30th Student presentations

April 5th Student presentations

April 13th Neural Tangent Kernel [new lecture - no scribed notes]

Recommended reading on reproducing kernel Hilbert spaces:
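As a pointer for this lecture, the neural tangent kernel of a network f(x; theta) is the Gram matrix of parameter gradients; the infinite-width statement below is the standard informal summary.

```latex
% Empirical neural tangent kernel of a network f(x; \theta) at parameters \theta:
\[
  \Theta_\theta(x, x') \;=\; \big\langle \nabla_\theta f(x; \theta),\, \nabla_\theta f(x'; \theta) \big\rangle .
\]
% In the infinite-width limit (under the appropriate parametrization), \Theta
% stays essentially constant during training, and gradient-descent training
% behaves like kernel regression with this fixed kernel.
```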

April 19th 1-1 discussions on course projects

April 20th 1-1 discussions on course projects

May 3rd Final poster presentation for projects

Extra material (not scheduled) Determinantal point processes for ML (guest lecture by Jose Gallego)

Resources

  1. Convex Optimization: Algorithms and Complexity, Sebastien Bubeck.
  2. Theory of classification: a survey of some recent advances, Stéphane Boucheron, Olivier Bousquet and Gábor Lugosi
  3. iPython notebook demonstrating basic ideas of gradient descent and stochastic gradient descent, simple and complex models as well as generalization.
  4. Understanding Machine Learning: From Theory to Algorithms, by Shai Shalev-Shwartz and Shai Ben-David.
  5. Convex Optimization, Stephen Boyd and Lieven Vandenberghe.
  6. Nesterov’s Accelerated Gradient Descent for Smooth and Strongly Convex Optimization, blog post by Sebastien Bubeck.
  7. Introductory lectures on convex optimization, Yurii Nesterov.
  8. Why momentum really works, blog post by Gabriel Goh (this blog post uses a slightly different parametrization of the momentum algorithm; the version we discuss in class only applies the learning rate to the gradient.)
  9. YellowFin and the Art of Momentum Tuning, SysML 2019, J. Zhang, I. Mitliagkas.
  10. Large-scale Machine Learning and Optimization (class), Dimitris Papailiopoulos, University of Wisconsin.
  11. Advanced Machine Learning Systems (class), Chris De Sa, Cornell University.
  12. A PAC-Bayesian Tutorial with A Dropout Bound, David McAllester.
  13. Stability and generalization, O. Bousquet, A. Elisseeff.
  14. Train faster, generalize better: Stability of stochastic gradient descent, M. Hardt, B. Recht, Y. Singer.
  15. Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data, Gintare Karolina Dziugaite, Daniel M. Roy
  16. Lecture notes on generative learning algorithms, Andrew Ng
  17. Generative Adversarial Nets, Ian Goodfellow et al.
  18. Wasserstein GAN, Martin Arjovsky, Soumith Chintala, Léon Bottou
  19. Read-through: Wasserstein GAN, Alex Irpan
  20. The Numerics of GANs, Lars Mescheder, Sebastian Nowozin, Andreas Geiger
  21. Optimization Methods for Large-Scale Machine Learning, Léon Bottou, Frank E. Curtis, Jorge Nocedal
  22. Accelerating Stochastic Gradient Descent using Predictive Variance Reduction, Rie Johnson, Tong Zhang
  23. Weighted Sums of Random Kitchen Sinks: Replacing minimization with randomization in learning, Ali Rahimi, Ben Recht
  24. PacGAN: The power of two samples in generative adversarial networks, Zinan Lin, Ashish Khetan, Giulia Fanti, Sewoong Oh
  25. Stochastic First- and Zeroth-order Methods for Nonconvex Stochastic Programming, Saeed Ghadimi, Guanghui Lan
  26. Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition, Hamed Karimi, Julie Nutini, Mark Schmidt
  27. Escaping From Saddle Points — Online Stochastic Gradient for Tensor Decomposition, Rong Ge, Furong Huang, Chi Jin, Yang Yuan
  28. How to Escape Saddle Points Efficiently, Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M. Kakade, Michael I. Jordan
  29. Random matrices and complexity of spin glasses, Antonio Auffinger, Gérard Ben Arous, and Jiří Černý
  30. The Loss Surfaces of Multilayer Networks, Anna Choromanska, Mikael Henaff, Michael Mathieu, Gérard Ben Arous, Yann LeCun
  31. Learning Long-Term Dependencies with Gradient Descent is Difficult, Y Bengio, P Simard, P Frasconi
  32. On the difficulty of training Recurrent Neural Networks, Razvan Pascanu, Tomas Mikolov, Yoshua Bengio
  33. Opening the Black Box: Low-Dimensional Dynamics in High-Dimensional Recurrent Neural Networks, David Sussillo and Omri Barak
  34. Reinforcement Learning: An Introduction, Richard S. Sutton and Andrew G. Barto
  35. Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation, Moritz Hardt.
  36. Uniform convergence may be unable to explain generalization in deep learning, Vaishnavh Nagarajan, J. Zico Kolter
  37. Tutorial on Practical Prediction Theory for Classification, John Langford, 2005.
  38. Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data, Gintare Karolina Dziugaite, Daniel M. Roy, 2017.
  39. Non-Vacuous Generalization Bounds at the ImageNet Scale: A PAC-Bayesian Compression Approach, Wenda Zhou, Victor Veitch, Morgane Austern, Ryan P. Adams, Peter Orbanz, 2018.
  40. Representation benefits of deep feedforward networks, Matus Telgarsky, 2015.