IFT 6085: Theoretical principles for deep learning (new class)

NEW! Extensive class bibliography

Class discussion group: please sign up to receive announcements and participate in discussion.


Research in deep learning produces state-of-the-art results on a number of machine learning tasks. Most of those advances are driven by intuition and massive exploration through trial and error. As a result, theory is currently lagging behind practice. The ML community does not fully understand why the best methods work.

A symptom of this lack of understanding is that deep learning methods largely lack guarantees and interpretability, two necessary properties for mission-critical applications. More importantly, a solid theoretical foundation can aid the design of a new generation of efficient methods, without the need for blind trial-and-error exploration.

In this class we will go over a number of recent publications that attempt to shed light onto these questions. Before discussing the new results in each paper we will first introduce the necessary fundamental tools from optimization, statistics, information theory and statistical mechanics. The purpose of this class is to get students engaged with new research in the area. To that end, the majority of credit will be given for a class project report and presentation on a relevant topic.

Prerequisites: This is meant to be an advanced graduate class for students who want to engage in theory-driven deep learning research. We will introduce the theoretical tools necessary, but start with the assumption that students are comfortable with basic probability and linear algebra.


Lecturer: Ioannis Mitliagkas, Office: 3359, André-Aisenstadt

Class info

Winter 2018 semester:

Room: André-Aisenstadt 3195

Office hours: 11:15am-12:15pm on Thursday right after class.


Evaluation: Class project 60%, paper presentation 25%, scribing 10%, class participation 5%.

Use this LaTeX template for scribing.

Tentative topics (to be updated as we go along)


January 10th Class introduction [slides, quiz]

Crash course in optimization

January 11th Basics of convex analysis and gradient descent [scribed notes]


Convex analysis basics from ‘Convex Optimization’ by Boyd and Vandenberghe ([5] under References).

Convergence proofs from Chapter 3 of [1] (‘Convex Optimization…’ by S. Bubeck under References).
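
As a warm-up illustration (not from the readings), here is a minimal gradient descent sketch on a convex quadratic; the objective, step size and iteration count are illustrative.

```python
import numpy as np

def gradient_descent(grad, x0, step_size, num_steps):
    """Fixed-step gradient descent: x_{t+1} = x_t - eta * grad(x_t)."""
    x = np.array(x0, dtype=float)
    trajectory = [x.copy()]
    for _ in range(num_steps):
        x = x - step_size * grad(x)
        trajectory.append(x.copy())
    return trajectory

# Illustrative example: f(x) = 0.5 * x^T A x with A positive definite.
A = np.array([[3.0, 0.0], [0.0, 1.0]])
grad_f = lambda x: A @ x
# For a beta-smooth function, eta = 1/beta guarantees descent; here beta = lambda_max(A) = 3.
iterates = gradient_descent(grad_f, x0=[5.0, -2.0], step_size=1.0 / 3.0, num_steps=50)
print(iterates[-1])  # close to the minimizer at the origin
```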

January 17th The different rates of gradient descent: from Lipschitz to strongly convex [scribed notes]


Convergence proofs from Chapter 3 of [1] (‘Convex Optimization…’ by S. Bubeck under References)
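
As a rough summary of this lecture (see [1, Chapter 3] for precise statements, step-size choices and constants), the standard gradient descent rates after t steps are:

```latex
% Informal summary; constants omitted, see [1, Ch. 3] for exact statements.
% Convex and L-Lipschitz (subgradient method, averaged iterate):
\[ f(\bar{x}_t) - f(x^\star) \;=\; O\!\left(\frac{L\,\|x_0 - x^\star\|}{\sqrt{t}}\right) \]
% Convex and \beta-smooth (step size 1/\beta):
\[ f(x_t) - f(x^\star) \;=\; O\!\left(\frac{\beta\,\|x_0 - x^\star\|^2}{t}\right) \]
% \alpha-strongly convex and \beta-smooth (\kappa = \beta/\alpha): a geometric ("linear") rate,
\[ \|x_t - x^\star\|^2 \;\le\; \big(1 - \Theta(1/\kappa)\big)^{t}\, \|x_0 - x^\star\|^2 . \]
```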

January 18th Black box models and lower bounds [scribed notes]

Reading: [1, Theorem 3.15], [6]
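
For context, the lower bound discussed here says that no black-box first-order method can beat the accelerated rate on smooth convex problems. Informally (see [1, Theorem 3.15] for the exact construction and constants):

```latex
% Informal statement. For any black-box first-order method run for t steps
% (with t small relative to the dimension), there exists a \beta-smooth convex
% function such that
\[
  f(x_t) - f(x^\star) \;\ge\; \Omega\!\left( \frac{\beta\,\|x_0 - x^\star\|^2}{t^2} \right),
\]
% which matches the O(1/t^2) rate achieved by accelerated gradient descent.
```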

January 24th Accelerated methods [scribed notes]

Reading: [6], [7, pages 67-76], [8], [9]
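
As a quick illustration (not part of the readings), here is a minimal sketch of the heavy-ball momentum update in the parametrization we use in class, where the learning rate multiplies only the gradient (see the note under [8]); the quadratic objective and the learning rate and momentum values are illustrative and untuned.

```python
import numpy as np

def heavy_ball(grad, x0, lr, beta, num_steps):
    """Polyak heavy-ball: x_{t+1} = x_t - lr * grad(x_t) + beta * (x_t - x_{t-1}).

    The learning rate is applied to the gradient only, matching the
    parametrization discussed in class (see the note under [8]).
    """
    x_prev = np.array(x0, dtype=float)
    x = x_prev.copy()
    for _ in range(num_steps):
        x_next = x - lr * grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Illustrative quadratic f(x) = 0.5 * x^T A x; lr and beta are not tuned.
A = np.diag([10.0, 1.0])
grad_f = lambda x: A @ x
print(heavy_ball(grad_f, x0=[1.0, 1.0], lr=0.05, beta=0.9, num_steps=300))
```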

January 25th Nesterov’s Accelerated Gradient, Stochastic gradient descent [scribed notes]

Reading: Section 6 until 6.2 of [1], Section 14.3 of [4]
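
As a companion sketch (not from the readings), a minimal stochastic gradient descent loop on a synthetic least-squares problem; the data generation, step-size schedule and constants are illustrative.

```python
import numpy as np

def sgd(stochastic_grad, x0, step_size, num_steps, rng):
    """SGD: x_{t+1} = x_t - eta_t * g_t, where E[g_t] = grad f(x_t)."""
    x = np.array(x0, dtype=float)
    for t in range(num_steps):
        x = x - step_size(t) * stochastic_grad(x, rng)
    return x

# Illustrative least-squares problem: f(x) = (1/2n) * ||A x - b||^2.
rng = np.random.default_rng(0)
n, d = 1000, 5
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.1 * rng.normal(size=n)

def stochastic_grad(x, rng):
    i = rng.integers(n)              # sample one data point uniformly at random
    return A[i] * (A[i] @ x - b[i])  # unbiased estimate of the full gradient

# Decaying step size eta_t = c / (t + 1), as in the classical analysis (c illustrative).
x_hat = sgd(stochastic_grad, x0=np.zeros(d),
            step_size=lambda t: 1.0 / (t + 1), num_steps=5000, rng=rng)
print(np.linalg.norm(x_hat - x_true))  # should be small relative to ||x_true||
```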

Crash course in statistical learning theory

January 31st Elements of statistical learning theory [scribed notes]

Reading: Sections 2 (if you need the intro), 3, 4 and 6 of [4].
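
As a reference point for this crash course, the basic uniform-convergence bound for a finite hypothesis class, obtained from Hoeffding's inequality plus a union bound (stated here for losses in [0, 1]; see [4] for the general treatment):

```latex
% Finite hypothesis class H, n i.i.d. samples, loss bounded in [0, 1].
% With probability at least 1 - \delta over the draw of the sample S,
\[
  \sup_{h \in \mathcal{H}} \big| L_{\mathcal{D}}(h) - L_{S}(h) \big|
  \;\le\; \sqrt{\frac{\log |\mathcal{H}| + \log (2/\delta)}{2n}} .
\]
```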

February 1st PAC-Bayes bounds [scribed notes]

Reading: [12]

Reading (harder): Section 6 of [2]
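
For orientation, one common form of the PAC-Bayes bound; the exact logarithmic terms and constants differ across statements, so see [12] and Section 6 of [2] for precise versions:

```latex
% Prior P fixed before seeing data, posterior Q arbitrary (may depend on the
% sample S), losses in [0, 1], n i.i.d. samples. With probability at least 1 - \delta,
% simultaneously for all Q:
\[
  \mathbb{E}_{h \sim Q}\big[L_{\mathcal{D}}(h)\big]
  \;\le\; \mathbb{E}_{h \sim Q}\big[L_{S}(h)\big]
  \;+\; \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \log (n/\delta)}{2(n-1)}} .
\]
```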

February 7th Stability and generalization [scribed notes]

Reading: [13]

February 8th Stability and generalization: Part II [scribed notes]

Reading: [13,14]
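
Stated informally, the central definition and in-expectation bound behind these two lectures; [13] and [14] differ slightly in the details (removal versus replacement of one example, deterministic versus randomized algorithms), so see the papers for the exact statements:

```latex
% A (possibly randomized) algorithm A is \epsilon-uniformly stable if, for all
% datasets S, S' of size n differing in a single example and all points z,
%   | E[ \ell(A(S), z) ] - E[ \ell(A(S'), z) ] | \le \epsilon .
% Uniform stability controls the expected generalization gap:
\[
  \Big| \, \mathbb{E}_{S}\big[ L_{\mathcal{D}}(A(S)) - L_{S}(A(S)) \big] \, \Big| \;\le\; \epsilon .
\]
```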

Seminar part of class

February 14th Applications of stability and PAC Bayes [scribed notes]

Reading: [14,15]

February 15th NO CLASS - Instructor is travelling

February 21st Student paper presentations A

February 22nd Generative models [scribed notes]

Reading: [16,17]
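
For reference, the minimax objective of the original GAN formulation [17]:

```latex
% Generator G maps noise z ~ p_z to samples; discriminator D outputs the
% probability that its input came from the data distribution p_data.
\[
  \min_{G} \max_{D} \;
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big] .
\]
```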

February 28th Student paper presentations B

March 1st Wasserstein GANs [scribed notes]

Reading: [18,19]
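
For reference, the quantity behind [18]: by Kantorovich-Rubinstein duality, the Wasserstein-1 distance between the data distribution p_r and the model distribution p_g is a supremum over 1-Lipschitz functions, which WGAN approximates with a constrained critic network:

```latex
\[
  W(p_r, p_g) \;=\; \sup_{\|f\|_{L} \le 1} \;
  \mathbb{E}_{x \sim p_r}\big[f(x)\big] - \mathbb{E}_{x \sim p_g}\big[f(x)\big] .
\]
```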

March 7th BREAK No class

March 8th BREAK No class

March 14th Student paper presentations C

March 15th The Numerics of GANs

Reading: [20]
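
Very roughly, and assuming the zero-sum setting for simplicity (the paper treats a more general two-player formulation), [20] views simultaneous gradient updates as a discrete dynamical system and studies the eigenvalues of the Jacobian of the associated vector field near an equilibrium:

```latex
% Zero-sum sketch: the generator (parameters \theta) minimizes f(\theta, \varphi),
% the discriminator (parameters \varphi) maximizes it. Simultaneous gradient
% steps (\theta, \varphi) \mapsto (\theta, \varphi) + \eta\, v(\theta, \varphi) follow the vector field
\[
  v(\theta, \varphi) \;=\;
  \begin{pmatrix} -\nabla_{\theta} f(\theta, \varphi) \\ \;\;\nabla_{\varphi} f(\theta, \varphi) \end{pmatrix},
\]
% and local convergence is governed by the eigenvalues of the Jacobian of v at
% the equilibrium (see [20] for the precise conditions and the proposed remedy).
```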

March 21st TBA

Reading: TBA

March 28th NO CLASS - Instructor is travelling


  1. Convex Optimization: Algorithms and Complexity, Sebastien Bubeck.
  2. Theory of classification: a survey of some recent advances, Stephane Boucheron, Olivier Bousquet and Gabor Lugosi.
  3. iPython notebook demonstrating basic ideas of gradient descent and stochastic gradient descent, simple and complex models, as well as generalization.
  4. Understanding Machine Learning: From Theory to Algorithms, Shai Shalev-Shwartz and Shai Ben-David.
  5. Convex Optimization, Stephen Boyd and Lieven Vandenberghe.
  6. Nesterov’s Accelerated Gradient Descent for Smooth and Strongly Convex Optimization, blog post by Sebastien Bubeck.
  7. Introductory lectures on convex optimization, Yurii Nesterov.
  8. Why momentum really works, blog post by Gabriel Goh. (This blog post uses a slightly different parametrization of the momentum algorithm; the version we discuss in class applies the learning rate only to the gradient.)
  9. YellowFin and the Art of Momentum Tuning, preprint, J. Zhang and I. Mitliagkas.
  10. Large-scale Machine Learning and Optimization (class), Dimitris Papailiopoulos, University of Wisconsin.
  11. Advanced Machine Learning Systems (class), Chris De Sa, Cornell University.
  12. A PAC-Bayesian Tutorial with A Dropout Bound, David McAllester.
  13. Stability and generalization, O. Bousquet and A. Elisseeff.
  14. Train faster, generalize better: Stability of stochastic gradient descent, M. Hardt, B. Recht and Y. Singer.
  15. Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data, Gintare Karolina Dziugaite and Daniel M. Roy.
  16. Lecture notes on generative learning algorithms, Andrew Ng.
  17. Generative Adversarial Nets, Ian Goodfellow et al.
  18. Wasserstein GAN, Martin Arjovsky, Soumith Chintala and Léon Bottou.
  19. Read-through: Wasserstein GAN, Alex Irpan.
  20. The Numerics of GANs, Lars Mescheder, Sebastian Nowozin and Andreas Geiger.