Ioannis Mitliagkas

UdeM
Associate professor, Computer Science, University of Montréal


Mila
Core faculty at Mila

Google Brain
Staff research scientist at Google Brain Montreal


CIFAR
Canada CIFAR AI Chair


MTL MLOpt
Founder, organizer





CV (October 2022)

Email, Scholar, Twitter, LinkedIn
I work on topics in optimization, dynamics and learning, with a focus on modern machine learning. I have done work at the intersection of systems and theory. Some recent topics:
  • Min-max optimization and the dynamics of games
  • Generalization and domain adaptation
  • Optimization for deep learning
  • Statistical learning and inference
See my research contributions for my long-term vision and my recent publications for an idea of what I'm doing now. Most importantly, I supervise a dozen very talented junior researchers.

I co-organize the Smooth Games Optimization and ML workshop series at NeurIPS. The opening remarks video from two years ago gives a nice summary of our motivation for this line of work. For a summary of relevant work in my lab, you can check out this slide deck. In spring 2022, I was invited to participate in the semester on Learning and Games at the Simons Institute in Berkeley, CA.

In Montreal, I work part-time as a staff research scientist at Google Brain. I also co-organize MTL MLOpt, a biweekly meeting of optimization experts from Mila, UdeM, McGill (CS and math), Google Brain, SAIL, FAIR, and MSR. We proudly share our guest speaker videos.

Every fall, I teach ML to 100-200 grad students. Every winter, I teach an advanced research class on deep learning theory. Last summer, I was honored to be invited to teach optimization for ML in Neuromatch Academy's deep learning course.

Before joining the University of Montreal, I was a postdoc with the Departments of Computer Science and Statistics at Stanford University working with Chris Ré and Lester Mackey. I got my PhD at The University of Texas at Austin with Constantine Caramanis and Sriram Vishwanath, where I also worked with Alex Dimakis.

Prospective students: I am looking for particularly strong students for the MSc or PhD. Unfortunately, I may not be able to respond to all emails. Please make sure to go over my recent publications and list of recent projects (below). If you think that we have a strong overlap in interests and you have a strong background in mathematics and computation, please make sure to submit your supervision request by December 1st (the form opens in mid-October) and mention me as one of your faculty of choice.
  


Recent projects

NEW: A curated list of some of my most exciting research projects. For a thorough list of projects grouped by topic, please consult the projects page. Publications are listed below, on this page.



Recent News

  • September 2022: I'm excited to announce that I've started part-time work as a staff research scientist at Google Brain Montreal! Looking forward to deepening my connection to larger-scale problems in industry.
  • September 2022: Kartik Ahuja has finished his postdoc and started working as a research scientist at Meta Research (FAIR) Paris!
  • July 2022: Graduating PhD candidate Alexia Jolicoeur-Martineau started work at Samsung SAIT AI Lab in Montreal!
  • June 2022: As of June 1st, 2022, I was promoted to associate professor with tenure. My sincere thanks to my colleagues and my group for all their hard work.
  • October 2021: Honored to be invited to the semester on Learning and Games at Simons. I will be visiting Berkeley in January-March of 2022.
  • September 2021: Two papers accepted at NeurIPS 2021! [one, two]
  • April 2021: Nicolas Loizou, a postdoc in our group, accepted a tenure-track position at Johns Hopkins University!
  • March 2021: Manuela Girotti, a postdoc in our group, accepted a tenure-track position at St. Mary's University! In the meantime, she will spend time at MSRI, the prestigious mathematical institute in Berkeley, CA.
  • January 2021: Best student paper award at OPT2020 for Charles, Baptiste, and Manuela's paper on the fundamentals of condition numbers! Their paper was accepted for publication at AISTATS 2021.
  • January 2021: Alexia and Rémi's paper, in collaboration with MSR Montréal, has been accepted at ICLR 2021! Preprint available here.
  • September 2020: Paper on evaluating generalization measures accepted at NeurIPS'20.
  • September 2020: Welcome to new PhD students, Ryan D'Orazio and Hiroki Naganuma.
  • August 2020: Kartik Ahuja awarded the IVADO postdoctoral scholarship. Excited to have him join us in January 2021!
  • April 2020: Two papers on differentiable games (one, two) accepted at ICML'20.
  • January 2020: Two papers on efficient methods and tight bounds for differentiable games (one, two) accepted at AISTATS'20.
  • December 2019: Nicolas Loizou was awarded the IVADO postdoctoral scholarship at the prestigious Fellow tier.
  • December 2019: Brady Neal graduates with an MSc. He will continue with us for his PhD.
  • November 2019: Excited to be co-organizing the 2nd iteration of the Smooth Games Optimization and Machine Learning Workshop at NeurIPS'19.
  • November 2019: Reducing the variance in online optimization by transporting past gradients was selected for a spotlight oral presentation at NeurIPS'19.
  • October 2019: Multiple submissions to AISTATS. Preprints on the way...
  • June 2019: At ICML with 3 papers in the main conference and 2 in the Deep Learning Phenomena workshop.
  • May 2019: State-Reification Networks selected for oral presentation at ICML'19.
  • April 2019: Excited to be listed among the most prolific authors of accepted ICML 2019 papers.
  • April 2019: I received the NSERC Discovery grant!
  • April 2019: In Japan for AISTATS.
  • January 2019: h-detach paper accepted at ICLR.
  • December 2018: Nominated in the first cohort of Canada CIFAR AI Chairs!
  • December 2018: Co-organizing Smooth Games Optimization in ML workshop at NeurIPS.
  • December 2018: Negative momentum for improved game dynamics. Paper accepted at AISTATS 2019.
  • December 2018: Full version of the YellowFin manuscript accepted at SysML.
  • September 2018: Excited to be teaching Machine Learning to a class of 180 graduate students at UdeM.
  • February 2018: YellowFin selected for oral presentation at SysML'18.
  • January 2018: Teaching new class! IFT 6085: Theoretical principles for deep learning
  • December 2017: Accelerated power iteration via momentum, paper accepted at AISTATS 2018.
  • November 2017: Talk at Google Brain, Montréal
  • September 2017: Thrilled to be starting work at the University of Montreal and Mila as an assistant professor!
  • August 2017: Visiting my alma mater, UT Austin.
  • August 2017: In Sydney for ICML, presenting work on YellowFin, custom scans for Gibbs sampling, and deep learning for 3D point cloud representation and generation.
  • July 2017: New preprint! Representation Learning and Adversarial Generation of 3D Point Clouds [arxiv].
  • July 2017: New preprint! Accelerated stochastic power iteration [arxiv].
  • June 2017: New preprint! An automatic tuner for the hyperparameters of momentum SGD [arxiv].
  • May 2017: Custom scan sequence paper accepted for presentation at ICML 2017!
  • April 2017: Invited talk at Workshop on Advances in Computing Architectures, Stanford SystemX
  • March 2017: New preprint! Custom scan sequences for super fast Gibbs sampling.
  • February 2017: Invited to talk at ITA in San Diego.
  • February 2017: Spoke at the AAAI 2017 Workshop on Distributed Machine Learning.
  • January 2017: Visiting Microsoft Research, Cambridge
  • December 2016: At NIPS, presenting our Gibbs sampling paper, which dispels some common beliefs regarding scan orders.
  • November 2016: Visiting Microsoft Research New England
  • November 2016: Invited talk at SystemX Stanford Alliance Fall Conference
  • November 2016: Full version of asynchrony paper.
  • September 2016: Talk at Allerton
  • August 2016: I had the pleasure of giving a talk at MIT Lincoln Laboratory.
  • August 2016: Gave an asynchronous optimization talk at Google.
  • August 2016: Blog post on our momentum work.
  • July 2016: Invited to talk at NVIDIA.
  • June 2016: Poster at non-convex optimization ICML workshop.
  • June 2016: Poster at OptML 2016 workshop.
  • In a recent note, we show that asynchrony in SGD introduces momentum. In the companion systems paper, we use this theory to train deep networks faster.
  • Does periodic model averaging always help? Recent results.
  • Excited to start a postdoc at Stanford University. Will be working with Lester Mackey and Chris Ré.
  • Successfully defended my PhD thesis!
  • SILO seminar talk at the Wisconsin Institute of Discovery. Loved both Madison and the WID!
  • Densest k-Subgraph work picked up by NVIDIA!
  • Our latest work has been accepted for presentation at VLDB 2015!


Students and postdocs

Brady Neal, PhD

Adam Ibrahim, PhD

Charles Guille-Escuret, PhD

Kartik Ahuja, Postdoc

Mehrnaz Mofakhami, MSc student

Kilian Fatras, Postdoc

Reyhane Askari-Hemmat, PhD

Alexia Jolicoeur-Martineau, PhD

Hiroki Naganuma, PhD

Ryan D'Orazio, PhD

Divyat Mahajan, MSc student
  

The following people are not supervised by me, but they are exceptional collaborators who deserve mention.

Gauthier Gidel
Isabela Albuquerque
Joao Monteiro
Alex Lamb
  


Past students and supervisees (and where they are now)

Nicolas Loizou, Postdoc (Assistant professor, Johns Hopkins University, 2022)
Manuela Girotti, Postdoc (Assistant professor, St. Mary's University)
Brady Neal, MSc, Fall 2019 (continuing for PhD)
Rémi Piché-Taillefer, MSc, Summer 2021 (MSR Montreal)
Baptiste Goujaud, intern, Summer 2019 (PhD candidate at École Polytechnique, Paris)
Seb Arnold, intern, Summer 2018 (PhD candidate at USC)
Amartya Mitra, intern, Spring-Summer 2020 (PhD candidate at UC Riverside)
Nicolas Gagne, intern, Summer 2018 (PhD candidate at McGill)
Vinayak Tantia, intern, 2018 (FAIR Montreal)
  


Teaching



Publications

 
Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization
Kartik Ahuja, Ethan Caballero, Dinghuai Zhang, Yoshua Bengio, Ioannis Mitliagkas, Irina Rish
Spotlight
NeurIPS, 2021 [pdf]
 
Stochastic Gradient Descent-Ascent and Consensus Optimization for Smooth Games: Convergence Analysis under Expected Co-coercivity
Nicolas Loizou, Hugo Berard, Gauthier Gidel, Ioannis Mitliagkas, Simon Lacoste-Julien
NeurIPS, 2021 [pdf]
 
A Study of Condition Numbers for First-Order Optimization
Charles Guille-Escuret*, Baptiste Goujaud*, Manuela Girotti, Ioannis Mitliagkas
Best student paper award, OPT2020
AISTATS, 2021 [pdf]
 
Adversarial score matching and improved sampling for image generation
Alexia Jolicoeur-Martineau*, Rémi Piché-Taillefer*, Rémi Tachet des Combes, Ioannis Mitliagkas
ICLR, 2021 [pdf]
 
In search of robust measures of generalization
Gintare Karolina Dziugaite, Alexandre Drouin, Brady Neal, Nitarshan Rajkumar, Ethan Caballero, Linbo Wang, Ioannis Mitliagkas, Daniel M Roy
NeurIPS 2020 [pdf]
 
LEAD: Least-Action Dynamics for Min-Max Optimization
Reyhane Askari Hemmat*, Amartya Mitra*, Guillaume Lajoie, Ioannis Mitliagkas
Early version presented at OPT2020, preprint [pdf]
 
Linear Lower Bounds and Conditioning of Differentiable Games
Adam Ibrahim, Waiss Azizian, Gauthier Gidel, Ioannis Mitliagkas
ICML 2020 [pdf]
 
Stochastic Hamiltonian gradient methods for smooth games
Nicolas Loizou, Hugo Berard, Alexia Jolicoeur-Martineau, Pascal Vincent, Simon Lacoste-Julien, Ioannis Mitliagkas
ICML 2020 [pdf]
 
Adversarial target-invariant representation learning for domain generalization
Isabela Albuquerque, João Monteiro, Tiago Falk, Ioannis Mitliagkas
Preprint, 2020 [pdf]
 
Accelerating Smooth Games by Manipulating Spectral Shapes
Waiss Azizian, Damien Scieur, Ioannis Mitliagkas, Simon Lacoste-Julien, Gauthier Gidel
AISTATS 2020 [pdf]
 
A tight and unified analysis of extragradient for a whole spectrum of differentiable games
Waiss Azizian, Ioannis Mitliagkas, Simon Lacoste-Julien, Gauthier Gidel
AISTATS 2020 [pdf]
 
Reducing the variance in online optimization by transporting past gradients
Seb M. Arnold, Pierre-Antoine Manzagol, Reza Babanezhad, Ioannis Mitliagkas, Nicolas Le Roux.
NeurIPS 2019 [pdf] Spotlight oral presentation
 
Multi-objective training of Generative Adversarial Networks with multiple discriminators
Isabela Albuquerque, João Monteiro, Thang Doan, Breandan Considine, Tiago Falk, Ioannis Mitliagkas
ICML 2019 [pdf]
 
State-Reification Networks: Improving Generalization by Modeling the Distribution of Hidden Representations
Alex Lamb, Jonathan Binas, Anirudh Goyal, Sandeep Subramanian, Ioannis Mitliagkas, Denis Kazakov, Yoshua Bengio, Michael Mozer.
ICML, 2019 [pdf] Oral presentation
 
A Modern Take on the Bias-Variance Tradeoff in Neural Networks
Brady Neal, Sarthak Mittal, Aristide Baratin, Vinayak Tantia, Matthew Scicluna, Simon Lacoste-Julien, Ioannis Mitliagkas
Machine Learning with Guarantees 2019 (workshop at NeurIPS), preprint [pdf]
 
Negative momentum for improved game dynamics
Gauthier Gidel, Reyhane Askari Hemmat, Mohammad Pezeshki, Gabriel Huang, Remi Lepriol, Simon Lacoste-Julien, Ioannis Mitliagkas
AISTATS 2019 [pdf]
 
Manifold mixup: Encouraging meaningful on-manifold interpolation as a regularizer
Vikas Verma, Alex Lamb, Christopher Beckham, Aaron Courville, Ioannis Mitliagkas, Yoshua Bengio
ICML, 2019 [pdf]
 
Fortified networks: Improving the robustness of deep networks by modeling the manifold of hidden representations
Alex Lamb, Jonathan Binas, Anirudh Goyal, Dmitriy Serdyuk, Sandeep Subramanian, Ioannis Mitliagkas, Yoshua Bengio
Preprint, 2018 [pdf]
 
Learning Representations and Generative Models for 3D Point Clouds
Panos Achlioptas, Olga Diamanti, Ioannis Mitliagkas, Leo Guibas.
ICML 2018 [pdf]
 
Accelerated Stochastic Power Iteration
Christopher De Sa, Bryan He, Ioannis Mitliagkas, Christopher Ré, Peng Xu.
AISTATS 2018 [pdf] [Blogpost] [adoption]
 
Improving Gibbs Sampler Scan Quality with DoGS
Ioannis Mitliagkas and Lester Mackey
ICML 2017 [pdf]
 
YellowFin: Adaptive Optimization for (A)synchronous Systems
Jian Zhang and Ioannis Mitliagkas.
SysML'18
Oral presentation [long version pdf] [Blogpost]
 
Deep Learning at 15PF: Supervised and Semi-Supervised Classification for Scientific Data
Thorsten Kurth, Jian Zhang, Nadathur Satish, Ioannis Mitliagkas, Evan Racah, Md. Mostofa Ali Patwary, Tareq Malas, Narayanan Sundaram, Wahid Bhimji, Mikhail Smorkalov, Jack Deslippe, Mikhail Shiryaev, Srinivas Shridharan, Prabhat, Pradeep Dubey.
Supercomputing 2017 [pdf]
 
Asynchrony begets Momentum, with an Application to Deep Learning
Ioannis Mitliagkas, Ce Zhang, Stefan Hadjis, and Christopher Ré.
Allerton, arXiv:1605.09774v2 (2016) [pdf]
 
Scan Order in Gibbs Sampling: Models in Which it Matters and Bounds on How Much
Bryan He, Christopher De Sa, Ioannis Mitliagkas, and Christopher Ré
NIPS 2016, arXiv:1606.03432 (2016) [pdf]
 
Omnivore: An Optimizer for Multi-device Deep Learning on CPUs and GPUs
Stefan Hadjis, Ce Zhang, Ioannis Mitliagkas, and Christopher Ré
arXiv preprint arXiv:1606.04487 (2016) [pdf]
 
Parallel SGD: When does averaging help?
Jian Zhang, Christopher De Sa, Ioannis Mitliagkas, and Christopher Ré
OptML workshop at ICML 2016, arXiv:1606.07365 (2016) [pdf]
 
FrogWild! Fast PageRank Approximations on Graph Engines
Ioannis Mitliagkas, Michael Borokhovich, Alex Dimakis, and Constantine Caramanis.
VLDB 2015 (Earlier version at NIPS 2014 workshop). [bib | pdf]
 
Streaming PCA with Many Missing Entries
Ioannis Mitliagkas, Constantine Caramanis, and Prateek Jain.
Preprint, 2015. [bib | pdf]
 
Finding Dense Subgraphs via Low-rank Bilinear Optimization
Dimitris S Papailiopoulos, Ioannis Mitliagkas, Alexandros G Dimakis, and Constantine Caramanis.
ICML, 2014. [bib | pdf]
 
Memory Limited, Streaming PCA
Ioannis Mitliagkas, Constantine Caramanis, and Prateek Jain.
NIPS 2013 (arXiv:1307.0032). [bib | pdf]
 
User Rankings from Comparisons: Learning Permutations in High Dimensions
I. Mitliagkas, A. Gopalan, C. Caramanis, and S. Vishwanath.
In Proc. of Allerton Conf. on Communication, Control and Computing, Monticello, USA, 2011. [bib | pdf]
 
Strong Information-Theoretic Limits for Source/Model Recovery
I. Mitliagkas and S. Vishwanath.
In Proc. of Allerton Conf. on Communication, Control and Computing, Monticello, USA, 2010. [bib | pdf]


Older publications

In another life, I did research in telecommunications as an electrical engineer. That experience was my gateway into information theory, optimization and statistics. It also introduced me to the information theory community and some of their unique tools for dealing with statistical and learning problems.
 
Joint Power and Admission Control for Ad-hoc and Cognitive Underlay Networks: Convex Approximation and Distributed Implementation
I. Mitliagkas, N.D. Sidiropoulos, and A. Swami. IEEE Transactions on Wireless Communications, 2011. [bib]
 
Distributed joint power and admission control for ad-hoc and cognitive underlay networks
I. Mitliagkas, N.D. Sidiropoulos, and A. Swami. In Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2010, pages 3014-3017. [bib]
 
Convex approximation-based joint power and admission control for cognitive underlay networks
I. Mitliagkas, N.D. Sidiropoulos, and A. Swami. In Proc. International Wireless Communications and Mobile Computing Conference (IWCMC), 2008, pages 28-32. [bib]

Funding acknowledgements

CIFAR Apogee IVADO Samsung
FRQNT NSERC Microsoft Research

Special thanks to Intel and NVIDIA for donating access to hardware, and to SigOpt for access to their platform for some of our work.