I am an Assistant Professor at the University of Montréal and a member of MILA. I'm looking for students!

I am interested in the intersection of systems and theory applied to modern machine learning and data analysis. Some recent topics include:
  • Statistical learning and inference
  • Large-scale optimization
  • Data-dependent guarantees
  • Self-tuning systems
Teaching a seminar class in Winter 2018!
IFT 6085: Theoretical principles for deep learning

Before joining UdeM, I was a Postdoctoral Scholar in the Departments of Computer Science and Statistics at Stanford University, working with Chris Ré and Lester Mackey. I got my PhD at The University of Texas at Austin with Constantine Caramanis and Sriram Vishwanath, where I also worked with Alex Dimakis.

Contact: Email, Scholar, LinkedIn, GitHub.

That's me.

Ioannis Mitliagkas
Department of Computer Science and Operations Research (DIRO)
University of Montréal
Pav. André-Aisenstadt
CP6128, Succ. Centre-Ville
Montréal (QC) H3C 3J7
  

Recent News

  • February 2018: YellowFin selected for oral presentation at SysML'18.
  • January 2018: Teaching new class! IFT 6085: Theoretical principles for deep learning
  • December 2017: Accelerated power iteration via momentum, paper accepted at AISTATS 2018.
  • November 2017: Talk at Google Brain, Montréal.
  • September 2017: Thrilled to be starting work at the University of Montreal and MILA as an assistant professor!
  • August 2017: Visiting my alma mater, UT Austin.
  • August 2017: In Sydney for ICML, presenting work on YellowFin, custom scans for Gibbs sampling, and deep learning for 3D point cloud representation and generation.
  • July 2017: New preprint! Representation Learning and Adversarial Generation of 3D Point Clouds [arxiv].
  • July 2017: New preprint! Accelerated stochastic power iteration [arxiv].
  • June 2017: New preprint! An automatic tuner for the hyperparameters of momentum SGD [arxiv].
  • May 2017: Custom scan sequence paper accepted for presentation at ICML 2017!
  • April 2017: Invited talk at Workshop on Advances in Computing Architectures, Stanford SystemX.
  • March 2017: New preprint! Custom scan sequences for super fast Gibbs sampling.
  • February 2017: Invited to talk at ITA in San Diego.
  • February 2017: Spoke at the AAAI 2017 Workshop on Distributed Machine Learning.
  • January 2017: Visiting Microsoft Research, Cambridge.
  • December 2016: At NIPS, presenting our Gibbs sampling paper dispelling some common beliefs regarding scan orders.
  • November 2016: Visiting Microsoft Research New England.
  • November 2016: Invited talk at SystemX Stanford Alliance Fall Conference.
  • November 2016: Full version of the asynchrony paper.
  • September 2016: Talk at Allerton.
  • August 2016: I had the pleasure of giving a talk at MIT Lincoln Labs.
  • August 2016: Gave an asynchronous optimization talk at Google.
  • August 2016: Blog post on our momentum work.
  • July 2016: Invited to talk at NVIDIA.
  • June 2016: Poster at the ICML workshop on non-convex optimization.
  • June 2016: Poster at the OptML 2016 workshop.
  • In a recent note, we show that asynchrony in SGD introduces momentum (a minimal sketch of the momentum update appears after this list). In the companion systems paper, we use this theory to train deep networks faster.
  • Does periodic model averaging always help? Recent results.
  • Excited to start Postdoc at Stanford University. Will be working with Lester Mackey and Chris Ré.
  • Successfully defended my PhD thesis!
  • SILO seminar talk at the Wisconsin Institute of Discovery. Loved both Madison and the WID!
  • Densest k-Subgraph work picked up by NVIDIA!
  • Our latest work has been accepted for presentation at VLDB 2015!
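
The asynchrony item above refers to the plain momentum (heavy-ball) SGD update. As a rough illustration only, and not the note's actual analysis or implementation, here is a minimal sketch on a made-up quadratic objective; the step size, momentum coefficient, and matrix A are arbitrary example choices.

    import numpy as np

    # Toy quadratic objective f(x) = 0.5 * x^T A x, whose gradient is A x.
    A = np.diag([1.0, 10.0])

    def grad(x):
        return A @ x

    x = np.array([1.0, 1.0])   # current iterate
    v = np.zeros_like(x)       # velocity (momentum buffer)
    lr, mu = 0.02, 0.9         # illustrative step size and momentum coefficient

    for _ in range(200):
        # Momentum SGD / heavy-ball update: v <- mu*v - lr*grad(x); x <- x + v
        v = mu * v - lr * grad(x)
        x = x + v

    print(x)  # approaches the minimizer at the origin

The note's observation, summarized in the item above, is that gradient staleness in asynchronous SGD ends up playing a role similar to the mu term in this update.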


Large-scale, Distributed Algorithms and Systems

 
YellowFin: Adaptive Optimization for (A)synchronous Systems
Jian Zhang and Ioannis Mitliagkas. SysML'18 workshop (selected for oral presentation) [long version .pdf ] [Blogpost ]
 
Deep Learning at 15PF: Supervised and Semi-Supervised Classification for Scientific Data
Thorsten Kurth, Jian Zhang, Nadathur Satish, Ioannis Mitliagkas, Evan Racah, Md. Mostofa Ali Patwary, Tareq Malas, Narayanan Sundaram, Wahid Bhimji, Mikhail Smorkalov, Jack Deslippe, Mikhail Shiryaev, Srinivas Shridharan, Prabhat, Pradeep Dubey. Supercomputing 2017 [.pdf ]
 
Omnivore: An Optimizer for Multi-device Deep Learning on CPUs and GPUs
Stefan Hadjis, Ce Zhang, Ioannis Mitliagkas, and Christopher Ré. Tech report, arXiv:1606.04487 (2016) [.pdf ]
 
FrogWild! Fast PageRank Approximations on Graph Engines
Ioannis Mitliagkas, Michael Borokhovich, Alex Dimakis, and Constantine Caramanis. VLDB 2015 (Earlier version at NIPS 2014 workshop). [ bib | .pdf ]
 
Finding Dense Subgraphs via Low-rank Bilinear Optimization
Dimitris S Papailiopoulos, Ioannis Mitliagkas, Alexandros G Dimakis, and Constantine Caramanis. ICML, 2014. [ bib | .pdf ]
 


Machine Learning Theory

 
Accelerated Stochastic Power Iteration
Christopher De Sa, Bryan He, Ioannis Mitliagkas, Christopher Ré, Peng Xu. AISTATS 2018 [ .pdf ] [Blogpost ]
 
Improving Gibbs Sampler Scan Quality with DoGS
Ioannis Mitliagkas and Lester Mackey. ICML 2017 [ .pdf ]
 
Asynchrony begets Momentum, with an Application to Deep Learning
Ioannis Mitliagkas, Ce Zhang, Stefan Hadjis, and Christopher Ré. Presented at Allerton, 2016 [.pdf ]
 
Scan Order in Gibbs Sampling: Models in Which it Matters and Bounds on How Much
Bryan He, Christopher De Sa, Ioannis Mitliagkas, and Christopher Ré. NIPS 2016 [.pdf ]
 
Parallel SGD: When does averaging help?
Jian Zhang, Christopher De Sa, Ioannis Mitliagkas, and Christopher Ré. OptML workshop at ICML 2016 [.pdf ]
 
Streaming PCA with Many Missing Entries
Ioannis Mitliagkas, Constantine Caramanis, and Prateek Jain. Preprint, 2015. [ bib | .pdf ]
 
Memory Limited, Streaming PCA
Ioannis Mitliagkas, Constantine Caramanis, and Prateek Jain. NIPS 2013. [ bib | .pdf ]
 
User Rankings from Comparisons: Learning Permutations in High Dimensions
I. Mitliagkas, A. Gopalan, C. Caramanis, and S. Vishwanath. In Proc. of Allerton Conf. on Communication, Control and Computing, Monticello, USA, 2011. [ bib | .pdf ]
 
Strong Information-Theoretic Limits for Source/Model Recovery
I. Mitliagkas and S. Vishwanath. In Proc. of Allerton Conf. on Communication, Control and Computing, Monticello, USA, 2010. [ bib | .pdf ]


Older Publications

 
Joint Power and Admission Control for Ad-hoc and Cognitive Underlay Networks: Convex Approximation and Distributed Implementation
I. Mitliagkas, ND Sidiropoulos, and A. Swami. IEEE Transactions on Wireless Communications, 2011. [ bib ]
 
Distributed joint power and admission control for ad-hoc and cognitive underlay networks
I. Mitliagkas, ND Sidiropoulos, and A. Swami. In Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on, pages 3014-3017. IEEE. [ bib ]
 
Convex approximation-based joint power and admission control for cognitive underlay networks
I. Mitliagkas, ND Sidiropoulos, and A. Swami. In Wireless Communications and Mobile Computing Conference, 2008. IWCMC'08. International, pages 28-32. IEEE. [ bib ]

subscribe via RSS