Guido Montúfar
UCLA Departments of Mathematics and Statistics
Los Angeles, CA 90095, USA

Photo: Reed Hutchinson/UCLA 
Short CV
Research Group Leader, Mathematical Machine Learning Group, Max Planck Institute for Mathematics in the Sciences (since 2018)
Assistant Professor, Departments of Mathematics and Statistics, UCLA (2017–2022)
Postdoc, Max Planck Institute for Mathematics in the Sciences, Information Theory of Cognitive Systems Group (2013–2017)
Research Associate, Department of Mathematics, Pennsylvania State University (2012–2013)
Dr. rer. nat. in Mathematics, MPI MIS/Leipzig University (2012)
Diplom in Physics, TU Berlin (2009)
Diplom in Mathematics, TU Berlin (2007)

Research Interests
Deep Learning Theory
Mathematical Machine Learning
Graphical Models
Information Geometry
Algebraic Statistics
Publications

2023
Geometry and convergence of natural policy gradients.
Johannes Müller, Guido Montufar. Information Geometry, 2023. Preprint [arXiv:2211.02105]. Repo [GitHub].
Mildly Overparameterized ReLU Networks Have a Favorable Loss Landscape.
Kedar Karhadkar, Michael Murray, Hanna Tseran, Guido Montufar. Preprint [arXiv:2305.19510]. Repo [GitHub].
Supermodular Rank: Set Function Decomposition and Optimization.
Rishi Sonthalia, Anna Seigal, Guido Montufar. Preprint [arXiv:2305.14632]. Repo [GitHub].
Implicit bias of gradient descent for mean squared error regression with two-layer wide neural networks.
Hui Jin and Guido Montufar. Journal of Machine Learning Research JMLR 24(137):1–97, 2023. Repo [GitHub]. Preprint [arXiv:2006.07356].
Function space and critical points of linear convolutional networks.
Kathlen Kohn, Guido Montufar, Vahid Shahverdi, Matthew Trager. Preprint [arXiv:2304.05752].
Critical points and convergence analysis of generative deep linear networks trained with Bures–Wasserstein loss.
Pierre Brechet, Katerina Papagiannouli, Jing An, Guido Montufar. In International Conference on Machine Learning (ICML 2023). Preprint [arXiv:2303.03027].
Expected Gradients of Maxout Networks and Consequences to Parameter Initialization.
Hanna Tseran, Guido Montufar. In International Conference on Machine Learning (ICML 2023). Preprint [arXiv:2301.06956]. Repo [GitHub].
Characterizing the Spectrum of the NTK via a Power Series Expansion.
Michael Murray, Hui Jin, Benjamin Bowman, Guido Montufar. In International Conference on Learning Representations (ICLR 2023). Repo [GitHub]. Virtual poster [ICLR]. Preprint [arXiv:2211.07844].
FoSR: First-order spectral rewiring for addressing oversquashing in GNNs.
Kedar Karhadkar, Pradeep Kr Banerjee, Guido Montufar. In International Conference on Learning Representations (ICLR 2023). Repo [GitHub]. Virtual poster [ICLR]. Preprint [arXiv:2210.11790]. 

2022
Stochastic Feedforward Neural Networks: Universal Approximation.
T. Merkh and G. Montufar. In Mathematical Aspects of Deep Learning, Cambridge University Press, 2022. Preprint [arXiv:1910.09763], [RG].
Sharp bounds for the number of regions of maxout networks and vertices of Minkowski sums.
Guido Montufar, Yue Ren, and Leon Zhang. SIAM Journal on Applied Algebra and Geometry vol 6, issue 4, 2022. Preprint [arXiv:2104.08135].
Algebraic optimization of sequential decision problems.
Mareike Dressler, Marina Garrote-Lopez, Guido Montufar, Johannes Müller, Kemal Rose. Preprint [arXiv:2211.09439]. Repo [GitHub].
Enumeration of max-pooling responses with generalized permutohedra.
Laura Escobar, Patricio Gallardo, Javier Gonzalez-Anaya, Jose L Gonzalez, Guido Montufar, Alejandro H Morales. Preprint [arXiv:2209.16978]. Repo [GitHub].
Spectral Bias Outside the Training Set for Deep Networks in the Kernel Regime.
Benjamin Bowman and Guido Montufar. In Advances in Neural Information Processing Systems (NeurIPS 2022). Virtual poster [SlidesLive]. Preprint [arXiv:2206.02927].
On the effectiveness of persistent homology.
Renata Turkeš, Guido Montufar, and Nina Otter. In Advances in Neural Information Processing Systems (NeurIPS 2022). Repo [GitHub]. Virtual poster [SlidesLive]. Preprint [arXiv:2206.10551].
Oversquashing in GNNs through the lens of information contraction and graph expansion.
Pradeep Kr. Banerjee, Kedar Karhadkar, YuGuang Wang, Uri Alon, and Guido Montufar. In 58th Annual Allerton Conference on Communication, Control, and Computing (Allerton 2022). Preprint [arXiv:2208.03471].
Cell graph neural networks enable the precise prediction of patient survival in gastric cancer.
Yanan Wang, YuGuang Wang, Changyuan Hu, Ming Li, Yanan Fan, Nina Otter, Ikuan Sam, Hongquan Gou, Yiqun Hu, Terry Kwon, John Zalcberg, Alex Boussioutas, Roger Dali, Guido Montufar, Pietro Lio, Dakang Xu, Geoffrey I Webb, and Jiangning Song. npj Precision Oncology 6, Article number: 45 (2022).
Geometry of Linear Convolutional Networks.
Kathlen Kohn, Thomas Merkh, Guido Montufar, Matthew Trager. SIAM Journal on Applied Algebra and Geometry vol 6, issue 3, 2022. Preprint [arXiv:2108.01538].
Solving infinite-horizon POMDPs with memoryless stochastic policies in state-action space.
Johannes Müller and Guido Montufar. In Reinforcement Learning and Decision Making (RLDM 2022). Repo [GitHub]. Preprint [arXiv:2205.14098].
Continuity and additivity properties of information decompositions.
Johannes Rauh, Pradeep Kr. Banerjee, Eckehard Olbrich, Guido Montufar, Jürgen Jost. In 12th Workshop on Uncertainty Processing (WUPES 2022). Preprint [arXiv:2204.10982].
Implicit Bias of MSE Gradient Optimization in Underparameterized Neural Networks.
Benjamin Bowman and Guido Montufar. In The Tenth International Conference on Learning Representations (ICLR 2022). Virtual poster [ICLR]. Preprint [arXiv:2201.04738].
Learning curves for Gaussian process regression with power-law priors and targets.
Hui Jin, Pradeep Kr. Banerjee, and Guido Montufar. In The Tenth International Conference on Learning Representations (ICLR 2022). Preprint [arXiv:2110.12231]. Slides [GitHub]. Power-law asymptotics of the generalization error for GP regression under power-law priors and targets. Workshop version presented at Workshop on Bayesian Deep Learning NeurIPS 2021.
The Geometry of Memoryless Stochastic Policy Optimization in Infinite-Horizon POMDPs.
Johannes Müller and Guido Montufar. In The Tenth International Conference on Learning Representations (ICLR 2022). Virtual poster [ICLR]. Repo [GitHub]. Preprint [arXiv:2110.07409]. 

2021
Weisfeiler and Lehman go cellular: CW networks.
Christian Bodnar, Fabrizio Frasca, Nina Otter, Yu Guang Wang, Pietro Lio, Guido Montufar, and Michael Bronstein. Advances in Neural Information Processing Systems 34 (NeurIPS 2021). Repo [GitHub]. Virtual poster [SlidesLive]. Preprint [arXiv:2106.12575].
On the expected complexity of maxout networks.
Hanna Tseran and Guido Montufar. Advances in Neural Information Processing Systems 34 (NeurIPS 2021). Repo [GitHub]. Virtual poster [SlidesLive]. Preprint [arXiv:2107.00379].
A top-down approach to attain decentralized multi-agents.
Alex Tong Lin, Guido Montufar, and Stanley Osher. Handbook of Reinforcement Learning and Control, pp 419–431, Springer, 2021.
Distributed learning via filtered hyperinterpolation on manifolds.
Guido Montufar and Yu Guang Wang. Foundations of Computational Mathematics 22, 1219–1271, 2021. Preprint [arXiv:2007.09392].
How framelets enhance graph neural networks.
Xuebin Zheng, Bingxin Zhou, Junbin Gao, Yu Guang Wang, Pietro Lio, Ming Li, and Guido Montufar. In Proceedings of the 38th International Conference on Machine Learning (ICML 2021), PMLR 139:12761–12771, 2021. Repo [GitHub]. Preprint [arXiv:2102.06986].
Weisfeiler and Lehman go topological: message passing simplicial networks.
Christian Bodnar, Fabrizio Frasca, Yu Guang Wang, Nina Otter, Guido Montufar, Pietro Lio, and Michael Bronstein. In Proceedings of the 38th International Conference on Machine Learning (ICML 2021), PMLR 139:1026–1037, 2021. Virtual poster [ICML]. Preprint [arXiv:2103.03212]. Workshop version presented at Workshop on Geometrical and Topological Representation Learning ICLR, 2021. Virtual poster [SlidesLive].
Tight bounds on the smallest eigenvalue of the neural tangent kernel for deep ReLU networks.
Quynh Nguyen, Marco Mondelli, and Guido Montufar. In Proceedings of the 38th International Conference on Machine Learning (ICML 2021), PMLR 139:8119–8129, 2021. Preprint [arXiv:2012.11654].
Information complexity and generalization bounds.
Pradeep Kumar Banerjee and Guido Montufar. In IEEE international symposium on information theory (ISIT 2021). Preprint [arXiv:2105.01747].
Wasserstein proximal of GANs.
Alex Tong Lin, Wuchen Li, Stanley Osher, and Guido Montufar. In Proceedings of the 5th International Conference Geometric Science of Information (GSI 2021), LNCS, vol 12829, pp 524–533, Springer, 2021. Preprint [arXiv:2102.06862], [CAM report 1853], [RG]. Poster [pdf].
Decentralized multi-agents by imitations of a centralized controller.
Alex Tong Lin, Mark J. Debord, Katia Estabridis, Gary Hewer, Guido Montufar, and Stanley Osher. In 2nd Annual Conference on Mathematical and Scientific Machine Learning (MSML 2021). Preprint [arXiv:1902.02311].
Wasserstein distance to independence models.
Türkü Özlüm Çelik, Asgar Jamneshan, Guido Montufar, Bernd Sturmfels, and Lorenzo Venturello. Journal of Symbolic Computation, 104:855–873, 2021. Preprint [arXiv:2003.06725].
PACBayes and information complexity.
Pradeep Kr. Banerjee and Guido Montufar. Presented at Workshop on neural compression: from information theory to applications ICLR, 2021. Poster [GitHub]. 

2020
Optimization theory for ReLU neural networks trained with normalization layers.
Yonatan Dukler, Quanquan Gu, and Guido Montufar. In Proceedings of the 37th International Conference on Machine Learning (ICML 2020), PMLR 119:2751–2760, 2020. Virtual poster [ICML]. Preprint [arXiv:2006.06878].
Haar graph pooling.
Yu Guang Wang, Ming Li, Zheng Ma, Guido Montufar, Xiaosheng Zhuang, Yanan Fan. In Proceedings of the 37th International Conference on Machine Learning (ICML 2020), PMLR 119:9952–9962, 2020. Repo [GitHub]. Virtual poster [ICML]. Preprint [arXiv:1909.11580].
The variational deficiency bottleneck.
Pradeep Kumar Banerjee and Guido Montufar. In Proceedings of the International Joint Conference on Neural Networks (IJCNN 2020). Preprint [arXiv:1810.11677].
Kernelized Wasserstein Natural Gradient.
Michael Arbel, Arthur Gretton, Wuchen Li, Guido Montufar. In International Conference on Learning Representations (ICLR 2020). Repo [GitHub]. Preprint [arXiv:1910.09652].
Factorized Mutual Information Maximization.
Thomas Merkh and Guido Montufar. Kybernetika 56(5):948–978, 2020. Preprint [arXiv:1906.05460], [RG].
Ricci curvature for parametric statistics via optimal transport.
W. Li and G. Montufar. Information Geometry 3(1):89–117, 2020. Preprint [arXiv:1807.07095].
Can neural networks learn persistent homology features?.
Guido Montufar, Nina Otter, and Yu Guang Wang. Presented at Workshop on topological data analysis and beyond NeurIPS, 2020. Preprint [arXiv:2011.14688]. 

2019
Optimal Transport to a Variety.
T. O. Celik, A. Jamneshan, G. Montufar, B. Sturmfels, L. Venturello. In International Conference on Mathematical Aspects of Computer and Information Sciences (MACIS 2019), LNCS, vol 11989, pp 364–381, Springer, 2020. Preprint [arXiv:1909.11716], [MPI MIS 7/2021].
Affine Natural Proximal Learning.
Wuchen Li, Alex Lin and Guido Montufar. In Proceedings of the 4th International Conference Geometric Science of Information (GSI 2019), LNCS, vol 11712, pp 705–714, Springer, 2019. Preprint [MPI MIS 6/2021], [RG].
Wasserstein of Wasserstein loss for learning generative models.
Y. Dukler, W. Li, A. Lin, and G. Montufar. In Proceedings of the 36th International Conference on Machine Learning (ICML 2019), PMLR 97:1716–1725, 2019. [BibTex]. Repo [GitHub]. Preprint [MPI MIS 13/2019].
Wasserstein Diffusion Tikhonov Regularization.
Alex Tong Lin, Yonatan Dukler, Wuchen Li, and Guido Montufar. Presented at Optimal Transport and Machine Learning Workshop NeurIPS, 2019. Preprint [arXiv:1909.06860], [RG].
A continuity result for optimal memoryless planning in POMDPs.
J. Rauh, N. Ay, G. Montufar. Presented at The 4th Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM 2019). [pdf]. Preprint [MPI MIS 5/2021], [RG].
Task-Agnostic Constraining in Average Reward POMDPs.
G. Montufar, J. Rauh, N. Ay. Presented at Task-Agnostic Reinforcement Learning Workshop ICLR, 2019. [pdf]. Preprint [MPI MIS 9/2021], [RG].
How Well Do WGANs Estimate the Wasserstein Metric?.
Anton Mallasto, Guido Montufar, Augusto Gerolin. Preprint [arXiv:1910.03875]. 

2018
Natural Gradient via Optimal Transport.
W. Li and G. Montufar. Information Geometry 1, issue 2, pp 181–214, 2018. Preprint [arXiv:1803.07033].
Restricted Boltzmann Machines: Introduction and Review.
G. Montufar. Information geometry and its applications (IGAIA IV), pp 75–115, Springer, 2018. Preprint [arXiv:1806.07066].
Computing the Unique Information.
P. K. Banerjee, J. Rauh, and G. Montufar. IEEE International Symposium on Information Theory (ISIT), pages 141–145, 2018. Repo [GitHub]. Preprint [arXiv:1709.07487].
Mixtures and Products in two Graphical Models.
A. Seigal and G. Montufar. Journal of Algebraic Statistics vol 9 no 1, 2018. Preprint [arXiv:1709.05276].
Uncertainty and Stochasticity of Optimal Policies.
G. Montufar, J. Rauh, N. Ay. In Proceedings of the 11th Workshop on Uncertainty Processing (WUPES 2018). Preprint [MPI MIS 8/2021], [RG]. 

2016
Mode Poset Probability Polytopes.
G. Montufar and J. Rauh. Journal of Algebraic Statistics 7(1):1–13, 2016. [BibTeX]. Workshop version in Proceedings of the 10th Workshop on Uncertainty Processing (WUPES 2015). Preprint [arXiv:1503.00572], [MPI MIS 22/2015].
Evaluating Morphological Computation in Muscle and DC-motor Driven Models of Hopping Movements.
K. Ghazi-Zahedi, D. Haeufle, G. Montufar, S. Schmitt, and N. Ay. Frontiers in Robotics and AI 3(42):frobt.2016.00042, 2016. [BibTeX]. Preprint [arXiv:1512.00250].
Information Theoretically Aided Reinforcement Learning for Embodied Agents.
G. Montufar, K. Ghazi-Zahedi, and N. Ay. Preprint [arXiv:1605.09735], [RG].
Geometry and Determinism of Optimal Stationary Control in Partially Observable Markov Decision Processes.
G. Montufar, K. Ghazi-Zahedi, and N. Ay. Preprint [arXiv:1503.07206], [MPI MIS 22/2016].

2015
A Theory of Cheap Control in Embodied Systems.
G. Montufar, K. Zahedi, and N. Ay. PLoS Comput Biol 11(9):e1004427, 2015. [BibTeX]. Preprint [arXiv:1407.6836], [MPI MIS 70/2014].
Geometry and Expressive Power of Conditional Restricted Boltzmann Machines.
G. Montufar, N. Ay, and K. Zahedi. Journal of Machine Learning Research JMLR 16(Dec):2405–2436, 2015. [BibTeX]. Preprint [arXiv:1402.3346], [MPI MIS 16/2014].
Discrete Restricted Boltzmann Machines.
G. Montufar and J. Morton. Journal of Machine Learning Research JMLR 16(Apr):653–672, 2015. [BibTeX]. Conference version in International Conference on Learning Representations (ICLR 2013). Preprint [arXiv:1301.3529], [MPI MIS 106/2014].
When Does a Mixture of Products Contain a Product of Mixtures?
G. Montufar and J. Morton. SIAM Journal on Discrete Mathematics (SIDMA) 29(1):321–347, 2015. [BibTeX]. Preprint [arXiv:1206.0387], [MPI MIS 98/2014].
Deep Narrow Boltzmann Machines are Universal Approximators.
G. Montufar. In Third International Conference on Learning Representations (ICLR 2015). [BibTeX]. Preprint [arXiv:1411.3784], [MPI MIS 113/2014].
A Comparison of Neural Network Architectures.
G. Montufar. Presented at Deep Learning Workshop ICML, 2015. [pdf], [pdf].
Universal Approximation of Markov Kernels by Shallow Stochastic Feedforward Networks.
G. Montufar. Preprint [arXiv:1503.07211], [MPI MIS 23/2015]. 

2014
On the Number of Linear Regions of Deep Neural Networks.
G. Montufar, R. Pascanu, K. Cho, and Y. Bengio. Neural Information Processing Systems 27 (NIPS 2014). [BibTeX]. Preprint [MPI MIS 73/2014], [arXiv 1402.1869]
On the Number of Response Regions of Deep Feedforward Networks with Piecewise Linear Activations.
R. Pascanu, G. Montufar, and Y. Bengio. In Second International Conference on Learning Representations (ICLR 2014). [BibTeX]. Preprint [MPI MIS 72/2014], [arXiv 1312.6098]
On the Fisher Information Metric of Conditional Probability Polytopes.
G. Montufar, J. Rauh, and N. Ay. Entropy 16(6):3207–3233, 2014. [BibTeX]. Preprint [MPI MIS 87/2014], [arXiv 1404.0198]
Scaling of Model Approximation Errors and Expected Entropy Distances.
G. Montufar and J. Rauh. Kybernetika 50(2):234–245, 2014. [BibTeX]. Workshop version WUPES 2012, pp. 137–148. Preprint [arXiv 1207.3399]
Universal Approximation Depth and Errors of Narrow Belief Networks with Discrete Units.
G. Montufar. Neural Computation 26(7):1386–1407, 2014. [BibTeX]. Preprint [MPI MIS 74/2014], [arXiv 1303.7461]
Sequential Recurrence-Based Multidimensional Universal Source Coding of Lempel-Ziv Type.
T. Krueger, G. Montufar, R. Seiler, and R. Siegmund-Schultze. Preprint [MPI MIS 86/2014], [arXiv 1408.4433].

2013
Universally Typical Sets for Ergodic Sources of Multidimensional Data.
T. Krüger, G. Montufar, R. Seiler, and R. Siegmund-Schultze. Kybernetika 49(6):868–882, 2013. [BibTeX]. Preprint [MPI MIS 20/2011], [arXiv 1105.0393]
Mixture Decompositions of Exponential Families Using a Decomposition of their Sample Spaces.
G. Montufar. Kybernetika 49(1):23–39, 2013. [BibTeX]. Preprint [MPI MIS 39/2010], [arXiv 1008.0204]
Maximal Information Divergence from Statistical Models defined by Neural Networks.
G. Montufar, J. Rauh, and N. Ay. In Geometric Science of Information (GSI), LNCS, vol 8085, pp 759–766, Springer, 2013. [BibTeX]. Preprint [MPI MIS 31/2013], [arXiv 1303.0268]
Selection Criteria for Neuromanifolds of Stochastic Dynamics.
N. Ay, G. Montufar, J. Rauh. In Advances in Cognitive Neurodynamics (III), pp 147–154, 2013. [BibTeX]. Preprint [MPI MIS 15/2011]

2012
Kernels and Submodels of Deep Belief Networks.
G. Montufar and J. Morton. Deep Learning and Unsupervised Feature Learning Workshop NIPS, 2012. Preprint [arXiv 1211.0932] 

2011
Expressive Power and Approximation Errors of Restricted Boltzmann Machines.
G. Montufar, J. Rauh, and N. Ay. Neural Information Processing Systems 24 (NIPS 2011). [BibTeX]. Preprint [MPI MIS 27/2011], [arXiv 1406.3140]
Refinements of Universal Approximation Results for Restricted Boltzmann Machines and Deep Belief Networks.
G. Montufar and N. Ay. Neural Computation 23(5):1306–1319, 2011. [BibTeX]. Preprint [MPI MIS 23/2010], [arXiv 1005.1593]
Mixture Models and Representational Power of RBMs, DBNs and DBMs.
G. Montufar. Deep Learning and Unsupervised Feature Learning Workshop NIPS, 2010. [pdf], [pdf] 

Theses
On the Expressive Power of Discrete Mixture Models, Restricted Boltzmann Machines, and Deep Belief Networks—A Unified Mathematical Treatment.
PhD Thesis, Leipzig University, October 2012. Supervisor: N. Ay. [pdf] (14.4 MB, 155 pages, 30 figures)
Theory of Transport and Photon-Statistics in a Biased Nanostructure.
German Diplom in Physics, Institute for Theoretical Physics, TU Berlin, December 2008. Supervisors: A. Knorr and T. Brandes.
Q-Sanov Theorem for d ≥ 2.
German Diplom in Mathematics, Institute for Mathematics, TU Berlin, August 2007. Supervisors: R. Seiler and J.-D. Deuschel.

Group

Postdocs
Marie Brandenburg, Postdoc, MPI MIS
Rishi Sonthalia, Hedrick Assistant Adjunct Professor, UCLA, co-mentored with A. Bertozzi and J. Foster
Pradeep Kr. Banerjee, Postdoc, MPI MIS

PhD students
Kedar Karhadkar, PhD candidate, UCLA
Benjamin Bowman, PhD candidate, UCLA
Jiayi Li, PhD candidate, UCLA
Johannes Müller, PhD candidate, IMPRS, MPI MIS, co-advised with Nihat Ay
Hanna Tseran, PhD candidate, IMPRS, MPI MIS
Pierre Brechet, PhD candidate, MPI MIS

Former postdocs
Katerina Papagiannouli, 2021–22 Postdoc at MPI MIS, next position Postdoc at Learning and Inference Group MPI MIS
Jing An, 2021–22 Postdoc at MPI MIS co-mentored with F. Otto, next position Phillip Griffiths Assistant Research Professor at Duke
Yu Guang Wang, 2020–21 Postdoc at MPI MIS, next position Associate Professor at Shanghai Jiao Tong University
Nina Otter, 2018–21 CAM Adjunct Assistant Professor at UCLA co-mentored with M. Porter, next position Lecturer (Assistant Professor) of Data Science at Queen Mary University of London
Quynh Nguyen, 2020–21 Postdoc at MPI MIS

PhD alumni
Hui Jin, PhD 2022 at UCLA Math, next position Research Engineer at Huawei Co., Ltd.
Yonatan Dukler, PhD 2021 at UCLA Math co-advised with A. Bertozzi, next position Applied Scientist at AWS

Visitors and interns
Renata Turkeš, Sep 2021–May 2022 Fulbright visitor at UCLA, co-mentored with Nina Otter
Friedrich Wicke, May–Sep 2022 Research Intern at MPI MIS

Teaching
Stats 100A – Introduction to Probability, UCLA, Fall 2022
Math 273A – Optimization, UCLA, Fall 2022
Math 164 – Optimization, UCLA, Spring 2022
Stats 290 – Current Literature in Statistics, UCLA, Spring 2022
Math 273A – Optimization, UCLA, Fall 2021
Stats 100A – Introduction to Probability, UCLA, Fall 2021
Stats 231C – Theories of Machine Learning, UCLA, Spring 2021
Stats 200A – Applied Probability, UCLA, Fall 2020
Math 273A – Optimization and Calculus of Variations, UCLA, Fall 2020
Stats 231C – Theories of Machine Learning, UCLA, Spring 2020
Math 285J – Applied Mathematics Seminar – Deep Learning Topics, UCLA, Fall 2019
Stats 200A – Applied Probability, UCLA, Fall 2019
Stats 231C – Theories of Machine Learning, UCLA, Spring 2019
IMPRS Ringvorlesung – short course, Topics from Deep Learning, MPI MIS, Winter 2019
Math 273 – Optimization, Calculus of Variations, and Control Theory, UCLA, Fall 2018
Math 164 – Optimization, UCLA, Fall 2018
Stat 270 – Mathematical Machine Learning, UCLA, Spring 2018
Math 285J – Applied Mathematics Seminar – Deep Learning Topics, UCLA, Winter 2018
Introduction to the Theory of Neural Networks, MS/PhD Lecture, Leipzig University and MPI MIS, Summer Term 2016
Geometric Aspects of Graphical Models and Neural Networks, with N. Ay, [Abstract], MS/PhD Lecture, Leipzig University and MPI MIS, Winter Term 2014/2015

Talks

2023
Minisymposium Parameterizations and Nonconvex Optimization Landscapes, SIAM Conference on Optimization (OP23), Seattle, May 2023.
Computations and Data in Algebraic Statistics, Impromptu Session, BIRS CMO, Oaxaca, May 2023.
Joint TILOS and OPTML++ Seminar, MIT, April 2023. (online)
Applied Mathematics and Computation Seminar, Department of Mathematics and Statistics, University of Massachusetts Amherst, March 2023.
Online Machine Learning Seminar, School of Mathematical Sciences, University of Nottingham, February 2023. (online)
Caltech CMX Seminar, Caltech, January 2023.

2022
1W-MINDS Seminar, December 2022.
Learning, Information, Optimization, Networks, and Statistics (LIONS) seminar, Arizona State University, November 2022. (online)
Invited talk at Information Geometry for Data Science, Hamburg University of Technology, September 2022. (online)
Invited talk at Algebraic Statistics Session at COMPSTAT 2022, Bologna, Italy, August 2022.
Oxford Data Science Seminar, Mathematical Institute, University of Oxford, May 2022. (online)
Keynote at Algebraic Statistics, University of Hawai'i at Manoa, Honolulu, HI, May 2022.
Invited talk at Statistics Colloquium, Statistics Department, University of Chicago, April 2022. (online)
Invited talk at Special session on Latinxs in Combinatorics at the Joint Mathematics Meetings (JMM), hosted by the American Mathematical Society, April 2022.
Invited talk at minisymposium Geometric Methods for Understanding and Applying Machine Learning, SIAM Conference on Imaging Science (IS22), March 2022. (online)

2021
Wilhelm Killing Colloquium, Mathematisches Institut, Universität Münster, Münster, Germany, December 2021. (online)
Mathematics of Information Processing Seminar, RWTH Aachen University, Aachen, Germany, November 2021. (online)
Mathematics in Imaging, Data and Optimization, Department of Mathematics, Rensselaer Polytechnic Institute, Troy, New York, November 2021. (online)
Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China, October 2021. (online)
Kickoff Workshop Numerical and Probabilistic Nonlinear Algebra, MPI MIS, September 2021.
Mathematics of Deep Learning, Isaac Newton Institute, Cambridge, August 2021. (online)
Mathematical Foundation and Applications of Deep Learning Workshop, Purdue, August 2021. (online)
Numerical Algebra and Optimization seminar, MPI MIS, August 2021.
Mini symposium Algebraic Geometry of Data, SIAM Conference on Applied Algebraic Geometry (AG21), August 2021. (online)
Workshop  Sayan Mukherjee, MPI MIS, July 2021.
Statistics Department Seminar, Department of Statistics, UCLA, June 2021. (online)
Discrete Mathematics/Geometry Seminar, TU Berlin, May 2021. (online)
Mathematical Data Science Seminar, Department of Mathematics, Purdue University, March 2021. (online)
DeepMind/ELLIS CSML Seminar Series, Centre for Computational Statistics and Machine Learning, University College London (UCL), January 2021. (online)
Invited talk at the TRIPODS Winter School and Workshop on the Foundations of Graph and Deep Learning, Mathematical Institute for Data Science (MINDS) at Johns Hopkins University, Baltimore MD, January 2021. (online)
Biostatistics Winter Webinar Series, Department of Biostatistics, UCLA, January 2021. (online)
Applied and Computational Mathematics Seminar, UC Irvine, January 2021. (online)

2020
Mathematics of Data and Decision in Davis (MADDD), UC Davis, December 2020. (online)
Keynote at Workshop Deep Learning through Information Geometry at NeurIPS, December 2020. (online)
Invited talk at GAMM Workshop Computational and Mathematical Methods in Data Science (Gesellschaft für Angewandte Mathematik und Mechanik e.V.), Max Planck Institute MIS, September 2020. (online)
Plenary talk at the Workshop Optimal Transport, Topological Data Analysis and Applications to Shape and Machine Learning, Mathematical Biosciences Institute, Ohio State University, USA, July 2020. (online)
Keynote at Algebraic Statistics in Hawaii, USA, June 2020. (online)
Keynote at Differential Geometry and Machine Learning at CVPR 2020, June 2020.
Deep Learning Seminar, University of Vienna, Austria, February 2020.
Mathematics Seminar, KAUST, Thuwal, Saudi Arabia, January 2020.
Mathematisches Kolloquium, Bielefeld University, Germany, January 2020.

2019
Wasserstein Regularization for Generative and Discriminative Learning, UseDat Conf, Infospace, Moscow, September 2019.
Invited talk Factorized mutual information maximization at Prague Stochastics, Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Prague, August 2019.
Wasserstein Information Geometry for Learning from Data, Optimal Transport for Nonlinear Problems, ICIAM, Valencia, Spain, July 2019.
Markov Kernels with Deep Graphical Models, Latent Graphical Models, SIAM AG, Bern, July 2019.
Keynote Computing the Unique Information at 1st Workshop on Semantic Information, CVPR 2019, Long Beach, June 2019.
Tutorial Wasserstein Information Geometry for Learning from Data at Geometry and Learning from Data in 3D and Beyond, IPAM, LA, March 2019. [talk at GLTUT IPAM].

2018
RBM Intro and Review at Boltzmann Machines, AIM, October 2018.
Plenary talk Representation, Approximation, Optimization advances for Restricted Boltzmann Machines at 7th International Conference on Computational Harmonic Analysis, Vanderbilt University, May 2018.
SIAM Annual Meeting 2018: Statistics Minisymposium, Portland, Oregon, July 2018.
Tutorial at Transylvanian Machine Learning Summer School, Cluj-Napoca, Romania, 16–22 July 2018.
Mixtures and products in two graphical models, USC Probability and Statistics Seminar, USC, April 2018.
A theory of cheap control in embodied systems, Random Structures in Neuroscience and Biology, Herrsching, Germany, March 2018.
Mode poset probability polytopes, Combinatorics Seminar (Igor Pak), UCLA, February 2018.
Uncertainty and Stochasticity in POMDPs, Machine Learning Seminar (Wilfrid Gangbo), UCLA, February 2018.

2017
Mixtures and products in two graphical models, Level Set Collective (Stanley Osher), IPAM, November 2017.
Graphical models with hidden variables, Systematic Approaches to Deep Learning Methods for Audio, Vienna, Austria, September 2017.
On the Fisher metric of conditional probability polytopes, Geometry of Information for Neural Networks, Machine Learning, Artificial Intelligence, Topological and Geometrical Structures of Information, CIRM Marseille, France, August 2017.
Notes on the number of linear regions of deep neural networks, Mathematics of Deep Learning, Special Session at International Conference on Sampling Theory, Tallinn, Estonia, July 2017.
Neural networks for cheap control of embodied behavior, Peking University (Jinchao Xu), Beijing, China, July 2017.
Selected Topics in Deep Learning, Short Course, Beijing Institute for Scientific Computing, Beijing, China, July 2017.
Learning with neural networks, Tutorial, Training Networks, Summer School, Signal Processing with Adaptive Sparse Structured Representations, Lisbon, Portugal, June 2017.

2016
Dimension of Marginals of Kronecker Product Models, Seminar on Non-Linear Algebra, TU Berlin, Germany, November 2016.
Geometric and Combinatorial Perspectives on Deep Neural Networks, Theory of Deep Learning Workshop, ICML 2016, New York, USA, June 2016.
Plenary talk Geometry of Boltzmann Machines, [Slides], [Abstract], IGAIA IV, Liblice, Czech Republic, June 2016.
Geometric Approaches to the Design of Embodied Learning Systems, Special Symposium on Intelligent Systems, MPI for Intelligent Systems, Tuebingen, Germany, March 2016.
Artificial Intelligence Overview, LikBez Seminar, MPI MIS, January 2016.

2015
Poster: A comparison of neural network architectures, Deep Learning Workshop, ICML 2015.
Poster: Mode Poset Probability Polytopes, [pdf], Algebraic Statistics 2015, Department of Mathematics, University of Genoa, Italy, June 8–11, 2015.
A Theory of Cheap Control in Embodied Systems, Montreal Institute for Learning Algorithms (MILA), University of Montreal, Canada, December 2015.
Dimension of restricted Boltzmann machines, Department of Mathematics & Statistics, York University, Toronto, Canada, December 2015.
Sequential RecurrenceBased Multidimensional Universal Source Coding, Dynamical Systems Seminar, MPI MIS, November 2015.
Cheap Control of Embodied Systems, Aalto Science Institute, Espoo, Finland, November 2015.
Mode Poset Probability Polytopes, WUPES'15, Moninec, Czech Republic, September 18, 2015.
Hierarchical models as marginals of hierarchical models, WUPES'15, Moninec, Czech Republic, September 17, 2015.
Confining bipartite graphical models by simple classes of inequalities, Special Topics Session Algebraic and Geometric Approaches to Graphical Models, 60th World Statistics Congress – ISI 2015, Rio de Janeiro, Brazil, July 31, 2015.

2014
Poster: A Framework for Cheap Universal Approximation in Embodied Systems, Autonomous Learning: 3. Symposium DFG Priority Programme 1527, Berlin, September 8–9, 2014.
Poster: Geometry of hidden-visible products of statistical models, [pdf], Algebraic Statistics at IIT, Chicago, IL, 2014.
On the Number of Linear Regions of Deep Neural Networks, Montreal Institute for Learning Algorithms (MILA), Université de Montréal, Montreal, Canada, December 15, 2014.
Information Divergence from Statistical Models Defined by Neural Networks, Workshop: Information Geometry for Machine Learning, RIKEN BSI, Japan, December 2014.
Geometry of Deep Neural Networks and Cheap Design for Autonomous Learning, Google DeepMind, London, UK, October 2014.
Geometry of Hidden-Visible Products of Statistical Models, Joint Workshop on Limit Theorems and Algebraic Statistics, UTIA, Prague, August 25–29, 2014.

2013
How size and architecture determine the learning capacity of neural networks, SFI Seminar, Santa Fe, NM, USA, October 23, 2013.
Maximal Information Divergence from Statistical Models defined by Neural Networks, GSI 2013, Mines ParisTech, Paris, France, August 29, 2013.
Naive Bayes models, Seminario de Postgrado en Ingenieria de Sistemas, Universidad del Valle, Santiago de Cali, Colombia, May 30, 2013.
Discrete Restricted Boltzmann Machines, ICLR 2013, Scottsdale, AZ, USA, May 2, 2013.


Poster: When Does a Mixture of Products Contain a Product of Mixtures?, [Abstract], NIPS 2012 – Deep Learning and Unsupervised Feature Learning Workshop.
Poster: Kernels and Submodels of Deep Belief Networks, [Abstract], NIPS 2012 – Deep Learning and Unsupervised Feature Learning Workshop.
When Does a Mixture of Products Contain a Product of Mixtures?, Tensor network states and algebraic geometry, ISI Foundation, Torino, Italy, November 6–8, 2012.
Universally typical sets for ergodic sources of multidimensional data, Seminar on probability and its applications (Manfred Denker), Penn State, PA, USA, October 05, 2012.
On the Expressive Power of Discrete Mixture Models, Restricted Boltzmann Machines, and Deep Belief Networks—A Unified Mathematical Treatment, PhD thesis defense, Leipzig University, October 17, 2012.
Scaling of model approximation errors and expected entropy distances, Stochastic Modelling and Computational Statistics Seminar (Murali Haran), Penn State, PA, USA, October 11, 2012.
Scaling of Model Approximation Errors and Expected Entropy Distances, WUPES'12, Mariánské Lázně, Czech Republic, September 13, 2012.
Multivalued Restricted Boltzmann Machines, [Abstract], MPI MIS, Leipzig, Germany, September 19, 2012.
Simplex packings of marginal polytopes and mixtures of exponential families, SIAM Conference on Discrete Mathematics (DM 2012), Dalhousie University, Halifax, Nova Scotia, Canada, June 18–21, 2012.
On Secants of Exponential Families, Algebraic Statistics in the Alleghenies, Penn State, PA, USA, June 8–15, 2012.
Approximation Errors of Deep Belief Networks, Applied Algebraic Statistics Seminar, Penn State, PA, USA, February 08, 2012.


Submodels of Deep Belief Networks, [Abstract], Berkeley Algebraic Statistics Seminar, UC Berkeley, CA, USA, December 07, 2011.
Geometry and Approximation Errors of Restricted Boltzmann Machines, The 5th Statistical Machine Learning Seminar, Institute of Statistical Mathematics, Tachikawa, Tokyo, Japan, September 02, 2011.
Geometry of Restricted Boltzmann Machines – Towards Geometry of Deep Belief Networks, RIKEN Workshop on Information Geometry, RIKEN BSI, Japan, August 31, 2011.
Selection Criteria for Neuromanifolds of Stochastic Dynamics, The 3rd International Conference on Cognitive Neurodynamics, Niseko Village, Hokkaido, Japan, June 12, 2011.
On Exponential Families and the Expressive Power of Related Generative Models, [Abstract], Laboratoire d'Informatique des Systèmes Adaptatifs (LISA), Université de Montréal, Canada, March 14, 2011.
Mixtures from Exponential Families, Neuronale Netze und Kognitive Systeme Seminar, MPI MIS, Leipzig, Germany, March 02, 2011.
Universal approximation results for Restricted Boltzmann Machines and Deep Belief Networks, Neuronale Netze und Kognitive Systeme Seminar, MPI MIS, Leipzig, Germany, February 16, 2011.
Necessary conditions for RBM universal approximators, Meeting of the Department of Decision-Making Theory, Institute of Information Theory and Automation (UTIA), Mariánská, Czech Republic, January 18, 2011.


Poster: Mixture Models and Representational Power of RBMs, DBNs and DBMs, NIPS 2010  Deep Learning and Unsupervised Feature Learning Workshop, Whistler, Canada.
Poster: Faces of the probability simplex contained in the closure of an exponential family and minimal mixture representations, Information Geometry and its Applications III, Leipzig, Germany, 2010.
Information Geometry of Mean-Field Methods, Fall School on Statistical Mechanics and 5th Annual PhD Student Conference in Probability, MPI MIS, Leipzig, Germany, September 7–12, 2009.
Quantum Sanov Theorem for Correlated States in Multidimensional Grids, Dies Mathematicus, TU Berlin, Germany, February 2008.
Quantum Sanov Theorem in the Multidimensional Case, Workshop on Complexity and Information Theory, MPI MIS, Leipzig, Germany, October 2007.


We had the Reunion Conference of the IPAM Program Geometry and Learning from Data in 3D and Beyond at Lake Arrowhead, December 2021.
We are excited to be part of the Priority Programme Theoretical Foundations of Deep Learning (SPP 2298) with a project on Combinatorial and implicit approaches to deep learning.
Together with Pablo Suarez Serrato, Minh Ha Quang, and Rongjie Lai, we are organizing the BIRS-CMO Workshop Geometry and Learning from Data, online, October 2021.
Together with Benjamin Gess and Nihat Ay we are organizing the ZiF Conference on Mathematics of Machine Learning, Bielefeld, August 2021.
I am participating in the Mathematics of Deep Learning Program, Isaac Newton Institute for Mathematical Sciences, Cambridge, UK, July–December 2021.
Starting in June 2021 I will be serving as a research mentor at the yearlong Latinx Mathematicians Research Community, AIM, 2021.
Optimal transport in the natural sciences, Mathematisches Forschungsinstitut Oberwolfach (MFO), February 2021.
Together with Wuchen Li, we are organizing the Wasserstein Information Geometry special session at GSI 2019, Toulouse, France, August 2019.
I am participating in the National Workshop on Data Science Education, UC Berkeley, CA, USA, June 2019.
IST Workshop on Deep Learning Theory, IST, Vienna, Austria, September 2019.
Together with Joan Bruna, Yu Guang Wang, Nina Otter, and Zheng Ma, we had the ICERM Collaborate Group Geometry of Data and Networks, Institute for Computational and Experimental Research in Mathematics, Providence, RI, USA, June 2019.
I am a co-organizer of the IPAM Long Program Geometry and Learning from Data in 3D and Beyond, Institute for Pure and Applied Mathematics, Los Angeles, CA, USA, March–June 2019.
We had a fantastic Deep Learning Theory Kickoff Meeting at the Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany, March 2019.
DALI, January 2019.
Asja Fischer, Jason Morton, and I are organizing the AIM Workshop Boltzmann Machines, American Institute of Mathematics, San Jose, CA, USA, September 2018.
With Asja Fischer I am organizing the Theory of Deep Learning Workshop at DALI 2018, Lanzarote, Spain, April 2018.
Latinx in the Mathematical Sciences Conference, IPAM, March 2018.
Together with Christiane Goergen, Nihat Ay, and Andre Uschmajew, I am a co-initiator of the Math of Data Initiative, Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany.
NIPS 2017, Long Beach, CA, December 2017.
Geometric Science of Information, Paris, November 2017.
ICML 2017, Principled Approaches to Deep Learning, Program Committee, Sydney, Australia, August 2017.
Oberwolfach Workshop Algebraic Statistics, Mathematisches Forschungsinstitut Oberwolfach, Germany, April 2017.
Santa Fe Institute, Visit for Research Collaboration (Nihat Ay), Santa Fe, NM, USA, October 2016.
NIPS 2015, Montréal, Canada.
Santa Fe Institute, Visit for Research Collaboration (Nihat Ay), Santa Fe, NM, USA, October 15 – November 15, 2014.
Information Geometry in Learning and Optimization, University of Copenhagen, September 22–26, 2014.
Autonomous Learning: 3rd Symposium of the DFG Priority Programme 1527, Magnus-Haus Berlin, Germany, September 8–9, 2014.
Autonomous Learning: Summer School, MPI MIS, September 1–4, 2014.
Santa Fe Institute, Visit for Research Collaboration (Nihat Ay), October 1–27, 2013.
SFI Working Group "Information Theory of Sensorimotor Loops", Santa Fe Institute, Santa Fe, NM, USA, October 8–11, 2013.
Pennsylvania State University, Visit for Research Collaboration (Jason Morton), PA, USA, September 2013.
Algebraic Statistics in Europe, IST Austria, September 28–30, 2012.
Graduate Summer School: Deep Learning, Feature Learning, IPAM, UCLA, Los Angeles, CA, USA, July 9–27, 2012.
Singular Learning Theory, AIM Workshop, American Institute of Mathematics, Palo Alto, CA, USA, December 12–16, 2011.
RIKEN BSI, Laboratory for Mathematical Neuroscience (Prof. S. Amari), Internship, Hirosawa, Wako, Saitama, Japan, August–October 2011.
SFI Complex Systems Summer School (CSSS11), Saint John's College, Santa Fe, NM, USA, June 8 – July 1, 2011.
Information Geometry and its Applications (IGAIA III), Leipzig University, Germany, August 2010.
