
Mingyi Hong
[intermediate] Modern Optimization Algorithms for Large Language Models
Summary
This short course introduces optimization fundamentals from a modern machine learning perspective, with a special focus on training large language models (LLMs). It covers essential tools such as gradient descent, constrained optimization, and regularization, then explores how these tools behave in practical LLM scenarios. Through a blend of theory, geometric insights, and hands-on examples, students will learn how classical optimization concepts scale up, and sometimes break down, in LLM training. The course is designed to be accessible and self-contained for graduate students with a background in linear algebra and calculus.
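As a preview of the hands-on flavor of the course, the sketch below contrasts plain gradient descent with an adaptive update in the style of Adam (Kingma and Ba, 2015) on a toy least-squares problem. This is a minimal illustrative sketch written in Python/NumPy; the toy problem, step sizes, and iteration counts are arbitrary choices for illustration and are not taken from the course materials.

# Minimal sketch (illustrative only, not course material): plain gradient descent
# versus an Adam-style adaptive update on a toy least-squares problem.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 10))           # toy design matrix
x_true = rng.normal(size=10)
b = A @ x_true                           # noiseless targets

def grad(x):
    # Gradient of the least-squares loss 0.5 * ||A x - b||^2
    return A.T @ (A @ x - b)

# Plain gradient descent: x <- x - eta * grad(x), with eta = 1/L,
# where L = ||A||_2^2 is the Lipschitz constant of the gradient.
x_gd = np.zeros(10)
eta = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(500):
    x_gd -= eta * grad(x_gd)

# Adam-style update: exponential moving averages of the gradient (m)
# and its elementwise square (v), with bias correction.
x_adam = np.zeros(10)
m = np.zeros(10)
v = np.zeros(10)
alpha, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8
for t in range(1, 501):
    g = grad(x_adam)
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    x_adam -= alpha * m_hat / (np.sqrt(v_hat) + eps)

print("GD error:  ", np.linalg.norm(x_gd - x_true))
print("Adam error:", np.linalg.norm(x_adam - x_true))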
Syllabus
- Basics of optimization: problem setup, examples, and geometric intuition
- First- and second-order optimality conditions (scalar and vector cases)
- Stochastic gradient-based methods
- Introduction to LLMs and basic concepts of pretraining
- Pretraining algorithms: modern developments and insights
References
Dimitri P. Bertsekas. Nonlinear Programming. Athena Scientific.
Sashank J. Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of Adam and beyond. In International Conference on Learning Representations, 2018. URL: https://openreview.net/forum?id=ryQu7f-RZ.
Diederik P. Kingma and Jimmy Ba. Adam: a method for stochastic optimization. In International Conference on Learning Representations, 2015.
Keller Jordan, Yuchen Jin, Vlado Boza, Jiacheng You, Franz Cesista, Laker Newhouse, and Jeremy Bernstein. Muon: an optimizer for hidden layers in neural networks. 2024. URL: https://kellerjordan.github.io/posts/muon/.
Vineet Gupta, Tomer Koren, and Yoram Singer. Shampoo: preconditioned stochastic tensor optimization. In International Conference on Machine Learning, 1842–1850. PMLR, 2018.
John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 2011.
Jeremy Bernstein and Laker Newhouse. Old optimizer, new norm: an anthology. arXiv preprint arXiv:2409.20325, 2024. URL: https://arxiv.org/abs/2409.20325.
Pre-requisites
Linear algebra (vector spaces, inner products, matrix calculus). Calculus and basic real analysis. Basic familiarity with machine learning (e.g., supervised learning, logistic regression). No prior knowledge of LLMs or deep learning optimization is required.
Short bio
Mingyi Hong is an Associate Professor in the Department of Electrical and Computer Engineering at the University of Minnesota, Minneapolis. His research focuses on developing optimization theory and algorithms for applications in signal processing, machine learning, and foundation models. His work has received two IEEE Signal Processing Society Best Paper Awards (2021, 2022) and an International Consortium of Chinese Mathematicians Best Paper Award (2020), among others. He is an Amazon Scholar and the recipient of the 2022 Pierre-Simon Laplace Early Career Technical Achievement Award from IEEE and the 2025 Egon Balas Prize from the INFORMS Optimization Society. He is a Fellow of IEEE.