
Mingyi Hong
[intermediate] Bilevel Optimization: Theory, Algorithms, and Applications in Machine Learning and Foundation Models
Summary
This short course introduces bilevel optimization, covering its theory, modern algorithms, and its applications in modern machine learning and foundation model training. Bilevel optimization captures hierarchical decision-making structures in which one optimization problem is nested inside another, enabling principled formulations of tasks such as hyperparameter tuning, meta-learning, inverse reinforcement learning, and alignment.
The course begins with foundational optimization tools, then develops the theory of bilevel programming, including optimality conditions, sensitivity analysis, and algorithmic frameworks. It further explores scalable methods for solving large-scale bilevel problems arising in LLM training and alignment, highlighting connections to reinforcement learning, preference learning, and adversarial training. Through a blend of theory, geometric insights, and practical examples, students will gain a modern perspective on how bilevel optimization underpins many emerging techniques in foundation models.
Syllabus
Introduction to Bilevel Optimization
- Bilevel problem formulation
- Examples from machine learning: hyperparameter optimization, meta-learning, data reweighting
- Relationship to min-max and hierarchical optimization
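The nested structure underlying all of these examples is the standard bilevel program (sketched here for orientation; notation may differ slightly from the course slides):

```latex
\min_{x \in X} \; F(x) := f\bigl(x, y^*(x)\bigr)
\quad \text{s.t.} \quad
y^*(x) \in \arg\min_{y} \; g(x, y),
```

where $f$ is the upper-level (leader) objective and $g$ the lower-level (follower) objective. In hyperparameter optimization, for instance, $x$ collects the hyperparameters, $g$ is the training loss, and $f$ is the validation loss evaluated at the trained model $y^*(x)$.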
Optimality Conditions and Theory
- First-order optimality conditions for bilevel problems
- Implicit function theorem and sensitivity analysis
- KKT-based reformulations
- Connections to variational inequalities and equilibrium problems
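Under standard assumptions (a strongly convex lower-level problem with a unique, differentiable solution map $y^*(x)$), the implicit function theorem yields the hypergradient that drives many of the algorithms covered later:

```latex
\nabla F(x) = \nabla_x f\bigl(x, y^*(x)\bigr)
- \nabla^2_{xy} g\bigl(x, y^*(x)\bigr)
\bigl[\nabla^2_{yy} g\bigl(x, y^*(x)\bigr)\bigr]^{-1}
\nabla_y f\bigl(x, y^*(x)\bigr).
```

The correction term comes from differentiating the lower-level stationarity condition $\nabla_y g(x, y^*(x)) = 0$ with respect to $x$.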
Algorithms for Bilevel Optimization
- Gradient-based methods (implicit differentiation, unrolling)
- Approximation techniques and truncation strategies
- Single-loop vs. double-loop algorithms
- Stochastic bilevel optimization methods
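As a concrete illustration of the implicit-differentiation approach listed above, the following toy sketch (the problem choice and function names are illustrative, not from the course materials) computes the hypergradient for a scalar quadratic bilevel problem whose lower-level solution is known in closed form, then runs gradient descent on the upper level:

```python
# Bilevel toy problem:
#   min_x f(x, y*(x))  s.t.  y*(x) = argmin_y g(x, y)
# with f(x, y) = 0.5*x**2 + 0.5*(y - 1)**2 and g(x, y) = 0.5*(y - x)**2.
# Here y*(x) = x, the true hypergradient is 2x - 1, and the minimizer is x = 0.5.

def lower_solution(x):
    # argmin_y 0.5*(y - x)^2, available in closed form for this toy problem
    return x

def hypergradient(x):
    y = lower_solution(x)
    df_dx = x                  # partial derivative of f in x
    df_dy = y - 1.0            # partial derivative of f in y
    d2g_dydy = 1.0             # lower-level Hessian in y
    d2g_dydx = -1.0            # cross derivative of g
    # Implicit function theorem: dy*/dx = -(d2g/dy^2)^{-1} * d2g/dydx
    dy_dx = -d2g_dydx / d2g_dydy
    # Chain rule for the total (hyper)gradient of F(x) = f(x, y*(x))
    return df_dx + df_dy * dy_dx

x = 0.0
for _ in range(200):
    x -= 0.1 * hypergradient(x)  # gradient descent on the upper level

print(round(x, 4))  # prints 0.5
```

In large-scale problems neither the lower-level solution nor the Hessian inverse is available in closed form, which is exactly what the approximation, truncation, and single-loop strategies above are designed to handle.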
Bilevel Optimization in Modern ML
- Hyperparameter optimization at scale
- Meta-learning and few-shot learning
- Dataset and loss function optimization
- Adversarial training and robustness
Applications to LLM Training and Alignment
- RLHF and inverse reinforcement learning as bilevel problems
- Preference learning and reward modeling
- Alignment as bilevel estimation
- Pretraining vs. post-training optimization pipelines
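One common way to cast RLHF as a bilevel problem (a standard view in the literature, sketched here for orientation rather than as the course's exact formulation) takes KL-regularized policy optimization as the lower level and reward estimation from human preferences as the upper level:

```latex
\max_{\theta} \;\;
\mathbb{E}_{\,y^{+},\, y^{-} \sim \pi^*_{\theta}(\cdot \mid x)}
\Bigl[\log \sigma\bigl(r_\theta(x, y^{+}) - r_\theta(x, y^{-})\bigr)\Bigr]
\quad \text{s.t.} \quad
\pi^*_{\theta} \in \arg\max_{\pi} \;
\mathbb{E}_{\pi}\bigl[r_\theta(x, y)\bigr]
- \beta \, \mathrm{KL}\bigl(\pi \,\|\, \pi_{\mathrm{ref}}\bigr),
```

where the upper level fits the reward $r_\theta$ to pairwise preferences and the lower level returns the policy optimal for that reward. The coupling through $\pi^*_{\theta}$ is what makes the problem genuinely bilevel.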
Emerging Directions
- Hierarchical RL and planning (connections to bilevel structure)
- Bilevel formulations for long-horizon decision making
- Scalable algorithms for foundation models
- Open problems and research frontiers
References
Benoît Colson, Patrice Marcotte, and Gilles Savard. An overview of bilevel optimization. Annals of Operations Research, 153(1):235–256, 2007.
Luca Franceschi, Paolo Frasconi, Saverio Salzo, Riccardo Grazzi, and Massimiliano Pontil. Bilevel programming for hyperparameter optimization and meta-learning. In International Conference on Machine Learning, pages 1563–1572, 2018.
Mathieu Dagréou, Pierre Ablin, Samuel Vaiter, and Thomas Moreau. A framework for bilevel optimization that enables stochastic and global variance reduction algorithms. In Neural Information Processing Systems, 2022.
Saeed Ghadimi and Mengdi Wang. Approximation methods for bilevel programming, 2018.
Kaiyi Ji, Junjie Yang, and Yingbin Liang. Bilevel optimization: Convergence analysis and enhanced design. In International Conference on Machine Learning, pages 4882–4892. PMLR, 2021.
Jeongyeol Kwon, Dohyun Kwon, Stephen Wright, and Robert D Nowak. A fully first-order method for stochastic bilevel optimization. In International Conference on Machine Learning, pages 18083–18113. PMLR, 2023.
Yan Yang, Bin Gao, and Ya-xiang Yuan. Bilevel reinforcement learning via the development of hyper-gradient without lower-level convexity, 2025.
Siliang Zeng, Mingyi Hong, and Alfredo Garcia. Structural estimation of Markov decision processes in high-dimensional state space with finite-time guarantees. Operations Research, 73(2):720–737.
Chenliang Li, Siliang Zeng, Zeyi Liao, Jiaxiang Li, Dongyeop Kang, Alfredo Garcia, and Mingyi Hong. Joint reward and policy learning with demonstrations and human feedback improves alignment. In International Conference on Learning Representations, 2024.
Michael Arbel and Julien Mairal. Amortized implicit differentiation for stochastic bilevel optimization. In International Conference on Learning Representations, 2022.
Pre-requisites
Linear algebra (vector spaces, inner products, matrix calculus). Calculus and basic real analysis. Basic machine learning familiarity (e.g., supervised learning, logistic regression). No prior knowledge of LLMs or deep learning optimization is required, though such background is beneficial.
Short bio
Mingyi Hong is an Associate Professor in the Department of Electrical and Computer Engineering at the University of Minnesota, Minneapolis. His research focuses on developing optimization theory and algorithms for applications in signal processing, machine learning, and foundation models. His work has received two IEEE Signal Processing Society Best Paper Awards (2021, 2022) and an International Consortium of Chinese Mathematicians Best Paper Award (2020), among others. He is an Amazon Scholar, the recipient of the 2022 Pierre-Simon Laplace Early Career Technical Achievement Award from IEEE and the 2025 Egon Balas Prize from the INFORMS Optimization Society, and a Fellow of IEEE.