George Karypis
[intermediate/advanced] Optimizing LLM Inference
Summary
In the span of just a few years, we have witnessed a significant increase in the performance of transformer-based large language models (LLMs). This has ushered in a new era of generative AI applications across a wide range of domains. These unprecedented performance improvements were achieved by scaling up the size of LLMs: today’s leading LLMs have hundreds of billions to a few trillion parameters. However, the size of these models limits their practical utility because they are expensive to serve. Performing inference on a leading LLM requires multiple AI accelerators connected by a high-bandwidth, low-latency network, a setup that is both costly and power hungry. In this course, we delve into the latest advancements in inference optimization on AI accelerators. Beginning with an exploration of the basic transformer architecture and an overview of modern deep learning frameworks and AI hardware, we then cover system-level optimization techniques for fast and memory-efficient attention computation, such as KV caches and FlashAttention, followed by structured transformer architectures such as grouped-query attention (GQA) and mixture-of-experts (MoE). Subsequently, we examine model compression strategies such as quantization, distillation, and sparsification, and then algorithmic innovations for fast inference such as speculative decoding. Finally, we share our experiences with efficient inference schemes through multiple case studies across different AI accelerators.
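As a concrete illustration of one of the system-level techniques covered in the course, the sketch below shows the KV-cache idea behind fast autoregressive decoding: the keys and values of past tokens are cached so that each decode step only projects the newest token. This is a minimal single-head NumPy sketch with toy random weights; the dimensions, weight names, and decode loop are illustrative assumptions, not any particular framework’s API.

```python
# A minimal sketch of KV caching for autoregressive decoding, using NumPy and a
# single attention head with toy random weights (illustrative only).
import numpy as np

D = 16                      # model/head dimension (toy size)
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.standard_normal((D, D)) * 0.1 for _ in range(3))

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def decode_step(x_new, k_cache, v_cache):
    """Attend from the newest token only, reusing cached K/V of past tokens."""
    q = x_new @ Wq                               # (1, D): query for the new token only
    k_cache = np.vstack([k_cache, x_new @ Wk])   # append new key   -> (t, D)
    v_cache = np.vstack([v_cache, x_new @ Wv])   # append new value -> (t, D)
    attn = softmax(q @ k_cache.T / np.sqrt(D))   # (1, t) scores over all cached tokens
    out = attn @ v_cache                         # (1, D) attention output
    return out, k_cache, v_cache

# Toy decode loop: with the cache, each step projects only the newest token and
# attends over t cached K/V rows, instead of re-projecting the entire prefix.
k_cache = np.zeros((0, D))
v_cache = np.zeros((0, D))
x = rng.standard_normal((1, D))          # stand-in for the current token's embedding
for step in range(5):
    out, k_cache, v_cache = decode_step(x, k_cache, v_cache)
    x = out                              # toy stand-in for feeding in the next token
print("cached keys:", k_cache.shape)     # (5, 16): one K/V row per generated token
```

With the cache, per-step compute grows only linearly in the sequence length, but the cache itself becomes a dominant memory consumer, which is what memory-management techniques such as those in reference 4 address.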
Syllabus
- Overview of ML systems and LLM inference.
- System-level optimizations.
- Network architecture optimizations.
- Model compression.
- Model pruning.
- Fast decoding (e.g., speculative decoding; see the sketch after this list).
- Case studies.
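To make the fast-decoding topic concrete, the sketch below illustrates speculative decoding as mentioned in the summary: a cheap draft model proposes a short block of tokens and the expensive target model verifies them, accepting each drafted token with probability min(1, p_target/p_draft) and resampling from the residual distribution on the first rejection. The two "models" here are toy random distributions and all names are hypothetical; a real implementation would batch the target model's verification pass over the whole drafted block.

```python
# A minimal sketch of speculative decoding with toy stand-in "models":
# a cheap draft model proposes a few tokens, and a stronger target model
# verifies them with the standard accept/reject rule (illustrative only).
import numpy as np

VOCAB = 8
GAMMA = 4                                  # number of draft tokens per round
rng = np.random.default_rng(0)

def toy_dist(seed, prefix):
    """Stand-in for a model's next-token distribution given a prefix."""
    r = np.random.default_rng(hash((seed, tuple(prefix))) % (2**32))
    p = r.random(VOCAB) + 0.1
    return p / p.sum()

draft_p  = lambda prefix: toy_dist("draft", prefix)    # cheap model
target_p = lambda prefix: toy_dist("target", prefix)   # expensive model

def speculative_round(prefix):
    # 1) Draft model proposes GAMMA tokens autoregressively.
    drafted, ctx = [], list(prefix)
    for _ in range(GAMMA):
        q = draft_p(ctx)
        tok = rng.choice(VOCAB, p=q)
        drafted.append((tok, q))
        ctx.append(tok)
    # 2) Target model verifies each drafted token (batched in practice).
    accepted = list(prefix)
    for tok, q in drafted:
        p = target_p(accepted)
        if rng.random() < min(1.0, p[tok] / q[tok]):
            accepted.append(tok)           # accept the drafted token
        else:
            resid = np.maximum(p - q, 0)   # reject: resample from the residual
            resid /= resid.sum()
            accepted.append(rng.choice(VOCAB, p=resid))
            return accepted
    # 3) All drafts accepted: sample one bonus token from the target model.
    accepted.append(rng.choice(VOCAB, p=target_p(accepted)))
    return accepted

seq = [0]
for _ in range(3):
    seq = speculative_round(seq)
print("generated sequence:", seq)
```

The accept/reject rule preserves the target model's output distribution while letting several tokens be generated per expensive target-model pass when the draft model agrees with it often.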
References
1. LLM inference unveiled: survey and roofline model insights, Yuan et al., 2024, https://arxiv.org/abs/2402.16363.
2. Full-stack optimization of transformer inference: A survey, Kim et al., 2023, https://arxiv.org/abs/2302.14017.
3. A survey of techniques for optimizing transformer inference, Chitty-Venkata et al., 2023, https://arxiv.org/abs/2307.07982.
4. Efficient memory management for large language model serving with PagedAttention, Kwon et al., 2023, https://arxiv.org/abs/2309.06180.
5. Efficiently scaling transformer inference, Pope et al., 2022, https://arxiv.org/abs/2211.05102.
6. DeepSpeed Inference: Enabling efficient inference of transformer models at unprecedented scale, Aminabadi et al., 2022, https://arxiv.org/abs/2207.00032.
Pre-requisites
Knowledge of machine learning, neural networks, transformers, and LLMs, as well as familiarity with computer systems and AI accelerators (e.g., GPUs).
Short bio
George Karypis is a Distinguished McKnight University Professor and William Norris Chair in Large Scale Computing at the Department of Computer Science & Engineering at the University of Minnesota and a Senior Principal Scientist at AWS AI/ML. His research interests span the areas of machine learning, data mining, high performance computing, collaborative filtering, bioinformatics, cheminformatics, and scientific computing. He has coauthored over 350 papers on these topics and two books: “Introduction to Protein Structure Prediction: Methods and Algorithms” (Wiley, 2010) and “Introduction to Parallel Computing” (Addison Wesley, 2003, 2nd edition). He is serving on the program committees of many conferences and workshops on these topics, and on the editorial boards of several journals. He is a Fellow of the IEEE.