
Nitesh Chawla
[intermediate] Synthetic Data Generation and Learning from Imbalanced Data: From SMOTE to LLMs
Summary
This tutorial provides a comprehensive journey through synthetic data generation, framed by two complementary perspectives: addressing class imbalance and leveraging large language models (LLMs). We begin with the foundations of learning from imbalanced data, including why standard classifiers fail and how resampling strategies emerged as practical solutions. I will present the origins and intuitions behind SMOTE (Synthetic Minority Over-sampling Technique), one of the earliest and most influential methods for synthetic data generation, and trace its evolution through more than two decades of variants. I will then examine evaluation measures appropriate for imbalanced settings and discuss why metric selection fundamentally shapes model development. I will present algorithm-level approaches, including Hellinger Distance Decision Trees (HDDT), which use the Hellinger distance as a skew-insensitive splitting criterion, bypassing the need for sampling altogether and demonstrating robust performance across varying degrees of imbalance. I will also present our recent work on AnyLoss, a framework that provides a universal surrogate objective function capable of directly optimizing any non-differentiable evaluation metric on imbalanced data, eliminating the need for metric-specific loss engineering. The second half of the tutorial pivots to the modern era: how LLMs can serve as powerful synthetic data generators, and how LLM-based generation and SMOTE-family methods can be combined to produce high-quality synthetic data at scale, even under extreme imbalance. We will discuss the opportunities and risks that arise when generative AI meets imbalanced learning.
Syllabus
Part 1: Foundations of Imbalanced Learning
- The class imbalance problem: why it matters, where it arises (fraud, medical diagnosis, rare events)
- Failure modes of standard classifiers under imbalance
- Taxonomy of solutions: data-level, algorithm-level, and hybrid approaches
- Evaluation measures: accuracy paradox, AUC-ROC, F-measure, G-mean, precision-recall curves, Matthews correlation coefficient
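To make the accuracy paradox concrete, here is a minimal sketch (with illustrative counts, not data from the tutorial) comparing accuracy, G-mean, and the Matthews correlation coefficient for a classifier that always predicts the majority class:

```python
import math

# Confusion-matrix counts for a classifier that predicts only the
# majority class on a 950/50 imbalanced test set (illustrative numbers).
tp, fn = 0, 50      # every minority example is missed
tn, fp = 950, 0     # every majority example is "correct"

accuracy = (tp + tn) / (tp + tn + fp + fn)     # 0.95 -- looks excellent
sensitivity = tp / (tp + fn)                   # 0.0  -- minority-class recall
specificity = tn / (tn + fp)                   # 1.0
g_mean = math.sqrt(sensitivity * specificity)  # 0.0  -- exposes the failure

# Matthews correlation coefficient (0/0 guarded as 0 by convention)
denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
mcc = ((tp * tn) - (fp * fn)) / denom if denom else 0.0

print(accuracy, g_mean, mcc)  # 0.95 0.0 0.0
```

The degenerate classifier scores 95% accuracy while G-mean and MCC are both zero, which is why the tutorial stresses metric selection.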
Part 2: SMOTE and Its Legacy
- Origins and intuition behind SMOTE
- Algorithm walkthrough and geometric interpretation
- Key variants
- SMOTE for non-tabular data (text, images)
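The interpolation at the heart of SMOTE can be sketched in a few lines. This is a simplified 2-D illustration, not the reference implementation; the function name and parameters are chosen for exposition:

```python
import random

def smote_sample(minority, k=5, n_new=4, seed=0):
    """Simplified SMOTE for 2-D points: interpolate between a minority
    example and one of its k nearest minority-class neighbors."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbors of x within the minority class (excluding x)
        neighbors = sorted((p for p in minority if p is not x),
                           key=lambda p: (p[0]-x[0])**2 + (p[1]-x[1])**2)[:k]
        nb = rng.choice(neighbors)
        gap = rng.random()  # random point on the segment from x to nb
        synthetic.append((x[0] + gap*(nb[0]-x[0]),
                          x[1] + gap*(nb[1]-x[1])))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(smote_sample(minority))
```

Geometrically, every synthetic point lies on a line segment between two real minority examples, which is the interpretation discussed in the algorithm walkthrough above.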
Part 3: Algorithm-Level Approaches
- Motivation: the mismatch between training loss and evaluation metric
- Hellinger Distance Decision Trees: skew-insensitive decision trees for imbalanced data
- AnyLoss framework: universal surrogate objective functions for non-differentiable metrics
- Experimental results and practical guidance
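For intuition on why the Hellinger splitting criterion is skew-insensitive, here is a hedged sketch for binary classes (the function name and example counts are illustrative): the score depends only on class-conditional rates per branch, so scaling one class's counts by a constant leaves it unchanged.

```python
import math

def hellinger(split_counts):
    """Hellinger-distance splitting criterion (HDDT-style sketch).

    split_counts: list of (n_pos, n_neg) pairs, one per branch of a
    candidate split. Only class-conditional rates enter the score,
    so it is unaffected by the class prior."""
    total_pos = sum(p for p, _ in split_counts)
    total_neg = sum(n for _, n in split_counts)
    s = sum((math.sqrt(p / total_pos) - math.sqrt(n / total_neg)) ** 2
            for p, n in split_counts)
    return math.sqrt(s)

# Same class-conditional structure at two very different imbalance levels:
balanced   = [(90, 10), (10, 90)]   # 100 positives vs 100 negatives
imbalanced = [(9, 100), (1, 900)]   # 10 positives vs 1000 negatives
print(hellinger(balanced), hellinger(imbalanced))  # identical scores
```

Both splits receive the same score even though the second dataset is 100:1 imbalanced, which is the property that lets HDDT bypass resampling.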
Part 4: LLMs as Synthetic Data Generators
- LLMs for tabular, text, and multimodal data generation
- Prompt engineering strategies for controlled generation
- Quality, diversity, and faithfulness of LLM-generated data
- Combining SMOTE-family methods with LLM generation for extreme imbalance
- Risks: hallucination, bias amplification, privacy, evaluation pitfalls
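As one illustration of the prompt-engineering step for controlled tabular generation, the sketch below assembles a few-shot prompt from real minority-class seed rows. The template, schema, and function name are assumptions for exposition, and no actual LLM call is made:

```python
import json

def build_generation_prompt(schema, seed_rows, n=5):
    """Assemble a few-shot prompt asking an LLM for n new minority-class
    rows that follow the schema and resemble, but do not copy, the seeds."""
    lines = [
        "You are a synthetic data generator for an imbalanced dataset.",
        f"Schema: {json.dumps(schema)}",
        "Real minority-class examples:",
    ]
    lines += [json.dumps(r) for r in seed_rows]
    lines += [
        f"Generate {n} new, plausible minority-class rows as JSON objects.",
        "Stay within realistic value ranges; do not duplicate the examples.",
    ]
    return "\n".join(lines)

schema = {"amount": "float (USD)", "hour": "int 0-23", "label": "fraud"}
seeds = [{"amount": 1250.0, "hour": 3, "label": "fraud"},
         {"amount": 980.5, "hour": 2, "label": "fraud"}]
print(build_generation_prompt(schema, seeds))
```

In a combined pipeline, such LLM-generated rows could serve as seeds that SMOTE-family interpolation then expands, the strategy the tutorial discusses for extreme imbalance.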
References
Chawla, N. V., Bowyer, K. W., Hall, L. O., & Kegelmeyer, W. P. (2002). SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16, 321–357.
He, H., Bai, Y., Garcia, E. A., & Li, S. (2008). ADASYN: Adaptive synthetic sampling approach for imbalanced learning. In 2008 IEEE International Joint Conference on Neural Networks (pp. 1322–1328). IEEE.
Han, H., Wang, W.-Y., & Mao, B.-H. (2005). Borderline-SMOTE: A new over-sampling method in imbalanced data sets learning. In D.-S. Huang, X.-P. Zhang, & G.-B. Huang (Eds.), Advances in Intelligent Computing, Lecture Notes in Computer Science, Vol. 3644 (pp. 878–887). Springer.
Fernández, A., García, S., Galar, M., Prati, R. C., Krawczyk, B., & Herrera, F. (2018). Learning from imbalanced data sets. Springer.
Branco, P., Torgo, L., & Ribeiro, R. P. (2016). A survey of predictive modeling on imbalanced domains. ACM Computing Surveys, 49(2), Article 31, 1–50.
Dablain, D., Krawczyk, B., & Chawla, N. V. (2023). DeepSMOTE: Fusing deep learning and SMOTE for imbalanced data. IEEE Transactions on Neural Networks and Learning Systems, 34(9), 6390–6404.
Fernández, A., García, S., Herrera, F., & Chawla, N. V. (2018). SMOTE for learning from imbalanced data: Progress and challenges, marking the 15-year anniversary. Journal of Artificial Intelligence Research, 61, 863–905.
Han, D., Moniz, N., & Chawla, N. V. (2024). AnyLoss: Transforming classification metrics into loss functions. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD ’24). ACM.
Long, L., Wang, R., Xiao, R., Zhao, J., Ding, X., Chen, G., & Wang, H. (2024). On LLMs-driven synthetic data generation, curation, and evaluation: A survey. In Findings of the Association for Computational Linguistics: ACL 2024 (pp. 11065–11082). Association for Computational Linguistics.
Han, D., Khvatskii, G., Moniz, N., Hua, T., & Chawla, N. V. (2026). Synthetic data generation via LLM-seeds and SMOTE expansion. Preprint.
Pre-requisites
Introductory machine learning and AI courses.
Short bio
Nitesh Chawla is the Frank M. Freimann Professor of Computer Science and Engineering and the Lucy Family Director of Data & AI Academic Strategy at the University of Notre Dame. His research focuses on artificial intelligence, data science, and network science, and is motivated by the question of how technology can advance the common good through convergence. He is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE), the Association for Computing Machinery (ACM), the American Association for the Advancement of Science (AAAS), and the Association for the Advancement of Artificial Intelligence (AAAI). He is the recipient of multiple awards, including the National Academy of Engineering New Faculty Fellowship, IEEE CIS Outstanding Early Career Award, Rodney F. Ganey Community Impact Award, IBM Big Data & Analytics Faculty Award, IBM Watson Faculty Award, and the 1st Source Bank Technology Commercialization Award.