Bohyung Han
[introductory/intermediate] Robust Deep Learning
Summary
Deep learning has achieved remarkable success in various fields, but trained models still suffer from critical limitations, including overfitting, overconfidence, and bias. These issues often lead to unexpected results and make trained models unreliable, so it is extremely important to handle them effectively. This course first discusses various issues in datasets, training algorithms, and deep learning models, and reviews existing efforts to overcome such limitations. Specifically, it covers techniques for better training and inference such as regularization, confidence calibration, and debiasing. I will also present a few tasks that require robust deep neural networks, such as continual learning, out-of-distribution detection, learning with label noise, and domain adaptation.
Syllabus
- Limitations of deep neural networks: overfitting, overconfidence, and bias
- Robustifying deep neural networks: regularization (see the sketch after this list), confidence calibration, debiasing
- Task-specific approaches: continual learning, out-of-distribution detection, learning with label noise, and domain adaptation
- Applications: visual question answering
References
- N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, R. Salakhutdinov: Dropout: A Simple Way to Prevent Neural Networks from Overfitting. JMLR 2014
- S. Ioffe, C. Szegedy: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. ICML 2015
- G. Huang, Y. Sun, Z. Liu, D. Sedra, K. Weinberger: Deep Networks with Stochastic Depth. ECCV 2016
- J. Goldberger, E. Ben-Reuven: Training Deep Neural-Networks Using a Noise Adaptation Layer. ICLR 2017
- P. H. Seo, G. Kim, B. Han: Combinatorial Inference against Label Noise. NeurIPS 2019
- Z. Li, D. Hoiem: Learning without Forgetting. ECCV 2016
- A. Douillard, M. Cord, C. Ollion, T. Robert, E. Valle: PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning. ECCV 2020
- H. Noh, T. You, J. Mun, B. Han: Regularizing Deep Neural Networks by Noise: Its Interpretation and Optimization. NIPS 2017
- S. Seo, P. H. Seo, B. Han: Confidence Calibration in Deep Neural Networks through Stochastic Inferences. CVPR 2019
- C. Guo, G. Pleiss, Y. Sun, K. Q. Weinberger: On Calibration of Modern Neural Networks. ICML 2017
- M. Ren, W. Zeng, B. Yang, R. Urtasun: Learning to Reweight Examples for Robust Deep Learning. ICML 2018
- S. Sagawa, P. W. Koh, T. B. Hashimoto, P. Liang: Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization. ICLR 2020
- J. Nam, H. Cha, S. Ahn, J. Lee, J. Shin: Learning from Failure: Training Debiased Classifier from Biased Classifier. NeurIPS 2020
- S. Seo, J.-Y. Lee, B. Han: Unsupervised Learning of Debiased Representations with Pseudo-Attributes. arXiv 2021
- K. Lee, H. Lee, K. Lee, J. Shin: Training Confidence-Calibrated Classifiers for Detecting Out-of-Distribution Samples. ICLR 2018
Pre-requisites
Basic machine learning and mathematics at the level of an undergraduate degree in computer science (multivariate calculus, probability theory, and linear algebra).
Short bio
Bohyung Han is a Professor in the Department of Electrical and Computer Engineering at Seoul National University, Korea. Prior to his current position, he was an Associate Professor in the Department of Computer Science and Engineering at POSTECH and a visiting research scientist in the Machine Intelligence Group at Google, Venice, CA, USA. He received his Ph.D. from the Department of Computer Science at the University of Maryland, College Park, MD, USA, in 2005. He has served or will serve as a Senior Area Chair for ICLR 2022; an Area Chair for CVPR, ICCV, ECCV, NIPS/NeurIPS, ICLR, and IJCAI; a General Chair for ACCV 2022; a Tutorial Chair for ICCV 2019; a Workshop Chair for CVPR 2021; and a Demo Chair for ECCV 2022. He also serves as an Associate Editor of TPAMI and MVA, and an Area Editor of CVIU. He is interested in various topics in computer vision and machine learning, with an emphasis on deep learning. His research group won the Visual Object Tracking (VOT) Challenge in 2015 and 2016.