Jacob Goldberger
[introductory/intermediate] Calibration Methods for Neural Networks
Summary
Deep learning algorithms can provide confidence scores that are intended to reflect the likelihood that their predictions are correct. However, these models often fail to accurately estimate the uncertainty of their predictions, which is a serious problem in applications such as decision-making systems, medical diagnosis, and autonomous driving. In this tutorial, we will explore methods for identifying and measuring miscalibration in a model. We will discuss commonly used parametric calibration methods, such as temperature scaling for classification networks and variance scaling for regression networks, as well as the non-parametric calibration method known as conformal prediction.
Syllabus
- The concept of calibration and sources of miscalibration.
- Calibration evaluation measures for classification networks, such as the Expected Calibration Error (ECE) and its adaptive variants (a minimal ECE sketch appears after this list).
- Parametric calibration methods for binary and multiclass classification networks, such as temperature scaling, vector scaling and matrix scaling, which were designed specifically for deep neural networks (temperature scaling is illustrated in the first sketch below).
- Calibration evaluation measures and calibration methods for regression networks (see the variance-scaling sketch below).
- Non-parametric calibration methods such as conformal prediction (see the final sketch below).
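To make the classification items above concrete, here is a minimal sketch of ECE and temperature scaling, assuming validation-set logits and labels are already available as NumPy arrays. The 15-bin choice, the synthetic data, and the use of scipy.optimize.minimize_scalar are illustrative assumptions, not material from the tutorial itself.

```python
# Minimal sketch: Expected Calibration Error (ECE) and temperature scaling.
# Assumes validation logits/labels are given as NumPy arrays; the 15-bin
# ECE and the bounded 1-D search for T are illustrative choices.
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def expected_calibration_error(probs, labels, n_bins=15):
    """Bin predictions by confidence and average the |confidence - accuracy|
    gap over bins, weighted by the fraction of samples in each bin."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    correct = (pred == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

def fit_temperature(logits, labels):
    """Find the scalar T > 0 minimizing the validation negative
    log-likelihood of softmax(logits / T), i.e. temperature scaling."""
    def nll(t):
        p = softmax(logits / t)
        return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    return minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded").x

# Usage on synthetic, deliberately overconfident logits:
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=2000)
logits = rng.normal(size=(2000, 10))
logits[np.arange(2000), labels] += 2.0  # make the true class the likely one
logits *= 3.0                           # inflate confidence
T = fit_temperature(logits, labels)
print("ECE before:", expected_calibration_error(softmax(logits), labels))
print("ECE after: ", expected_calibration_error(softmax(logits / T), labels))
```

Because temperature scaling divides all logits by the same scalar, it changes the confidences without changing the argmax, so it corrects calibration while leaving accuracy untouched.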
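For the regression item, a natural one-parameter analogue rescales the predicted standard deviations. Assuming the network outputs a mean and a standard deviation per example (the setting studied by Levi et al., 2022), minimizing the Gaussian negative log-likelihood over a single scale s has a closed form; the function name and the synthetic usage below are illustrative.

```python
# Minimal sketch of variance scaling for a regression network that outputs
# a mean mu and standard deviation sigma per input. Setting the derivative
# of the Gaussian NLL of N(mu, (s*sigma)^2) to zero gives
#   s^2 = mean(((y - mu) / sigma)^2).
import numpy as np

def fit_variance_scale(mu, sigma, y):
    return np.sqrt(np.mean(((y - mu) / sigma) ** 2))

# Usage on synthetic, overconfident predictions:
rng = np.random.default_rng(1)
y = rng.normal(size=1000)
mu = y + rng.normal(scale=0.5, size=1000)  # true residual std is 0.5
sigma = np.full(1000, 0.1)                 # network claims std 0.1
s = fit_variance_scale(mu, sigma, y)       # s is about 5, so s*sigma is about 0.5
```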
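Finally, a minimal sketch of split conformal prediction for classification, following the generic recipe in Angelopoulos and Bates (2021): score each held-out calibration example by one minus the softmax probability of its true class, take a finite-sample-corrected quantile of the scores, and output, for each test input, the set of classes that pass the threshold. The score function and variable names are illustrative choices.

```python
# Minimal sketch of split conformal prediction for classification.
# cal_probs/test_probs are softmax outputs; the nonconformity score
# (1 - probability of the true class) is one common option.
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Return prediction sets with roughly (1 - alpha) marginal coverage."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level ceil((n+1)(1-alpha))/n,
    # clipped to 1.0 to guard against very small calibration sets.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(scores, q_level, method="higher")
    # A class enters the set when its own score is below the threshold.
    return [np.where(1.0 - p <= qhat)[0] for p in test_probs]

# Usage (hypothetical arrays):
# sets = conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1)
```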
References
Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger, “On calibration of modern neural networks,” in International Conference on Machine Learning (ICML), 2017.
Dan Levi, Liran Gispan, Niv Giladi, and Ethan Fetaya, “Evaluating and calibrating uncertainty prediction in regression tasks,” Sensors, vol. 22, no. 15, p. 5540, 2022.
Lior Frenkel and Jacob Goldberger, “Calibration of medical imaging classification systems with weight scaling,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2022.
Anastasios N. Angelopoulos and Stephen Bates, “A gentle introduction to conformal prediction and distribution-free uncertainty quantification,” arXiv preprint arXiv:2107.07511, 2021.
Pre-requisites
Basic knowledge of deep learning and statistics.
Short bio
Jacob Goldberger is a Full Professor at Bar-Ilan University. He received his PhD from Tel Aviv University in 1998, working on statistical models for speech recognition. His research deals with developing and analyzing deep learning algorithms and applying them to a wide variety of areas, including computer vision, speech processing, medical imaging and natural language processing. In recent years his research has focused on the problems of training with noisy labels, unsupervised domain adaptation, and calibration of the decision confidence of classification and regression networks. Dr. Goldberger has published about 150 papers in journals and peer-reviewed conferences.