Tinne Tuytelaars
[introductory/intermediate] Continual Learning in Deep Neural Networks
Summary
In this tutorial, we deviate from the standard (deep) machine learning paradigm, where a model is trained once on all available training data and then deployed without ever being updated again. Instead, we explore continual learning, where the goal is to train a model sequentially on different tasks or different data distributions. For instance, after training a model to detect pedestrians under clear weather conditions, we want to extend it to also work when it rains. Or, after training a model to detect the most common object categories for a given application, we want to update it so it can also recognise a new set of classes. The challenge is to update the model so it incorporates the new knowledge without forgetting previously learned knowledge, a failure mode known as catastrophic forgetting. This is an active research field, with many open questions but also a few relatively simple and practical solutions. These will be demonstrated in the context of image analysis tasks (mostly image classification).
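To make the setting concrete, here is a minimal sketch of naive sequential fine-tuning in PyTorch. The model, the two task names and make_loader (which here just yields random tensors) are hypothetical placeholders standing in for a real pipeline, not code from the tutorial.

```python
import torch
import torch.nn as nn

def make_loader(task, n_batches=100):
    # Hypothetical stand-in for a real per-task data loader (random data).
    for _ in range(n_batches):
        yield torch.randn(32, 784), torch.randint(0, 10, (32,))

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for task in ["clear_weather", "rain"]:   # tasks arrive one after the other
    for x, y in make_loader(task):       # only the current task's data is available
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    # Plain fine-tuning like this overwrites weights that mattered for earlier
    # tasks, so accuracy on "clear_weather" typically drops after training on
    # "rain": catastrophic forgetting.
```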
Syllabus
- Introduction: why continual learning (CL)?; commonalities and differences with related fields (transfer learning, domain adaptation, few-shot learning, meta-learning, …)
- Basics: terminology of CL; different CL setups: task-incremental, class-incremental, domain-incremental and beyond; evaluation metrics; datasets
- Regularisation-based CL methods: knowledge distillation, elastic weight consolidation (EWC); a minimal EWC-style sketch follows this list
- Parameter isolation-based CL methods
- Replay-based CL methods: experience replay, generative replay, gradient-based methods; a minimal replay sketch follows this list
- Current trends beyond solving toy problems
- Open issues
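As an illustration of the regularisation-based family, below is a minimal sketch of an EWC-style penalty. It uses the empirical Fisher (squared gradients on ground-truth labels), a common approximation of the true Fisher information; the helper names and the weight lam are illustrative choices, not the tutorial's reference implementation.

```python
import torch

def fisher_diagonal(model, batches, loss_fn):
    # Diagonal Fisher estimate: average squared gradient per parameter,
    # computed on data from the task that was just finished.
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    count = 0
    for x, y in batches:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        count += 1
    return {n: f / max(count, 1) for n, f in fisher.items()}

def ewc_penalty(model, fisher, old_params, lam=100.0):
    # Quadratic pull towards the old parameter values, weighted by how
    # important each parameter was for the previous task.
    penalty = sum((fisher[n] * (p - old_params[n]) ** 2).sum()
                  for n, p in model.named_parameters())
    return 0.5 * lam * penalty

# After finishing a task: snapshot parameters and importances.
#   old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
#   fisher = fisher_diagonal(model, old_task_batches, loss_fn)
# While training the next task, optimise:
#   loss = loss_fn(model(x), y) + ewc_penalty(model, fisher, old_params)
```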
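And for the replay-based family, a minimal experience-replay sketch: a small reservoir-sampled memory of past examples is mixed into each new batch. Buffer capacity, batch sizes and the surrounding setup (the same hypothetical one as in the first sketch, repeated so the snippet runs on its own) are again illustrative.

```python
import random
import torch
import torch.nn as nn

class ReplayBuffer:
    # Small memory of past examples, filled by reservoir sampling so it
    # remains an approximately uniform sample of everything seen so far.
    def __init__(self, capacity=500):
        self.capacity, self.data, self.seen = capacity, [], 0

    def add(self, xs, ys):
        for x, y in zip(xs, ys):
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append((x, y))
            else:
                j = random.randrange(self.seen)
                if j < self.capacity:
                    self.data[j] = (x, y)

    def sample(self, k):
        batch = random.sample(self.data, min(k, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def make_loader(task, n_batches=100):   # hypothetical stand-in, as before
    for _ in range(n_batches):
        yield torch.randn(32, 784), torch.randint(0, 10, (32,))

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
buffer = ReplayBuffer()

for task in ["clear_weather", "rain"]:
    for x, y in make_loader(task):
        buffer.add(x, y)                 # remember a subset of what we see
        if len(buffer.data) >= 16:
            xr, yr = buffer.sample(16)   # rehearse stored old examples
            x, y = torch.cat([x, xr]), torch.cat([y, yr])
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
```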
References
M De Lange, R Aljundi, M Masana, S Parisot, X Jia, A Leonardis, G Slabaugh, T Tuytelaars: A continual learning survey: Defying forgetting in classification tasks, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022. https://arxiv.org/pdf/1909.08383
G Van de Ven, T Tuytelaars, A Tolias: Three types of incremental learning, Nature Machine Intelligence, 2022. https://www.nature.com/articles/s42256-022-00568-3
M Masana, X Liu, B Twardowski, M Menta, AD Bagdanov, J van de Weijer: Class-incremental learning: survey and performance evaluation on image classification, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022. https://arxiv.org/pdf/2010.15277
Z Mai, R Li, J Jeong, D Quispe, H Kim, S Sanner: Online continual learning in image classification: An empirical survey, Neurocomputing 469, 28–51, 2022. https://arxiv.org/pdf/2101.10423
Pre-requisites
Basic knowledge of machine learning and deep learning.
Short bio
Tinne Tuytelaars is a full professor at KU Leuven, Belgium, working on computer vision. Her core research interests are continual learning, representation learning and multimodal learning. She has been program co-chair for ECCV14 and CVPR21, and general co-chair for CVPR16. She served as associate editor-in-chief of the IEEE Transactions on Pattern Analysis and Machine Intelligence from 2014 to 2018. She was awarded an ERC Starting Grant in 2009 and an ERC Advanced Grant in 2021, and received the Koenderink test-of-time award at ECCV16.