Babak Ehteshami Bejnordi
[intermediate/advanced] Conditional Computation for Efficient Deep Learning with Applications to Computer Vision, Multi-Task Learning, and Continual Learning
Summary
To date, most AI systems have focused on solving a single task or a narrow domain at a time. An AI system that can solve hundreds of tasks promises increased efficiency, better generalization, and greater adaptability, making it a powerful tool for a wide range of applications. However, solving hundreds of tasks requires a model with very large capacity. With current dense architectures, which activate the whole network regardless of the input or task, this is prohibitively expensive.
Conditional computation is a technique that enables neural networks to perform different computations depending on the input or task at hand. Each input is routed through different branches or subnetworks so that only the relevant parts of the network are executed. Conditional computation offers several benefits. It reduces training and inference costs, because only a fraction of the model's branches is used for each example. Moreover, because such models are modular and can activate or deactivate individual branches, they are easier to modify or extend, which makes them particularly well suited to multi-task learning and continual learning settings. Finally, conditional computation can aid interpretability by making it easier to understand how the model reaches its decisions and which factors matter most for a particular task.
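As a concrete illustration of input-dependent routing, below is a minimal sketch of a sparsely gated mixture-of-experts layer in the spirit of Shazeer et al. (2017), listed in the references. It is illustrative only: the class name, layer sizes, and top-k routing rule are assumptions for the sketch, not the specific architectures covered in the course.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKMoELayer(nn.Module):
    """Sparsely gated mixture-of-experts layer (illustrative sketch).

    A gating network scores all experts for each input, but only the top-k
    experts are executed, so per-example compute grows with k rather than
    with the total number of experts.
    """

    def __init__(self, dim: int, num_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
             for _ in range(num_experts)]
        )
        self.gate = nn.Linear(dim, num_experts)  # router: input -> expert scores

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, dim)
        scores = self.gate(x)                             # (batch, num_experts)
        topk_scores, topk_idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(topk_scores, dim=-1)          # normalize over selected experts only
        out = torch.zeros_like(x)
        # Run each expert only on the examples that were routed to it.
        for e, expert in enumerate(self.experts):
            mask = topk_idx == e               # (batch, k): where expert e was selected
            rows = mask.any(dim=-1)            # (batch,): examples routed to expert e
            if rows.any():
                w = weights[mask].unsqueeze(-1)           # gate weight per routed example
                out[rows] = out[rows] + w * expert(x[rows])
        return out


# Example: each of the 32 inputs activates only 2 of the 8 experts.
layer = TopKMoELayer(dim=64)
y = layer(torch.randn(32, 64))
```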
Syllabus
- Introduction to conditional computation
- Various approaches to implementing conditional computation, such as mixture of experts, early exiting, and dynamic token pruning (a minimal early-exit sketch follows this list)
- Conditional computation in language and vision
- Various routing algorithms to learn how to route the input to the appropriate pathway in the network
- The chicken-and-egg problem of simultaneously learning expert modules and learning how to route
- Example state-of-the-art solutions
- An introduction to multi-task learning (MTL) and continual learning (CL)
- The issue of task interference
- How conditional computation enables learning more efficient and accurate MTL models
- State-of-the-art conditional compute models for MTL and CL
- Summary
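To make the early-exiting item above concrete, here is a minimal, hypothetical sketch in the spirit of BranchyNet (Teerapittayanon et al., 2017, in the references): every block is followed by an auxiliary classifier, and at inference an example leaves through the first exit whose softmax confidence clears a threshold. The layer sizes, confidence rule, and threshold value are illustrative assumptions; in practice, training would typically minimize a weighted sum of the losses at all exits.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EarlyExitNet(nn.Module):
    """Chain of blocks with an auxiliary classifier ("exit") after each one.

    At inference, an example exits at the first classifier whose confidence
    clears the threshold, skipping all remaining blocks (illustrative sketch).
    """

    def __init__(self, dim: int = 64, num_classes: int = 10,
                 num_blocks: int = 3, threshold: float = 0.9):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(num_blocks)]
        )
        self.exits = nn.ModuleList(
            [nn.Linear(dim, num_classes) for _ in range(num_blocks)]
        )
        self.threshold = threshold  # confidence needed to stop early (assumed value)

    @torch.no_grad()
    def forward(self, x: torch.Tensor):  # x: (1, dim), a single example
        for depth, (block, exit_head) in enumerate(zip(self.blocks, self.exits)):
            x = block(x)
            probs = F.softmax(exit_head(x), dim=-1)
            conf, pred = probs.max(dim=-1)
            if conf.item() >= self.threshold:  # confident enough: exit here
                return pred, depth
        return pred, depth  # fell through to the final exit


net = EarlyExitNet()
label, exit_depth = net(torch.randn(1, 64))  # exit_depth shows where the example left
```

Lowering the threshold saves more compute at the cost of accuracy, which is the central accuracy/efficiency trade-off that early-exit methods tune.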
References
Bengio, Yoshua, Nicholas Léonard, and Aaron Courville. “Estimating or propagating gradients through stochastic neurons for conditional computation.” arXiv preprint arXiv:1308.3432 (2013).
Bengio, Emmanuel, et al. “Conditional computation in neural networks for faster models.” arXiv preprint arXiv:1511.06297 (2015).
Shazeer, Noam, et al. “Outrageously large neural networks: The sparsely-gated mixture-of-experts layer.” Proceedings of the International Conference on Learning Representations. ICLR 2017.
Veit, Andreas, and Serge Belongie. “Convolutional networks with adaptive inference graphs.” Proceedings of the European Conference on Computer Vision. ECCV 2018.
Ehteshami Bejnordi, Babak, Tijmen Blankevoort, and Max Welling. “Batch-shaping for learning conditional channel gated networks.” Proceedings of the International Conference on Learning Representations. ICLR 2020.
Teerapittayanon, Surat, Bradley McDanel, and H. T. Kung. “BranchyNet: Fast inference via early exiting from deep neural networks.” arXiv preprint arXiv:1709.01686 (2017).
Ghodrati, Amir, Babak Ehteshami Bejnordi, and Amirhossein Habibian. “FrameExit: Conditional early exiting for efficient video recognition.” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
Raychaudhuri, Dripta S., et al. “Controllable Dynamic Multi-Task Architectures.” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
Lin, Min, Jie Fu, and Yoshua Bengio. “Conditional computation for continual learning.” arXiv preprint arXiv:1906.06635 (2019).
Abati, Davide, et al. “Conditional channel gated networks for task-aware continual learning.” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
Barham, Paul, et al. “Pathways: Asynchronous distributed dataflow for ML.” Proceedings of Machine Learning and Systems 4 (2022): 430-449.
Pre-requisites
Basic knowledge of machine learning and deep learning. A basic understanding of transformers and convolutional neural networks is preferred but not mandatory.
Short bio
https://scholar.google.com/citations?user=Qk-AMk0AAAAJ&hl=en
https://www.linkedin.com/in/babakint/
Babak Ehteshami Bejnordi is a Research Scientist at Qualcomm AI Research in the Netherlands, where he leads a research group focusing on conditional computation for efficient deep learning. His research interests are in conditional computation, efficient deep learning for computer vision, multi-task learning, and continual learning. Babak obtained his Ph.D. in machine learning for breast cancer diagnosis from Radboud University in the Netherlands. During his Ph.D., he organized the CAMELYON16 challenge on breast cancer metastases detection, which demonstrated one of the first medical diagnostic tasks in which AI algorithms outperformed expert pathologists. Before joining Qualcomm, he was a visiting researcher at Harvard University (BeckLab) and a member of the Broad Institute of MIT and Harvard. He has been the organizer of the Qualcomm Innovation Fellowship Program in Europe since 2019.