Tatiana Likhomanenko
[intermediate/advanced] Self-, Weakly-, Semi-Supervised Learning in Speech Recognition [virtual]
Summary
My lectures will provide a deep dive into recent advances in un-/self-/weakly-/semi-supervised learning for speech recognition. We will investigate what models learn from large amounts of unlabeled data. First, I will give an overview of the most popular recent architectures used in end-to-end speech recognition (vanilla transformers and conformers, positional embeddings) and of the data augmentation technique SpecAugment, which has become standard and is crucial for some methods. Then we will discuss the recent large-scale datasets that have played an “ImageNet” role for the ASR domain over the past several years. With these instruments in hand, we will discuss recent advances in self-supervised learning, from the idea of wav2vec to wav2vec-U, and in semi-supervised learning, particularly the popular pseudo-labeling methods: teacher-student training, iterative pseudo-labeling (IPL), language-model-free IPL, and momentum-based pseudo-labeling. Finally, I will cover recent work in weakly-supervised learning which demonstrated that word order is not necessary to train ASR models.
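Since the summary singles out SpecAugment as a crucial ingredient, here is a minimal NumPy sketch of its two masking operations. The original recipe also includes time warping, which is omitted here, and all parameter names and default values are illustrative rather than the paper's:

```python
import numpy as np

def spec_augment(spec, num_time_masks=2, max_time=50,
                 num_freq_masks=2, max_freq=10, rng=None):
    """Apply SpecAugment-style frequency and time masking to a
    log-mel spectrogram of shape (time, freq). Returns a masked copy."""
    rng = rng or np.random.default_rng()
    spec = spec.copy()
    T, F = spec.shape
    for _ in range(num_freq_masks):
        f = rng.integers(0, max_freq + 1)        # mask width in mel bins
        f0 = rng.integers(0, max(1, F - f + 1))  # mask start
        spec[:, f0:f0 + f] = 0.0                 # zero is a common fill value
    for _ in range(num_time_masks):
        t = rng.integers(0, max_time + 1)        # mask width in frames
        t0 = rng.integers(0, max(1, T - t + 1))
        spec[t0:t0 + t, :] = 0.0
    return spec

# Example: mask a random (200 frames x 80 mel bins) spectrogram.
augmented = spec_augment(np.random.randn(200, 80))
```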
Syllabus
- Transformer-based recent end-to-end models for ASR & SpecAugment
- How large-scale datasets led to the boom in semi-/self-/un-/weakly-supervised learning in ASR
- Deep dive into modern self-supervised learning: from wav2vec to wav2vec-U 2.0
- What do we learn with unlabeled data? Deep dive into modern self-training (see the pseudo-labeling sketch after this list)
- Word order is not a requirement for speech recognition
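The self-training topics above center on pseudo-labeling loops such as IPL and slimIPL. Below is a schematic sketch of a language-model-free loop with a pseudo-label cache, in the spirit of slimIPL; `model.train_step` and `model.transcribe` are hypothetical placeholders for a real ASR optimizer step and LM-free decoding, not an actual API:

```python
import random

def slimipl_style_loop(model, labeled, unlabeled, steps,
                       cache_size=100, p_refresh=0.5):
    """Schematic language-model-free pseudo-labeling with a cache.
    `model.train_step` / `model.transcribe` are hypothetical stand-ins
    for an ASR training step and (LM-free) decoding."""
    cache = []  # (audio, pseudo-label) pairs from past model states
    for _ in range(steps):
        model.train_step(random.choice(labeled))      # supervised step
        if len(cache) < cache_size:                   # fill the cache
            audio = random.choice(unlabeled)
            cache.append((audio, model.transcribe(audio)))
        else:
            idx = random.randrange(len(cache))
            audio, pseudo_label = cache[idx]
            model.train_step((audio, pseudo_label))   # self-training step
            if random.random() < p_refresh:
                # refresh with labels from the current model state;
                # training on slightly stale labels stabilizes the loop
                cache[idx] = (audio, model.transcribe(audio))
    return model
```

The cache is the key design choice: the student trains on pseudo-labels produced by earlier model states rather than its instantaneous outputs, which plays a stabilizing role similar to a separate teacher or an EMA teacher in momentum pseudo-labeling.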
References
Un-/self-supervised
Liu, A.H., Hsu, W.N., Auli, M. and Baevski, A., 2022. Towards End-to-end Unsupervised Speech Recognition. arXiv preprint arXiv:2204.02492.
Baevski, A., Hsu, W.N., Conneau, A. and Auli, M., 2021. Unsupervised speech recognition. Advances in Neural Information Processing Systems, 34.
Baevski, A., Hsu, W.N., Xu, Q., Babu, A., Gu, J. and Auli, M., 2022. Data2vec: A general framework for self-supervised learning in speech, vision and language. arXiv preprint arXiv:2202.03555.
Chung, Y.A., Zhang, Y., Han, W., Chiu, C.C., Qin, J., Pang, R. and Wu, Y., 2021. W2v-bert: Combining contrastive learning and masked language modeling for self-supervised speech pre-training. arXiv preprint arXiv:2108.06209.
Hsu, W.N., Bolte, B., Tsai, Y.H.H., Lakhotia, K., Salakhutdinov, R. and Mohamed, A., 2021. Hubert: Self-supervised speech representation learning by masked prediction of hidden units. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29, pp.3451-3460.
Baevski, A., Zhou, Y., Mohamed, A. and Auli, M., 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in Neural Information Processing Systems, 33, pp.12449-12460.
Baevski, A., Schneider, S. and Auli, M., 2019. vq-wav2vec: Self-Supervised Learning of Discrete Speech Representations. International Conference on Learning Representations.
Schneider, S., Baevski, A., Collobert, R. and Auli, M., 2019. wav2vec: Unsupervised pre-training for speech recognition. Interspeech.
Semi-supervised
Zhang, Y., Park, D.S., Han, W., Qin, J., Gulati, A., Shor, J., Jansen, A., Xu, Y., Huang, Y., Wang, S. and Zhou, Z., 2021. Bigssl: Exploring the frontier of large-scale semi-supervised learning for automatic speech recognition. arXiv preprint arXiv:2109.13226.
Higuchi, Y., Moritz, N., Le Roux, J. and Hori, T., 2021. Advancing Momentum Pseudo-Labeling with Conformer and Initialization Strategy. arXiv preprint arXiv:2110.04948.
Higuchi, Y., Moritz, N., Le Roux, J. and Hori, T., 2021. Momentum pseudo-labeling for semi-supervised speech recognition. Interspeech.
Manohar, V., Likhomanenko, T., Xu, Q., Hsu, W.N., Collobert, R., Saraf, Y., Zweig, G. and Mohamed, A., 2021. Kaizen: Continuously improving teacher using exponential moving average for semi-supervised speech recognition. ASRU.
Likhomanenko, T., Xu, Q., Kahn, J., Synnaeve, G. and Collobert, R., 2021. slimIPL: Language-model-free iterative pseudo-labeling. Interspeech.
Xu, Q., Likhomanenko, T., Kahn, J., Hannun, A., Synnaeve, G. and Collobert, R., 2020. Iterative pseudo-labeling for speech recognition. Interspeech.
Chen, Y., Wang, W. and Wang, C., 2020. Semi-supervised asr by end-to-end self-training. Interspeech.
Park, D.S., Zhang, Y., Jia, Y., Han, W., Chiu, C.C., Li, B., Wu, Y. and Le, Q.V., 2020. Improved noisy student training for automatic speech recognition. Interspeech.
Kahn, J., Lee, A. and Hannun, A., 2020. Self-training for end-to-end speech recognition. ICASSP.
Synnaeve, G., Xu, Q., Kahn, J., Likhomanenko, T., Grave, E., Pratap, V., Sriram, A., Liptchinsky, V. and Collobert, R., 2020. End-to-end asr: from supervised to semi-supervised learning with modern architectures. ICML SAS workshop.
Weakly-supervised
Pratap, V., Xu, Q., Likhomanenko, T., Synnaeve, G. and Collobert, R., 2022. Word Order Does Not Matter For Speech Recognition. ICASSP.
Palaz, D., Synnaeve, G. and Collobert, R., 2016. Jointly Learning to Locate and Classify Words Using Convolutional Networks. Interspeech.
Pre-requisites
Linear algebra, Fourier analysis, speech/signal processing, deep learning.
Short bio
I am a Research Scientist in the Machine Learning Research team at Apple. Prior to Apple, I was an AI Resident and later a Postdoctoral Research Scientist in the speech recognition team at Facebook AI Research. I received a Ph.D. in mixed-type partial differential equations from Moscow State University. For four years I worked on applications of machine learning to high energy physics as a Research Scientist in the joint lab of Yandex and CERN, and later at NTechLab, a startup and leader in face recognition. The main focus of my recent research is transformer generalization and speech recognition (semi-, weakly- and unsupervised learning, domain transfer, and robustness). You can see some of my publications at https://scholar.google.com/citations?user=x7Z3ysQAAAAJ&hl.