DeepLearn 2022 Autumn
7th International School
on Deep Learning
Luleå, Sweden · October 17-21, 2022

Tatiana Likhomanenko

Apple

[intermediate/advanced] Self-, Weakly-, Semi-Supervised Learning in Speech Recognition [virtual]

Summary

My lectures will provide a deep dive into recent advances in un-, self-, weakly- and semi-supervised learning for speech recognition, and will investigate what models actually learn from large amounts of unlabeled data. First, I will give an overview of the most popular recent architectures used in end-to-end speech recognition (vanilla transformers and conformers, positional embeddings) and of SpecAugment, a data augmentation technique that has become standard and is crucial for some of these methods. Then we will discuss the recent large-scale datasets that have played an "ImageNet" role for the ASR domain over the past several years. With these instruments in hand, we will cover recent advances in self-supervised learning, from the original idea of wav2vec to wav2vec-U, and in semi-supervised learning, in particular the popular pseudo-labeling family: teacher-student training, iterative pseudo-labeling (IPL), language-model-free IPL, and momentum-based pseudo-labeling. Finally, I will cover recent work in weakly-supervised learning which demonstrated that word order is not necessary to train ASR models.
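Since SpecAugment recurs throughout these lectures, here is a minimal NumPy sketch of its two masking operations, time masking and frequency masking, applied to a log-mel spectrogram. The function and parameter names and the default mask widths are illustrative assumptions, not the values from the original recipe or any particular toolkit, and the time-warping step is omitted.

```python
# Illustrative sketch of SpecAugment-style masking on a (time, freq) log-mel
# spectrogram. Mask counts and widths are hypothetical defaults, not canonical values.
import numpy as np

def spec_augment(spec, num_time_masks=2, max_time_width=30,
                 num_freq_masks=2, max_freq_width=13, rng=None):
    """Randomly mask contiguous time steps and mel channels of a spectrogram."""
    rng = rng or np.random.default_rng()
    spec = spec.copy()
    num_frames, num_mels = spec.shape
    fill = spec.mean()                         # masked regions replaced by the mean value
    for _ in range(num_time_masks):            # time masking
        width = rng.integers(0, max_time_width + 1)
        start = rng.integers(0, max(1, num_frames - width))
        spec[start:start + width, :] = fill
    for _ in range(num_freq_masks):            # frequency masking
        width = rng.integers(0, max_freq_width + 1)
        start = rng.integers(0, max(1, num_mels - width))
        spec[:, start:start + width] = fill
    return spec

# Example: augment a random 1000-frame, 80-mel "spectrogram".
augmented = spec_augment(np.random.randn(1000, 80))
```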

Syllabus

  • Recent transformer-based end-to-end models for ASR & SpecAugment
  • How large-scale datasets led to the boom in semi-/self-/un-/weakly-supervised learning in ASR
  • Deep dive into modern self-supervised learning: from wav2vec to wav2vec-U 2.0
  • What do we learn with unlabeled data? Deep dive into modern self-training (a schematic pseudo-labeling loop is sketched after this list)
  • Word order is not a requirement for speech recognition
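
As referenced in the self-training item above, the sketch below shows the basic pseudo-labeling recipe in a deliberately toy, runnable form: train a seed model on labeled data, let it label the unlabeled pool, then retrain on the union. The `NearestCentroid` class and the synthetic 2-D points are hypothetical stand-ins for an acoustic model and audio/transcript pairs; real ASR pipelines add beam-search decoding, an optional language model, and confidence-based filtering of pseudo-labels. The variants covered in the lectures (teacher-student, IPL, slimIPL, momentum pseudo-labeling) differ mainly in how often pseudo-labels are regenerated and whether the teacher is a fixed model, the current student, or an exponential moving average of it.

```python
# Schematic self-training / pseudo-labeling loop. The "model" is a toy
# nearest-centroid classifier standing in for an acoustic model.
import numpy as np

class NearestCentroid:
    """Toy stand-in for an acoustic model: fit class centroids, predict by distance."""
    def fit(self, x, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([x[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, x):
        dists = np.linalg.norm(x[:, None, :] - self.centroids_[None, :, :], axis=-1)
        return self.classes_[dists.argmin(axis=1)]

def pseudo_label_rounds(model, x_lab, y_lab, x_unlab, rounds=3):
    """Retrain the model for several rounds on labeled + pseudo-labeled data."""
    model.fit(x_lab, y_lab)                        # seed model: labeled data only
    for _ in range(rounds):
        y_pseudo = model.predict(x_unlab)          # "transcribe" the unlabeled pool
        x_all = np.concatenate([x_lab, x_unlab])   # union of labeled and pseudo-labeled
        y_all = np.concatenate([y_lab, y_pseudo])
        model.fit(x_all, y_all)                    # student retrained from the mix
    return model

# Tiny synthetic demo: two Gaussian blobs, 20 labeled and 500 unlabeled points.
rng = np.random.default_rng(0)
x_lab = np.concatenate([rng.normal(-2, 1, (10, 2)), rng.normal(2, 1, (10, 2))])
y_lab = np.array([0] * 10 + [1] * 10)
x_unlab = np.concatenate([rng.normal(-2, 1, (250, 2)), rng.normal(2, 1, (250, 2))])
model = pseudo_label_rounds(NearestCentroid(), x_lab, y_lab, x_unlab)
print(model.predict(np.array([[-2.0, 0.0], [2.0, 0.0]])))  # expected: [0 1]
```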

References

Un/self-supervised

Liu, A.H., Hsu, W.N., Auli, M. and Baevski, A., 2022. Towards End-to-end Unsupervised Speech Recognition. arXiv preprint arXiv:2204.02492.

Baevski, A., Hsu, W.N., Conneau, A. and Auli, M., 2021. Unsupervised speech recognition. Advances in Neural Information Processing Systems, 34.

Baevski, A., Hsu, W.N., Xu, Q., Babu, A., Gu, J. and Auli, M., 2022. Data2vec: A general framework for self-supervised learning in speech, vision and language. arXiv preprint arXiv:2202.03555.

Chung, Y.A., Zhang, Y., Han, W., Chiu, C.C., Qin, J., Pang, R. and Wu, Y., 2021. w2v-BERT: Combining contrastive learning and masked language modeling for self-supervised speech pre-training. arXiv preprint arXiv:2108.06209.

Hsu, W.N., Bolte, B., Tsai, Y.H.H., Lakhotia, K., Salakhutdinov, R. and Mohamed, A., 2021. HuBERT: Self-supervised speech representation learning by masked prediction of hidden units. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29, pp.3451-3460.

Baevski, A., Zhou, Y., Mohamed, A. and Auli, M., 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in Neural Information Processing Systems, 33, pp.12449-12460.

Baevski, A., Schneider, S. and Auli, M., 2019, September. vq-wav2vec: Self-Supervised Learning of Discrete Speech Representations. In International Conference on Learning Representations.

Schneider, S., Baevski, A., Collobert, R. and Auli, M., 2019. wav2vec: Unsupervised pre-training for speech recognition. Interspeech.

Semi-supervised

Zhang, Y., Park, D.S., Han, W., Qin, J., Gulati, A., Shor, J., Jansen, A., Xu, Y., Huang, Y., Wang, S. and Zhou, Z., 2021. BigSSL: Exploring the frontier of large-scale semi-supervised learning for automatic speech recognition. arXiv preprint arXiv:2109.13226.

Higuchi, Y., Moritz, N., Roux, J.L. and Hori, T., 2021. Advancing Momentum Pseudo-Labeling with Conformer and Initialization Strategy. arXiv preprint arXiv:2110.04948.

Higuchi, Y., Moritz, N., Roux, J.L. and Hori, T., 2021. Momentum pseudo-labeling for semi-supervised speech recognition. Interspeech.

Manohar, V., Likhomanenko, T., Xu, Q., Hsu, W.N., Collobert, R., Saraf, Y., Zweig, G. and Mohamed, A., 2021. Kaizen: Continuously improving teacher using exponential moving average for semi-supervised speech recognition. ASRU.

Likhomanenko, T., Xu, Q., Kahn, J., Synnaeve, G. and Collobert, R., 2021. slimIPL: Language-model-free iterative pseudo-labeling. Interspeech.

Xu, Q., Likhomanenko, T., Kahn, J., Hannun, A., Synnaeve, G. and Collobert, R., 2020. Iterative pseudo-labeling for speech recognition. Interspeech.

Chen, Y., Wang, W. and Wang, C., 2020. Semi-supervised ASR by end-to-end self-training. Interspeech.

Park, D.S., Zhang, Y., Jia, Y., Han, W., Chiu, C.C., Li, B., Wu, Y. and Le, Q.V., 2020. Improved noisy student training for automatic speech recognition. Interspeech.

Kahn, J., Lee, A. and Hannun, A., 2020, May. Self-training for end-to-end speech recognition. ICASSP.

Synnaeve, G., Xu, Q., Kahn, J., Likhomanenko, T., Grave, E., Pratap, V., Sriram, A., Liptchinsky, V. and Collobert, R., 2020. End-to-end ASR: from supervised to semi-supervised learning with modern architectures. ICML SAS workshop.

Weakly-supervised

Pratap, V., Xu, Q., Likhomanenko, T., Synnaeve, G. and Collobert, R., 2022. Word Order Does Not Matter For Speech Recognition. ICASSP.

Palaz, D., Synnaeve, G. and Collobert, R., 2016, January. Jointly Learning to Locate and Classify Words Using Convolutional Networks. Interspeech.

Pre-requisites

Linear algebra, Fourier analysis, speech/signal processing, deep learning.

Short bio

I am a Research Scientist in the Machine Learning Research team at Apple. Prior to Apple, I was an AI Resident and later a Postdoctoral Research Scientist in the speech recognition team at Facebook AI Research. Back in the day, I received a Ph.D. in mixed-type partial differential equations from Moscow State University. For four years I worked on applications of machine learning to high-energy physics as a Research Scientist in the joint lab of Yandex and CERN, and later at the startup NTechLab, a leader in face recognition. The main focus of my recent research is transformer generalization and speech recognition (semi-, weakly- and unsupervised learning, domain transfer and robustness). You can see some of my publications at https://scholar.google.com/citations?user=x7Z3ysQAAAAJ&hl.

Other Courses

Tommaso Dorigo
Elaine O. Nsoesie
Sean Benson
Thomas Breuel
Hao Chen
Jianlin Cheng
Nadya Chernyavskaya
Efstratios Gavves
Quanquan Gu
Jiawei Han
Awni Hannun
Tin Kam Ho
Timothy Hospedales
Shih-Chieh Hsu
Othmane Rifki
Mayank Vatsa
Yao Wang
Zichen Wang
Alper Yilmaz
