Li Xiong
[introductory/intermediate] Differential Privacy and Certified Robustness for Deep Learning
Summary
While deep learning models have achieved great success, they are also vulnerable to manipulation, ranging from privacy attacks that attempt to infer sensitive training data from a trained model (e.g., reconstructing faces of the training subjects from a trained face recognition model) to security attacks that attempt to corrupt or deceive a model (e.g., slightly manipulating a stop sign, imperceptibly to humans, to trick an image recognition model into misclassifying it). This course will provide an overview of these attacks, including 1) privacy attacks, such as membership inference attacks, model inversion attacks, and unintended memorization of secrets, and 2) security attacks, including adversarial example attacks at inference time and data poisoning attacks at training time. It will then introduce state-of-the-art defense approaches, including 1) deep learning with differential privacy (DP), a rigorous statistical framework for ensuring privacy of the training data, and 2) empirical defense approaches as well as certified robustness approaches that provide provable guarantees on the robustness of the model. Finally, it will discuss the connections between DP and certified robustness, as well as open research directions.
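As background for the defense side of the course, the standard (epsilon, delta)-differential privacy definition (chapter 3 of the Dwork and Roth book listed under References) can be stated as follows: a randomized mechanism M is (epsilon, delta)-differentially private if, for any two datasets D and D' differing in a single record and any set S of possible outputs,

    \Pr[M(D) \in S] \le e^{\epsilon} \Pr[M(D') \in S] + \delta.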
Syllabus
- Introduction to privacy attacks (membership inference attacks, model inversion attacks, secret sharer)
- Privacy-preserving deep learning (differential privacy, gradient perturbation, objective perturbation, output perturbation, noisy ensemble; gradient perturbation is sketched after this list)
- Introduction to security attacks (adversarial example attacks, poisoning attacks, backdoor attacks; an adversarial example attack is sketched after this list)
- Robust deep learning (detection and reform, adversarial training, certified robustness; randomized smoothing is sketched after this list)
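A minimal sketch of the gradient perturbation idea behind deep learning with differential privacy, in the spirit of the DP-SGD algorithm from the "Deep Learning with Differential Privacy" reference: each example's gradient is clipped in L2 norm, the clipped gradients are averaged, and calibrated Gaussian noise is added. The NumPy code below is illustrative only; dp_sgd_step, clip_norm, and noise_multiplier are names chosen here, not from the course materials, and the privacy accounting (e.g., the moments accountant) is omitted.

    import numpy as np

    def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0, noise_multiplier=1.1):
        """One differentially private SGD step on flat parameter/gradient arrays."""
        clipped = []
        for g in per_example_grads:                    # one flat gradient per training example
            norm = np.linalg.norm(g)                   # L2 norm of this example's gradient
            clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))  # clip to clip_norm
        batch_size = len(clipped)
        noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
        noisy_grad = (np.sum(clipped, axis=0) + noise) / batch_size   # noisy average gradient
        return params - lr * noisy_grad                # standard SGD update with the noisy gradient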
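For the adversarial example attacks covered in the syllabus, a minimal sketch of the fast gradient sign method from the "Explaining and harnessing adversarial examples" reference; loss_grad_fn is a hypothetical helper standing in for whatever framework computes the gradient of the loss with respect to the input.

    import numpy as np

    def fgsm_attack(x, loss_grad_fn, epsilon=0.03):
        """Craft an adversarial example by stepping in the sign of the input gradient."""
        grad_x = loss_grad_fn(x)                   # gradient of the model's loss w.r.t. the input x
        x_adv = x + epsilon * np.sign(grad_x)      # perturbation bounded by epsilon in L-infinity norm
        return np.clip(x_adv, 0.0, 1.0)            # keep pixel values in a valid [0, 1] range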
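On the defense side, certified robustness via randomized smoothing (the ICML 2019 reference) replaces a base classifier with a majority vote over Gaussian-perturbed copies of the input; the vote margin then yields a certified L2 radius. A simplified, prediction-only sketch, assuming a base_classifier function that maps an input array to a class label (the full procedure in the paper adds statistical confidence bounds and abstention, omitted here).

    import numpy as np
    from collections import Counter

    def smoothed_predict(x, base_classifier, sigma=0.25, n_samples=100):
        """Majority-vote prediction of the smoothed classifier g(x) = argmax_c Pr[f(x + noise) = c]."""
        votes = Counter()
        for _ in range(n_samples):
            noisy_x = x + np.random.normal(0.0, sigma, size=x.shape)  # Gaussian input perturbation
            votes[base_classifier(noisy_x)] += 1                      # vote of the base classifier
        return votes.most_common(1)[0][0]                             # most frequent class wins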
References
Membership Inference Attacks Against Machine Learning Models, S&P, 2017
Model inversion attacks that exploit confidence information and basic countermeasures, CCS, 2015
The secret sharer: Evaluating and testing unintended memorization in neural networks, USENIX Security, 2019
Deep Learning with Differential Privacy, CCS, 2016
Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data, ICLR, 2017
The Algorithmic Foundations of Differential Privacy, 2014 (book ch. 3)
Explaining and harnessing adversarial examples, ICLR, 2015
Towards Evaluating the Robustness of Neural Networks, S&P, 2017
Certified robustness to adversarial examples with differential privacy, S&P, 2019
Certified adversarial robustness via randomized smoothing, ICML, 2019
Pre-requisites
Basic knowledge of deep learning, gradient descent, and probability.
Short bio
Li Xiong is a Professor of Computer Science and Biomedical Informatics at Emory University. She held a Winship Distinguished Research Professorship from 2015 to 2018. She has a Ph.D. from the Georgia Institute of Technology, an MS from Johns Hopkins University, and a BS from the University of Science and Technology of China. She and her research lab, Assured Information Management and Sharing (AIMS), conduct research at the intersection of data management, machine learning, and data privacy and security. She has published over 160 papers and received six best paper (or runner-up) awards. She serves or has served as an associate editor for IEEE TKDE, VLDBJ, and IEEE TDSC, and as general or program co-chair for ACM CIKM 2022, IEEE BigData 2020, and ACM SIGSPATIAL 2018 and 2020. She is an IEEE Fellow and an ACM Distinguished Member. More details at http://www.cs.emory.edu/~lxiong.