Doctoral Thesis: Evaluating Robustness of Neural Networks

Event Speaker: 

Lily Weng

Event Location: 

via Zoom, see details below

Event Date/Time: 

Tuesday, June 16, 2020 - 3:00pm

Abstract

The robustness of neural networks to adversarial examples has received great attention due to its security implications. Despite the variety of attack approaches for crafting visually imperceptible adversarial examples, little has been developed towards a comprehensive measure of robustness. This thesis is dedicated to developing several robustness quantification frameworks for deep neural networks against both adversarial and non-adversarial input perturbations, including the first robustness score, CLEVER; the efficient certification algorithms Fast-Lin, CROWN, and CNN-Cert; and the probabilistic robustness verification algorithm PROVEN. Our proposed approaches are computationally efficient and provide high-quality robustness estimates and certificates, as demonstrated by extensive experiments on MNIST, CIFAR, and ImageNet.

Thesis Committee:
Prof. Luca Daniel (Thesis supervisor)
Prof. Tommi Jaakkola
Prof. Alexandre Megretski

To attend this defense, please contact the candidate at twweng at mit dot edu