Doctoral Thesis: Understanding and Improving Representational Robustness of Machine Learning Models
By: Ching Yun (Irene) Ko
Supervisor: Luca Daniel
Thesis Committee: Luca Daniel, Duane Boning, and Pin-Yu Chen
Details
- Date: Friday, May 3
- Time: 10:00 am - 11:30 am
- Location: Haus Room (36-428)
Abstract
The fragility of modern machine learning models has drawn considerable attention from both academia and the public. In this presentation, we give a systematic study of understanding and improving the representational robustness of machine learning models. Specifically, we define representational robustness as the ability of neural network representations to maintain desirable trustworthy properties, such as accuracy, fairness, and robustness, and we focus on the interplay among these properties. For a smoothed network, we discover that the certifiable robustness of randomized smoothing comes at the cost of class-wise unfairness. For a generic non-smoothed network, we establish a link between self-supervised contrastive learning and supervised neighborhood component analysis, which naturally leads us to propose a general framework that achieves better accuracy and robustness. Lastly, we observe that the current practice for evaluating foundational representation models involves extensive experiments across various real-world tasks, which are computationally expensive and prone to test-set leakage. As a solution, we propose a more lightweight, privacy-preserving, and sound evaluation framework for both vision and language models that utilizes synthetic data.
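To make the randomized smoothing setting mentioned above concrete, below is a minimal sketch of a smoothed classifier's majority-vote prediction together with the plug-in certified L2 radius from Cohen, Rosenfeld, and Kolter (2019), not the thesis's own method. The base classifier `f`, the noise level `sigma`, and the sample count `n` are illustrative placeholders; a rigorous certificate would replace the empirical top-class probability with a high-confidence lower bound.

```python
import numpy as np
from scipy.stats import norm

def smoothed_predict(f, x, sigma=0.25, n=1000, num_classes=10, rng=None):
    """Majority-vote prediction of the smoothed classifier
    g(x) = argmax_c P(f(x + eps) = c), with eps ~ N(0, sigma^2 I).

    `f` maps an input array to an integer class label; `sigma`, `n`,
    and `num_classes` are illustrative placeholder values.
    """
    rng = np.random.default_rng() if rng is None else rng
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n):
        # Classify a Gaussian-perturbed copy of the input and tally the vote.
        counts[f(x + rng.normal(0.0, sigma, size=x.shape))] += 1
    top = int(counts.argmax())
    # Plug-in estimate of the top-class probability; a rigorous certificate
    # would use a high-confidence lower bound on it instead.
    p_hat = min(counts[top] / n, 1.0 - 1e-6)
    radius = sigma * norm.ppf(p_hat)  # certified L2 radius, valid when p_hat > 1/2
    return top, max(radius, 0.0)
```

Because the certified radius grows with the top-class probability `p_hat`, classes that the base classifier predicts less confidently under noise receive systematically smaller certificates, which is one way to see how certification and class-wise fairness can trade off.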