Doctoral Thesis: Techniques for Interpretability and Transparency of Black-Box Models

Thursday, December 1
10:00 am - 11:00 am

32-D463 (Star)

Yilun Zhou

Abstract: Black-box models such as neural networks have recently been adopted for an increasingly wide range of tasks. However, their opacity, i.e. the inability to understand their inner workings, has hindered their deployment in high-stakes domains such as healthcare and finance. In this talk, I describe my research in interpretability and transparency to address this issue.

In the interpretability category, I introduce two fundamental properties of good explanations for model predictions: correctness and understandability. Correctness captures the notion that an explanation should faithfully represent the model's decision-making logic, and understandability reflects the requirement that explanations be reliably understood by human users. For both properties, I propose evaluation metrics as well as methods that improve upon existing ones, and I identify avenues for future work.
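The abstract does not spell out the metrics themselves. Purely as an illustrative sketch of what a correctness-style evaluation can look like, the Python snippet below implements a generic deletion test: the features an explanation ranks as most important are removed first, and a faithful explanation should make the model's confidence drop quickly. Every name here (predict_proba, the importance scores, zeroing as the removal baseline) is an assumption for illustration, not the method presented in the thesis.

    import numpy as np

    def deletion_test(predict_proba, x, importance, num_steps=10):
        # Generic deletion-style faithfulness check (illustrative only).
        # predict_proba: callable mapping a 1-D feature vector to class probabilities
        # importance: per-feature scores produced by some explanation method
        order = np.argsort(-importance)            # most important features first
        label = int(np.argmax(predict_proba(x)))   # class originally predicted
        confidences = [predict_proba(x)[label]]
        x_ablated = x.copy()
        step = max(1, len(order) // num_steps)
        for i in range(0, len(order), step):
            x_ablated[order[i:i + step]] = 0.0     # crude "feature removal" baseline
            confidences.append(predict_proba(x_ablated)[label])
        # A correct (faithful) explanation should make this curve fall steeply.
        return np.array(confidences)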

In the transparency category, I present the transparency-by-example framework, a Bayesian sampling formulation for inspecting models and identifying a wide range of model behaviors. I demonstrate the flexibility of this Bayesian approach by applying it to both deep neural networks and non-differentiable robot controllers, revealing hidden and hard-to-find insights in both cases.
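The formulation itself is not given in this abstract. Purely as a sketch of the underlying idea, which treats a target behavior as evidence and samples inputs from the induced posterior p(x | behavior) proportional to p(behavior | x) p(x), the snippet below uses rejection sampling over a placeholder Gaussian prior. The prior, the confidence function, and the Gaussian behavior kernel are all assumptions for illustration, not the framework from the talk.

    import numpy as np

    def sample_inputs_with_behavior(model_confidence, target=0.5, bandwidth=0.05,
                                    dim=2, num_draws=100_000, rng=None):
        # Rejection-sample inputs whose behavior matches a target (illustrative
        # stand-in for posterior sampling over inputs).
        # model_confidence: callable x -> scalar in [0, 1], e.g. max class probability
        # target: behavior of interest, e.g. 0.5 = maximally ambiguous predictions
        # bandwidth: width of the Gaussian kernel scoring p(behavior | x)
        rng = rng or np.random.default_rng(0)
        accepted = []
        for _ in range(num_draws):
            x = rng.standard_normal(dim)           # draw from a placeholder prior p(x)
            conf = model_confidence(x)
            # Likelihood of the target behavior under a Gaussian kernel; peaks at 1.
            likelihood = np.exp(-0.5 * ((conf - target) / bandwidth) ** 2)
            if rng.uniform() < likelihood:         # accept with prob p(behavior | x)
                accepted.append(x)
        return np.array(accepted)                  # samples approx. ~ p(x | behavior)

With target = 0.5, for instance, the accepted samples concentrate on inputs the model finds maximally ambiguous, which is one kind of hidden behavior such inspection can surface.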

Thesis Committee: Prof. Julie Shah (supervisor), Prof. Jacob Andreas, Prof. Peter Szolovits, and Dr. Marco Tulio Ribeiro

To attend via Zoom, contact the doctoral candidate at yilun@mit.edu.