Doctoral Thesis: Towards Effective Tools for Debugging Machine Learning Models
Julius Adebayo
Abstract
This thesis addresses the challenge of detecting and fixing the errors of a machine learning (ML) model, a task known as model debugging. Current ML models, especially overparametrized deep neural networks (DNNs) trained on crowd-sourced data, easily latch onto spurious signals, underperform for small subgroups, and can be derailed by errors in training labels. Consequently, the ability to detect and fix a model's mistakes prior to deployment is of critical importance.
In the first part of this thesis, we introduce a framework for categorizing model bugs that arise as part of the standard supervised learning pipeline. Equipped with this categorization, we assess whether several post hoc model explanation approaches are effective at detecting and fixing the categories of bugs the framework proposes. We find that current feature attribution approaches struggle to detect a model's reliance on spurious signals, are unable to identify training inputs with incorrect labels, and provide no direct avenue for fixing model errors. In addition, we demonstrate that practitioners struggle to use these tools effectively. Having established these limitations, in the second part of the thesis we present new tools that address the shortcomings of current post hoc explanation approaches. Taken together, this thesis makes advances towards better debugging tools for machine learning models.
Details
- Date: Tuesday, August 9
- Time: 3:00 pm - 5:00 pm
- Category: Thesis Defense
- Location: 32-G449 (Patil/Kiva)
- Thesis Supervisor: Prof. Hal Abelson
- Zoom Link: https://mit.zoom.us/j/96882639339?pwd=TzFaSVJ5NElKOEZyZnNGak5CSitnUT09