Doctoral Thesis: Context and Participation in ML
ML systems are shaped by human choices and norms, from problem conceptualization to deployment. They are then used in complex socio-technical contexts, where they interact with and affect diverse populations. However, development decisions are often made in isolation, without deeply considering the deployment context in which the system will be used. These decisions are also typically hidden from users in that context, who have few avenues to understand whether or how they should use the system. As a result, there are numerous examples of ML systems that in practice are harmful, poorly understood, or misused.
We propose an alternative approach to the development and deployment of ML systems that is focused on the people in a particular deployment context who use and are affected by the system. First, we ask a prospective question: when a new system is being conceptualized, how can each step of the ML lifecycle be proactively shaped to fit the deployment context? We address this question through an in-depth case study of co-designing ML tools to support activists who monitor gender-related violence. Drawing from intersectional feminist theory and participatory design, we develop methods for data collection, annotation, modeling, and evaluation that prioritize sustainable partnerships and challenge power inequalities. Then, we consider an alternative paradigm in which we do not have full control over the development lifecycle, e.g., where a model has already been built and made available. In these cases, we show how we can still intervene at deployment, building deployment tools that give downstream stakeholders the information and agency to understand ML systems and hold them accountable. We describe how deployment tools can be designed to provide intuitive, useful, and context-relevant insight into model strengths and limitations. Drawing from these design goals, we present Kaleidoscope, a workflow and user-facing system for context-specific evaluation that allows users to translate implicit expectations of “good model behavior” in their context into explicitly defined, semantically meaningful tests.
- Date: Friday, January 20
- Time: 11:00 am - 12:30 pm
- Category: Thesis Defense
- Location: 32-G882 (Hewlett Room)
Thesis Supervisors: Professors John Guttag and Arvind Satyanarayan