The last five years have seen a vast increase in academic and popular interest in "fair" machine learning. But while the community has made significant progress towards developing algorithmic interventions to mitigate unfairness, research has focused predominantly on static classification settings. Real-world algorithmic decision-making, however, increasingly happens in more dynamic settings. In this thesis, we will study fairness in some of these settings.
The first part focuses on mitigating unfairness in settings in which decision-makers can choose to spend part of a limited budget on acquiring more information about individuals. For example, a doctor who is unsure about a diagnosis can first decide to conduct additional tests before making a final decision. Studying fairness in this budget-constrained decision-making setting, also known as active feature-value acquisition, is important not only because of its applicability to a wide range of domains but also because it offers a novel perspective on how fairness can be defined and improved. We will propose three methods for achieving fairness in this setting that provide guarantees at the level of a population subgroup or at the level of an individual.

The second part of the thesis studies a real-world budget-constrained application of algorithmic decision-making. We detect bias in statistical models that are currently deployed to support the distribution of social programs among millions of households in the developing world. To mitigate this bias while accounting for the complex multi-stakeholder decision-making process, we propose a domain-specific decision support tool.

Finally, in the last part of this thesis, we study cooperation in network games with spatio-temporal complexities using multi-agent reinforcement learning. While most of the literature focuses on interventions at the agent level, we will investigate how environmental interventions can promote cooperation between agents and create more equitable outcomes.
Thesis Supervisor: Prof. Alex Pentland (MIT Media Lab)
To attend this defense, please contact the doctoral candidate at bakker at mit dot edu