We explore relationships between machine learning (ML) and causal inference. We focus on improvements in each by borrowing ideas from one another.
ML has been successfully applied to many problems, but the lack of strong theoretical guarantees has led to many unexpected failures. Models that perform well on the training distribution tend to break down when applied to different distributions; small perturbations can "fool" a trained model and drastically change its predictions; arbitrary choices in the training algorithm lead to vastly different models; and so forth. On the other hand, while there has been tremendous progress in developing causal inference methods with strong theoretical guarantees, existing methods are often impractical because they assume an abundance of data. Working at the intersection of ML and causal inference, we directly address the lack of robustness in ML, and improve the statistical efficiency of causal inference techniques.
The motivation behind the work presented in this talk is to improve methods for building predictive and causal models that guide decision making. We focus mostly on decision making in the healthcare context. On the ML for causality side, we use ML tools and analysis techniques to develop statistically efficient causal models that can guide clinicians when choosing between two treatments. On the causality for ML side, we study how knowledge of the causal mechanisms that generate observed data can be used to efficiently regularize predictive models without introducing biases. In a clinical context, we show how causal knowledge can be used to build robust and accurate models that predict the spread of contagious infections. In a non-clinical setting, we study how to use causal knowledge to train image classification models that are robust to distribution shifts.
Thesis Supervisor: Prof. John Guttag
To attend this defense, please contact the doctoral candidate at mmakar at mit dot edu