Doctoral Thesis: Learning to improve clinical decisions and AI safety by leveraging structure

Tuesday, June 4
10:00 am - 11:30 am

32-G449 (Patil/Kiva)

By: Geeticka Chauhan

Supervisor: Peter Szolovits

Abstract:
The availability of large collections of digitized healthcare data, together with increasing computational power, has made machine learning (ML) for healthcare one of the key applied research domains in ML. ML for health has great potential to support clinical decision-making, improving quality of care and reducing healthcare spending by easing clinical operations. However, the successful development of ML models in healthcare depends on data that is complex, noisy, heterogeneous, limited in labels, and highly sensitive. In this thesis, we leverage the unique structure present in medical data, along with external knowledge, to guide model predictions. Additionally, we develop differentially private (DP) training techniques that use gradient structure to mitigate privacy leakage.

In this talk, we will focus on methods that leverage multimodality to develop highly accurate, resource-efficient, and interpretable machine learning models in healthcare. We will present insights learned from applying contrastive learning and generative pre-training objectives to radiology data, which is limited in labels and contains confounding disease processes. Finally, we will discuss the challenges associated with privacy leakage from sensitive datasets, using the speech domain as a case study, and present techniques for mitigating data vulnerabilities via differentially private model training.
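
As a rough illustration of what gradient-based DP training typically involves (a generic DP-SGD-style sketch, not necessarily the specific techniques developed in this thesis), the example below clips each per-example gradient to bound its influence and then adds calibrated Gaussian noise before the parameter update. The model, data, and hyperparameters are all illustrative.

import numpy as np

# Illustrative DP-SGD-style update for a toy logistic-regression model.
# Per-example gradients are clipped (bounding sensitivity), summed, and
# perturbed with Gaussian noise before the averaged update is applied.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dp_sgd_step(w, X_batch, y_batch, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.0, rng=None):
    rng = rng or np.random.default_rng(0)
    clipped = []
    for x, y in zip(X_batch, y_batch):
        # Per-example gradient of the logistic loss.
        g = (sigmoid(x @ w) - y) * x
        # Clip to limit any single example's contribution.
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    # Sum clipped gradients and add noise scaled to the clipping bound.
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=w.shape)
    return w - lr * noisy_sum / len(X_batch)

# Toy usage on random data.
rng = np.random.default_rng(42)
X, y = rng.normal(size=(32, 5)), rng.integers(0, 2, size=32)
w = np.zeros(5)
for _ in range(10):
    w = dp_sgd_step(w, X, y, rng=rng)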