Doctoral Thesis: Neurosymbolic Learning for Robust and Reliable Intelligent Systems

Event Speaker: 

Jeevana Inala

Event Location: 

via Zoom (details below)

Event Date/Time: 

Friday, September 17, 2021 - 2:00pm

 

Abstract: 

This thesis shows that viewing intelligent systems through the lens of neurosymbolic models has several benefits over traditional deep learning approaches. Neurosymbolic models are composed of both symbolic programmatic constructs, such as loops and conditionals, and continuous neural components. The symbolic part makes the model interpretable, generalizable, and robust, while the neural part handles the complexity of the intelligent system. Concretely, this thesis presents two classes of neurosymbolic models --- state machines and neurosymbolic transformers --- and evaluates them on two case studies: reinforcement-learning-based autonomous systems and multi-robot systems. In these domains, we show that the learned neurosymbolic models are human-readable, extrapolate to unseen scenarios, and can handle robust objectives in the specification. To learn these neurosymbolic models efficiently, we present new neurosymbolic learning algorithms that leverage the latest techniques from machine learning and program synthesis.
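To make the idea concrete, here is a minimal, hypothetical sketch of what such a model might look like: a symbolic state machine whose mode transitions use an interpretable conditional guard, while a learned linear component (a stand-in for the neural part) maps observations to continuous actions in each mode. The class name, modes, weights, and threshold below are illustrative assumptions, not the thesis's actual models.

```python
class NeurosymbolicController:
    """Symbolic state machine + per-mode continuous (learned) policy."""

    def __init__(self, weights, threshold):
        # weights: per-mode linear parameters (stand-in for neural components)
        self.weights = weights
        self.threshold = threshold  # symbolic guard parameter
        self.mode = "CRUISE"        # symbolic state

    def step(self, distance, speed):
        # Symbolic transition logic: human-readable and easy to inspect.
        if distance < self.threshold:
            self.mode = "BRAKE"
        else:
            self.mode = "CRUISE"
        # Continuous component: a linear policy per mode.
        w0, w1 = self.weights[self.mode]
        return w0 * distance + w1 * speed


controller = NeurosymbolicController(
    weights={"CRUISE": (0.0, 0.5), "BRAKE": (0.0, -1.0)},
    threshold=5.0,
)
print(controller.step(distance=10.0, speed=2.0))  # far away: CRUISE mode
print(controller.step(distance=3.0, speed=2.0))   # close: BRAKE mode
```

Because the transition structure is an explicit program rather than a monolithic network, its behavior in each discrete mode can be read off and checked directly, which is the interpretability benefit the abstract describes.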

 

Advisor: Armando Solar-Lezama

Thesis committee members: Josh Tenenbaum and Mike Carbin

To attend this defense, please contact the doctoral candidate at jinala at csail dot edu