6.884 Neurosymbolic Models for Natural Language Processing

Graduate Level
Units: 2-0-10
Prereqs: 6.864 or 9.19
Instructor:  Prof. Jacob Andreas (jda@mit.edu)
Schedule:  Lectures Friday 11:30-1:30, online instruction
 
Description
 
This subject qualifies as an Artificial Intelligence concentration subject. Deep neural networks have become the dominant modeling tool in most language processing applications, outperforming symbolic approaches based on learned grammars and state machines at many tasks. However, human language users (and, by design, symbolic models) exhibit a capacity for systematic generalization that remains out of reach of standard neural models. This seminar will survey neurosymbolic approaches to language processing, aiming to understand how to build neural models with latent or explicit symbolic structure that combine the advantages of both modeling frameworks.

More information on how this subject will be taught can be found here:
https://eecs.scripts.mit.edu/eduportal/__How_Courses_Will_Be_Taught_Online_or_Oncampus__/F/2020/#6.884