Doctoral Thesis: Deep Learning for Spoken Dialogue Systems: Application to Nutrition

Event Speaker: Mandy Korpusik

Event Location: 32-G449

Event Date/Time: Wednesday, April 17, 2019 - 11:00am

Abstract
 
Personal digital assistants such as Siri, Cortana, and Alexa must translate a user's natural language query into a semantic representation that the back-end can use to retrieve information from relevant data sources. For example, answering a user's question about the number of calories in a food requires querying a database of nutrition facts. In this thesis, we demonstrate deep learning techniques that map raw, unstructured natural language directly to a structured, relational database, without intermediate pre-processing steps or string-matching heuristics. Specifically, we show that a novel, end-to-end convolutional neural architecture learns a shared latent space in which vector representations of natural language queries lie close to embeddings of semantically similar database entries.
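
To make the shared-latent-space idea concrete, the following is a minimal, hypothetical PyTorch sketch, not the architecture from the thesis: two 1-D convolutional encoders embed a tokenized query and a tokenized food database entry into the same vector space, and retrieval ranks entries by similarity to the query. All class names, dimensions, and the vocabulary size below are illustrative assumptions.

    # Hypothetical sketch of a shared embedding space for queries and
    # database entries; names and hyperparameters are illustrative,
    # not taken from the thesis.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CNNEncoder(nn.Module):
        def __init__(self, vocab_size=5000, embed_dim=64, hidden_dim=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
            self.conv = nn.Conv1d(embed_dim, hidden_dim, kernel_size=3, padding=1)

        def forward(self, token_ids):                  # (batch, seq_len)
            x = self.embed(token_ids).transpose(1, 2)  # (batch, embed, seq)
            x = F.relu(self.conv(x))                   # (batch, hidden, seq)
            x = x.max(dim=2).values                    # max-pool over time
            return F.normalize(x, dim=1)               # unit-length vectors

    query_encoder, food_encoder = CNNEncoder(), CNNEncoder()

    def top_k_foods(query_ids, food_ids, k=5):
        """Rank all database entries by cosine similarity to the query."""
        q = query_encoder(query_ids)   # (1, hidden)
        f = food_encoder(food_ids)     # (num_foods, hidden)
        return (q @ f.T).topk(k, dim=1).indices

In training, matching query-entry pairs would be pushed together and mismatched pairs apart, for instance with a ranking or contrastive loss.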
 
The first instantiation of this technology is in the nutrition domain, with the goal of reducing the burden on individuals who monitor their food intake to support healthy eating or manage their weight. To train the models, we collected 31,712 written and 2,962 spoken meal descriptions that were weakly annotated: the labels indicate which database foods each meal contains, but not where in the text they are mentioned. Our best deep learning models achieve a 95.8% average semantic tagging F1 score on a held-out test set of spoken meal descriptions and 97.1% top-5 food database recall in a fully deployed iOS application. We also observed a significant correlation between data logged by our system and data recorded during a 24-hour dietary recall conducted by expert nutritionists in a pilot study with 14 participants. Finally, we show that our approach generalizes beyond nutrition and database mapping to other tasks, such as dialogue state tracking.
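
As a point of reference for the top-5 recall number, one common formulation of the metric is sketched below; this is an assumption for illustration, not the evaluation code used in the thesis.

    # Hypothetical top-k recall: the fraction of gold database foods that
    # appear among the model's k highest-ranked candidates.
    def recall_at_k(ranked_ids, gold_ids, k=5):
        top_k = set(ranked_ids[:k])
        return len(top_k & gold_ids) / len(gold_ids)

    # Two of the three gold foods appear in the top five candidates.
    print(recall_at_k([3, 7, 1, 9, 4, 2], {7, 4, 2}))  # ~0.67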
 
Supervisor: Dr. James Glass