Great progress has been made in natural language processing thanks to
many different algorithms, each often specific to one application.
Most learning algorithms force language into simplified representations such as
bag-of-words or fixed-size windows, or they require human-designed features.
I will introduce three models based on recursive neural networks
that can learn linguistically plausible representations of language.
These methods jointly learn compositional features and grammatical
sentence structure for parsing or phrase level sentiment predictions.
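The core operation behind such recursive models can be sketched as follows: two child vectors (words or phrases) are merged into a parent vector through a shared, learned composition function, applied bottom-up along the parse tree. This is a minimal illustration, not the exact model; the dimension, initialization, and parameter names are placeholders for learned quantities.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # toy embedding dimension (a stand-in for the learned size)

# Hypothetical learned parameters: one composition matrix shared across the tree.
W = rng.standard_normal((d, 2 * d)) * 0.1
b = np.zeros(d)

def compose(left, right):
    """Merge two child vectors into a parent phrase vector: p = tanh(W [c1; c2] + b)."""
    return np.tanh(W @ np.concatenate([left, right]) + b)

# Toy word vectors; in the real model these are learned word embeddings.
eating = rng.standard_normal(d)
spaghetti = rng.standard_normal(d)

phrase = compose(eating, spaghetti)
print(phrase.shape)  # the parent lives in the same space as the words: (4,)
```

Because the parent vector has the same dimensionality as its children, the same function can be applied recursively to build representations for phrases of any length.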
They can also represent the visual meaning of a sentence, which makes it
possible to find images based on query sentences or to describe images
with richer descriptions than single object names.
Besides achieving state-of-the-art performance, the models capture interesting
phenomena in language such as compositionality. For instance, people easily see
that the "with" phrase in "eating spaghetti with a spoon" specifies a way of
eating, whereas in "eating spaghetti with some pesto" it specifies the dish.
I show that my model resolves such prepositional-phrase attachment ambiguities
well thanks to its distributed representations.
In sentiment analysis, a new tensor-based recursive model learns different
types of high-level negation and how they can change the meaning of longer
phrases containing many positive words. It also learns that when contrastive
conjunctions such as "but" are used, the sentiment of the phrases following
them usually dominates.
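A tensor-based composition of this kind can be sketched as replacing the single matrix with a bilinear tensor, so that the two children interact multiplicatively rather than only additively. The sketch below assumes the widely cited form p = tanh(c^T V c + W c), where c stacks the two child vectors and V has one bilinear slice per output dimension; the names and sizes here are illustrative, not the published configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4  # toy embedding dimension

# Hypothetical parameters: a 3-way tensor (one 2d x 2d slice per output
# dimension) plus the standard linear composition matrix.
V = rng.standard_normal((d, 2 * d, 2 * d)) * 0.01
W = rng.standard_normal((d, 2 * d)) * 0.1

def compose_tensor(left, right):
    """Tensor composition: p[k] = tanh(c^T V[k] c + (W c)[k]) with c = [c1; c2]."""
    c = np.concatenate([left, right])
    bilinear = np.einsum('i,kij,j->k', c, V, c)  # c^T V[k] c for each slice k
    return np.tanh(bilinear + W @ c)

# Toy vectors for a negation pair; real models use learned embeddings.
not_vec = rng.standard_normal(d)
good = rng.standard_normal(d)

p = compose_tensor(not_vec, good)
print(p.shape)  # (4,)
```

The bilinear term lets the effect of one child (e.g. a negation word) depend directly on the other child's vector, which is what makes phenomena like "not good" easier to model than with a purely additive composition.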
Richard Socher is a PhD student at Stanford working with Chris Manning
and Andrew Ng. His research interests are machine learning for NLP and
vision. He is interested in developing new deep learning models that
learn useful features, capture compositional structure in multiple
modalities, and perform well across different tasks.
He was awarded the 2011 Yahoo! Key Scientific Challenges Award,
the Distinguished Application Paper Award at ICML 2011, a Microsoft
Research PhD Fellowship in 2012 and a 2013 "Magic Grant" from the
Brown Institute for Media Innovation.