How can we maximally leverage available resources, such as computation,
communication, multiple processors, or even privacy, when performing machine
learning? In this talk, I will suggest statistical risk (a rigorous measure of
the accuracy of learning procedures) as a way to incorporate such criteria
into a framework for algorithm development. In particular, we follow a
two-pronged approach, where we (1) study the fundamental difficulties of
problems, bringing in tools from optimization, information theory, and
statistical minimax theory, and (2) develop algorithms that optimally trade
among multiple criteria for improved performance. The resulting algorithms are
widely applied in industrial and academic settings, yielding up to
order-of-magnitude improvements in speed and accuracy on several problems. To
illustrate the practical benefits that a focus on the tradeoffs of statistical
learning procedures brings, we explore examples from computer vision, speech
recognition, document classification, and web search.
John is currently a PhD candidate in computer science at Berkeley, where he started in the fall of 2008. His research interests include optimization, statistics, machine learning, and computation. He works in the Statistical Artificial Intelligence Lab (SAIL) under the joint supervision of Michael Jordan and Martin Wainwright. He obtained his MA in statistics in fall 2012, and received BS and MS degrees in computer science from Stanford University under the supervision of Daphne Koller. John has won several awards and fellowships, including a best student paper award at the International Conference on Machine Learning (ICML) and the NDSEG and Facebook graduate fellowships. He has also worked as a software engineer and researcher at Google Research under the supervision of Yoram Singer and Fernando Pereira.