Abstract: Noting that all iterative algorithms are dynamical systems, I will illustrate how most of the popular optimization methods used in machine learning can be cast as a family of feedback systems long studied in control theory. Leaning on this abstraction enables us to apply powerful, control-theoretic methods to algorithm analysis. I will show how the convergence rates of the most common algorithms—including gradient descent, mirror descent, and Nesterov’s method—can be verified using a unified set of potential functions. I will then describe how such potential functions can themselves be found by solving small semidefinite programming problems. These techniques can be used to search for optimization algorithms with desired performance characteristics and provide a new methodology for algorithm design. I will close with several additional examples of how a dynamical view can provide new insights into how to make machine learning systems safer and more reliable as they interact with increasingly complex and uncertain environments.
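To make the dynamical-systems framing concrete, here is a minimal sketch (not the speaker's formulation) of the simplest instance: gradient descent on a strongly convex quadratic is a linear dynamical system, and the potential function V(x) = ||x||^2 certifies a geometric convergence rate. The matrix `Q`, step size `alpha`, and tolerance are illustrative assumptions.

```python
import numpy as np

# Illustrative quadratic f(x) = 0.5 * x^T Q x with strong convexity m = 1
# and smoothness L = 10 (the eigenvalues of Q).
Q = np.diag([1.0, 10.0])
m, L = 1.0, 10.0

# Classic step size alpha = 2 / (m + L); gradient descent becomes the
# linear dynamical system x_{k+1} = (I - alpha * Q) x_k.
alpha = 2.0 / (m + L)
A = np.eye(2) - alpha * Q

# Contraction factor: spectral radius of A, here (L - m) / (L + m) = 9/11.
rho = max(abs(np.linalg.eigvals(A)))

# The potential V(x) = ||x||^2 decreases by a factor rho^2 every step.
x = np.array([1.0, 1.0])
for _ in range(50):
    x_next = A @ x
    assert x_next @ x_next <= rho**2 * (x @ x) + 1e-12
    x = x_next
```

For more general algorithms (momentum, mirror descent), the potential is no longer a simple norm, and finding a valid one reduces to a small semidefinite feasibility problem over the parameters of a quadratic Lyapunov function, which is the approach the abstract alludes to.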
Bio: Benjamin Recht is an Associate Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. Ben's research group studies the theory and practice of optimization algorithms with a focus on applications in machine learning and data analysis. Ben is the recipient of a Presidential Early Career Award for Scientists and Engineers, an Alfred P. Sloan Research Fellowship, the 2012 SIAM/MOS Lagrange Prize in Continuous Optimization, the 2014 Jamon Prize, the 2015 William O. Baker Award for Initiatives in Research, and the 2017 NIPS Test of Time Award.
Host: Leslie Kaelbling