Many computational challenges in machine learning involve the three problems of optimization, integration, and fixed-point computation. These three can often be reduced to one another, so they may also provide distinct vantage points on a single problem. In this talk, I present a small part of this picture through a discussion of my work on AlphaGo and Hamiltonian descent methods. AlphaGo is the first computer program to defeat a world-champion player, Lee Sedol, in the board game of Go. My work laid the groundwork for the neural network components of AlphaGo and culminated in our Nature publication describing AlphaGo's algorithm, at whose core these three problems lie. The work introducing Hamiltonian descent methods presents a family of gradient-based optimization algorithms inspired by the Monte Carlo literature and by recent work on reducing the problem of optimization to that of integration. These methods can achieve fast linear convergence without strong convexity by using a non-standard kinetic energy to condition the optimization. I conclude by bringing this basic idea back to the study of the classical gradient descent method.
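The idea behind Hamiltonian descent can be sketched in a few lines: simulate damped Hamiltonian dynamics for the objective f, where the momentum update uses the gradient of f and the position update uses the gradient of a kinetic energy k. The sketch below is a minimal, hedged illustration, not the talk's exact algorithm: the function name, step size, damping constant, and the choice of a relativistic kinetic energy k(p) = sqrt(||p||^2 + 1) - 1 (one example of a non-standard kinetic energy) are all assumptions made for the example.

```python
import numpy as np

def hamiltonian_descent(grad_f, grad_k, x0, step=0.1, gamma=1.0, iters=200):
    """Sketch of a first-order damped Hamiltonian method (illustrative only).

    grad_f: gradient of the objective f
    grad_k: gradient of the kinetic energy k (non-standard choices
            change the conditioning of the dynamics)
    """
    x = np.array(x0, dtype=float)
    p = np.zeros_like(x)                 # momentum starts at rest
    delta = 1.0 / (1.0 + gamma * step)   # discount from the damping term
    for _ in range(iters):
        p = delta * (p - step * grad_f(x))  # momentum driven by -grad f, then damped
        x = x + step * grad_k(p)            # position follows the kinetic gradient
    return x

# Example usage: minimize f(x) = 0.5 * ||x||^2 with a relativistic
# kinetic energy, whose gradient is p / sqrt(||p||^2 + 1).
x_star = hamiltonian_descent(
    grad_f=lambda x: x,
    grad_k=lambda p: p / np.sqrt(np.sum(p * p) + 1.0),
    x0=[3.0],
)
```

With the classical kinetic energy k(p) = 0.5 * ||p||^2 this reduces to a familiar momentum method; swapping in a different k is what lets the dynamics converge fast on objectives that are not strongly convex.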
Chris Maddison is a PhD candidate in the Statistical Machine Learning Group in the Department of Statistics at the University of Oxford. He is an Open Philanthropy AI Fellow and spends two days a week as a Research Scientist at DeepMind. His research broadly focuses on the development of numerical methods for deep learning and machine learning. He has worked on methods for variational inference, numerical optimization, and Monte Carlo estimation, with a specific focus on those that might work at scale with few assumptions. Chris received his MSc from the University of Toronto. He received a NIPS Best Paper Award in 2014 and was one of the founding members of the AlphaGo project.
Host: Pablo Parrilo