The alternating direction method of multipliers (ADMM) originated in the 1970s as a method for solving convex optimization problems. We begin by explaining how ADMM can be reinterpreted as a message-passing algorithm on a bipartite graph, suitable for general optimization problems, including problems with hard constraints. Based on this interpretation, we introduce the Three Weight Algorithm (TWA), an improved version of standard ADMM that assigns each message a weight. In the TWA, weights are restricted to three values: infinity for absolute confidence, zero for complete lack of confidence, and one as a default confidence level. This simple change has an enormous impact on the speed with which solutions are found for non-convex problems.
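The effect of the three weight values is easiest to see in the consensus step at a variable node, where incoming estimates are combined according to their weights. The sketch below is an illustrative simplification under our own assumptions (the function name, and the fallback to a plain average when every sender reports zero confidence, are ours), not the talk's full update equations:

```python
import math

def weighted_consensus(values, weights):
    """Combine incoming estimates using TWA-style weights in {0, 1, inf}.

    Infinite-weight messages (absolute confidence) dominate: only they
    are averaged. Zero-weight messages (no confidence) are ignored
    whenever any message has positive weight.
    """
    inf_vals = [v for v, w in zip(values, weights) if w == math.inf]
    if inf_vals:
        # absolutely confident senders override everyone else
        return sum(inf_vals) / len(inf_vals)
    pos_vals = [v for v, w in zip(values, weights) if w > 0]
    if pos_vals:
        # default-confidence messages are averaged; zero-weight ones dropped
        return sum(pos_vals) / len(pos_vals)
    # assumed fallback: all senders report zero confidence
    return sum(values) / len(values)

# Two default-weight messages outvote a zero-weight one:
print(weighted_consensus([1.0, 3.0, 10.0], [1.0, 1.0, 0.0]))      # → 2.0
# A single infinite-weight message (e.g. a hard constraint) wins outright:
print(weighted_consensus([1.0, 3.0, 10.0], [0.0, 0.0, math.inf]))  # → 10.0
```

This is how infinite weights let hard constraints force their value on the consensus, while zero weights remove uninformative messages from it entirely.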
As a first example, the TWA is used to solve very large Sudoku puzzles (e.g., 100-by-100 instead of the standard 9-by-9). Here, the infinite weights play the major role, allowing information to propagate through hard constraints. In a second example, the TWA is used to pack a very large number of hard disks into a box. Now the zero weights play the major role, speeding up convergence by several orders of magnitude compared with standard ADMM. In a third example, we show that the TWA can solve difficult multi-robot trajectory-planning problems.
These examples demonstrate the significant advantages the TWA has over other message-passing algorithms such as belief propagation, while having very similar memory requirements. Finally, we conclude by showing that the zero-weight messages in the TWA also naturally enable the integration of higher-order knowledge into a lower-level parallel optimization algorithm.
(This talk describes work done jointly with José Bento and Nate Derbinsky.)
Jonathan Yedidia is a senior research scientist at Disney Research Boston. He holds a Ph.D. in theoretical statistical physics from Princeton (1990). From 1990 to 1993, he was a junior fellow at Harvard's Society of Fellows. From 1993 to 1997, he played chess professionally (he is an international master and was New England chess champion in 1992 and co-champion in 2012). In 1997, he joined the web start-up Viaweb, where he helped develop the shopping search engine that became Yahoo! Shopping. In 1998, he joined Mitsubishi Electric Research Labs (MERL), where he worked on probabilistic inference algorithms, such as the famous belief propagation algorithm, and their applications in a variety of fields, including communications, signal processing, and artificial intelligence. He left his position as distinguished research scientist at MERL in 2011 to join Disney Research, where he works on algorithms for artificial intelligence, optimization, computer vision, and machine learning.