Information planning addresses the problem of determining the optimal set of measurements that reduces the uncertainty over latent variables of interest under a set of constraints. A commonly used reward for quantifying the expected reduction in uncertainty is mutual information (MI). One application of information planning is object tracking, where a network of sensors generates noisy measurements of a moving object's position and we are interested in selecting the subset of sensors that maximally reduces the uncertainty in predicting the object's track. Optimal measurement selection is combinatorially complex and intractable for large-scale problems; interestingly, it has been shown that simple greedy algorithms, which choose the best measurement at each step given past selections, provide near-optimal solutions for submodular monotone rewards.
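As a minimal illustration of this greedy scheme, consider a linear-Gaussian model where each sensor i yields a scalar measurement y_i = h_i^T x + v_i of a Gaussian latent x. In information form, the MI of a selected set A has the closed form I(x; y_A) = 0.5 (log det J_A - log det J_0), where J_0 is the prior information matrix. The model, sensor matrix, and function names below are assumptions made for the example, not the thesis algorithm:

```python
import numpy as np

def set_mi(P_inv, H, R, selected):
    # I(x; y_A) = 0.5 * (logdet(J_A) - logdet(P^{-1})),
    # where J_A = P^{-1} + sum_{i in A} h_i h_i^T / r_i.
    J = P_inv.copy()
    for i in selected:
        J += np.outer(H[i], H[i]) / R[i]
    return 0.5 * (np.linalg.slogdet(J)[1] - np.linalg.slogdet(P_inv)[1])

def greedy_select(P_inv, H, R, k):
    # At each step, add the sensor whose inclusion maximizes the MI of
    # the selected set (equivalently, the marginal MI gain).
    selected = []
    for _ in range(k):
        best = max((i for i in range(len(H)) if i not in selected),
                   key=lambda i: set_mi(P_inv, H, R, selected + [i]))
        selected.append(best)
    return selected
```

For example, with a low-noise sensor on one latent dimension and two noisier sensors, the greedy rule first picks the low-noise sensor and then the sensor covering the remaining dimension, rather than a redundant second look at the first dimension.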
In this thesis we examine several challenges that arise when performing real-world information planning. Our contributions are three-fold: (i) we provide theoretical guarantees for the greedy algorithm of the submodular monotone case when it is applied to different problem settings: (a) non-monotone rewards, (b) budget constraints, and (c) settings where only a subset of the latent variables is of relevance; (ii) we demonstrate how to substantially reduce the complexity of evaluating MI in Gaussian models by taking advantage of sparsity in the measurement process; and (iii) we propose a variant of belief propagation that is suitable for adaptive inference settings.
In the first part of the talk, we present conditions under which open-loop information planning is equivalent to closed-loop planning. Furthermore, we provide bounds on greedy performance for submodular non-monotone functions. We also consider the case where measurements are valued by their relative information content and explore the conditions under which the known bounds for the submodular monotone case carry over to this modified setting. Lastly, we provide bounds for budgeted and focused planning. In the former, each measurement induces a cost and selections must respect a limited total budget, while in the latter only a subset of the latent variables is of interest.
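A common heuristic for the budgeted setting, sketched here, is cost-benefit greedy: repeatedly pick the affordable element with the best marginal gain per unit cost. (Taking the better of this rule and the single best affordable element is known to recover a constant-factor guarantee for monotone submodular rewards.) The marginal-gain oracle and function names below are illustrative assumptions, not the thesis's algorithm:

```python
def budgeted_greedy(gain, costs, budget):
    # gain(selected, i): marginal reward of adding element i to `selected`.
    # Greedily add the element with the best gain-per-cost ratio that
    # still fits within the remaining budget.
    selected, spent = [], 0.0
    while True:
        candidates = [i for i in range(len(costs))
                      if i not in selected and spent + costs[i] <= budget]
        if not candidates:
            return selected
        best = max(candidates, key=lambda i: gain(selected, i) / costs[i])
        if gain(selected, best) <= 0:
            return selected
        selected.append(best)
        spent += costs[best]
```

With a modular (additive) reward, for instance, a highly informative but unaffordable measurement is skipped in favor of cheaper measurements with better gain-per-cost ratios.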
In the second part of this talk, we propose a method for reducing the complexity of the function evaluations that take place during information planning. Previous work has assumed oracle-value models, in which the informational value of any set of measurements is available in constant time, an assumption that is often unrealistic. We focus on Gaussian models and MI and show that we can substantially reduce complexity by exploiting sparsity in the measurement process.
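A simple instance of this kind of saving, sketched below under assumptions made for illustration (the function name and model are not from the thesis): for a scalar measurement y = h^T x + v with v ~ N(0, r), the MI is 0.5 log(1 + h^T P h / r). When h is supported on only s of the n latent variables, h^T P h touches just an s-by-s block of the prior covariance P, costing O(s^2) rather than O(n^2):

```python
import numpy as np

def scalar_mi_sparse(P, idx, h_vals, r):
    # MI of one scalar measurement y = h^T x + v, v ~ N(0, r), for
    # Gaussian x with covariance P: I(x; y) = 0.5 * log(1 + h^T P h / r).
    # With h supported only on indices `idx`, the quadratic form reads
    # just the |idx| x |idx| block of P.
    Ps = P[np.ix_(idx, idx)]
    h = np.asarray(h_vals)
    return 0.5 * np.log1p(h @ Ps @ h / r)
```

The result matches the dense computation with a full-length h that is zero outside `idx`, while never forming or scanning the full covariance.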
In the third part, we propose a variant of belief propagation (BP) that is well-suited to adaptive inference settings and is exact on trees. During information planning we need to update the marginal distribution of only one latent variable at each step of the greedy algorithm. This can be done naïvely, by performing inference from scratch at every step, or by reusing repeated calculations; adaptive inference is concerned with the latter approach. We suggest a minimal messaging schedule, in which only the necessary messages are sent at every step to guarantee correct marginal distributions at the nodes of interest. We also provide extensions to Gaussian loopy graphs and to the problem of finding the most likely sequence of hidden variables.
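The intuition behind a minimal schedule on a tree can be sketched as follows: after a local change at one node, the only stale messages affecting the marginal at a query node are those on the directed path from the changed node toward the query. The sketch below (adjacency-dict representation and function name are assumptions for the example, not the thesis's schedule) computes that set of directed edges:

```python
def messages_to_update(adj, changed, query):
    # In tree-structured BP, the marginal at `query` depends only on
    # messages flowing toward it. After a local change at `changed`,
    # the stale messages are exactly those on the directed path
    # changed -> query. Returns that list of directed edges.
    parent = {changed: None}
    stack = [changed]
    while stack:
        u = stack.pop()
        if u == query:
            break
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                stack.append(w)
    path = []
    u = query
    while parent[u] is not None:
        path.append((parent[u], u))
        u = parent[u]
    return list(reversed(path))
```

On a chain 0-1-2-3, a change at node 0 with query node 3 requires resending only the three messages 0→1, 1→2, 2→3; messages elsewhere in the tree remain valid.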
Thesis Supervisor: John W. Fisher III
Thesis Committee: Jon How, Costis Daskalakis