When real people interact with algorithms (e.g., over media like the Internet), they impose additional desiderata beyond the algorithm simply being correct, fast, and light on storage. People prefer algorithms that are simple and transparent. People also behave strategically when they interact, and prefer simple best responses over complex reasoning. Algorithms deployed in these settings should therefore be robust to potential strategic manipulation. In this talk I will discuss several recent works that address these novel challenges, with particular emphasis on robustness against strategic manipulation.
Specifically, in traditional algorithm design, no incentives come into play: the input is given, and your algorithm must produce a correct output. How much harder is it to solve the same problem when the input is not given directly, but is instead reported by strategic agents with interests of their own? The unique challenge stems from the fact that the agents may choose to lie about the input in order to manipulate the behavior of the algorithm, and tools from Game Theory are therefore required to predict how these agents will behave.
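To make the challenge concrete, here is a toy example (mine, not from the talk): locating a facility on a line given positions reported by agents who each want the facility near them. Placing the facility at the mean of the reports is manipulable, while the classical median rule is strategyproof. All names and numbers below are illustrative.

```python
# Toy facility-location example: agents report positions on a line;
# each agent wants the facility as close to itself as possible.
import statistics

def mean_rule(reports):
    # Manipulable: an agent can drag the mean toward itself by exaggerating.
    return sum(reports) / len(reports)

def median_rule(reports):
    # Strategyproof: no single agent can move the median toward itself
    # by misreporting, so truthful reporting is optimal.
    return statistics.median(reports)

true_positions = [0.0, 1.0, 10.0]
agent = 2  # the agent located at 10.0

# Under the mean rule, honest reporting puts the facility at 11/3.
honest = mean_rule(true_positions)

# By reporting 29.0 instead of 10.0, agent 2 moves the mean to exactly 10.0.
lie = true_positions.copy()
lie[agent] = 29.0
manipulated = mean_rule(lie)
assert abs(10.0 - manipulated) < abs(10.0 - honest)  # lying strictly helped

# Under the median rule, no report by agent 2 beats the truth:
# the median of [0, 1, r] is at most 1 for every report r.
honest_dist = abs(10.0 - median_rule(true_positions))
for r in [-100.0, 0.5, 5.0, 10.0, 100.0]:
    assert abs(10.0 - median_rule([0.0, 1.0, r])) >= honest_dist
```

The point of the example is exactly the gap the talk targets: the same optimization problem ("pick a good point") becomes a mechanism-design problem once the input is reported by self-interested agents.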
We develop a new algorithmic framework with which to study such problems. Specifically, we provide a computationally efficient black-box reduction from solving any optimization problem on "strategic input," to solving a perturbed version of that same optimization problem when the input is directly given. We further demonstrate the power of our framework by addressing several long-standing open problems. First, we extend Myerson's celebrated characterization of single item auctions to multiple items in a computationally efficient manner. Next, we design a computationally efficient 2-approximate mechanism for job scheduling on unrelated machines, the original problem studied in Nisan and Ronen's seminal paper introducing the field of Algorithmic Mechanism Design. This matches the guarantee of the best known computationally efficient algorithm when the input is directly given.
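For readers new to the single-item setting behind Myerson's characterization, the canonical truthful mechanism is the second-price (Vickrey) auction: the highest bidder wins and pays the second-highest bid. The sketch below (illustrative only; it is not the paper's reduction) brute-forces the key property: no bidder can gain by bidding anything other than their true value.

```python
# Second-price auction: highest bidder wins, pays the second-highest bid.
def second_price(bids):
    winner = max(range(len(bids)), key=lambda i: bids[i])
    price = max(b for i, b in enumerate(bids) if i != winner)
    return winner, price

def utility(i, value, bids):
    # Quasilinear utility: value minus payment if i wins, zero otherwise.
    winner, price = second_price(bids)
    return value - price if winner == i else 0.0

# Brute-force truthfulness check: for each bidder, no deviation on a
# grid of alternative bids beats bidding the true value.
values = [3.0, 7.0, 5.0]  # arbitrary illustrative valuations
for i, v in enumerate(values):
    truthful_utility = utility(i, v, values)
    for deviation in [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]:
        bids = values.copy()
        bids[i] = deviation
        assert utility(i, v, bids) <= truthful_utility + 1e-9
```

The winner's payment does not depend on the winner's own bid, which is what makes truthful bidding a dominant strategy; Myerson's characterization pins down exactly which single-item allocation rules admit payments with this property, and the talk's framework extends that reach to multiple items.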
Matt received his PhD in EECS from MIT in 2014, advised by Costis Daskalakis, where he was an NPSC, NSF, and Microsoft Graduate Research Fellow. He is now a postdoc in the Computer Science department at Princeton University. His research interests lie broadly in Algorithmic Game Theory, Mechanism Design, and Online Algorithms, with a focus on designing algorithms that address constraints imposed by the strategic nature of the agents that interact with them. Matt received his B.A. in Math from Cornell University.