Abstract: From smart homes that prepare coffee when we wake, to phones that know not to interrupt us during important conversations, our collective visions of HCI imagine a future in which computers understand a broad range of human behaviors. Today our systems fall short of these visions, however, because this range of behaviors is too large for designers or programmers to capture manually. In this talk I will present two systems that mine and operationalize an understanding of human life from large text corpora. The first system, Augur, focuses on what people *do* in daily life: capturing many thousands of relationships between human activities (e.g., taking a phone call, using a computer, going to a meeting) and the scene context that surrounds them. The second system, Empath, focuses on what people *say*: capturing hundreds of linguistic signals through a set of pre-generated lexicons, and allowing computational social scientists to create new lexicons on demand. Through these projects, I will demonstrate how unsupervised learning can enable many new applications and analyses for interactive systems.
Bio: Ethan Fast is a PhD candidate in Computer Science at Stanford University, where he studies Human-Computer Interaction. His work spans data-driven software interfaces, programming tools, and computational social science, drawing on insights from artificial intelligence, natural language processing, and programming languages. He has published more than 10 first-author papers across venues such as CHI, UIST, EMNLP, AAAI, and ICWSM, and has received an NSF Graduate Fellowship, a Brown Institute Grant for Media Innovation, and several Best Paper awards.
Host: David Karger