Abstract: Internet and personal photo collections now total over a trillion photos, and people appear in most of them. The availability of so many photos presents a unique opportunity to model and predict the appearance and shape of virtually the whole human population. A major challenge is to create algorithms that operate in the wild (completely uncalibrated) on photos taken by mobile phones, wearable cameras, and similar devices, and on any person, accounting for age, gender, facial expression, and ethnicity. I will show how to estimate and synthesize the 3D shape and motion of people from YouTube videos, how to predict a person's appearance at an older age, and how to use face modeling to visualize large photo collections. I'll close by describing how this research will enable breakthrough advances in virtual reality, recognition, and health.
Ira Kemelmacher-Shlizerman is an Assistant Professor in the Department of Computer Science & Engineering at the University of Washington. She received her Ph.D. in computer science and applied mathematics from the Weizmann Institute of Science in 2009, spent three years as a postdoctoral researcher at UW CSE, and started as tenure-track faculty at UW Computer Science in spring 2013. Ira works in computer vision and computer graphics, with a particular interest in developing computational tools to model people from the vast visual data captured all over the world, with the goal of enabling breakthrough advances in recognition, virtual reality, and health. Ira received a Google Faculty Research Award; her work "Moving Portraits" was featured on the cover of the Communications of the ACM Research Highlights and its technology was transferred to Google. Her work "Illumination-aware age progression" and its application to the search for missing children was featured in interviews on CBS, NBC, and other networks.