Abstract: People are avid consumers of visual content. Every day, we watch videos, play games, and share photos on social media. However, there is an asymmetry: while everyone is able to consume visual content, only a chosen few (e.g., painters, sculptors, film directors) are talented enough to express themselves visually. For example, in modern computer graphics workflows, professional artists have to explicitly specify everything "just right," including geometry, materials, and lighting, for a human to perceive an image as realistic. To automate this tedious process, I present several general-purpose machine learning algorithms for image synthesis. These methods discover the structure of the visual world from the data itself and learn to synthesize realistic high-dimensional outputs directly. I then demonstrate applications in fields such as vision, graphics, and robotics, as well as uses by developers and visual artists. Finally, I discuss our ongoing efforts on learning to synthesize 3D objects and high-resolution videos, with the ultimate goal of building machines that can recreate the visual world and help everyone tell their visual stories. Host: Tommi Jaakkola.
Bio: Jun-Yan Zhu is a postdoctoral researcher at MIT CSAIL. He obtained his Ph.D. in computer science from UC Berkeley after studying at CMU and UC Berkeley, and before that received his B.E. from Tsinghua University. He studies computer graphics, computer vision, and machine learning, with the goal of building intelligent machines capable of recreating the visual world. He is a recipient of the Facebook Fellowship, the ACM SIGGRAPH Outstanding Doctoral Dissertation Award, and the UC Berkeley EECS David J. Sakrison Memorial Prize for outstanding doctoral research. His work has been covered in The New Yorker, The New York Times, and The Economist. Jun-Yan has served as a Technical Papers Committee member for SIGGRAPH Asia 2018, a guest editor of the International Journal of Computer Vision (IJCV), and a co-instructor of the Deep Learning course at Udacity.