Department of EECS announces 2022 promotions

Upper row, left to right: Luqiao Liu, Caroline Uhler, Tim Kraska, Stefanie Jegelka. Lower row, left to right: Martin Wainwright, Guy Bresler, Michael Carbin, Yury Polyanskiy.

The Department of EECS is proud to announce the following promotions and new hire:

To Associate Professor with tenure

Guy Bresler is being promoted to Associate Professor with tenure, effective July 1, 2022. Bresler received his PhD from the University of California, Berkeley in 2012. After graduation, he was a postdoctoral fellow at MIT until 2015, when he joined MIT as an Assistant Professor in EECS and a core faculty member of IDSS. He was promoted to Associate Professor without tenure in July 2019. Bresler’s research lies at the intersection of high-dimensional statistical inference and computation. His work considers the two key factors that determine whether an inference or learning task is possible: (1) informational/statistical complexity (does the data contain enough information for the task to be feasible in principle?) and (2) computational complexity (does the problem have structure that can be exploited to obtain computationally feasible algorithms?). Bresler has made several central contributions to this field. In the context of graphical models, he demonstrated the surprising result that learning an Ising model (an important probabilistic model in which each node of the graph is a random variable taking two discrete values) can be done efficiently for models with bounded degree. He recently extended these results to Ising models with latent (unobservable) variables. In the last several years, Bresler has focused his research on statistical-to-computational tradeoffs, an emerging area addressing computational hardness for certain high-dimensional statistical problems. In this context, Bresler developed a comprehensive ‘average-case complexity’ theory that maps out a rich web of relations between important statistical problems such as sparse PCA, community detection, and bi-clustering, ultimately showing that these problems are at least as hard as the problem of discovering a planted clique in a random graph.
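To make the planted clique problem mentioned above concrete, here is a minimal toy sketch (a hypothetical illustration, not Bresler's construction): sample an Erdős–Rényi random graph G(n, 1/2), then "plant" a clique on k randomly chosen vertices. Finding the planted clique when k is well below sqrt(n) is widely believed to be computationally hard, which is the assumption behind the average-case reductions described above.

```python
# Toy illustration of the planted clique problem (hypothetical sketch).
import random

def planted_clique(n, k, seed=0):
    """Sample G(n, 1/2) and plant a clique on k random vertices.

    Returns the edge set (as pairs (i, j) with i < j) and the planted vertices.
    """
    rng = random.Random(seed)
    edges = set()
    # Each potential edge appears independently with probability 1/2.
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < 0.5:
                edges.add((i, j))
    # Plant the clique: connect every pair inside a random k-subset.
    clique = rng.sample(range(n), k)
    for a in clique:
        for b in clique:
            if a < b:
                edges.add((a, b))
    return edges, sorted(clique)

edges, clique = planted_clique(50, 8)
# Every pair inside the planted set is indeed an edge.
assert all((a, b) in edges for a in clique for b in clique if a < b)
```

The detection question is whether an observer who sees only `edges` can recover (or even detect) the planted set efficiently; for small enough k, no polynomial-time algorithm is known.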

Bresler is co-chair of the Machine Learning and Inference student admissions group in EECS and a member of the AI+D curriculum committee; he has organized external seminars in both LIDS and SDSC. Additionally, he has taught and developed multiple classes in EECS, including Probability Theory, Machine Learning, Algorithms for Inference, and Discrete Stochastic Processes, contributing to the curriculum of each. Bresler was awarded the COLT (Conference on Learning Theory) Best Student Paper Award in 2018 and 2020, and the NSF CAREER Award in 2020.

Michael Carbin is being promoted to Associate Professor with tenure, effective December 3, 2021. Carbin first joined MIT as a visiting scientist in March 2015, becoming an Assistant Professor in January 2016; in July 2021, he was promoted to Associate Professor without tenure. He earned his PhD in Computer Science at MIT in 2015 before working at Microsoft Research. Carbin heads the CSAIL Programming Systems Group, where his research centers on the design, semantics, and implementation of language-driven systems, particularly systems that operate in the presence of uncertainty in their environment (perception), implementation (neural networks or approximate transformations), or execution (unreliable hardware). His research has produced deep results in probabilistic programming, and his Lottery Ticket Hypothesis (LTH) for training neural networks caused a sensation in the field because it contradicted prevailing beliefs about the use of sparsity in training.

At MIT, Carbin has taught 6.035 and 6.UAR and has developed 6.S081, a new course on dynamic language implementations. Additionally, he has served on the EECS Admissions Committee and the Sprowls Dissertation Award Committee, runs the Programming Languages seminar, and is active in the department's efforts supporting underrepresented minority (URM) students. Within his field, Carbin has served on many program committees for top conferences (including PLDI, OOPSLA, CGO, ECOOP, ASPLOS, and USENIX ATC) and has helped organize several workshops, including the Workshop on Approximate Computing (2018-2020). He has also been invited to participate in the Computing Research Association's Visioning Workshop on Digital Computing Beyond Moore's Law (2018) and the NSF Workshop on Future Directions for Parallel and Distributed Computing (2019). Carbin is a member of DARPA's ISAT committee (2020-23), regularly organizes and represents EECS at the Tapia Celebration of Diversity in Computing, and has won the MIT Frank E. Perkins Award for Excellence in Graduate Advising. Among many other honors, Carbin received the Google Faculty Research Award in 2018; the NSF CAREER Award, also in 2018; the Facebook Research Award in 2019; and the Sloan Research Fellowship in 2020.

Stefanie Jegelka is being promoted to Associate Professor with tenure, effective July 1, 2022. Jegelka received her PhD in 2012 from ETH Zurich and the Max Planck Institute for Intelligent Systems, before becoming a postdoctoral researcher at UC Berkeley. She joined MIT as an Assistant Professor in EECS in January 2015, and was promoted to Associate Professor without tenure in July 2019. Jegelka’s research sits at the intersection of machine learning and combinatorial and continuous optimization. Her work focuses on identifying and exploiting combinatorial structure to develop and analyze models that represent important interdependencies in data, and on developing efficient, robust, scalable learning and optimization algorithms. Combining fundamental theoretical understanding with practical motivation and efficient implementation, Jegelka’s work was foundational in showing how a broad range of discrete machine-learning problems are captured by submodular set functions (submodularity can be seen as a discrete analogue of convexity for functions defined on sets or, more generally, on a lattice). Jegelka’s recent work on the representational power of graph neural networks (GNNs) has set new directions in the field and formed the basis of theoretically grounded models, establishing the equivalence between message-passing GNNs and the Weisfeiler-Lehman graph isomorphism test and using that equivalence to prove limitations of existing GNN architectures.
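The Weisfeiler-Lehman test referenced above can be sketched in a few lines. This is a toy illustration of 1-WL color refinement (not Jegelka's code): nodes repeatedly update a "color" from their own color and the multiset of their neighbors' colors. If two graphs end up with different color multisets they are certainly non-isomorphic; identical multisets are inconclusive, and message-passing GNNs can distinguish at most what this test distinguishes.

```python
# Toy 1-Weisfeiler-Lehman (color refinement) test (hypothetical sketch).
def wl_colors(adj, rounds=3):
    """adj: dict mapping node -> list of neighbors.
    Returns the sorted multiset of node colors after refinement."""
    colors = {v: 0 for v in adj}  # start with a uniform coloring
    for _ in range(rounds):
        # Each node's signature combines its color with its neighbors' colors.
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # Relabel signatures with small integers so colors stay compact.
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in adj}
    return sorted(colors.values())

path = {0: [1], 1: [0, 2], 2: [1]}            # path on 3 nodes
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}  # triangle on 3 nodes
assert wl_colors(path) != wl_colors(triangle)  # 1-WL distinguishes these two
```

A message-passing GNN layer has exactly this shape, with the hash replaced by a learned aggregation, which is the source of the equivalence described above.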

At the undergraduate level, Jegelka co-developed, together with Caroline Uhler, a new hands-on data analysis course, “Statistics, Computation, and Applications,” which demonstrates the interplay of statistics and computation. Jegelka has also developed a graduate course, “Learning with Combinatorial Structure,” which covers models, algorithms, and applications, analyzing how various types of mathematical structure can be used for machine learning. A member of the Advisory Board for Social and Ethical Responsibilities of Computing (SERC) and of the Editorial Board for Case Studies in Social and Ethical Responsibilities of Computing, Jegelka is also a steering committee member of the MIT Climate and Sustainability Consortium. Her awards include the Google Anita Borg Scholarship, the NSF CAREER Award, the DARPA Young Faculty Award, and a Sloan Research Fellowship. For her work on submodularity, Jegelka received the German Pattern Recognition Award, becoming its first female awardee.

Tim Kraska is being promoted to Associate Professor with tenure, effective July 1, 2022. Kraska earned his B.S. from the Westfälische Wilhelms-Universität Münster in 2004, his M.S. from the University of Sydney in 2006, and his Ph.D. from ETH Zurich in 2010. He held a postdoctoral appointment at UC Berkeley from 2010 to 2012 before joining Brown University as an Assistant Professor in 2013. He joined MIT EECS as an Associate Professor without tenure in January 2018. Kraska pioneered the idea of database systems that learn, and has focused on building systems infrastructure to enable interactive data science, allowing users to explore a data set, understand its characteristics, and analyze it. The sizes of data sets, and the demands for processing them, are growing faster than the capabilities of existing data-processing systems. The opportunity to address this challenge lies in specializing data-processing systems to specific data sets and workloads, an approach that traditionally required high engineering costs. Kraska’s research has demonstrated, using “learned indexes” and other examples, that modern machine learning tools can deliver much of the benefit of custom-designed systems without the associated high development costs.
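The "learned index" idea above can be sketched in miniature (a hypothetical toy, not Kraska's actual system): instead of a B-tree, fit a simple model that maps a key to its approximate position in a sorted array, record the model's worst-case error at build time, and correct each prediction with a short bounded search.

```python
# Minimal learned-index sketch: a linear model plus bounded local search.
import bisect

def build_learned_index(keys):
    """Fit pos ~= a*key + b over sorted keys by least squares.
    Returns (a, b, err) where err bounds the prediction error."""
    n = len(keys)
    mean_k = sum(keys) / n
    mean_p = (n - 1) / 2
    cov = sum((k - mean_k) * (i - mean_p) for i, k in enumerate(keys))
    var = sum((k - mean_k) ** 2 for k in keys) or 1.0
    a = cov / var
    b = mean_p - a * mean_k
    # Worst-case prediction error bounds the correction search window.
    err = max(abs(i - (a * k + b)) for i, k in enumerate(keys))
    return a, b, int(err) + 1

def lookup(keys, model, key):
    """Predict the position, then search a small window around it."""
    a, b, err = model
    guess = int(a * key + b)
    lo = max(0, guess - err)
    hi = min(len(keys), guess + err + 1)
    i = lo + bisect.bisect_left(keys[lo:hi], key)
    return i if i < len(keys) and keys[i] == key else -1

keys = sorted(range(0, 1000, 7))
model = build_learned_index(keys)
assert lookup(keys, model, 49) == keys.index(49)
assert lookup(keys, model, 50) == -1  # absent key
```

The payoff is that for regular key distributions the model is tiny and the search window small, which is the sense in which a learned structure can substitute for a hand-engineered index.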

His earlier research on data visualization culminated in the NorthStar data science platform, now being commercialized by Einblick Analytics, a startup co-founded by Kraska and his students.

Within the department, Kraska led the development of 6.S080, a new course on software systems for data science, and has been a regular instructor in 6.033. He has served on multiple program committees for top conferences and has received a number of paper awards, including a VLDB Best Demo Award, an ICDE Best Paper Award, a VLDB Best Paper Award, and a SIGMOD Best Paper Award, with other papers invited to special issues of their respective venues. His honors also include the AFOSR Young Investigator Award, the NSF CAREER Award, a Google Research Award, a VMware Early Career Faculty Grant, a Sloan Research Fellowship, the Intel Outstanding Researcher Award, and the VLDB Early Career Award.

Luqiao Liu is being promoted to Associate Professor with tenure, effective July 1, 2022. Liu received his PhD in 2012 from Cornell University. He joined MIT EECS as an Assistant Professor in September 2015 and was promoted to Associate Professor without tenure in July 2019. Liu’s research centers on spintronics, where he has leveraged his knowledge of solid-state physics and electrical engineering to make contributions to both logic and storage. In storage, Liu has dramatically improved the performance of spintronic magnetic switches by leveraging novel materials, called tunable ferrimagnets, that bypass the speed limitations of traditional materials systems. In logic, Liu provided the first direct visualization showing that magnetic domain walls can be used to manipulate magnons, establishing a facile approach to all-magnon processing (a critical step toward their use in logic devices). He then achieved strong coupling between magnons and photons in a system whose magnet is roughly 1,000-fold smaller than in prior typical approaches.

Within the department, Liu has taught 6.002, 6.003, 6.012, 6.014, and 6.730. Additionally, he has been heavily engaged in EECS graduate admissions, coordinating the EE research overview session for the graduate student visit days and serving on the EECS student award selection committee. He has also been involved in the meetings of his research community, including the APS March Meeting and the annual Conference on Magnetism and Magnetic Materials. His awards include the McMillan Award (2017), an annual award given to an outstanding young researcher in condensed matter physics; an NSF CAREER Award (2017); an AFOSR Young Investigator Award (2018); and, most recently, a Sloan Research Fellowship (2021).

To Full Professor, and new faculty hire

Caroline Uhler is being promoted to Full Professor, effective July 1, 2022. She received her PhD in Statistics from the University of California, Berkeley, in 2011. After a postdoctoral appointment at the Institute for Mathematics and its Applications (IMA) in Minneapolis, she was an Assistant Professor at IST Austria until 2015, when she joined MIT as an Assistant Professor in EECS and a core faculty member in IDSS.

Uhler has made important contributions to statistics and machine learning (e.g., the theory and applications of probabilistic graphical models and causal inference) and to biology. Recent technological developments in single-cell biology have led to an explosion of data in different modalities, including imaging and sequencing. The main challenge, which Uhler’s research has addressed, is that most of these technologies are destructive to cells, so only one modality can be measured in each cell at a time; a translation between modalities must therefore be found when aligned data are not available. The key idea is to select a random set of cells, sequence some and image the others, and then perform the alignment at the level of distributions: a common latent space is found (using autoencoders) in which both modalities produce the same distribution. Uhler’s innovative application of this autoencoder framework has enabled modality transfer between cell imaging data and single-cell RNA-seq data. Her research combines technical sophistication in machine learning with a deep understanding of questions arising in biology, allowing her work to unearth significant insights about biological processes. She is developing methods that predict temporal changes in the cell, model relations between different types of cells within the same organism, and predict the impact of a perturbation on a cell’s state. Her work on predicting cell development allows her to accurately recreate cell images at different times from a few snapshots, enabling biologists to see latent cell processes that cannot be observed experimentally and bringing new insights about cell transformation. Uhler has also developed unsupervised approaches that learn multimodal data representations revealing latent structural dependencies in biological data.

Uhler co-developed IDS.012/6.419 with Stefanie Jegelka. Additionally, she helped develop IDS.136/6.244, which delves into the algebraic structure of exponential families of graphical models and addresses causality and graphical models with latent variables. Uhler has designed and taught online MIT Professional/Executive Education courses (DataScienceX and AI in Pharma), and recorded an online MITx version of the capstone course for the MicroMasters in Statistics and Data Science at MIT. Among her prestigious awards, Uhler has been recognized with a Sloan Research Fellowship (2017) and the NSF CAREER Award (2017). She helped organize the joint effort of MIT (IDSS), Harvard, and Microsoft Research New England to bring the Women in Data Science (WiDS) conference to Cambridge, and is one of the three founding organizers of the new conference on Causal Learning and Reasoning (CLeaR). She was recently named co-director of the Eric and Wendy Schmidt Center at the Broad Institute, which brings together a community of scientists to promote interdisciplinary research between the data and life sciences.

Yury Polyanskiy is being promoted to Full Professor, effective July 1, 2022. Polyanskiy received his Ph.D. in electrical engineering in 2010 from Princeton University. After a postdoc at Princeton, he joined MIT EECS in July 2011 as an Assistant Professor. 

Polyanskiy’s research is rooted in a fascination with the flow of information: initially in the traditional setting of point-to-point communications, then in the context of communication networks, and more recently in general types of networks (including neural networks). Beginning with his doctoral thesis, Polyanskiy developed a refinement of the classical approach, centered on a suitably defined concept of channel dispersion, which can be used to quantify the achievable communication rates in the finite block-length regime. This work led to multiple awards and is now considered one of the core pieces of modern information theory. In the last several years, Polyanskiy has made an impressive number of deep contributions across several domains. Traditional information and communication theory has addressed either point-to-point transmission of information or network problems involving a moderate number of users. Consider instead a setting involving ten thousand users, each wishing to transmit only 1 kbit per second on average. This regime is a much better reflection of current IoT trends, in which a very large number of devices each wake up only occasionally to transmit a relatively short message. In his seminal 2017 ISIT paper (“A perspective on massive random-access”), Polyanskiy proposed a mathematical model that captures this regime; the model has since become known as unsourced multiple access (UMAC). At a high level, Polyanskiy showed that traditional orthogonality-based coding schemes are bound to be inefficient, and he developed a lower bound on the required energy per bit as a function of the number of users. He also derived an upper bound (by showing the existence of a suitable code) that is not far from the lower bound.

Within EECS, Polyanskiy has redesigned 6.441, the graduate information theory class; the lecture notes for this course are widely used. Additionally, he co-developed a new undergraduate subject, 6.401, and a new graduate subject, 6.265/15.070. In 2016, Polyanskiy received the Jerome Saltzer Teaching Award for his undergraduate teaching of 6.02.

Every year from 2011 to 2021, he was invited to serve on the technical program committee (TPC) of the International Symposium on Information Theory (ISIT), the main conference in his field. Polyanskiy is also an active member of the NSF Center for the Science of Information and sits on the editorial board of Foundations and Trends in Communications and Information Theory. His excellence in research and teaching was recognized in 2020 with the IEEE Information Theory Society’s James L. Massey Research & Teaching Award for Young Scholars, the society’s highest honor for an early-career faculty member.

Martin Wainwright will join the Department of Electrical Engineering and Computer Science as a Full Professor in July 2022. Prior to joining MIT, Wainwright was Chancellor’s Professor at the University of California, Berkeley, with a joint appointment between the Department of Statistics and the Department of EECS. He received a bachelor’s degree in mathematics from the University of Waterloo, Canada, and a PhD in EECS from the Massachusetts Institute of Technology (MIT). His research interests include high-dimensional statistics, statistical machine learning, information theory, and optimization theory. Among other awards, he has received the COPSS Presidents’ Award (2014) from the Committee of Presidents of Statistical Societies; the David Blackwell Lectureship (2017) and a Medallion Lectureship (2013) from the Institute of Mathematical Statistics; and Best Paper Awards from the IEEE Signal Processing Society and the IEEE Information Theory Society. He was a Section Lecturer at the International Congress of Mathematicians in 2014.

Media Inquiries

Journalists seeking information about EECS, or interviews with EECS faculty members, should email

Please note: The EECS Communications Office only handles media inquiries related to MIT’s Department of Electrical Engineering & Computer Science. Please visit other school, department, laboratory, or center websites to locate their dedicated media-relations teams.