Caroline Uhler named SIAM Fellow for 2023

The Department of Electrical Engineering and Computer Science (EECS) is proud to announce that Caroline Uhler, a professor in EECS and the Institute for Data, Systems, and Society (IDSS), has been named a Fellow of the Society for Industrial and Applied Mathematics (SIAM), Class of 2023. In the award announcement, SIAM noted that Uhler is being honored for her “fundamental contributions at the interface of statistics, machine learning, and biology.”

Uhler joined the MIT faculty in 2015 and is currently a full professor in EECS and IDSS. At MIT she is also affiliated with the Laboratory for Information and Decision Systems (LIDS), the Center for Statistics, Machine Learning at MIT, and the Operations Research Center (ORC). In addition, she is a core member of the Broad Institute of MIT and Harvard, where she is a co-director of the Eric and Wendy Schmidt Center. Uhler’s research focuses on machine learning, statistics, and computational biology, in particular on causal inference, generative modeling, and applications to genomics. Her use of probabilistic graphical models and her development of scalable algorithms with healthcare applications have enabled her research group to gain insights into causal relationships hidden within massive amounts of data (such as those generated during gene knockout or knockdown experiments).

Uhler holds an MSc in mathematics, a BSc in biology, and an MEd in mathematics education from the University of Zurich, and a PhD in statistics from UC Berkeley. Before joining MIT, she spent a semester in the “Big Data” program at the Simons Institute at UC Berkeley, held postdoctoral positions at the IMA and at ETH Zurich, and spent three years as an assistant professor at IST Austria.

She is an elected member of the International Statistical Institute, and is the recipient of a Simons Investigator Award, a Sloan Research Fellowship, an NSF Career Award, a Sofja Kovalevskaja Award from the Humboldt Foundation, and a START Award from the Austrian Science Foundation.

School of Engineering welcomes new faculty

The School of Engineering is welcoming 11 new members of its faculty across six of its academic departments and institutes. This new cohort of faculty members, who have either recently started their roles at MIT or will start their new roles within the next year, conduct research across a diverse range of disciplines. Their areas of expertise include semiconducting materials, human health in space, physics-informed deep learning, materials for nuclear energy, and using machine learning to address challenges in agriculture and climate change, to name a few.

“I warmly welcome this group of incredibly talented new faculty to our engineering community at MIT,” says Anantha Chandrakasan, dean of the MIT School of Engineering and Vannevar Bush Professor of Electrical Engineering and Computer Science. “The work each of them is doing holds tremendous potential to drive solutions for many of the challenges our world faces. Their contributions as researchers and educators will have lasting impact on the school community. I look forward to seeing them thrive as they settle into their new roles.”

A number of these new faculty members conduct research at the intersection of computing and other engineering fields. New faculty members Sara Beery, Priya Donti, Ericmoore Jossou, and Sherrie Wang were hired as part of a shared faculty search focused on computing for the health of the planet with the MIT Stephen A. Schwarzman College of Computing. Among the new faculty members, eight total have positions with both the School of Engineering and the college: six new faculty from the Department of Electrical Engineering and Computer Science (EECS), which reports to both the school and college; one shared between the Department of Mechanical Engineering and the Institute for Data, Systems, and Society, which reports to the college; and one shared between the Department of Nuclear Science and Engineering and EECS. 

Katya Arquilla joined MIT’s Department of Aeronautics and Astronautics (AeroAstro) as an assistant professor in June 2022. She serves as the Boeing Career Development Professor in Aeronautics and Astronautics. In her research, she monitors humans to quantify and augment their health and performance in extreme operational environments. She specializes in bioastronautics, psychophysiological monitoring, wearable sensor systems, and human-computer interaction. Previously, she was a postdoc working with Professor Julie Shah in the Interactive Robotics Group in AeroAstro. There, she worked on integrating psychophysiological monitoring — connecting physiological signals to psychological state — into human-robot interactions as a measure of psychological safety and trust. Arquilla earned a BS in astrophysics at Rice University and an MS and PhD in aerospace engineering from the University of Colorado at Boulder. Before her graduate studies, she taught math and physics to middle and high school students at YES Prep Public Schools, a charter school for students in Houston’s underserved communities.

Sara Beery will join the Department of EECS as an assistant professor in September. She is currently a visiting faculty researcher at Google Research. Beery’s work focuses on building computer vision methods that enable global-scale environmental and biodiversity monitoring across data modalities and tackling real-world challenges, including strong spatiotemporal correlations, imperfect data quality, fine-grained categories, and long-tailed distributions. She collaborates with nongovernmental organizations and government agencies to deploy her methods worldwide and works toward increasing the diversity and accessibility of academic research in artificial intelligence through interdisciplinary capacity-building and education. Beery earned a BS in electrical engineering and mathematics from Seattle University and a PhD in computing and mathematical sciences from Caltech, where she was honored with the Amori Prize for her outstanding dissertation.

Joseph Casamento will join MIT’s Department of Materials Science and Engineering (DMSE) as an assistant professor in January 2024. Casamento is currently a postdoc at Penn State University. He conducts research on nitride semiconducting materials at the Center for 3D Ferroelectric Microelectronics (3DFeM), a U.S. Department of Energy Energy Frontier Research Center. This work has applications in the development of next-generation energy-efficient electronic, photonic, and acoustic devices. Casamento received a BS in materials science and engineering at the University of Michigan, and an MS and PhD in material science and engineering at Cornell University.

Christina Delimitrou joined the Department of EECS as an assistant professor in September 2022 and was promoted to associate professor without tenure in January. Previously, she served as an assistant professor at Cornell University. Her main interests are in computer architecture and computer systems. Specifically, she is one of the first researchers to apply machine learning techniques to cloud systems problems, such as resource management and scheduling. She is also working on data-center server design, hardware acceleration, and distributed system debugging. Delimitrou was named an Alfred P. Sloan Research Fellow and honored with two Google Faculty Research Awards, a Microsoft Research Faculty Fellowship, an IEEE TCCA Young Computer Architect Award, an Intel Rising Star Award, and a Facebook Faculty Research Award. She earned a BS in electrical and computer engineering from the National Technical University of Athens and an MS and PhD in electrical engineering, both from Stanford University.

Priya Donti will join the Department of EECS as an assistant professor in September. Currently a part of the Jacobs Technion-Cornell Institute’s Runway Startup Postdoc Program, she is working to build Climate Change AI, a global nonprofit that she co-founded in 2019. Her work focuses on physics-informed deep learning for forecasting, optimization, and control in high-renewables power grids. Donti earned a BS in computer science and mathematics from Harvey Mudd College and a PhD in computer science and public policy from Carnegie Mellon University. She was honored with the MIT Technology Review Innovators Under 35 Award and the ACM SIGEnergy Doctoral Dissertation Award. She was also honored as a U.S. Department of Energy Computational Science Graduate Fellow, Siebel Scholar, NSF Graduate Research Fellow, and Thomas J. Watson Fellow.

Gabriele Farina will join the Department of EECS as an assistant professor in September. Farina currently serves as a research scientist at Meta in the Facebook AI Research group. Farina’s work lies at the intersection of artificial intelligence, computer science, operations research, and economics. Specifically, he focuses on learning and optimization methods for sequential decision-making and convex-concave saddle point problems, with applications to equilibrium finding in games. Farina also studies computational game theory and recently served as co-author on a Science study about combining language models with strategic reasoning. He is a recipient of a NeurIPS Best Paper Award and was a Facebook Fellow in economics and computer science. Farina earned a BS in automation and control engineering from Politecnico di Milano and is currently completing his PhD in computer science at Carnegie Mellon University.

Ericmoore Jossou will join MIT as an assistant professor in a shared position between the departments of Nuclear Science and Engineering and EECS in July. He is currently an assistant scientist at Brookhaven National Laboratory, a U.S. Department of Energy-affiliated lab that conducts research in nuclear and high-energy physics, energy science and technology, environmental and bioscience, nanoscience, and national security. His research at MIT will focus on understanding the processing-structure-properties correlation of materials for nuclear energy applications through advanced experiments, multiscale simulations, and data science. Jossou earned a BS in chemistry from Ahmadu Bello University, Zaria, and a master’s degree in materials science and engineering from the African University of Science and Technology, Abuja. He obtained his PhD in mechanical engineering with a specialization in materials science from the University of Saskatchewan. Jossou received the Petroleum Technology Development Fund scholarship in 2008, the African Development Bank scholarship in 2012, and the International Dean’s scholarship for doctoral studies at the University of Saskatchewan in 2015.

Laura Lewis PhD ’14 joined the Department of EECS and the Institute for Medical Engineering and Science (IMES) as associate professor without tenure in February. She has been appointed as the Athinoula A. Martinos Associate Professor of IMES and EECS. Lewis is a principal investigator in the Research Laboratory of Electronics at MIT, as well as an associate faculty member at the Athinoula A. Martinos Center for Biomedical Imaging at Massachusetts General Hospital. Previously, she served as assistant professor of biomedical engineering at Boston University. As a neuroscientist and engineer, Lewis focuses on neuroimaging approaches that better map brain function, with a particular focus on sleep. She is developing computational and signal processing approaches for neuroimaging data and applying these tools to study how neural computation is dynamically modulated across sleep, wake, attentional, and affective states. Lewis earned her BSc at McGill University and her PhD in neuroscience at MIT. She has been honored with the Society for Neuroscience Peter and Patricia Gruber International Research Award, the One Mind Rising Star Award, the 1907 Trailblazer Award, the Sloan Fellowship, the Searle Scholar Award, the McKnight Scholar Award, and the Pew Biomedical Scholar Award.

Kuikui Liu will join the Department of EECS as an assistant professor in September. He is currently a Foundations of Data Science Institute postdoc at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL). His research interests are in the design and analysis of Markov chains, with applications to statistical physics, high-dimensional geometry, and statistics. To study these complex stochastic dynamics, he develops and uses mathematical tools from fields such as high-dimensional expanders, geometry of polynomials, algebraic combinatorics, statistical physics, and more. He earned a BS in mathematics and computer science, an MS in computer science, and a PhD in computer science, all from the University of Washington. He was the co-recipient of a best paper award at STOC 2019 and the William Chan Memorial Dissertation Award.

Lonnie Petersen joined MIT’s Department of AeroAstro as an assistant professor in September 2022. She serves as the Charles Stark Draper Career Development Professor of Aeronautics and Astronautics. Petersen also joined the core faculty of IMES. Previously, she served as an assistant professor in mechanical and aerospace engineering at the University of California at San Diego. As both a medical doctor and engineer, Petersen’s work bridges these two worlds. She holds a PhD in gravitational physiology, and her work is focused on fluid and perfusion regulation, specifically focusing on the brain. Applications include space and aviation physiology, including countermeasure development for long-duration spaceflight and exploration class missions. Additionally, she works on the application of knowledge gained in space for life on earth, including translation of technology and human-hardware interaction. Petersen earned a BA in physics, math, and chemistry at Frederiksberg College. She received her MS in space and aviation physiology from the University of Copenhagen. Petersen obtained an MD and PhD in gravitational physiology and space medicine from the University of Copenhagen. She has completed postdoctoral fellowships at Toyo University in Tokyo and UC San Diego School of Medicine. In addition to emergency medicine, Petersen has served in pre-hospital care and remote areas, including Greenland. 

Sherrie Wang will join MIT as an assistant professor in a shared position between the Department of Mechanical Engineering and the Institute for Data, Systems, and Society in April 2023. She will serve as the Brit (1961) & Alex (1949) d’Arbeloff Career Development Professor in Mechanical Engineering. Her research uses novel data and computational algorithms to monitor our planet and enable sustainable development. Her primary application areas are improving agricultural management and mitigating climate change, especially in low- or middle-income regions of the world. She frequently works with satellite imagery, crowdsourced data, and other spatial data. Due to the scarcity of ground truth data in many applications and noisiness of real-world data in general, her methodological work focuses on developing machine learning tools that work well within these constraints. Prior to MIT, Wang was a Ciriacy-Wantrup Postdoctoral Fellow at the University of California at Berkeley, hosted by the Global Policy Lab. She earned a BA in biomedical engineering from Harvard University and an MS and PhD in computational and mathematical engineering from Stanford.

A method for designing neural networks optimally suited for certain tasks

Neural networks, a type of machine-learning model, are being used to help humans complete a wide variety of tasks, from predicting if someone’s credit score is high enough to qualify for a loan to diagnosing whether a patient has a certain disease. But researchers still have only a limited understanding of how these models work. Whether a given model is optimal for a certain task remains an open question.

MIT researchers have found some answers. They conducted an analysis of neural networks and proved that they can be designed so they are “optimal,” meaning they minimize the probability of misclassifying borrowers or patients into the wrong category when the networks are given a lot of labeled training data. To achieve optimality, these networks must be built with a specific architecture.

The researchers discovered that, in certain situations, the building blocks that enable a neural network to be optimal are not the ones developers use in practice. These optimal building blocks, derived through the new analysis, are unconventional and haven’t been considered before, the researchers say.

In a paper published this week in the Proceedings of the National Academy of Sciences, they describe these optimal building blocks, called activation functions, and show how they can be used to design neural networks that achieve better performance on any dataset. The results hold even as the neural networks grow very large. This work could help developers select the correct activation function, enabling them to build neural networks that classify data more accurately in a wide range of application areas, explains senior author Caroline Uhler, a professor in the Department of Electrical Engineering and Computer Science (EECS).

“While these are new activation functions that have never been used before, they are simple functions that someone could actually implement for a particular problem. This work really shows the importance of having theoretical proofs. If you go after a principled understanding of these models, that can actually lead you to new activation functions that you would otherwise never have thought of,” says Uhler, who is also co-director of the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard, and a researcher at MIT’s Laboratory for Information and Decision Systems (LIDS) and its Institute for Data, Systems and Society (IDSS).

Joining Uhler on the paper are lead author Adityanarayanan Radhakrishnan, an EECS graduate student and an Eric and Wendy Schmidt Center Fellow, and Mikhail Belkin, a professor in the Halicioğlu Data Science Institute at the University of California at San Diego.

Activation investigation

A neural network is a type of machine-learning model that is loosely based on the human brain. Many layers of interconnected nodes, or neurons, process data. Researchers train a network to complete a task by showing it millions of examples from a dataset.

For instance, a network that has been trained to classify images into categories, say dogs and cats, is given an image that has been encoded as numbers. The network performs a series of complex multiplication operations, layer by layer, until the result is just one number. If that number is positive, the network classifies the image as a dog, and if it is negative, as a cat.

Activation functions help the network learn complex patterns in the input data. They do this by applying a transformation to the output of one layer before data are sent to the next layer. When researchers build a neural network, they select one activation function to use. They also choose the width of the network (how many neurons are in each layer) and the depth (how many layers are in the network).
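To make the pieces above concrete, here is a toy forward pass in NumPy. This is an illustrative sketch only, not the paper's construction: the weights are random placeholders rather than learned values, and the activation shown is the standard ReLU, not one of the new activation functions the researchers derived.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy fully connected network: width 4 (neurons per layer), depth 3 (layers).
# Weights are random placeholders; in practice they are learned from data.
width, depth = 4, 3
weights = [rng.standard_normal((width, width)) for _ in range(depth)]
readout = rng.standard_normal(width)  # maps the last layer to a single number

def relu(z):
    """ReLU, one standard choice of activation function."""
    return np.maximum(z, 0.0)

def classify(x):
    """Forward pass: multiply layer by layer, applying the activation to each
    layer's output before passing it on, then reduce to one number whose
    sign determines the predicted class."""
    h = x
    for W in weights:
        h = relu(W @ h)
    score = readout @ h
    return "dog" if score > 0 else "cat"

print(classify(rng.standard_normal(width)))
```

Changing `width`, `depth`, or `relu` changes the three design choices the paragraph describes: how many neurons per layer, how many layers, and which activation function transforms the data between layers.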

“It turns out that, if you take the standard activation functions that people use in practice, and keep increasing the depth of the network, it gives you really terrible performance. We show that if you design with different activation functions, as you get more data, your network will get better and better,” says Radhakrishnan.

He and his collaborators studied a situation in which a neural network is infinitely deep and wide — which means the network is built by continually adding more layers and more nodes — and is trained to perform classification tasks. In classification, the network learns to place data inputs into separate categories.

“A clean picture”

After conducting a detailed analysis, the researchers determined that there are only three ways this kind of network can learn to classify inputs. One method classifies an input based on the majority of inputs in the training data; if there are more dogs than cats, it will decide every new input is a dog. Another method classifies by choosing the label (dog or cat) of the training data point that most resembles the new input.

The third method classifies a new input based on a weighted average of all the training data points that are similar to it. Their analysis shows that this is the only method of the three that leads to optimal performance. They identified a set of activation functions that always use this optimal classification method.
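The three behaviors can be sketched directly as classifiers over a labeled training set. This toy example is for illustration only; in particular, the Gaussian similarity weighting in the third function is one common choice of weighting, not necessarily the form the researchers' analysis identifies.

```python
import numpy as np

# Toy labeled training set: 2-D points with labels +1 ("dog") / -1 ("cat").
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
y = np.array([1, 1, 1, -1])

def majority(x):
    # 1) Predict whichever label dominates the training set, ignoring x.
    return 1 if y.sum() >= 0 else -1

def nearest_neighbor(x):
    # 2) Copy the label of the single most similar training point.
    i = np.argmin(np.linalg.norm(X - x, axis=1))
    return y[i]

def weighted_average(x, bandwidth=1.0):
    # 3) Average all training labels, weighting similar (nearby) points more
    #    heavily; the sign of the weighted average gives the class.
    w = np.exp(-np.linalg.norm(X - x, axis=1) ** 2 / (2 * bandwidth ** 2))
    return 1 if (w @ y) / w.sum() >= 0 else -1

query = np.array([4.5, 4.5])
print(majority(query), nearest_neighbor(query), weighted_average(query))  # → 1 -1 -1
```

For a query near the lone "cat" point, the majority rule still says "dog" (three dogs outnumber one cat), while the nearest-neighbor and weighted-average rules both say "cat" — the weighted average being the behavior the analysis found to be the optimal one of the three.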

“That was one of the most surprising things — no matter what you choose for an activation function, it is just going to be one of these three classifiers. We have formulas that will tell you explicitly which of these three it is going to be. It is a very clean picture,” he says.

They tested this theory on several classification benchmark tasks and found that it led to improved performance in many cases. Neural network builders could use their formulas to select an activation function that yields improved classification performance, Radhakrishnan says.

In the future, the researchers want to use what they’ve learned to analyze situations where they have a limited amount of data and for networks that are not infinitely wide or deep. They also want to apply this analysis to situations where data do not have labels.

“In deep learning, we want to build theoretically grounded models so we can reliably deploy them in some mission-critical setting. This is a promising approach at getting toward something like that — building architectures in a theoretically grounded way that translates into better results in practice,” he says.

This work was supported, in part, by the National Science Foundation, Office of Naval Research, the MIT-IBM Watson AI Lab, the Eric and Wendy Schmidt Center at the Broad Institute, and a Simons Investigator Award.

Robot armies duke it out in Battlecode’s epic on-screen battles

In a packed room in MIT’s Stata Center, hundreds of digital robots collide across a giant screen projected at the front of the room. A crowd of students in the audience gasps and cheers as the battle’s outcome hangs in the balance. In an upper corner of the screen, the people who have programmed the robot armies’ strategies narrate the action in real time.

This isn’t the latest e-sports event, it’s MIT’s long-running Battlecode competition. Open to student teams around the world, Battlecode tasks participants with writing the code to program entire armies — not just individual bots — before they duke it out. The resulting dramatic, often-unexpected outcomes are decided based on whose programming strategy aligns best with the parameters of the game and the circumstances of the battle.

The unique competition pushes teams to spend hours coding and refining their armies in a quest for the perfectly crafted game plan. Since 2007, the competition has involved high school and college students from around the world, upping the intellectual ante as people with diverse backgrounds tackle the open-ended challenge.

“We change it every year, so there’s new rules, new types of robots, new actions they can do against each other, and a new goal for how to win,” Battlecode co-president and MIT sophomore Serena Li said before this year’s final match on Feb. 5. “The strategies change every year because the game changes.”

MIT was especially well-represented in this year’s final tournament. Of the 16 finalist teams, three were made up entirely of MIT students, while another included three MIT students and one Yale University student. The winning team was made up of students from Carnegie Mellon University and Georgia Tech.

Although this year’s competition is officially closed, the hard work and long hours required for success in Battlecode often create a bond among participants that lasts far beyond the tight timeline of the competition.

“The spirit of the competitors is what makes the program so great,” fellow co-president and MIT junior Andy Wang says. “There’s always teams looking to create more and more advanced robots and heuristics to solve this thing, and people are putting in all this work and dedication, only to be matched by competitors doing the same thing. It creates a really incredible atmosphere every year.”

Setting the code

Since the early 2000s, Battlecode has given students a specified amount of time and computing power to write a program for armies of bots that battle in a video-game-style tournament.

When the program kicks off in January, participants are given the Battlecode software and the year’s game parameters. Throughout Independent Activities Period (IAP), which MIT students can take for course credit, participants learn to use artificial intelligence, pathfinding, distributed algorithms, and more to make the best possible strategy.

“This is a game that’s too complicated to play manually,” explains MIT senior Isaac Liao, who won the main tournament last year. “You can’t control every unit because there are hundreds of them and you’re going for 2,000 turns.”

Battlecode includes tracks for first-time MIT participants, U.S. college students (including MIT students who have competed before), international college students, and high school teams.

“The ability for anyone to compete really opens up the opportunity for everyone to try their skills on an even playing field,” Wang says. “High schoolers and international students do really well, and it’s cool because a lot of these teams will stick together and keep contacting each other even after high school.”

Following a month of refining their strategies, teams begin competing in tournament matches that lead up to the final event. Battlecode’s organizers fly in the international finalists and set them up in a hotel, where they often meet in person for the first time after weeks of online back and forth. Liao, who has competed for several years, says he still keeps in touch with former competitors.

The final battle is played out in front of a live audience at MIT, with the top teams receiving cash prizes.

Over the years, there have been many memorable events. One year an MIT student broke the game by figuring out how to leave the software space designed for contestants. (He kindly informed organizers of the flaw before the actual tournament.) Another year organizers threw a new variable into the battles: zombies. A team made the finals by hiding a bot in the corner of the screen and letting the rest of the bots turn to zombies to consume the opposition.

This year’s total prize pool was over $20,000. Organizers made about 200 T-shirts to give out before the final event and quickly ran out.

The unpredictable final match makes for a tense scene as competitors are given a mic to explain the strategies unfolding on screen in real time.

Wang says organizing the event, which has increased in complexity with the inclusion of international players, is hectic but fun.

“The Battlecode members are all really friendly and welcoming, and it’s a great time running the actual event and meeting all these new people and seeing this project you work on all semester come together,” Wang says.

Indeed, the ultimate legacy of Battlecode might be the friendships formed through the intense competition.

“A lot of teams are made of students who haven’t worked together too closely,” Wang says. “They found each other through the team-building process or they know each other casually, but a lot of them end up sticking together and go on to do a lot of things together. It’s a way to form these lifetime acquaintances.”

Skills that last a lifetime

A number of current and former players noted that the skills required to succeed in Battlecode transfer well to startups.

“Rather than other competitions where it’s just you in front of a computer, there’s a lot to be gained from teamwork in Battlecode,” says senior and former president Jerry Mao. “That really transfers into industry and into the real world.”

This year’s sponsors included Dropbox and Regression Games, which were both founded by past participants of Battlecode. Another past sponsor, Amplitude, was founded by Spenser Skates ’10 and Curtis Liu ’10, who met during Battlecode and have been working together ever since.

“There are a lot of parallels between what you’re trying to do in Battlecode and what you end up having to do in the early stages of a startup,” Liu says. “You have limited resources, limited time, and you’re trying to accomplish a goal. What we found is trying a lot of different things, putting our ideas out there and testing them with real data, really helped us focus on the things that actually mattered. That method of iteration and continual improvement set the foundation for how we approach building products and startups.”

Beyond startups, participants and organizers said Battlecode can prepare students for a number of careers, from quantitative trading to training AI systems to conducting research. Perhaps that’s why students keep coming back.

“The most important skills for success are a lot of iteration and perseverance and willingness to adapt on the fly — basically to change how you’re working quickly,” Wang says. “You see what other teams are doing and you’re not just competing but also talking to them, studying what they’re doing well, and adding their strengths to your bots. I think those skills are important anywhere, whether you’re building a startup or doing research or working in a big company.”

Strengthening trust in machine-learning models

Probabilistic machine learning methods are becoming increasingly powerful tools in data analysis, informing a range of critical decisions across disciplines and applications, from forecasting election results to predicting the impact of microloans on addressing poverty.

This class of methods uses sophisticated concepts from probability theory to handle uncertainty in decision-making. But the math is only one piece of the puzzle in determining their accuracy and effectiveness. In a typical data analysis, researchers make many subjective choices, and may introduce human error; these, too, must be assessed in order to cultivate users’ trust in the quality of decisions based on these methods.

To address this issue, MIT computer scientist Tamara Broderick, associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Laboratory for Information and Decision Systems (LIDS), and a team of researchers have developed a classification system — a “taxonomy of trust” — that defines where trust might break down in a data analysis and identifies strategies to strengthen trust at each step. The other researchers on the project are Professor Anna Smith at the University of Kentucky, professors Tian Zheng and Andrew Gelman at Columbia University, and Professor Rachael Meager at the London School of Economics. The team’s hope is to highlight concerns that are already well-studied and those that need more attention.

In their paper, published in February in Science Advances, the researchers begin by detailing the steps in the data analysis process where trust might break down: Analysts make choices about what data to collect and which models, or mathematical representations, most closely mirror the real-life problem or question they are aiming to answer. They select algorithms to fit the model and use code to run those algorithms. Each of these steps poses unique challenges around building trust. Some components can be checked for accuracy in measurable ways. “Does my code have bugs?”, for example, is a question that can be tested against objective criteria. Other times, problems are more subjective, with no clear-cut answers; analysts are confronted with numerous strategies to gather data and decide whether a model reflects the real world.

“What I think is nice about making this taxonomy, is that it really highlights where people are focusing. I think a lot of research naturally focuses on this level of ‘are my algorithms solving a particular mathematical problem?’ in part because it’s very objective, even if it’s a hard problem,” Broderick says.

“I think it’s really hard to answer ‘is it reasonable to mathematize an important applied problem in a certain way?’ because it’s somehow getting into a harder space, it’s not just a mathematical problem anymore.”

Capturing real life in a model

The researchers’ work in categorizing where trust breaks down, though it may seem abstract, is rooted in real-world application.

Meager, a co-author on the paper, analyzed whether microfinance can have a positive effect in a community. The project became a case study for where trust could break down, and for ways to reduce that risk.

At first look, measuring the impact of microfinancing might seem like a straightforward endeavor. But like any analysis, researchers meet challenges at each step in the process that can affect trust in the outcome. Microfinancing — in which individuals or small businesses receive small loans and other financial services in lieu of conventional banking — can offer different services, depending on the program. For the analysis, Meager gathered datasets from microfinance programs in countries across the globe, including in Mexico, Mongolia, Bosnia, and the Philippines.

When combining markedly distinct datasets, in this case from multiple countries and across different cultures and geographies, researchers must evaluate whether specific case studies can reflect broader trends. It is also important to contextualize the data on hand. For example, in rural Mexico, owning goats may be counted as an investment.

“It’s hard to measure the quality of life of an individual. People measure things like, ‘What’s the business profit of the small business?’ Or ‘What’s the consumption level of a household?’ There’s this potential for mismatch between what you ultimately really care about, and what you’re measuring,” Broderick says. “Before we get to the mathematical level, what data and what assumptions are we leaning on?”

With data on hand, analysts must define the real-world questions they seek to answer. In the case of evaluating the benefits of microfinancing, analysts must define what they consider a positive outcome. It is standard in economics, for example, to measure the average financial gain per business in communities where a microfinance program is introduced. But reporting an average might suggest a net positive effect even if only a few people (or even just one person) benefited, instead of the community as a whole.

“What you really wanted was that a lot of people are benefiting,” Broderick says. “It sounds simple. Why didn’t we measure the thing that we cared about? But I think it’s really common that practitioners use standard machine learning tools, for a lot of reasons. And these tools might report a proxy that doesn’t always agree with the quantity of interest.”
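The gap Broderick describes between a reported proxy and the quantity of interest can be made concrete with a small numerical sketch. The gains below are invented purely for illustration, not drawn from any of the microfinance datasets in the study:

```python
# Hypothetical business gains for 100 participants: one large winner, 99 small losers.
gains = [100.0] + [-1.0] * 99

# The usual "average effect" proxy.
mean_gain = sum(gains) / len(gains)

# A quantity closer to "are a lot of people benefiting?"
share_benefiting = sum(g > 0 for g in gains) / len(gains)

print(f"mean gain: {mean_gain:+.2f}")               # positive, suggesting success
print(f"share benefiting: {share_benefiting:.0%}")  # only 1% of the community
```

Reporting the mean alone hides that 99 of the 100 businesses lost money; a statistic like the share benefiting, or the full distribution of gains, tracks the stated goal more directly.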

Analysts may consciously or subconsciously favor models they are familiar with, especially after investing a great deal of time learning their ins and outs. “Someone might be hesitant to try a nonstandard method because they might be less certain they will use it correctly. Or peer review might favor certain familiar methods, even if a researcher might like to use nonstandard methods,” Broderick says. “There are a lot of reasons, sociologically. But this can be a concern for trust.”

Final step, checking the code 

While distilling a real-life problem into a model can be a big-picture, amorphous problem, checking the code that runs an algorithm can feel “prosaic,” Broderick says. But it is another potentially overlooked area where trust can be strengthened.

In some cases, checking a coding pipeline that executes an algorithm might be considered outside the purview of an analyst’s job, especially when there is the option to use standard software packages.

One way to catch bugs is to test whether code is reproducible. Depending on the field, however, sharing code alongside published work is not always a requirement or the norm. As models increase in complexity over time, it becomes harder to recreate code from scratch. Reproducing a model becomes difficult or even impossible.

“Let’s just start with every journal requiring you to release your code. Maybe it doesn’t get totally double-checked, and everything isn’t absolutely perfect, but let’s start there,” Broderick says, as one step toward building trust.

Paper co-author Gelman worked on an analysis that forecast the 2020 U.S. presidential election using state and national polls in real time. The team published daily updates in The Economist magazine, while also publishing their code online for anyone to download and run. Throughout the election season, outsiders pointed out both bugs and conceptual problems in the model, ultimately contributing to a stronger analysis.

The researchers acknowledge that while there is no single solution to create a perfect model, analysts and scientists have the opportunity to reinforce trust at nearly every turn.

“I don’t think we expect any of these things to be perfect,” Broderick says, “but I think we can expect them to be better or to be as good as possible.”

A design tool to democratize the art of color-changing mosaics

A colorful new design tool developed by MIT researchers allows individuals to create polarized light mosaics that can be printed on cellophane to make data visualizations, passive light displays, mechanical animations, fashion accessories, educational science and design tools, and more.

Ticha Melody Sethapakdi, a PhD student in electrical engineering and computer science and affiliate of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), is leading the use of regenerated cellulose to make what she calls Polagons, machine-made color-changing mosaics that use polarized light to inform and delight. Such polarized light mosaics have previously been crafted by hand, and Sethapakdi was inspired by artists such as Austine Wood Comarow, whose innovative “polage art” is based on the same physics principles. The new computational Polagon design system, however, enables a fabrication process based on laser cutting and welding, all with minimal assembly by the user.

Users first import custom mosaic designs, and the system computes the feasible color palette given a user’s cellophane supply. Uploading multiple designs to the interface allows users to create “morphing” mosaics that can transition from one image to another. Then, it’s on to logistics: Polagons optimizes the necessary constituent components for each scenario, like the type and number of sheets needed. Once the user finishes uploading designs, playing around with colors, and visualizing the color-changing behaviors, they can export the fabrication files and cut out the pieces on a laser cutter.

What’s so special about cellophane, anyway? The sheets have a property called birefringence, meaning that when light passes through, its speed depends on its polarization and propagation direction. When a sheet is put into a “sandwich” of two polarizers (material that only lets certain polarizations of light pass through while blocking others), it appears colored. The colors you see depend on properties of the material, like its thickness and its angle relative to the two polarizers. To create the color-changing effect, you just need to rotate the image, or the polarizers, because doing so changes those relative orientations.

“Perhaps, in the future, since these designs are non-electronic, they could enable interesting underwater applications, where you put these types of mosaics in places that might be difficult for electronics to stay in. That’s what’s special here, that all of these color-changing effects are mechanical,” says Sethapakdi.

One limitation of Polagons is its inability to represent every color of the rainbow in a continuous way. The team believes a potential solution could be changing the fabrication process to support a continuous way of building up colors. This might mean 3D printing a birefringent material to access a more extensive palette and gain more control over the colors.

“In creating this system, I was mostly interested in democratizing this art form and helping preserve something that might only be accessible to skilled individuals. If something happens to the creator of this layering principle, Austine Wood Comarow’s family, does the art then die with them? If we didn’t have some way of preserving that or continuing it, then you would lose something that would be very precious to the world,” says Sethapakdi. “I think there is a real benefit to building these systems that democratize niche art forms. We hope this tool can expand the community of modern polarized light mosaicists. Since we are making this process accessible to a larger group of users, it can add new programmable material to the palette of options in [human-computer interaction].”

Sethapakdi recently wrote a paper on Polagons alongside Laura Huang ’21, former MIT mechanical engineering undergraduate and current mechanical engineer at Neocis; Vivian Chan, a master’s student at Brown University and Rhode Island School of Design; Mackenzie Leake, a postdoc at MIT CSAIL; Stefanie Mueller, associate professor in EECS at MIT and a CSAIL principal investigator; Lung-Pan Cheng, assistant professor at National Taiwan University; and Fernando Fuzinatto Dall’Agnol, professor at the Federal University of Santa Catarina. The research will be presented at the 2023 Conference on Human Factors in Computing Systems (CHI 2023).

Ruonan Han named EECS Undergraduate Laboratory Officer

Ruonan Han, Associate Professor in the Department of Electrical Engineering and Computer Science (EECS), has been named Undergraduate Laboratory Officer for the department. He succeeds Karl Berggren in the role, which involves long-term strategic planning and supervision duties for the EECS Department Teaching Laboratories (DTLs).

The DTLs supply faculty, students, and staff with the necessary workspace and resources to apply theory from research and classes directly to practical implementation. They also contain one of the major campus maker spaces, providing students from across the Institute with access to facilities for electronics fabrication and testing, mechanical assembly, and 3D printing, among many other hardware capabilities. More than 30 classes across the EECS spectrum use the teaching laboratories, with most students in those classes using the space several times per week. The 25,378-square-foot space remains open and staffed more than 14 hours per day, six days per week, serving as both a regular classroom location and a study area.

Ruonan Han is a core faculty member of the Microsystems Technology Laboratories (MTL), where his research explores microelectronic circuits and systems that bridge the terahertz gap between the microwave and infrared domains. Han received his B.S. degree in microelectronics from Fudan University, China, in 2007; his M.S. degree in electrical engineering from the University of Florida in 2009; and his Ph.D. in electrical and computer engineering from Cornell University in 2014. Han is the recipient of the ECE Director’s Best Thesis Research Award and Innovation Award from Cornell University; three Best Student Paper Awards from the IEEE RFIC Symposium (2012, 2017, and 2021); an NSF Faculty Early CAREER Development Award (2017); an Intel Outstanding Researcher Award (2019); an IEEE Microwave Theory & Techniques Society Distinguished Lecturership (2019); the IEEE Solid-State Circuits Society New Frontier Award (2023); and many other awards and honors. He joined MIT EECS in 2014.

Thriving Stars panel: women leaders share career wisdom

On February 7, 2023, EECS Thriving Stars hosted its second annual career panel, featuring five highly accomplished women with PhDs. The panelists shared their career experiences, lessons learned, and advice with a community of around 80 current and newly admitted PhD students in EECS. While speaking, the panelists often built off each other’s thoughts, fostering a supportive and welcoming environment.

This year’s panelists were: Jelena Notaros, the Robert J. Shillman (1974) Career Development Professor in Electrical Engineering and Computer Science; Neha Sardesai, senior application engineer of education at MathWorks; Lila Snyder, CEO of Bose Corporation; Grace Wang, litigating high tech patent attorney at Allen and Overy, LLC; and Songyee Yoon, president and chief strategy officer of video gaming giant NCSOFT. EECS Department Head Asu Ozdaglar moderated the panel, which was held virtually via Zoom. The event was sponsored by Jane Street Capital.

To start, each panelist shared a glimpse into their working lives. Snyder explained that as CEO of Bose, her responsibilities revolve around building teams and making sure they have the necessary tools to be successful. When things are going smoothly, she takes a step back. “I do everything and nothing all at the same time,” she joked. Yoon, an executive at NCSOFT, echoed similar responsibilities in her job, noting that learning how to work on and build large teams was what drew her to a career in industry in the first place.

Sardesai shared that as an application engineer at MathWorks, she works to empower students and researchers by providing advice on using the MATLAB and Simulink software tools. 

Wang, as a litigator, represents tech companies and defends their patents. She spends her days explaining the technology covered in patents to people, often through writing or in the courtroom. While she enjoys the communication aspects of her job, there’s “no better high than coming off a jury trial and getting a decision in your favor,” she said.

And finally, Notaros broke down her responsibilities as a professor into four categories: research, teaching, mentoring, and service. “What I love most [about being a professor] is your biggest output isn’t just your research – it’s also your students,” she said.

The panelists then reflected on how many of the soft skills they learned during their PhDs are ones they still use today. Topping their list were problem solving and perseverance, especially in situations with lots of uncertainty. Sardesai also emphasized the importance of communication skills, citing the need in her job to communicate technical concepts to people across disciplines. She had picked up this skill as the only chemical engineering PhD student in an electrical engineering lab.

In the stories they shared, the panelists often attributed their success to the mentors and communities that they’ve had in their careers. For example, growing up, “I never thought I’d get a PhD,” Sardesai said. That is, until she was encouraged to do so by two of her research mentors. Notaros also credited her career to two mentors: a high school teacher who had fueled her interest in teaching and an undergraduate research advisor who had inspired her to pursue her current research field.

Sardesai noted that having a community of women at work also greatly helped her career. During her PhD, she had been the only woman in her subteam and had trouble getting recognition for her work. But, at MathWorks, she’s “lucky enough to be on a team with 50/50 [women and men],” she said, where the women share their experiences and help build each other’s confidence.

Yoon also mentioned often being the only woman in school and work settings. But now, as an executive, she leverages her position to create more diverse communities. “It’s very important for a woman to give other women opportunities,” she said. During her executive tenure, she has promoted eight women into executive positions at NCSOFT, bringing the total to ten, far more than at comparable companies, Yoon said.

Next, the panelists gave advice on work/life balance. Many believe a good approach is to first figure out your own priorities and then schedule your life to reflect them. For example, Wang, Snyder, and Notaros all block off time for their families, but each reserves different times of the day or week, reflecting when they most enjoy family time. Sardesai reminded attendees to also make time for themselves, even if they have a family.

During the audience Q&A, attendees were eager to ask the panelists for advice, especially about their personal situations. One current PhD student in computer science expressed concern that the years spent on a PhD had put her behind her peers on the industry career ladder, citing difficulty landing a management position after graduation even though many of her peers without PhDs hold such positions. The panelists reassured her that while it may not look like it right now, having a PhD will benefit her in the long run. “It’s not where you start; it’s where you finish,” Snyder said. “You might have frustration with your first role, but the experience you gained in your PhD will help you accelerate past your peers.”

At the end of the event, the panelists gave parting advice for success in any career path. Notaros shared a practical tip: “One thing that can really help you with research or work is to be organized,” she said. Sardesai recommended that attendees read at least ten minutes every day to broaden their knowledge. And finally, Snyder encouraged people to take risks, whether volunteering for a project or talking to colleagues in a different lab. “Throughout my whole career, the people who get the farthest tend to take a step beyond their comfort zone,” she said.

Student-led conference charts the future of micro- and nanoscale research, reinforces scientific community

2023 marked the 19th year for the student-led Microsystems Annual Research Conference. Pictured is the 2023 graduate student committee with faculty leads. Front row, left to right: co-chairs Maitreyi Ashok and Jennifer Wang, Duhan Zhang, Matthew Yeung, and Aya Amer. Back row, left to right: MIT.nano Director Vladimir Bulović, Mansi Joisher, Beth Whittier, Will Banner, Adina Bechhofer, Kaidong Peng, MTL Director Tomás Palacios, and Jiadi Zhu. Not pictured: Patricia Jastrzebska-Perfect, Milica Notaros, Narumi Wong, and Abigail Zhien Wang.

Snowshoeing and microelectronics are not often mentioned together in the same sentence, but at the Microsystems Annual Research Conference (MARC), winter activities, technical talks, and poster sessions all combine for a two-day flurry of research celebrations.

Returning to the Omni Mount Washington Resort in New Hampshire on Jan. 24-25 for the first time since before the pandemic, MARC gathered over 240 MIT students, faculty, staff, and industry partners to chart the future of microsystems and nanotechnology. Now in its 19th year, the student-run conference is organized by the Microsystems Technology Laboratories (MTL) and, since 2020, co-sponsored jointly with MIT.nano.

In a letter to attendees, MIT Department of Electrical Engineering and Computer Science (EECS) graduate student co-chairs Maitreyi Ashok and Jennifer Wang laid out the goals of the conference: “to celebrate the scientific and technical achievements of the past year, to revisit and redefine our roles as researchers in ever-shifting sociopolitical contexts, and to acknowledge the precious and resilient community that we have built and maintained together.”

The theme for MARC 2023 was metamorphosis to a new era of innovative microsystems. “We wanted to reflect the ongoing transition of tools from MTL to MIT.nano, as well as national and international developments in the microsystems community following the recent global chip shortage and passing of the CHIPS and Science Act,” explained Wang, a third-year graduate student in Assistant Professor Kevin O’Brien’s Quantum Coherent Electronics group. “This theme symbolizes change, growth, and a renewed energy in our research goals.”

In addition to Ashok and Wang, the core planning group included MTL graduate students Will Banner, Adina Bechhofer, and Patricia Jastrzebska-Perfect, all from EECS, as well as Narumi Wong of the Department of Chemical Engineering (ChemE) and Duhan Zhang of the Department of Materials Science and Engineering (DMSE).

“The planning of MARC truly would not have been possible without the support of the student committee, staff, and MTL and MIT.nano directors,” said Ashok, a fourth-year PhD candidate in the Energy-Efficient Circuits and Systems group, led by Dean of Engineering and EECS Professor Anantha Chandrakasan. “Thanks to the efforts of our committee, we effectively integrated lessons and tools from the past two virtual MARCs with an in-person experience. We were thrilled to be able to facilitate face-to-face meetings between students and industry partners from the MTL Microsystems Industry Group and MIT.nano Member Advisory Panel, as well as bring back winter activities.”

The return to the Omni Mount Washington also brought the return of in-person poster sessions. The student committee recruited over 100 students from 40 different research groups to submit abstracts, the most groups ever represented at MARC.

“You are the rock stars”

“You should be ready to make history,” said Tomás Palacios, director of MTL and professor of electrical engineering, in his opening remarks. “We are at an amazing time in the history of semiconductors and microelectronics. You are the rock stars of the next 25 years of technology.”

This enthusiasm and eye toward the future continued with the opening keynote by Eileen Tanghal ’97, founder and general partner of Black Opal Ventures. Tanghal spoke about her career path since graduating from MIT with a bachelor’s degree in EECS and offered advice to current students: “Find your ‘nerd posse,’” “take a risk early in life,” “look for inelastic demand,” and “it’s OK to take a break,” among others.

She shared her experiences working for a startup and then as a venture capitalist while also raising a family. In closing, she came full circle back to MIT, explaining that she started Black Opal with a group of undergraduate classmates — all women — to invest in startups at the intersection of health and technology. “You will see, in 25 years,” she said. “You need to come up with this nerd posse that will support you decades into your future.”

The future of the semiconductor industry

The second day opened with a technical keynote from Ann Kelleher, executive vice president and general manager of technology development at Intel, who focused on the evolution of Moore’s Law. “We’ve just passed the 75th year of the transistor,” said Kelleher. “Six generations have worked in the semiconductor industry. You are the next generation. The future of the semiconductor and the innovation that’s needed to keep it moving forward at the same pace is in your hands.”

Both Kelleher and Tanghal spoke about the importance of scale — being able to go from one to many when creating a product, and what that will require. “It’s one thing to make one of them. That’s proof of concept. It’s a whole different thing to make millions of them,” emphasized Kelleher.

Then it was the MIT students’ and postdocs’ turn to showcase their work and visions for the future. Two poster sessions were divided into eight areas: electronic devices, integrated circuits, medical devices and biotechnology, power, materials and manufacturing, nanotechnology and nanomaterials, optics and photonics, and quantum technologies. Each category was carefully curated by one of eight MTL graduate student session chairs: Aya Amer, Mansi Joisher, Milica Notaros, Kaidong Peng, Beth Whittier, Matthew Yeung, Abigail Zhien Wang, and Jiadi Zhu.

Before the sessions kicked off, the researchers delivered 60-second lightning talks. Attendees voted for their favorites, and the best-pitch awards went to postdocs Saurabh Nath (MechE) and Roberto Rodriguez-Moncayo (EECS) and graduate students David Morales Loro (EECS) and Hanrui Wang (EECS).

“The team of graduate student organizers has once again delivered a fantastic conference,” said MIT.nano Director Vladimir Bulović, the Fariborz Maseeh (1990) Professor of Emerging Technology. “I am impressed by the scope of activities MARC 2023 highlighted, the potential impact of the research described, and the professionalism of the student presenters. MARC conferences allow us to peer into the future, as envisioned by the next generation of inventors.  Every year it is a remarkable experience.”

MARC 2023 was held in conjunction with the QSEC Annual Research Conference, which took place on Jan. 23 and 24, also at Bretton Woods. MIT students and faculty, as well as industry affiliates, were encouraged to attend both events and experience a breadth of research and engineering in materials, structures, devices, circuits, and systems.

Palacios named Associate Director of new semiconductor research center; Jing Kong, Farnaz Niroui, Luqiao Liu, and Bilge Yildiz to act as PIs


Tomás Palacios has been named Associate Director of the SUPeRior Energy-efficient Materials and dEvices (SUPREME) Center, a new research center that will bring together researchers from 14 higher education institutions with a focus on developing energy-efficient semiconductor materials and technologies. Palacios is Director of the Microsystems Technology Laboratories (MTL) and Professor of Electrical Engineering in the Department of Electrical Engineering and Computer Science (EECS) at MIT. Besides Palacios, four other MIT faculty members will act as PIs for the new center: Professor Jing Kong, Assistant Professor Farnaz Niroui, and Associate Professor Luqiao Liu, all of EECS, and Professor Bilge Yildiz of the departments of Nuclear Science and Engineering and Materials Science and Engineering.

Sponsored by the Semiconductor Research Corporation (SRC) and one of the seven centers funded by SRC’s JUMP 2.0 consortium, the center represents a partnership between Cornell; MIT; Boise State University; Georgia Institute of Technology; North Carolina State University; Northwestern University; Rensselaer Polytechnic Institute; Rochester Institute of Technology; Stanford University; Yale University; the University of Colorado, Boulder; the University of Texas, Austin; the University of California, Santa Barbara; and the University of Notre Dame.

Cornell Professor of Engineering Huili Grace Xing will be the center’s director. In a press release from Cornell, she said, “Our center will focus on the material science, the new device architectures, and how they interplay with each other. … We want technology that can use as little energy as possible but provide as much function as possible. That is essential if we want to propagate equality,” Xing said. “If we’re able to lower the energy consumption for all of those essential means we want to have in modern life, we can lower the barrier for everybody to have access to information, to have access to education, and to have access to opportunities.”

Tomás Palacios received his PhD from the University of California, Santa Barbara, and his undergraduate degree in Telecommunication Engineering from the Universidad Politécnica de Madrid, Spain. A world expert in gallium nitride electronics for both radio-frequency and power applications, Palacios and his group have also made seminal contributions to two-dimensional materials and devices and to their heterogeneous integration with state-of-the-art silicon electronics. As Associate Director of the new SUPREME Center, Palacios will oversee more than 40 different research tasks, 10 of them at MIT, spanning from new electronic materials to novel semiconductor devices and systems that can push microelectronics to a new level of performance.