Six With Ties to MIT Honored as ACM Fellows

The headshots of all six MIT-related Fellows for ACM 2022.

On January 18th, the Association for Computing Machinery (ACM) announced its 2022 Fellows, those it recognizes “for significant contributions in areas including cybersecurity, human-computer interaction, mobile computing, and recommender systems among many other areas.” Included in the crop of new Fellows were six distinguished scientists with ties to MIT. 

Faculty

Constantinos Daskalakis, the Armen Avanessians (1982) Professor in the Department of Electrical Engineering and Computer Science (EECS) at MIT, was honored “for contributions to the foundations of algorithmic game theory, mechanism design, sublinear algorithms, and theoretical machine learning.” Daskalakis is a theoretical computer scientist who works at the interface of game theory, economics, probability theory, statistics, and machine learning. His current work focuses on multi-agent learning, learning from biased and dependent data, causal inference, and econometrics.

A native of Greece, Daskalakis joined the MIT faculty in 2009. He is a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and is affiliated with the Laboratory for Information and Decision Systems (LIDS) and the Operations Research Center (ORC). He is also an investigator in the Foundations of Data Science Institute. He has previously received such honors as the 2018 Nevanlinna Prize from the International Mathematical Union, the 2018 ACM Grace Murray Hopper Award, the Kalai Game Theory and Computer Science Prize from the Game Theory Society, and the 2008 ACM Doctoral Dissertation Award.

Hiroshi Ishii, the Jerome B. Wiesner Professor of Media Arts and Sciences and an Associate Director of the MIT Media Lab, was honored for “contributions to tangible user interfaces and to human-computer interaction.” Ishii joined the MIT Media Lab in 1995 and established the Tangible Media research group with the goal of making the digital tangible by giving physical form to digital information and computation. He is recognized as a founder of “Tangible User Interfaces (TUI).”

Ishii and his research team have presented their visions of “Tangible Bits” and “Radical Atoms” at a wide variety of academic, design, and artistic venues including ACM SIGGRAPH, Ars Electronica, ICC, Centre Pompidou, Cooper Hewitt Design Museum, and Milan Design Week. The exhibits have served to show that the design of engaging and inspiring tangible interactions requires the rigor of both scientific and artistic review, encapsulated by Ishii’s motto, “Be Artistic and Analytic. Be Poetic and Pragmatic.” Ishii was elected to the CHI Academy in 2006, and in 2019 received the SIGCHI Lifetime Research Award for his fundamental and influential research contributions to the field of human-computer interaction.

Alumni 

Kevin Fu ’98, MNG ’99, PhD ’05 (EECS), Professor of Electrical and Computer Engineering and Professor in the Khoury College of Computer Sciences at Northeastern University, was honored “for contributions to computer security, and especially to the secure engineering of medical devices.” Fu’s research interests include security as it relates to emerging sensor technology in biomedical engineering and cyberphysical systems; his work has important implications for medical devices, autonomous transportation, healthcare delivery, manufacturing, and the Internet of Things.

Prior to joining Northeastern in January 2023, Fu was an Associate Professor at the University of Michigan and an Associate Professor at UMass Amherst; additionally, beginning in 2021, he was Acting Director of Medical Device Cybersecurity within the FDA Center for Devices and Radiological Health (CDRH) and Program Director for Cybersecurity within the FDA Digital Health Center of Excellence (DHCoE). His honors include a Sloan Research Fellowship, recognition as an MIT Technology Review TR35 Innovator of the Year, election as an IEEE Fellow, a Fed100 Award, and an NSF CAREER Award. He has received best paper awards from USENIX Security, IEEE S&P, and ACM SIGCOMM, and his work on pacemaker security received an inaugural Test of Time Award from IEEE Security and Privacy.

Jimmy Lin ’00, MNG ’01, PhD ’04 (EECS), Professor and David R. Cheriton Chair in the School of Computer Science at the University of Waterloo, was honored “for contributions to question answering, information retrieval, and natural language processing.”

Lin’s research centers on the challenge of connecting users with relevant information at scale. Over the years, he has worked on systems designed for diverse users, ranging from casual searchers on the web to intelligence analysts, medical doctors, historians, and data scientists. Prior to joining the University of Waterloo, Lin was at the University of Maryland; additionally, he has spent time at Twitter, Cloudera, Microsoft, and the National Library of Medicine (NLM). He is currently the Chief Technology Officer of Primal, a Waterloo-based knowledge graph and deep learning company; previously, he was the Chief Scientist of RSVP.ai, a Waterloo-based startup.

Rafael Pass PhD ’06 (EECS), a Professor of Computer Science at Tel Aviv University, director of the Checkpoint Institute for Information Security, and a Professor at Cornell Tech/Cornell University, was honored “for contributions to the foundations of cryptography.” Pass’s work focuses on cryptography and its interplay with computational complexity and game theory, as well as the theoretical foundations of blockchains and connections between cryptography and Kolmogorov complexity.

His honors include winning the 9th NSA Best Scientific Cybersecurity Paper Competition (2022), the Best Paper Award at the 41st Annual International Cryptology Conference (CRYPTO 2021), a Wallenberg Academy Fellowship (awarded by the Royal Swedish Academy of Sciences), an Alfred P. Sloan Research Fellowship, an AFOSR Young Investigator Award, a Microsoft Research Faculty Fellowship, and an NSF CAREER Award, among others. Before earning his PhD from MIT, he earned his bachelor’s in Engineering Physics and a master’s in Computer Science, both from the Royal Institute of Technology (KTH) in Sweden.

Jaime Teevan SM ’01, PhD ’07 (EECS), Chief Scientist and a Technical Fellow at Microsoft, was honored “for contributions to human-computer interaction, information retrieval, and productivity.” Teevan is responsible for driving research-backed innovation related to everything from AI to hybrid work in Microsoft’s core products. Previously, she was Technical Advisor to CEO Satya Nadella, and led the Productivity team at Microsoft Research. 

This year, in addition to becoming an ACM Fellow, Teevan was inducted into the ACM SIGIR and CHI Academies. She is also an Affiliate Professor at the University of Washington; before earning her master’s and PhD from MIT, she earned her BS from Yale.

Mohammad Alizadeh named new Industry Officer for EECS

Mohammad Alizadeh has been named the Industry Officer for MIT’s Department of Electrical Engineering and Computer Science. In this role, Alizadeh will oversee the EECS Alliance, the department’s industry outreach program, which provides access for EECS students to internships, postgraduate employment, networking, and collaborations. He succeeds Tomás Palacios and Aude Oliva in the role.

Alizadeh, a member of CSAIL, works in the areas of computer networks and systems. His current research focuses on machine learning for systems, network protocols, and resource management in a variety of settings, including datacenters and cloud computing, edge computing, Internet video delivery, and large-scale decentralized systems.

Before joining MIT in 2015, Alizadeh spent time at Microsoft Research, Insieme Networks, and Cisco Systems. He earned his MS and PhD in Electrical Engineering from Stanford University, and his BS from the Sharif University of Technology in Iran. Among his many honors, Alizadeh has received the Microsoft Research Faculty Fellowship, VMware Systems Research Award, SIGCOMM Rising Star Award, NSF CAREER Award, Alfred P. Sloan Research Fellowship, SIGCOMM Test of Time Award, and multiple best paper awards.

Subtle biases in AI can influence emergency decisions

It’s no secret that people harbor biases — some unconscious, perhaps, and others painfully overt. The average person might suppose that computers — machines typically made of plastic, steel, glass, silicon, and various metals — are free of prejudice. While that assumption may hold for computer hardware, the same is not always true for computer software, which is programmed by fallible humans and can be fed data that is, itself, compromised in certain respects.

Artificial intelligence (AI) systems — those based on machine learning, in particular — are seeing increased use in medicine for diagnosing specific diseases, for example, or evaluating X-rays. These systems are also being relied on to support decision-making in other areas of health care. Recent research has shown, however, that machine learning models can encode biases against minority subgroups, and the recommendations they make may consequently reflect those same biases.

A new study by researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT Jameel Clinic, which was published last month in Communications Medicine, assesses the impact that discriminatory AI models can have, especially for systems that are intended to provide advice in urgent situations. “We found that the manner in which the advice is framed can have significant repercussions,” explains the paper’s lead author, Hammaad Adam, a PhD student at MIT’s Institute for Data Systems and Society. “Fortunately, the harm caused by biased models can be limited (though not necessarily eliminated) when the advice is presented in a different way.” The other co-authors of the paper are Aparna Balagopalan and Emily Alsentzer, both PhD students, and the professors Fotini Christia and Marzyeh Ghassemi.

AI models used in medicine can suffer from inaccuracies and inconsistencies, in part because the data used to train the models are often not representative of real-world settings. Different kinds of X-ray machines, for instance, can record things differently and hence yield different results. Models trained predominantly on white people, moreover, may not be as accurate when applied to other groups. The Communications Medicine paper is not focused on issues of that sort; instead, it addresses problems that stem from biases and ways to mitigate the adverse consequences.

A group of 954 people (438 clinicians and 516 nonexperts) took part in an experiment to see how AI biases can affect decision-making. The participants were presented with call summaries from a fictitious crisis hotline, each involving a male individual undergoing a mental health emergency. The summaries indicated whether the individual was Caucasian or African American, and mentioned his religion if he was Muslim. A typical call summary might describe a circumstance in which an African American man was found at home in a delirious state, indicating that “he has not consumed any drugs or alcohol, as he is a practicing Muslim.” Study participants were instructed to call the police if they thought the patient was likely to turn violent; otherwise, they were encouraged to seek medical help.

The participants were randomly divided into a control or “baseline” group plus four other groups designed to test responses under slightly different conditions. “We want to understand how biased models can influence decisions, but we first need to understand how human biases can affect the decision-making process,” Adam notes. What they found in their analysis of the baseline group was rather surprising: “In the setting we considered, human participants did not exhibit any biases. That doesn’t mean that humans are not biased, but the way we conveyed information about a person’s race and religion, evidently, was not strong enough to elicit their biases.”

The other four groups in the experiment were given advice that came from either a biased or an unbiased model, and that advice was presented in either a “prescriptive” or a “descriptive” form. A biased model would be more likely to recommend police help in a situation involving an African American or Muslim person than would an unbiased model. Participants in the study, however, did not know which kind of model their advice came from, or even that models delivering the advice could be biased at all. Prescriptive advice spells out what a participant should do in unambiguous terms, telling them they should call the police in one instance or seek medical help in another. Descriptive advice is less direct: A flag is displayed to show that the AI system perceives a risk of violence associated with a particular call; no flag is shown if the threat of violence is deemed small.
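As a rough sketch of the two framings described above (the function names and the 0.5 threshold are hypothetical, not taken from the study), the same risk estimate from a possibly biased model can be surfaced either way:

```python
# Sketch of the two advice framings (names and threshold are invented).
# The same model risk score is presented either as a direct instruction
# (prescriptive) or as a neutral flag (descriptive).

def prescriptive_advice(risk_score: float, threshold: float = 0.5) -> str:
    """Spell out what the participant should do, leaving little room for doubt."""
    if risk_score >= threshold:
        return "Call the police."
    return "Seek medical help."

def descriptive_advice(risk_score: float, threshold: float = 0.5) -> str:
    """Only flag perceived risk; the decision is left to the participant."""
    if risk_score >= threshold:
        return "FLAG: risk of violence detected"
    return ""  # no flag shown when the perceived risk is small
```

The descriptive form deliberately stops short of a recommendation, which is what the study found preserved participants’ own unbiased judgment.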

A key takeaway of the experiment is that participants “were highly influenced by prescriptive recommendations from a biased AI system,” the authors wrote. But they also found that “using descriptive rather than prescriptive recommendations allowed participants to retain their original, unbiased decision-making.” In other words, the bias incorporated within an AI model can be diminished by appropriately framing the advice that’s rendered. Why the different outcomes, depending on how advice is posed? When someone is told to do something, like call the police, that leaves little room for doubt, Adam explains. However, when the situation is merely described — classified with or without the presence of a flag — “that leaves room for a participant’s own interpretation; it allows them to be more flexible and consider the situation for themselves.”

Second, the researchers found that the language models that are typically used to offer advice are easy to bias. Language models represent a class of machine learning systems that are trained on text, such as the entire contents of Wikipedia and other web material. When these models are “fine-tuned” by relying on a much smaller subset of data for training purposes — just 2,000 sentences, as opposed to 8 million web pages — the resultant models can be readily biased.  

Third, the MIT team discovered that decision-makers who are themselves unbiased can still be misled by the recommendations provided by biased models. Medical training (or the lack thereof) did not change responses in a discernible way. “Clinicians were influenced by biased models as much as non-experts were,” the authors stated.

“These findings could be applicable to other settings,” Adam says, and are not necessarily restricted to health care situations. When it comes to deciding which people should receive a job interview, a biased model could be more likely to turn down Black applicants. The results could be different, however, if instead of explicitly (and prescriptively) telling an employer to “reject this applicant,” a descriptive flag is attached to the file to indicate the applicant’s “possible lack of experience.”

The implications of this work are broader than just figuring out how to deal with individuals in the midst of mental health crises, Adam maintains.  “Our ultimate goal is to make sure that machine learning models are used in a fair, safe, and robust way.”

Unpacking the “black box” to build better AI models

When deep learning models are deployed in the real world, perhaps to detect financial fraud from credit card activity or identify cancer in medical images, they are often able to outperform humans.

But what exactly are these deep learning models learning? Does a model trained to spot skin cancer in clinical images, for example, actually learn the colors and textures of cancerous tissue, or is it flagging some other features or patterns?

These powerful machine-learning models are typically based on artificial neural networks that can have millions of nodes that process data to make predictions. Due to their complexity, researchers often call these models “black boxes” because even the scientists who build them don’t understand everything that is going on under the hood.

Stefanie Jegelka isn’t satisfied with that “black box” explanation. A newly tenured associate professor in the MIT Department of Electrical Engineering and Computer Science, Jegelka is digging deep into deep learning to understand what these models can learn and how they behave, and how to build certain prior information into these models.

“At the end of the day, what a deep-learning model will learn depends on so many factors. But building an understanding that is relevant in practice will help us design better models, and also help us understand what is going on inside them so we know when we can deploy a model and when we can’t. That is critically important,” says Jegelka, who is also a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Institute for Data, Systems, and Society (IDSS).

Jegelka is particularly interested in optimizing machine-learning models when input data are in the form of graphs. Graph data pose specific challenges: for instance, the data contain information about individual nodes and edges, as well as the structure — what is connected to what. In addition, graphs have mathematical symmetries that need to be respected by the machine-learning model so that, for instance, the same graph always leads to the same prediction. Building such symmetries into a machine-learning model is usually not easy.

Take molecules, for instance. Molecules can be represented as graphs, with vertices that correspond to atoms and edges that correspond to chemical bonds between them. Drug companies may want to use deep learning to rapidly predict the properties of many molecules, narrowing down the number they must physically test in the lab.

Jegelka studies methods to build mathematical machine-learning models that can effectively take graph data as an input and output something else, in this case a prediction of a molecule’s chemical properties. This is particularly challenging since a molecule’s properties are determined not only by the atoms within it, but also by the connections between them.  
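A toy sketch (not Jegelka’s actual models, and with invented feature values) of the symmetry requirement described above: aggregating each node’s neighbor features and then summing over all nodes gives a readout that is unchanged when the atoms are relabeled.

```python
# Toy permutation-invariant graph readout. Because the final step sums over
# all nodes, relabeling the nodes of the same molecule cannot change the output.

def graph_readout(node_feats, edges):
    """One round of neighbor aggregation followed by a sum over all nodes."""
    neighbors = {i: [] for i in range(len(node_feats))}
    for u, v in edges:  # undirected bonds
        neighbors[u].append(v)
        neighbors[v].append(u)
    updated = [
        node_feats[i] + sum(node_feats[j] for j in neighbors[i])
        for i in range(len(node_feats))
    ]
    return sum(updated)

# Water as a graph: O (toy feature 8.0) bonded to two H atoms (1.0 each).
h2o = ([8.0, 1.0, 1.0], [(0, 1), (0, 2)])
# The same molecule with its atoms listed in a different order.
h2o_relabeled = ([1.0, 8.0, 1.0], [(1, 0), (1, 2)])
```

Both orderings yield the same readout, which is exactly the invariance a graph model for molecules must guarantee.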

Other examples of machine learning on graphs include traffic routing, chip design, and recommender systems.

Designing these models is made even more difficult by the fact that data used to train them are often different from data the models see in practice. Perhaps the model was trained using small molecular graphs or traffic networks, but the graphs it sees once deployed are larger or more complex.

In this case, what can researchers expect this model to learn, and will it still work in practice if the real-world data are different?

“Your model is not going to be able to learn everything because of some hardness problems in computer science, but what you can learn and what you can’t learn depends on how you set the model up,” Jegelka says.

She approaches this question by combining her passion for algorithms and discrete mathematics with her excitement for machine learning.

From butterflies to bioinformatics

Jegelka grew up in a small town in Germany and became interested in science when she was a high school student; a supportive teacher encouraged her to participate in an international science competition. She and her teammates from the U.S. and Hong Kong won an award for a website about butterflies that they created in three languages.

“For our project, we took images of wings with a scanning electron microscope at a local university of applied sciences. I also got the opportunity to use a high-speed camera at Mercedes Benz — this camera usually filmed combustion engines — which I used to capture a slow-motion video of the movement of a butterfly’s wings. That was the first time I really got in touch with science and exploration,” she recalls.

Intrigued by both biology and mathematics, Jegelka decided to study bioinformatics at the University of Tübingen and the University of Texas at Austin. She had a few opportunities to conduct research as an undergraduate, including an internship in computational neuroscience at Georgetown University, but wasn’t sure what career to follow.

When she returned for her final year of college, Jegelka moved in with two roommates who were working as research assistants at the Max Planck Institute in Tübingen.

“They were working on machine learning, and that sounded really cool to me. I had to write my bachelor’s thesis, so I asked at the institute if they had a project for me. I started working on machine learning at the Max Planck Institute and I loved it. I learned so much there, and it was a great place for research,” she says.

She stayed on at the Max Planck Institute to complete a master’s thesis, and then embarked on a PhD in machine learning at the Max Planck Institute and the Swiss Federal Institute of Technology.

During her PhD, she explored how concepts from discrete mathematics can help improve machine-learning techniques.

Teaching models to learn

The more Jegelka learned about machine learning, the more intrigued she became by the challenges of understanding how models behave, and how to steer this behavior.

“You can do so much with machine learning, but only if you have the right model and data. It is not just a black-box thing where you throw it at the data and it works. You actually have to think about it, its properties, and what you want the model to learn and do,” she says.

After completing a postdoc at the University of California at Berkeley, Jegelka was hooked on research and decided to pursue a career in academia. She joined the faculty at MIT in 2015 as an assistant professor.

“What I really loved about MIT, from the very beginning, was that the people really care deeply about research and creativity. That is what I appreciate the most about MIT. The people here really value originality and depth in research,” she says.

That focus on creativity has enabled Jegelka to explore a broad range of topics.

In collaboration with other faculty at MIT, she studies machine-learning applications in biology, imaging, computer vision, and materials science.

But what really drives Jegelka is probing the fundamentals of machine learning, and most recently, the issue of robustness. Often, a model performs well on training data, but its performance deteriorates when it is deployed on slightly different data. Building prior knowledge into a model can make it more reliable, but understanding what information the model needs to be successful and how to build it in is not so simple, she says.

She is also exploring methods to improve the performance of machine-learning models for image classification.

Image classification models are everywhere, from the facial recognition systems on mobile phones to tools that identify fake accounts on social media. These models need massive amounts of data for training, but since it is expensive for humans to hand-label millions of images, researchers often use unlabeled datasets to pretrain models instead.

These models then reuse the representations they have learned when they are fine-tuned later for a specific task.

Ideally, researchers want the model to learn as much as it can during pretraining, so it can apply that knowledge to its downstream task. But in practice, these models often learn only a few simple correlations — like that one image has sunshine and one has shade — and use these “shortcuts” to classify images.
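A contrived sketch of such a shortcut (all data and names invented): a classifier that keys only on overall brightness separates the training images perfectly, then mislabels a shaded example once the spurious correlation breaks.

```python
# Toy "shortcut learning" illustration: the classifier uses mean brightness,
# which happens to correlate with the label in training but is not the
# feature we actually care about.

def brightness_classifier(image, threshold=0.5):
    """Predict 'cat' for bright images and 'dog' for dark ones."""
    mean = sum(image) / len(image)
    return "cat" if mean > threshold else "dog"

# Training data where the label coincidentally tracks brightness
# (sunny cats, shaded dogs). The shortcut scores 100% here.
train = [([0.9, 0.8, 0.9], "cat"), ([0.1, 0.2, 0.1], "dog")]
assert all(brightness_classifier(x) == y for x, y in train)

# A shaded cat breaks the correlation: the shortcut now answers "dog".
shaded_cat = [0.2, 0.1, 0.2]
```

The model never learned what a cat looks like, only what the training set’s lighting looked like, which is the failure mode Jegelka’s work probes.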

“We showed that this is a problem in ‘contrastive learning,’ which is a standard technique for pre-training, both theoretically and empirically. But we also show that you can influence the kinds of information the model will learn to represent by modifying the types of data you show the model. This is one step toward understanding what models are actually going to do in practice,” she says.

Researchers still don’t understand everything that goes on inside a deep-learning model, or the details of how they can influence what a model learns and how it behaves, but Jegelka looks forward to continuing to explore these topics.

“Often in machine learning, we see something happen in practice and we try to understand it theoretically. This is a huge challenge. You want to build an understanding that matches what you see in practice, so that you can do better. We are still just at the beginning of understanding this,” she says.

Outside the lab, Jegelka is a fan of music, art, traveling, and cycling. But these days, she enjoys spending most of her free time with her preschool-aged daughter.

New quantum computing architecture could be used to connect large-scale devices

Quantum computers hold the promise of performing certain tasks that are intractable even on the world’s most powerful supercomputers. In the future, scientists anticipate using quantum computing to emulate materials systems, simulate quantum chemistry, and optimize hard tasks, with impacts potentially spanning finance to pharmaceuticals.

However, realizing this promise requires resilient and extensible hardware. One challenge in building a large-scale quantum computer is that researchers must find an effective way to interconnect quantum information nodes — smaller-scale processing nodes separated across a computer chip. Because quantum computers are fundamentally different from classical computers, conventional techniques used to communicate electronic information do not directly translate to quantum devices. However, one requirement is certain: Whether via a classical or a quantum interconnect, the carried information must be transmitted and received.    

To this end, MIT researchers have developed a quantum computing architecture that will enable extensible, high-fidelity communication between superconducting quantum processors. In work published today in Nature Physics, MIT researchers demonstrate step one, the deterministic emission of single photons — information carriers — in a user-specified direction. Their method ensures quantum information flows in the correct direction more than 96 percent of the time.

Linking several of these modules enables a larger network of quantum processors that are interconnected with one another, no matter their physical separation on a computer chip.

“Quantum interconnects are a crucial step toward modular implementations of larger-scale machines built from smaller individual components,” says Bharath Kannan PhD ’22, co-lead author of a research paper describing this technique.

“The ability to communicate between smaller subsystems will enable a modular architecture for quantum processors, and this may be a simpler way of scaling to larger system sizes compared to the brute-force approach of using a single large and complicated chip,” Kannan adds.

Kannan wrote the paper with co-lead author Aziza Almanakly, an electrical engineering and computer science graduate student in the Engineering Quantum Systems group of the Research Laboratory of Electronics (RLE) at MIT. The senior author is William D. Oliver, an MIT professor of electrical engineering and computer science and of physics, an MIT Lincoln Laboratory Fellow, director of the Center for Quantum Engineering, and associate director of RLE.

Moving quantum information

In a conventional classical computer, various components perform different functions, such as memory and computation. Electronic information, encoded and stored as bits (which take the value of 1s or 0s), is shuttled between these components using interconnects, which are wires that move electrons around on a computer processor.

But quantum information is more complex. Instead of only holding a value of 0 or 1, quantum information can also be both 0 and 1 simultaneously (a phenomenon known as superposition). Also, quantum information can be carried by particles of light, called photons. These added complexities make quantum information fragile, and it can’t be transported simply using conventional protocols.
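In standard notation, the superposition mentioned above is written as a weighted combination of the two basis states:

```latex
% A qubit holds a superposition of the classical values 0 and 1, with
% complex amplitudes whose squared magnitudes sum to one:
\[
  |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
  \qquad |\alpha|^2 + |\beta|^2 = 1 .
\]
```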

A quantum network links processing nodes using photons that travel through special interconnects known as waveguides. A waveguide can be either unidirectional, moving a photon only to the left or only to the right, or bidirectional.

Most existing architectures use unidirectional waveguides, which are easier to implement since the direction in which photons travel is easily established. But since each waveguide only moves photons in one direction, more waveguides become necessary as the quantum network expands, which makes this approach difficult to scale. In addition, unidirectional waveguides usually incorporate additional components to enforce the directionality, which introduces communication errors.

“We can get rid of these lossy components if we have a waveguide that can support propagation in both the left and right directions, and a means to choose the direction at will. This ‘directional transmission’ is what we demonstrated, and it is the first step toward bidirectional communication with much higher fidelities,” says Kannan.

Using their architecture, multiple processing modules can be strung along one waveguide. A remarkable feature of the architecture design is that the same module can be used as both a transmitter and a receiver, he says. And photons can be sent and captured by any two modules along a common waveguide.

“We have just one physical connection that can have any number of modules along the way. This is what makes it scalable. Having demonstrated directional photon emission from one module, we are now working on capturing that photon downstream at a second module,” Almanakly adds.

Leveraging quantum properties

To accomplish this, the researchers built a module comprising four qubits.

Qubits are the building blocks of quantum computers, and are used to store and process quantum information. But qubits can also be used as photon emitters. Adding energy to a qubit causes the qubit to become excited, and then when it de-excites, the qubit will emit the energy in the form of a photon.

However, simply connecting one qubit to a waveguide does not ensure directionality. A single qubit emits a photon, but whether it travels to the left or to the right is completely random. To circumvent this problem, the researchers utilize two qubits and a property known as quantum interference to ensure the emitted photon travels in the correct direction.

The technique involves preparing the two qubits in an entangled state of single excitation called a Bell state. This quantum-mechanical state comprises two aspects: the left qubit being excited and the right qubit being excited. Both aspects exist simultaneously, but which qubit is excited at a given time is unknown.

When the qubits are in this entangled Bell state, the photon is effectively emitted to the waveguide at the two qubit locations simultaneously, and these two “emission paths” interfere with each other. Depending on the relative phase within the Bell state, the resulting photon emission must travel to the left or to the right. By preparing the Bell state with the correct phase, the researchers choose the direction in which the photon travels through the waveguide.
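The single-excitation Bell state described above can be written as follows (the phase convention here is for illustration only):

```latex
% Single-excitation Bell state of the two emitter qubits: |10> means the
% left qubit is excited, |01> the right. The relative phase \phi between
% the two components sets the interference condition in the waveguide,
% so one choice of \phi cancels leftward emission (the photon exits to
% the right) and the opposite phase cancels rightward emission.
\[
  |\psi\rangle \;=\; \frac{1}{\sqrt{2}}\Bigl( |10\rangle + e^{i\phi}\,|01\rangle \Bigr)
\]
```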

They can use this same technique, but in reverse, to receive the photon at another module.

“The photon has a certain frequency, a certain energy, and you can prepare a module to receive it by tuning it to the same frequency. If they are not at the same frequency, then the photon will just pass by. It’s analogous to tuning a radio to a particular station. If we choose the right radio frequency, we’ll pick up the music transmitted at that frequency,” Almanakly says.

The researchers found that their technique achieved more than 96 percent fidelity — this means that if they intended to emit a photon to the right, 96 percent of the time it went to the right.

Now that they have used this technique to effectively emit photons in a specific direction, the researchers want to connect multiple modules and use the process to emit and absorb photons. This would be a major step toward the development of a modular architecture that combines many smaller-scale processors into one larger-scale, and more powerful, quantum processor.

“The work demonstrates an on-demand quantum emitter, in which the interference of the emitted photon from an entangled state defines the direction, beautifully manifesting the power of waveguide quantum electrodynamics,” says Yasunobu Nakamura, director of the RIKEN Center for Quantum Computing, who was not involved with this research. “It can be used as a fully programmable quantum node that can emit/absorb/pass/store quantum information on a quantum network and as an interface for a bus connecting multiple quantum computer chips.”

The research is funded, in part, by the AWS Center for Quantum Computing, the U.S. Army Research Office, the U.S. Department of Energy Office of Science National Quantum Information Science Research Centers, the Co-design Center for Quantum Advantage, and the U.S. Department of Defense.

Putting a new spin on computer hardware

Luqiao Liu was the kind of kid who would rather take his toys apart to see how they worked than play with them the way they were intended.

Curiosity has been a driving force throughout his life, and it led him to MIT, where Liu is a newly tenured associate professor in the Department of Electrical Engineering and Computer Science and a member of the Research Laboratory of Electronics.

Rather than taking things apart, he’s now using novel materials and nanoscale fabrication techniques to build next-generation electronics that use dramatically less power than conventional devices. Curiosity still comes in handy, he says, especially since he and his collaborators work in the largely uncharted territory of spin electronics — a field that only emerged in the 1980s.

“There are many challenges that we must overcome in our work. In spin electronics, there is still a gap between what could be done fundamentally and what has been done so far. There is a lot still to study in terms of getting better materials and finding new mechanisms so we can reach higher and higher performance,” says Liu, who is also a member of the MIT-IBM Watson AI Lab.

Electrons are subatomic particles that possess a fundamental quantum property known as spin. One way to visualize this is to think of a top spinning about its own axis, which gives the top angular momentum determined by its mass, radius, and rotation speed. For an electron, this intrinsic angular momentum is its spin.

Although electrons don’t literally rotate on an axis like a top, they do carry this kind of intrinsic angular momentum, which can point “up” or “down.” Instead of using positive and negative electric charges to represent binary information (1s and 0s) in electronic devices, engineers can use the binary nature of electron spin.

Because it takes less energy to change the spin direction of electrons, electron spin can be used to switch transistors in electronic devices using much less power than with traditional electronics. Transistors, the basic building blocks of modern electronics, are used to regulate electrical signals.

Also, due to their angular momentum, electrons behave like tiny magnets. Researchers can use these magnetic properties to represent and store information in computer memory hardware. Liu and his collaborators are aiming to accelerate the process, removing the speed bottlenecks that hold back lower-power, higher-performance computer memory devices.

Attracted to magnetism

Liu’s path to studying computer memory hardware and spin electronics began with refrigerator magnets. As a young child, he wondered why a magnet would stick to the fridge.

That early curiosity helped to spark his interest in science and math. As he delved into those subjects in high school and college, learning more about physics, chemistry, and electronics, his curiosity about magnetism and its uses in computers deepened.

When he had the opportunity to pursue a PhD at Cornell University and join a research group that was studying magnetic materials, Liu found the perfect match.

“I spent the next five or six years looking into new and more efficient ways to generate electron spin current and use that to write information into magnetic computer memories,” he says.

While he was fascinated by the world of research, Liu wanted to try his hand at an industry career, so he joined IBM’s T.J. Watson Research Center after graduate school. There, his work focused on developing more efficient magnetic random access memory hardware for computers.

“Making something finally work in a commercially available format is quite important, but I didn’t find myself fully engaged with that kind of fine-tuning work. I wanted to show the viability of very novel work — to prove that some new concept is possible,” Liu says. He joined MIT as an assistant professor in 2015.

Material matters

Some of Liu’s most recent work at MIT involves building computer memories using nanoscale, antiferromagnetic materials. Antiferromagnetic materials, such as manganese, contain ions that act as tiny magnets due to electron spin. The ions arrange themselves so that those spinning “up” sit opposite those spinning “down,” and the magnetism cancels out.

Because they don’t produce magnetic fields, antiferromagnetic materials can be packed closer together onto a memory device, which leads to higher storage capacity. And their lack of a magnetic field means the spin states can be switched between “up” and “down” very quickly, so antiferromagnetic materials can switch transistors much faster than traditional materials, Liu explains.
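The cancellation Liu describes can be seen in a toy one-dimensional picture, an illustration rather than a physical simulation:

```python
import numpy as np

# Toy picture of magnetic ordering (illustration, not a physical simulation):
# in a ferromagnet all moments point the same way and add up, while in an
# antiferromagnet neighboring moments alternate "up" and "down", so the net
# magnetization, and with it the stray magnetic field, cancels to zero.

n = 8                                      # moments in a small 1-D chain
ferromagnet = np.ones(n)                   # all spins "up"
antiferromagnet = (-1.0) ** np.arange(n)   # alternating up/down: 1, -1, 1, ...

print(ferromagnet.sum())      # 8.0: large net moment, large stray field
print(antiferromagnet.sum())  # 0.0: moments cancel, no stray field
```

It is this absence of stray fields that lets antiferromagnetic bits sit close together without disturbing their neighbors.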

“In the scientific community, it had been under debate whether you can electrically switch the spin orientation inside these antiferromagnetic materials. Using experiments, we showed that you can,” he says.

In his experiments, Liu often uses novel materials that were created just a few years ago, so their properties are not yet fully understood. But he enjoys the challenge of integrating them into devices and testing their functionality. Finding better materials to leverage electron spin in computer memories can lead to devices that use less power, store more information, and retain that information for a longer period of time.

Liu takes advantage of the cutting-edge equipment inside MIT.nano, a shared 214,000-square-foot nanoscale research center, to build and test nanoscale devices. Having such state-of-the-art facilities at his fingertips is a boon for his research, he says.

But for Liu, the human capital is what really fuels his work.

“The colleagues and students are the most precious part of MIT. To be able to discuss questions and talk to people who are the smartest in the world, that is the most enjoyable experience of doing this job,” he says.

He, his students, and colleagues are pushing the young field of spin electronics forward.

In the future, he envisions using antiferromagnetic materials in tandem with existing technologies to create hybrid computing devices that achieve even better performance. He also plans to dive deeper into the world of quantum technologies. For instance, spin electronics could be used to efficiently control the flow of information in quantum circuits, he says.

In quantum computing, signal isolation is critical — the information must flow in only one direction from the quantum circuit to the external circuit. He is exploring the use of a phenomenon known as a spin wave, which is the excitation of electron spin inside magnetic materials, to ensure the signal only moves in one direction.

Whether he is investigating quantum computing or probing the properties of new materials, one thing holds true — Liu continues to be driven by an insatiable curiosity.

“We are continually exploring, delving into many exciting and challenging new topics toward the goal of making better computing memory or digital logic devices using spin electronics,” he says.

Fadel Adib promoted to Associate Professor with Tenure

The Department of EECS is proud to announce the promotion to Associate Professor with Tenure of Fadel Adib, who is a faculty member in Media Arts and Sciences (MAS) with a joint appointment in Electrical Engineering and Computer Science (EECS). Adib’s research focuses on the invention of novel wireless technologies to interconnect, sense, and perceive the physical world in ways that were not possible before. His work brings new wireless capabilities to challenging environments such as the ocean and inside the human body and opens up new wireless sensing primitives in areas spanning climate, ocean conservation, health monitoring, robotic perception and manipulation, logistics, and commerce.

Following the completion of his PhD at MIT in 2016, Professor Adib accepted a faculty appointment at the MIT Media Lab, where he founded the Lab’s Signal Kinetics research group. Two years later, he accepted a joint appointment in the Department of Electrical Engineering and Computer Science. In the ensuing years, he and his research team have made significant contributions to ocean research in the form of underwater backscatter networking, batteryless underwater communications nodes, and viable solutions to the challenges of underwater-to-air communications. Professor Adib has also made important contributions in novel contactless sensing with applications that include health monitoring, food safety sensing, and robotics. In addition, his work has expanded batteryless connectivity deep into the human body, enabling novel health applications such as batteryless micro-implants. He is currently founder and CEO of Cartesian Systems, a startup aimed at wireless mapping of the physical world; his PhD research on wireless sensing (which won the ACM SIGMOBILE Dissertation Award) led to Emerald Innovations, which specializes in devices for remote health monitoring.

Professor Adib received his undergraduate degree from the American University of Beirut (2011) before earning his master’s (2013) and PhD (2016) at MIT, winning the best master’s and best PhD thesis awards in computer science at MIT. Additionally, he has received multiple prestigious early-career faculty honors, including the CAREER Award (2019) from the US National Science Foundation; the Young Investigator Award (2019) and the Early Career Grant (2020) from the Office of Naval Research; the Sloan Research Fellowship (2021); the Google Faculty Research Award (2017); the Technology Review 35 under 35 award (2014); and most recently, the ACM SIGMOBILE Rockstar Award (2022).

School of Engineering unveils MIT Postdoctoral Fellowship Program for Engineering Excellence

In July 2022, the MIT School of Engineering welcomed its first class of scholars selected for the Postdoctoral Fellowship Program for Engineering Excellence. The idea for the fellowship grew from conversations taking place within the school’s Diversity, Equity and Inclusion (DEI) Committee — established in 2020 — that identified a need to diversify the pool of postdocs employed within the school. The program seeks to discover and develop the next generation of faculty leaders to help guide the school toward a more diverse and inclusive culture.

“We are excited to offer this new fellowship opportunity,” says Anantha Chandrakasan, dean of the School of Engineering. “I look forward to the positive impact these postdoctoral fellows will bring to their work and research while also helping the School of Engineering continue our growth as a more welcoming and diverse community for all.”

The program offers annual stipends for postdocs to pursue research and educational efforts that widen the scope and breadth of the school’s current work, while maintaining its commitment to excellence in engineering. It is partially inspired by MIT’s Dr. Martin Luther King Jr. Visiting Scholars and Professor Program, which aims to bring a greater number of diverse scholars to campus.

Engineering is a field at MIT that has long struggled with supporting scholars from underrepresented backgrounds. Today, only 8 percent of School of Engineering graduate students identify as an underrepresented minority. Only 5 percent of undergraduates identify as Black or African American and only 14 percent identify as Hispanic or Latinx. Women account for about half of the School of Engineering’s undergraduate enrollment but make up just a third of the school’s graduate students.

Postdoc demographics are equally disconcerting, says Dan Hastings, the School of Engineering’s associate dean of DEI and head of the Department of Aeronautics and Astronautics.

“If we looked at the data from institutional research on postdocs in the School of Engineering, the diversity of that group was terrible. There’s no other way to describe it,” says Hastings. “The sense was, why can’t we have a program like the MLK Program that attracts a diverse population of postdocs?”

The Postdoctoral Fellowship Program for Engineering Excellence aims to build on the school’s other initiatives, like its DEI committee, the MIT Summer Research Program initiative, and the work of the gender equity committee. The aim is to specifically diversify the pool of postdoc researchers hired by the school each year. Supporting postdocs is particularly important, says Hastings, because hiring for those positions often happens through diffuse professional networks and via personal faculty contacts.

“We hope that by intentionally building a supportive community for our scholars, we can create a space where postdoctoral scholars that are historically underrepresented in engineering can thrive,” says Nandi Bynoe, assistant dean, DEI for the School of Engineering.

Aside from supporting postdocs in their research, the program provides opportunities for fellows to gain professional skills required to succeed in potential careers in three different areas: entrepreneurship, engineering leadership — supported by The Daniel J. Riccio Graduate Engineering Leadership Program (GradEL) — and academia.

The 2022-23 MIT Postdoctoral Fellows for Engineering Excellence are:

Sofia Arevalo is a School of Engineering Distinguished Postdoctoral Fellow in the Department of Civil and Environmental Engineering. Arevalo’s doctoral work focused on nanomechanical analysis of orthopedic implants to optimize the longevity of total joint replacements. Her research expertise is in materials characterization, nanomechanics, medical polymers, and failure analysis. Her postdoctoral research focuses on learning from nature to optimize performance of self-healing materials for medical applications. In addition to research, she has extensive experience mentoring and teaching graduate- and undergraduate-level engineering courses and was a recipient of the University of California at Berkeley’s Outstanding Graduate Student Instructor Award in 2021. Arevalo received her BS, MS, and PhD in mechanical engineering from UC Berkeley and received a National Science Foundation Graduate Research Fellowship in 2016.

Molly Carton is a School of Engineering Distinguished Postdoctoral Fellow in the Department of Mechanical Engineering. Her research focuses on using algorithmic design and computational fabrication to generate architected materials and mechanisms with new mechanical properties. Carton earned her BA in physics from Princeton University, and her MS in applied mathematics and PhD in mechanical engineering from the University of Washington at Seattle.

Steven Ceron is a School of Engineering Distinguished Postdoctoral Fellow in the Department of Electrical Engineering and Computer Science. His research area focuses on leveraging coupled oscillators to enable robot swarms to exhibit diverse morphologies and functions across all length scales. Ceron earned his BS in mechanical engineering from the University of Florida and PhD in mechanical engineering from Cornell University.

Matthew Clarke is a Boeing School of Engineering Distinguished Postdoctoral Fellow in the Department of Aeronautics and Astronautics. His research focuses on aircraft design, aerodynamics, and aeroacoustics, with an emphasis on the analysis and optimization of electric vehicles for urban air mobility. Clarke is an alumnus of the MIT Summer Research Program, earned his BS from Howard University in mechanical engineering, and both his MS and PhD from Stanford University in aeronautics and astronautics.

Suhas Eswarappa Prameela is a School of Engineering Distinguished Postdoctoral Fellow in the Department of Aeronautics and Astronautics. His research interests include materials discovery for extreme environments, propulsion materials for space applications, machine learning, and informatics. Eswarappa Prameela has a PhD in materials science and engineering from Johns Hopkins University, an MS in materials science and engineering from Arizona State University, and a BS in mechanical engineering (gold medalist) from RV College of Engineering, India.

Amy Rae Fox is a joint fellow in the MIT Computer Science and Artificial Intelligence Laboratory METEOR Postdoctoral Fellowship Program and the School of Engineering Postdoctoral Fellowship Program. She is a School of Engineering Distinguished Postdoctoral Fellow in the Department of Electrical Engineering and Computer Science. Her research focuses on the role of cognition in information visualization, and she aims to build bridges between basic research in cognitive psychology and design research in human-computer interaction. Fox earned her BS in computer science from University of North Carolina at Chapel Hill, MSEd in instructional design from Université Pierre-Mendès France, MA in interdisciplinary studies from California State University at Chico, and PhD in cognitive science from University of California at San Diego.

Timothy Holder is an IBM School of Engineering Distinguished Postdoctoral Fellow in the Department of Aeronautics and Astronautics. His research interests include development of wearable, non-contact, and remote psychophysiological sensor systems for the detection of affective states, and for the development of wellness interventions in underserved populations. He also investigates cognitive and performative latent variables for human-robot interactions. Holder received his BS in chemistry-engineering from Washington and Lee University and his PhD in biomedical engineering from North Carolina State University and the University of North Carolina at Chapel Hill.

Michael Kitcher is a School of Engineering Distinguished Postdoctoral Fellow in the Department of Materials Science and Engineering. His research examines spin transport and chiral interactions in magnetic materials with the goal of developing spintronic devices that address far-reaching needs, such as energy-efficient computing. Kitcher earned his BS in materials science and engineering from MIT before earning his PhD, also in materials science and engineering, from Carnegie Mellon University.

Ulri Lee is a School of Engineering Distinguished Postdoctoral Fellow in the Department of Electrical Engineering and Computer Science. Lee’s research focuses on developing microfluidic technologies to model the blood-brain barrier and investigate links between its dysfunction and neuropsychiatric disorders. Lee received her BS and PhD in chemistry from the University of Washington, where she was the 2020 SLAS Graduate Research Fellow.

Jorge Méndez is an IBM School of Engineering Distinguished Postdoctoral Fellow in Electrical Engineering and Computer Science. His research seeks to create versatile artificially intelligent systems that accumulate knowledge over a lifetime, with applications in computer vision, robotics, and natural language. Méndez received his BS in electronics engineering from Universidad Simón Bolívar, and his MSE in robotics and his PhD in computer and information science from the University of Pennsylvania.

Kristina Monakhova is a Boeing School of Engineering Distinguished Postdoctoral Fellow in Electrical Engineering and Computer Science. Her research interests involve combining computational imaging with machine learning to design better, smaller, and more capable cameras and microscopes. Monakhova received her BS in electrical engineering from the State University of New York at Buffalo and her PhD in electrical engineering and computer science from the University of California at Berkeley.

George Moore is a School of Engineering Distinguished Postdoctoral Fellow in the Department of Mechanical Engineering. His research focuses on user journeys through design thinking practices and the environmental impacts of small-scale manufacturing techniques related to these design thinking practices. Moore earned his BS in mechanical engineering from the University of South Alabama, and his MS and PhD in mechanical engineering from the University of California at Berkeley.

Kimia Nadjahi is a School of Engineering Distinguished Postdoctoral Fellow in the Department of Electrical Engineering and Computer Science. Her research interests lie in designing machine-learning algorithms that offer a good balance between practical advantages and theoretical justification, with the long-term goal of facilitating their deployment in real-world applications. Nadjahi received her engineer’s degree in applied mathematics and computer science from Ensimag (France), her MS in computer vision and machine learning from ENS Cachan (France), and her PhD from Telecom Paris (France).

Maria Ramos Gonzalez is a School of Engineering Distinguished Postdoctoral Fellow in the Department of Mechanical Engineering. Her research focuses on the design of robotic hands that she plans to translate to upper limb neuroprosthetics. Ramos Gonzalez earned her BS and PhD in mechanical engineering from the University of Nevada at Las Vegas and was selected as the Nevada System of Higher Education Regents’ Scholar.

Matthew Rivera is a School of Engineering Distinguished Postdoctoral Fellow in the Department of Chemical Engineering. His thesis work focused on organic solvent separations with new composite membranes. At MIT, his work focuses on data-driven materials discovery to address challenging chemical separations problems. Rivera received dual BS degrees in chemistry and chemical engineering from Mississippi State University, and his PhD in chemical engineering from Georgia Tech.

Joseph Wasswa is a School of Engineering Distinguished Postdoctoral Fellow in the Department of Civil and Environmental Engineering. Using analytical and computational skills, his current research focuses on understanding the transformation and fate of contaminants in the environment. Wasswa earned a BS in agricultural engineering from Makerere University, his MS in civil engineering from San Diego State University, and his PhD in civil engineering from Syracuse University. He also obtained a Certificate of Advanced Study in Sustainable Enterprise (CASSE) in 2021 from the Martin J. Whitman School of Management at Syracuse University.

2022-23 EECS Student Award Roundup

This ongoing list of student awards and recognitions is updated throughout the year, beginning in September.

Shelly Ben-David was awarded the 2023 Fulbright Fellowship.

Teresa Gao was named one of the winners of the George J. Mitchell Scholarship’s Class of 2024.

Saachi Jain was named one of the 2023 Apple Scholars in AIML.

Wengong Jin won the 2022 Dimitris N. Chorafas Prize, from the Chorafas Foundation.

Bharath Kannan won the 2022 Dimitris N. Chorafas Prize, from the Chorafas Foundation.

Rachana Madhukara was awarded the 2023 Fulbright Fellowship.

Mercy Oladipo was awarded the 2023 Fulbright Fellowship.

Erica P. Santana ’18 was awarded the 2023 Fulbright Fellowship.

Michael Sutton was awarded the 2023 Fulbright Fellowship.

Ted Pyne was the recipient of a 2023 Graduate Research Fellowship Award from Jane Street Capital.

Yannan Nellie Wu won the Distinguished Artifact Award at the IEEE/ACM International Symposium on Microarchitecture (MICRO 2022), held Oct. 1-5, 2022, alongside her team Po-An Tsai, Angshuman Parashar, Vivienne Sze, and Joel Emer, for “Sparseloop: An Analytical Approach to Sparse Tensor Accelerator Modeling”.

Jiaqi Zhang was named one of the 2023 Apple Scholars in AIML.

Jonathan Zong was named one of Forbes’ “30 Under 30: Science” for his work designing new interfaces for non-visual data.

Tomás Palacios named new Director of the Microsystems Technology Laboratories (MTL)


The Microsystems Technology Laboratories (MTL) will now be helmed by a new director. Maria Zuber, Vice President for Research and E.A. Griswold Professor of Geophysics, and Anantha Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science, recently announced that Tomás Palacios assumed the role of Director of MTL on December 1, 2022. Palacios has served as Director of the 6-A MEng Thesis Program; Industry Officer; and Professor of Electrical Engineering within the Department of Electrical Engineering and Computer Science (EECS). He succeeds Advanced Television and Signal Processing (ATSP) Professor Hae-Seung (Harry) Lee, who has been the Director of MTL since 2019. 

“MTL is a very unique place,” said Prof. Palacios in his email to the lab’s community. “The research being done here is second to none, and MTL’s commitment to developing innovative technologies at all levels of the stack, from materials to devices, circuits and systems is an example to all. We just need to browse the internet, make a phone call, or recharge our electric vehicle to see how technologies that came out of MTL have found their place in applications all around us.” 

Palacios joined MTL in 2006, after receiving his PhD from the University of California at Santa Barbara and his undergraduate degree in Telecommunication Engineering from the Universidad Politécnica de Madrid (Spain). A world expert in Gallium Nitride electronics for both radio frequency and power applications, Palacios and his group have also made seminal contributions to two-dimensional materials and devices, and their heterogeneous integration with state-of-the-art silicon electronics. Prof. Palacios is the founding director of the MTL Center for Graphene Devices and 2D Systems, as well as the co-founder of Finwave Semiconductor, Inc., an MTL spin-off company commercializing Gallium Nitride power amplifiers for 5G communications. 

Palacios is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE), and has served the microelectronics community in many roles, more recently as the General Chair for the IEEE Symposium on Very Large-Scale Integration (VLSI) Technology and Circuits. His work has been recognized with multiple awards, including the Presidential Early Career Award for Scientists and Engineers, the 2012 and 2019 IEEE George Smith Award, and the National Science Foundation (NSF), Office of Naval Research (ONR), and the Defense Advanced Research Projects Agency (DARPA) Young Faculty Awards, among many others. 

Palacios follows in the footsteps of Advanced Television and Signal Processing (ATSP) Professor Hae-Seung (Harry) Lee, whose guidance of MTL throughout the pandemic Palacios praised, saying, “his amazing leadership during […] arguably the most difficult times I have ever seen, has been both inspiring, and key to position MTL in the amazing place we are today.” 

The lab will continue to build upon its long history of innovation, Palacios promised: “Semiconductors and microsystems have never been more important. It is not only about their tremendous implications to computing and communication, but also that they are the key to solving the climate crisis, transforming healthcare and, even, the future of education. We have a once-in-a-generation opportunity to set the foundation for the future of semiconductors and microsystems, and everything that means for the future of our society. I am convinced that the MTL community will play a vital role in setting this foundation.”