“Forever grateful for MIT Open Learning for making knowledge accessible and fostering a network of curious minds”

Bia Adams, a London-based neuropsychologist, former professional ballet dancer, and MIT Open Learning learner, has built her career across decades of diverse, interconnected experiences and an emphasis on lifelong learning. She earned her bachelor’s degree in clinical and behavioral psychology, and then worked as a psychologist and therapist for several years before taking a sabbatical in her late 20s to study at the London Contemporary Dance School and The Royal Ballet — fulfilling a long-time dream.

“In hindsight, I think what drew me most to ballet was not so much the form itself,” says Adams, “but more of a subconscious desire to make sense of my body moving through space and time, my emotions and motivations — all within a discipline that is rigorous, meticulous, and routine-based. It’s an endeavor to make sense of the world and myself.”

After acquiring some dance-related injuries, Adams returned to psychology. She completed an online certificate program specializing in medical neuroscience via Duke University, focusing on how pathology arises out of the way the brain computes information and generates behavior.

In addition to her clinical practice, she has also worked at a data science and AI consultancy for neural network research.

In 2022, in search of new things to learn and apply to both her work and personal life, Adams discovered MIT OpenCourseWare within MIT Open Learning. She was drawn to class 8.04 (Quantum Physics I), which specifically focuses on quantum mechanics, as she was hoping to finally gain some understanding of complex topics that she had tried to teach herself in the past with limited success. She credits the course’s lectures, taught by Allan Adams (physicist and principal investigator of the MIT Future Ocean Lab), with finally making these challenging topics approachable.

“I still talk to my friends at length about exciting moments in these lectures,” says Adams. “After the first class, I was hooked.”

Adams’s journey through MIT Open Learning’s educational resources quickly led to a deeper interest in computational neuroscience. She learned how to use tools from mathematics and computer science to better understand the brain, nervous system, and behavior.

She says she gained many new insights from class 6.034 (Artificial Intelligence), particularly in watching the late Professor Patrick Winston’s lectures. She appreciated learning more about the cognitive psychology aspect of AI, including how pioneers in the field looked at how the brain processes information and aimed to build programs that could solve problems. She further enhanced her understanding of AI with the Minds and Machines course on MITx Online, part of Open Learning.

Adams is now in the process of completing Introduction to Computer Science and Programming Using Python, taught by John Guttag; Eric Grimson, former interim vice president for Open Learning; and Ana Bell.

“I am multilingual, and I think the way my brain processes code is similar to the way computers code,” says Adams. “I find learning to code similar to learning a foreign language: both exhilarating and intimidating. Learning the rules, deciphering the syntax, and building my own world through code is one of the most fascinating challenges of my life.”

Adams is also pursuing a master’s degree at Duke and the University College of London, focusing on the neurobiology of sleep and looking particularly at how the biochemistry of the brain can affect this critical function. As a complement to this research, she is currently exploring class 9.40 (Introduction to Neural Computation), taught by Michale Fee and Daniel Zysman, which introduces quantitative approaches to understanding brain and cognitive functions and neurons and covers foundational quantitative tools of data analysis in neuroscience.

In addition to the courses related more directly to her field, MIT Open Learning also provided Adams an opportunity to explore other academic areas. She delved into philosophy for the first time, taking Paradox and Infinity, taught by Professor Agustín Rayo, the Kenan Sahin Dean of the MIT School of Humanities, Arts, and Social Sciences, and Digital Learning Lab Fellow David Balcarras, which looks at the intersection of philosophy and mathematics. She also was able to explore in more depth immunology, which had always been of great interest to her, through Professor Adam Martin’s lectures on this topic in class 7.016 (Introductory Biology).

“I am forever grateful for MIT Open Learning,” says Adams, “for making knowledge accessible and fostering a network of curious minds, all striving to share, expand, and apply this knowledge for the greater good.”

Toward video generative models of the molecular world

As the capabilities of generative AI models have grown, you’ve probably seen how they can transform simple text prompts into hyperrealistic images and even extended video clips.

More recently, generative AI has shown potential in helping chemists and biologists explore static molecules, like proteins and DNA. Models like AlphaFold can predict molecular structures to accelerate drug discovery, and the MIT-assisted “RFdiffusion,” for example, can help design new proteins. One challenge, though, is that molecules are constantly moving and jiggling, which is important to model when constructing new proteins and drugs. Simulating these motions on a computer using physics — a technique known as molecular dynamics — can be very expensive, requiring billions of time steps on supercomputers.

As a step toward simulating these behaviors more efficiently, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and Department of Mathematics researchers have developed a generative model that learns from prior data. The team’s system, called MDGen, can take a frame of a 3D molecule and simulate what will happen next like a video, connect separate stills, and even fill in missing frames. By hitting the “play button” on molecules, the tool could potentially help chemists design new molecules and closely study how well their drug prototypes for cancer and other diseases would interact with the molecular structure it intends to impact.

Co-lead author Bowen Jing SM ’22 says that MDGen is an early proof of concept, but it suggests the beginning of an exciting new research direction. “Early on, generative AI models produced somewhat simple videos, like a person blinking or a dog wagging its tail,” says Jing, a PhD student at CSAIL. “Fast forward a few years, and now we have amazing models like Sora or Veo that can be useful in all sorts of interesting ways. We hope to instill a similar vision for the molecular world, where dynamics trajectories are the videos. For example, you can give the model the first and 10th frame, and it’ll animate what’s in between, or it can remove noise from a molecular video and guess what was hidden.”

The researchers say that MDGen represents a paradigm shift from previous comparable works with generative AI in a way that enables much broader use cases. Previous approaches were “autoregressive,” meaning they relied on the previous still frame to build the next, starting from the very first frame to create a video sequence. In contrast, MDGen generates the frames in parallel with diffusion. This means MDGen can be used to, for example, connect frames at the endpoints, or “upsample” a low frame-rate trajectory in addition to pressing play on the initial frame.

This work was presented in a paper shown at the Conference on Neural Information Processing Systems (NeurIPS) this past December. Last summer, it was awarded for its potential commercial impact at the International Conference on Machine Learning’s ML4LMS Workshop.

Some small steps forward for molecular dynamics

In experiments, Jing and his colleagues found that MDGen’s simulations were similar to running the physical simulations directly, while producing trajectories 10 to 100 times faster.

The team first tested their model’s ability to take in a 3D frame of a molecule and generate the next 100 nanoseconds. Their system pieced together successive 10-nanosecond blocks for these generations to reach that duration. The team found that MDGen was able to compete with the accuracy of a baseline model, while completing the video generation process in roughly a minute — a mere fraction of the three hours that it took the baseline model to simulate the same dynamic.

When given the first and last frame of a one-nanosecond sequence, MDGen also modeled the steps in between. The researchers’ system demonstrated a degree of realism in over 100,000 different predictions: It simulated more likely molecular trajectories than its baselines on clips shorter than 100 nanoseconds. In these tests, MDGen also indicated an ability to generalize on peptides it hadn’t seen before.

MDGen’s capabilities also include simulating frames within frames, “upsampling” the steps between each nanosecond to capture faster molecular phenomena more adequately. It can even ​​“inpaint” structures of molecules, restoring information about them that was removed. These features could eventually be used by researchers to design proteins based on a specification of how different parts of the molecule should move.

Toying around with protein dynamics

Jing and co-lead author Hannes Stärk say that MDGen is an early sign of progress toward generating molecular dynamics more efficiently. Still, they lack the data to make these models immediately impactful in designing drugs or molecules that induce the movements chemists will want to see in a target structure.

The researchers aim to scale MDGen from modeling molecules to predicting how proteins will change over time. “Currently, we’re using toy systems,” says Stärk, also a PhD student at CSAIL. “To enhance MDGen’s predictive capabilities to model proteins, we’ll need to build on the current architecture and data available. We don’t have a YouTube-scale repository for those types of simulations yet, so we’re hoping to develop a separate machine-learning method that can speed up the data collection process for our model.”

For now, MDGen presents an encouraging path forward in modeling molecular changes invisible to the naked eye. Chemists could also use these simulations to delve deeper into the behavior of medicine prototypes for diseases like cancer or tuberculosis.

“Machine learning methods that learn from physical simulation represent a burgeoning new frontier in AI for science,” says Bonnie Berger, MIT Simons Professor of Mathematics, CSAIL principal investigator, and senior author on the paper. “MDGen is a versatile, multipurpose modeling framework that connects these two domains, and we’re very excited to share our early models in this direction.”

“Sampling realistic transition paths between molecular states is a major challenge,” says fellow senior author Tommi Jaakkola, who is the MIT Thomas Siebel Professor of electrical engineering and computer science and the Institute for Data, Systems, and Society, and a CSAIL principal investigator. “This early work shows how we might begin to address such challenges by shifting generative modeling to full simulation runs.”

Researchers across the field of bioinformatics have heralded this system for its ability to simulate molecular transformations. “MDGen models molecular dynamics simulations as a joint distribution of structural embeddings, capturing molecular movements between discrete time steps,” says Chalmers University of Technology associate professor Simon Olsson, who wasn’t involved in the research. “Leveraging a masked learning objective, MDGen enables innovative use cases such as transition path sampling, drawing analogies to inpainting trajectories connecting metastable phases.”

The researchers’ work on MDGen was supported, in part, by the National Institute of General Medical Sciences, the U.S. Department of Energy, the National Science Foundation, the Machine Learning for Pharmaceutical Discovery and Synthesis Consortium, the Abdul Latif Jameel Clinic for Machine Learning in Health, the Defense Threat Reduction Agency, and the Defense Advanced Research Projects Agency.

This fast and agile robotic insect could someday aid in mechanical pollination

With a more efficient method for artificial pollination, farmers in the future could grow fruits and vegetables inside multilevel warehouses, boosting yields while mitigating some of agriculture’s harmful impacts on the environment.

To help make this idea a reality, MIT researchers are developing robotic insects that could someday swarm out of mechanical hives to rapidly perform precise pollination. However, even the best bug-sized robots are no match for natural pollinators like bees when it comes to endurance, speed, and maneuverability.

Now, inspired by the anatomy of these natural pollinators, the researchers have overhauled their design to produce tiny, aerial robots that are far more agile and durable than prior versions.

The new design of these tiny, aerial robots is far more robust and durable than prior versions. Here, the robot is subjected to a collision test. Image courtesy of the researchers.

The new bots can hover for about 1,000 seconds, which is more than 100 times longer than previously demonstrated. The robotic insect, which weighs less than a paperclip, can fly significantly faster than similar bots while completing acrobatic maneuvers like double aerial flips.

The revamped robot is designed to boost flight precision and agility while minimizing the mechanical stress on its artificial wing flexures, which enables faster maneuvers, increased endurance, and a longer lifespan.

The new design also has enough free space that the robot could carry tiny batteries or sensors, which could enable it to fly on its own outside the lab.

“The amount of flight we demonstrated in this paper is probably longer than the entire amount of flight our field has been able to accumulate with these robotic insects. With the improved lifespan and precision of this robot, we are getting closer to some very exciting applications, like assisted pollination,” says Kevin Chen, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), head of the Soft and Micro Robotics Laboratory within the Research Laboratory of Electronics (RLE), and the senior author of an open-access paper on the new design.

Chen is joined on the paper by co-lead authors Suhan Kim and Yi-Hsuan Hsiao, who are EECS graduate students; as well as EECS graduate student Zhijian Ren and summer visiting student Jiashu Huang. The research appears today in Science Robotics.

Boosting performance

Prior versions of the robotic insect were composed of four identical units, each with two wings, combined into a rectangular device about the size of a microcassette.

“But there is no insect that has eight wings. In our old design, the performance of each individual unit was always better than the assembled robot,” Chen says.

This performance drop was partly caused by the arrangement of the wings, which would blow air into each other when flapping, reducing the lift forces they could generate.

The new design chops the robot in half. Each of the four identical units now has one flapping wing pointing away from the robot’s center, stabilizing the wings and boosting their lift forces. With half as many wings, this design also frees up space so the robot could carry electronics.

The robotic insect, weighing less than a paperclip, can fly significantly faster than similar bots while completing acrobatic maneuvers like aerial flips. Credit: courtesy of the researchers.

In addition, the researchers created more complex transmissions that connect the wings to the actuators, or artificial muscles, that flap them. These durable transmissions, which required the design of longer wing hinges, reduce the mechanical strain that limited the endurance of past versions.

“Compared to the old robot, we can now generate control torque three times larger than before, which is why we can do very sophisticated and very accurate path-finding flights,” Chen says.

Yet even with these design innovations, there is still a gap between the best robotic insects and the real thing. For instance, a bee has only two wings, yet it can perform rapid and highly controlled motions.

“The wings of bees are finely controlled by a very sophisticated set of muscles. That level of fine-tuning is something that truly intrigues us, but we have not yet been able to replicate,” he says.

Less strain, more force

The motion of the robot’s wings is driven by artificial muscles. These tiny, soft actuators are made from layers of elastomer sandwiched between two very thin carbon nanotube electrodes and then rolled into a squishy cylinder. The actuators rapidly compress and elongate, generating mechanical force that flaps the wings.

In previous designs, when the actuator’s movements reach the extremely high frequencies needed for flight, the devices often start buckling. That reduces the power and efficiency of the robot. The new transmissions inhibit this bending-buckling motion, which reduces the strain on the artificial muscles and enables them to apply more force to flap the wings.

Another new design involves a long wing hinge that reduces torsional stress experienced during the flapping-wing motion. Fabricating the hinge, which is about 2 centimeters long but just 200 microns in diameter, was among their greatest challenges.

“If you have even a tiny alignment issue during the fabrication process, the wing hinge will be slanted instead of rectangular, which affects the wing kinematics,” Chen says.

After many attempts, the researchers perfected a multistep laser-cutting process that enabled them to precisely fabricate each wing hinge.

With all four units in place, the new robotic insect can hover for more than 1,000 seconds, which equates to almost 17 minutes, without showing any degradation of flight precision.

“When my student Nemo was performing that flight, he said it was the slowest 1,000 seconds he had spent in his entire life. The experiment was extremely nerve-racking,” Chen says.

The new robot also reached an average speed of 35 centimeters per second, the fastest flight researchers have reported, while performing body rolls and double flips. It can even precisely track a trajectory that spells M-I-T.

“At the end of the day, we’ve shown flight that is 100 times longer than anyone else in the field has been able to do, so this is an extremely exciting result,” he says.

From here, Chen and his students want to see how far they can push this new design, with the goal of achieving flight for longer than 10,000 seconds.

They also want to improve the precision of the robots so they could land and take off from the center of a flower. In the long run, the researchers hope to install tiny batteries and sensors onto the aerial robots so they could fly and navigate outside the lab.

“This new robot platform is a major result from our group and leads to many exciting directions. For example, incorporating sensors, batteries, and computing capabilities on this robot will be a central focus in the next three to five years,” Chen says.

This research is funded, in part, by the U.S. National Science Foundation and a Mathworks Fellowship.

Teaching a robot its limits, to complete open-ended tasks safely

If someone advises you to “know your limits,” they’re likely suggesting you do things like exercise in moderation. To a robot, though, the motto represents learning constraints, or limitations of a specific task within the machine’s environment, to do chores safely and correctly.

For instance, imagine asking a robot to clean your kitchen when it doesn’t understand the physics of its surroundings. How can the machine generate a practical multistep plan to ensure the room is spotless? Large language models (LLMs) can get them close, but if the model is only trained on text, it’s likely to miss out on key specifics about the robot’s physical constraints, like how far it can reach or whether there are nearby obstacles to avoid. Stick to LLMs alone, and you’re likely to end up cleaning pasta stains out of your floorboards.

To guide robots in executing these open-ended tasks, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) used vision models to see what’s near the machine and model its constraints. The team’s strategy involves an LLM sketching up a plan that’s checked in a simulator to ensure it’s safe and realistic. If that sequence of actions is infeasible, the language model will generate a new plan, until it arrives at one that the robot can execute.

This trial-and-error method, which the researchers call “Planning for Robots via Code for Continuous Constraint Satisfaction” (PRoC3S), tests long-horizon plans to ensure they satisfy all constraints, and enables a robot to perform such diverse tasks as writing individual letters, drawing a star, and sorting and placing blocks in different positions. In the future, PRoC3S could help robots complete more intricate chores in dynamic environments like houses, where they may be prompted to do a general chore composed of many steps (like “make me breakfast”).

“LLMs and classical robotics systems like task and motion planners can’t execute these kinds of tasks on their own, but together, their synergy makes open-ended problem-solving possible,” says PhD student Nishanth Kumar SM ’24, co-lead author of a new paper about PRoC3S. “We’re creating a simulation on-the-fly of what’s around the robot and trying out many possible action plans. Vision models help us create a very realistic digital world that enables the robot to reason about feasible actions for each step of a long-horizon plan.”

The team’s work was presented this past month in a paper shown at the Conference on Robot Learning (CoRL) in Munich, Germany.

The researchers’ method uses an LLM pre-trained on text from across the internet. Before asking PRoC3S to do a task, the team provided their language model with a sample task (like drawing a square) that’s related to the target one (drawing a star). The sample task includes a description of the activity, a long-horizon plan, and relevant details about the robot’s environment.

But how did these plans fare in practice? In simulations, PRoC3S successfully drew stars and letters eight out of 10 times each. It also could stack digital blocks in pyramids and lines, and place items with accuracy, like fruits on a plate. Across each of these digital demos, the CSAIL method completed the requested task more consistently than comparable approaches like “LLM3” and “Code as Policies”.

The CSAIL engineers next brought their approach to the real world. Their method developed and executed plans on a robotic arm, teaching it to put blocks in straight lines. PRoC3S also enabled the machine to place blue and red blocks into matching bowls and move all objects near the center of a table.

Kumar and co-lead author Aidan Curtis SM ’23, who’s also a PhD student working in CSAIL, say these findings indicate how an LLM can develop safer plans that humans can trust to work in practice. The researchers envision a home robot that can be given a more general request (like “bring me some chips”) and reliably figure out the specific steps needed to execute it. PRoC3S could help a robot test out plans in an identical digital environment to find a working course of action — and more importantly, bring you a tasty snack.

For future work, the researchers aim to improve results using a more advanced physics simulator and to expand to more elaborate longer-horizon tasks via more scalable data-search techniques. Moreover, they plan to apply PRoC3S to mobile robots such as a quadruped for tasks that include walking and scanning surroundings.

“Using foundation models like ChatGPT to control robot actions can lead to unsafe or incorrect behaviors due to hallucinations,” says The AI Institute researcher Eric Rosen, who isn’t involved in the research. “PRoC3S tackles this issue by leveraging foundation models for high-level task guidance, while employing AI techniques that explicitly reason about the world to ensure verifiably safe and correct actions. This combination of planning-based and data-driven approaches may be key to developing robots capable of understanding and reliably performing a broader range of tasks than currently possible.”

Kumar and Curtis’ co-authors are also CSAIL affiliates: MIT undergraduate researcher Jing Cao and MIT Department of Electrical Engineering and Computer Science professors Leslie Pack Kaelbling and Tomás Lozano-Pérez. Their work was supported, in part, by the National Science Foundation, the Air Force Office of Scientific Research, the Office of Naval Research, the Army Research Office, MIT Quest for Intelligence, and The AI Institute.

Algorithms and AI for a better world

Amid the benefits that algorithmic decision-making and artificial intelligence offer — including revolutionizing speed, efficiency, and predictive ability in a vast range of fields — Manish Raghavan is working to mitigate associated risks, while also seeking opportunities to apply the technologies to help with preexisting social concerns.

“I ultimately want my research to push towards better solutions to long-standing societal problems,” says Raghavan, the Drew Houston Career Development Professor in MIT’s Sloan School of Management and the Department of Electrical Engineering and Computer Science and a principal investigator at the Laboratory for Information and Decision Systems (LIDS).

A good example of Raghavan’s intention can be found in his exploration of the use AI in hiring.

Raghavan says, “It’s hard to argue that hiring practices historically have been particularly good or worth preserving, and tools that learn from historical data inherit all of the biases and mistakes that humans have made in the past.”

Here, however, Raghavan cites a potential opportunity.

“It’s always been hard to measure discrimination,” he says, adding, “AI-driven systems are sometimes easier to observe and measure than humans, and one goal of my work is to understand how we might leverage this improved visibility to come up with new ways to figure out when systems are behaving badly.”

Growing up in the San Francisco Bay Area with parents who both have computer science degrees, Raghavan says he originally wanted to be a doctor. Just before starting college, though, his love of math and computing called him to follow his family example into computer science. After spending a summer as an undergraduate doing research at Cornell University with Jon Kleinberg, professor of computer science and information science, he decided he wanted to earn his PhD there, writing his thesis on “The Societal Impacts of Algorithmic Decision-Making.”

Raghavan won awards for his work, including a National Science Foundation Graduate Research Fellowships Program award, a Microsoft Research PhD Fellowship, and the Cornell University Department of Computer Science PhD Dissertation Award.

In 2022, he joined the MIT faculty.

Perhaps hearkening back to his early interest in medicine, Raghavan has done research on whether the determinations of a highly accurate algorithmic screening tool used in triage of patients with gastrointestinal bleeding, known as the Glasgow-Blatchford Score (GBS), are improved with complementary expert physician advice.

“The GBS is roughly as good as humans on average, but that doesn’t mean that there aren’t individual patients, or small groups of patients, where the GBS is wrong and doctors are likely to be right,” he says. “Our hope is that we can identify these patients ahead of time so that doctors’ feedback is particularly valuable there.”

Raghavan has also worked on how online platforms affect their users, considering how social media algorithms observe the content a user chooses and then show them more of that same kind of content. The difficulty, Raghavan says, is that users may be choosing what they view in the same way they might grab bag of potato chips, which are of course delicious but not all that nutritious. The experience may be satisfying in the moment, but it can leave the user feeling slightly sick.

Raghavan and his colleagues have developed a model of how a user with conflicting desires — for immediate gratification versus a wish of longer-term satisfaction — interacts with a platform. The model demonstrates how a platform’s design can be changed to encourage a more wholesome experience. The model won the Exemplary Applied Modeling Track Paper Award at the 2022 Association for Computing Machinery Conference on Economics and Computation.

“Long-term satisfaction is ultimately important, even if all you care about is a company’s interests,” Raghavan says. “If we can start to build evidence that user and corporate interests are more aligned, my hope is that we can push for healthier platforms without needing to resolve conflicts of interest between users and platforms. Of course, this is idealistic. But my sense is that enough people at these companies believe there’s room to make everyone happier, and they just lack the conceptual and technical tools to make it happen.”

Regarding his process of coming up with ideas for such tools and concepts for how to best apply computational techniques, Raghavan says his best ideas come to him when he’s been thinking about a problem off and on for a time. He would advise his students, he says, to follow his example of putting a very difficult problem away for a day and then coming back to it.

“Things are often better the next day,” he says.

When he’s not puzzling out a problem or teaching, Raghavan can often be found outdoors on a soccer field, as a coach of the Harvard Men’s Soccer Club, a position he cherishes.

“I can’t procrastinate if I know I’ll have to spend the evening at the field, and it gives me something to look forward to at the end of the day,” he says. “I try to have things in my schedule that seem at least as important to me as work to put those challenges and setbacks into context.”

As Raghavan considers how to apply computational technologies to best serve our world, he says he finds the most exciting thing going on his field is the idea that AI will open up new insights into “humans and human society.”

“I’m hoping,” he says, “that we can use it to better understand ourselves.”

Karl K. Berggren named faculty head of electrical engineering in EECS

Karl K. Berggren, the Joseph F. and Nancy P. Keithley Professor of Electrical Engineering at MIT, has been named the new faculty head of electrical engineering in the MIT Department of Electrical Engineering and Computer Science (EECS), effective January 15.

“Karl’s exceptional interdisciplinary research combining electrical engineering, physics, and materials science, coupled with his experience working with industry and government organizations makes him an ideal fit to head electrical engineering. I’m confident electrical engineering will continue to grow under his leadership,” says Anantha Chandrakasan, chief innovation and strategy officer, dean of engineering, and Vannevar Bush Professor of Electrical Engineering and Computer Science.

“Karl has made an incredible impact as a researcher and educator over his two decades in EECS. Students and faculty colleagues praise his thoughtful approach to teaching, and the care with which he oversaw the teaching labs in his prior role as Undergraduate Lab Officer for the department. He will undoubtedly be an excellent leader, bringing his passion for education and collaborative spirit to this new role,” adds Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science.

Berggren joins the leadership of EECS, which jointly reports to the MIT Schwarzman College of Computing and the School of Engineering. The largest academic department at MIT, EECS was reorganized in 2019 as part of the formation of the college into three overlapping sub-units in electrical engineering (EE), computer science (CS), and artificial intelligence and decision-making (AI+D). The restructuring has enabled each of the three sub-units to concentrate on faculty recruitment, mentoring, promotion, academic programs, and community building in coordination with the others.

A member of the EECS faculty since 2003, Berggren has taught a range of subjects in the department, including Digital Communications, Circuits and Electronics, Fundamentals of Programming, Applied Quantum and Statistical Physics, Introduction to EECS via Interconnected Embedded Systems, Introduction to Quantum Systems Engineering, and Introduction to Nanofabrication. Before joining EECS, Berggren worked as a staff member at MIT Lincoln Laboratory for seven years. Berggren also maintains an active consulting practice and has experience working with industrial and government organizations.

Berggren’s current research focuses on superconductive circuits, electronic devices, single-photon detectors for quantum applications, and electron-optical systems. He heads the Quantum Nanostructures and Nanofabrication Group which develops nanofabrication technology at the few-nanometer length scale. The group uses these technologies to push the envelope of what is possible with photonic and electrical devices, focusing on superconductive and free-electron devices.

Berggren has received numerous prestigious awards and honors throughout his career. Most recently, he was named an MIT MacVicar Fellow in 2024. Berggren is also a fellow of the AAAS, IEEE and the Kavli Foundation, and a recipient of the 2015 Paul T. Forman Team Engineering Award from the Optical Society of America (now Optica). In 2016, he received a Bose Fellowship and was also a recipient of the EECS Department’s Frank Quick Innovation Fellowship and the Burgess (‘52) & Elizabeth Jamieson Award for Excellence in Teaching.

Berggren succeeds Joel Voldman who has served as the inaugural Electrical Engineering Faculty Head since January 2020.

“Joel has been in leadership roles since 2018, when he was named Associate Department Head of EECS. I am deeply grateful to him for his invaluable contributions to EECS since that time,” says Asu Ozdaglar, MathWorks Professor and head of EECS, who also serves as the deputy dean of the MIT Schwarzman College of Computing. “I look forward to working with Karl now and continuing along the amazing path we embarked on in 2019.”

Fast control methods enable record-setting fidelity in superconducting qubit

Quantum computing promises to solve complex problems exponentially faster than a classical computer, by using the principles of quantum mechanics to encode and manipulate information in quantum bits (qubits).

Qubits are the building blocks of a quantum computer. One challenge to scaling, however, is that qubits are highly sensitive to background noise and control imperfections, which introduce errors into quantum operations and ultimately limit the complexity and duration of a quantum algorithm. Researchers at MIT and around the world have therefore focused continually on improving qubit performance.

In new work, using a superconducting qubit called fluxonium, MIT researchers in the Department of Physics, the Research Laboratory of Electronics (RLE), and the Department of Electrical Engineering and Computer Science (EECS) developed two new control techniques to achieve a world-record single-qubit fidelity of 99.998 percent. This result complements then-MIT researcher Leon Ding’s demonstration last year of a 99.92 percent two-qubit gate fidelity.

Left to right: Leon Ding, William Oliver, and David Rower

The paper’s senior authors are David Rower PhD ’24, a recent physics postdoc in MIT’s Engineering Quantum Systems (EQuS) group and now a research scientist at the Google Quantum AI laboratory; Leon Ding PhD ’23 from EQuS, now leading the Calibration team at Atlantic Quantum; and William D. Oliver, the Henry Ellis Warren Professor of EECS and professor of physics, leader of EQuS, director of the Center for Quantum Engineering, and RLE associate director. The paper recently appeared in the journal PRX Quantum.

Decoherence and counter-rotating errors

A major challenge with quantum computation is decoherence, a process by which qubits lose their quantum information. For platforms such as superconducting qubits, decoherence stands in the way of realizing higher-fidelity quantum gates.

Quantum computers need to achieve high gate fidelities in order to implement sustained computation through protocols like quantum error correction. The higher the gate fidelity, the easier it is to realize practical quantum computing.

MIT researchers are developing techniques to make quantum gates, the basic operations of a quantum computer, as fast as possible in order to reduce the impact of decoherence. However, as gates get faster, another type of error, arising from counter-rotating dynamics, can be introduced because of the way qubits are controlled using electromagnetic waves. 

Single-qubit gates are usually implemented with a resonant pulse, which induces Rabi oscillations between the qubit states. When the pulses are too fast, however, these “Rabi gates” become inconsistent, due to unwanted errors from counter-rotating effects. The faster the gate, the more pronounced the counter-rotating error becomes. For low-frequency qubits such as fluxonium, counter-rotating errors limit the fidelity of fast gates.
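The scale of this effect can be seen in a toy numerical sketch (not the EQuS experiment, and with illustrative parameters only): it integrates the full lab-frame Schrödinger equation for a resonant cosine drive, with no rotating-wave approximation, and reports the resulting pi-pulse error, averaged over the drive’s starting phase.

```python
import math

def pi_pulse_error(omega_q, drive_amp, phase=0.0, steps=8000):
    """Integrate the full lab-frame Schrodinger equation (no
    rotating-wave approximation) for a resonant cosine drive sized,
    under the RWA, as a perfect pi pulse, and return the population
    left outside the target state |1> -- the gate error."""
    T = math.pi / drive_amp            # RWA pi-pulse duration
    dt = T / steps
    a, b = 1 + 0j, 0 + 0j              # amplitudes of |0> and |1>
    for n in range(steps):
        t = (n + 0.5) * dt             # midpoint rule
        hz = 0.5 * omega_q                               # sigma_z coefficient
        hx = drive_amp * math.cos(omega_q * t + phase)   # sigma_x coefficient
        r = math.hypot(hz, hx)
        c, s = math.cos(r * dt), math.sin(r * dt)
        # one step of exp(-i*(hz*sigma_z + hx*sigma_x)*dt), written out as 2x2
        a, b = ((c - 1j * s * hz / r) * a - 1j * s * hx / r * b,
                -1j * s * hx / r * a + (c + 1j * s * hz / r) * b)
    return 1.0 - abs(b) ** 2

omega_q = 2 * math.pi                  # qubit frequency, arbitrary units
phases = [k * math.pi / 4 for k in range(8)]   # average over drive phase
# slow gate: drive far weaker than the qubit frequency
slow_err = sum(pi_pulse_error(omega_q, 0.02 * omega_q, p) for p in phases) / 8
# fast gate: drive a sizeable fraction of the qubit frequency
fast_err = sum(pi_pulse_error(omega_q, 0.3 * omega_q, p) for p in phases) / 8
print(slow_err, fast_err)
```

With these toy numbers the slow gate comes out nearly ideal, while the fast gate picks up a much larger error from the counter-rotating part of the drive.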

“Getting rid of these errors was a fun challenge for us,” says Rower. “Initially, Leon had the idea to utilize circularly polarized microwave drives, analogous to circularly polarized light, but realized by controlling the relative phase of charge and flux drives of a superconducting qubit. Such a circularly polarized drive would ideally be immune to counter-rotating errors.”
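In standard textbook notation (not drawn from the paper itself), the intuition is that a linearly polarized drive splits into two rotating halves, only one of which the qubit follows:

```latex
\Omega\cos(\omega t)\,\sigma_x
  = \underbrace{\tfrac{\Omega}{2}\bigl[\cos(\omega t)\,\sigma_x + \sin(\omega t)\,\sigma_y\bigr]}_{\text{co-rotating}}
  + \underbrace{\tfrac{\Omega}{2}\bigl[\cos(\omega t)\,\sigma_x - \sin(\omega t)\,\sigma_y\bigr]}_{\text{counter-rotating}}
```

A circularly polarized drive, $\Omega[\cos(\omega t)\,\sigma_x + \sin(\omega t)\,\sigma_y]$, contains only the co-rotating half; in the frame rotating at the qubit frequency it is static, so it ideally produces no counter-rotating error.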

While Ding’s idea worked immediately, the fidelities achieved with circularly polarized drives were not as high as expected from coherence measurements.

“Eventually, we stumbled on a beautifully simple idea,” says Rower. “If we applied pulses at exactly the right times, we should be able to make counter-rotating errors consistent from pulse-to-pulse. This would make the counter-rotating errors correctable. Even better, they would be automatically accounted for with our usual Rabi gate calibrations!”

They called this idea “commensurate pulses,” since the pulses needed to be applied at times commensurate with the qubit’s oscillation period (the inverse of its frequency). Commensurate pulses are defined simply by timing constraints and can be applied to a single linear qubit drive. In contrast, circularly polarized microwaves require two drives and some extra calibration.
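The timing constraint can be checked in a toy simulation (again with illustrative, not experimental, parameters): the effect of a pulse, viewed in the qubit’s rotating frame, depends on when the pulse starts, but repeats exactly whenever start times differ by whole qubit periods.

```python
import math

def expi(hz, hx, dt):
    """2x2 propagator exp(-i*(hz*sigma_z + hx*sigma_x)*dt), returned
    as a flat tuple (u00, u01, u10, u11)."""
    r = math.hypot(hz, hx)
    if r == 0.0:
        return (1.0, 0.0, 0.0, 1.0)
    c, s = math.cos(r * dt), math.sin(r * dt)
    return (c - 1j * s * hz / r, -1j * s * hx / r,
            -1j * s * hx / r, c + 1j * s * hz / r)

def matmul(A, B):
    a00, a01, a10, a11 = A
    b00, b01, b10, b11 = B
    return (a00 * b00 + a01 * b10, a00 * b01 + a01 * b11,
            a10 * b00 + a11 * b10, a10 * b01 + a11 * b11)

def rotating_frame_gate(t0, omega=2 * math.pi, amp=0.3 * 2 * math.pi,
                        steps=4000):
    """Propagator of a resonant cosine pi pulse starting at time t0,
    expressed in the frame rotating at the qubit frequency (this strips
    off the trivial sigma_z precession). Under the rotating-wave
    approximation the result would not depend on t0 at all; any leftover
    t0 dependence is the counter-rotating error."""
    T = math.pi / amp                  # pi-pulse duration
    dt = T / steps
    U = (1.0, 0.0, 0.0, 1.0)
    for n in range(steps):
        t = t0 + (n + 0.5) * dt
        U = matmul(expi(0.5 * omega, amp * math.cos(omega * t), dt), U)
    # undo the frame's precession on both sides of the pulse window
    return matmul(expi(0.5 * omega, 0.0, -(t0 + T)),
                  matmul(U, expi(0.5 * omega, 0.0, t0)))

def dist(A, B):
    return max(abs(x - y) for x, y in zip(A, B))

period = 1.0                           # qubit period when omega = 2*pi
U0 = rotating_frame_gate(0.0)
U_comm = rotating_frame_gate(3 * period)       # commensurate start time
U_incomm = rotating_frame_gate(0.37 * period)  # incommensurate start time
print(dist(U_comm, U0), dist(U_incomm, U0))
```

Here the commensurate pulse reproduces the t0 = 0 gate to numerical precision while the incommensurate one does not, which is the sense in which commensurate timing makes the counter-rotating error consistent, and hence calibratable, from pulse to pulse.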

“I had much fun developing the commensurate technique,” says Rower. “It was simple, we understood why it worked so well, and it should be portable to any qubit suffering from counter-rotating errors!”

“This project makes it clear that counter-rotating errors can be dealt with easily. This is a wonderful thing for low-frequency qubits such as fluxonium, which are looking more and more promising for quantum computing.”

Fluxonium’s promise

Fluxonium is a type of superconducting qubit made up of a capacitor and Josephson junction; unlike transmon qubits, however, fluxonium also includes a large “superinductor,” which by design helps protect the qubit from environmental noise. This protection allows logical operations, or gates, to be performed with greater accuracy.

Despite having higher coherence, however, fluxonium has a lower qubit frequency that is generally associated with proportionally longer gates.

“Here, we’ve demonstrated a gate that is among the fastest and highest-fidelity across all superconducting qubits,” says Ding. “Our experiments really show that fluxonium is a qubit that supports both interesting physical explorations and also absolutely delivers in terms of engineering performance.”

With further research, they hope to reveal new limitations and yield even faster and higher-fidelity gates.

“Counter-rotating dynamics have been understudied in the context of superconducting quantum computing because of how well the rotating-wave approximation holds in common scenarios,” says Ding. “Our paper shows how to precisely calibrate fast, low-frequency gates where the rotating-wave approximation does not hold.”

Physics and engineering team up

“This is a wonderful example of the type of work we like to do in EQuS, because it leverages fundamental concepts in both physics and electrical engineering to achieve a better outcome,” says Oliver. “It builds on our earlier work with non-adiabatic qubit control, applies it to a new qubit — fluxonium — and makes a beautiful connection with counter-rotating dynamics.”

The science and engineering teams enabled the high fidelity in two ways. First, the team demonstrated “commensurate” (synchronous) non-adiabatic control, which goes beyond the “rotating-wave approximation” underlying standard Rabi approaches. This leverages ideas that won the 2023 Nobel Prize in Physics for ultrafast “attosecond” pulses of light.

Secondly, they demonstrated it using an analog to circularly polarized light. Rather than a physical electromagnetic field with a rotating polarization vector in real x-y space, they realized a synthetic version of circularly polarized light using the qubit’s x-y space, which in this case corresponds to its magnetic flux and electric charge.

The combination of a new take on an existing qubit design (fluxonium) and the application of advanced control methods applied to an understanding of the underlying physics enabled this result.

Platform-independent and requiring no additional calibration overhead, this work establishes straightforward strategies for mitigating counter-rotating effects from strong drives in circuit quantum electrodynamics and other platforms, which the researchers expect to be helpful in the effort to realize high-fidelity control for fault-tolerant quantum computing.

Adds Oliver, “With the recent announcement of Google’s Willow quantum chip that demonstrated quantum error correction beyond threshold for the first time, this is a timely result, as we have pushed performance even higher. Higher-performing qubits will lead to lower overhead requirements for implementing error correction.”

Other researchers on the paper are RLE’s Helin Zhang, Max Hays, Patrick M. Harrington, Ilan T. Rosen, Simon Gustavsson, Kyle Serniak, Jeffrey A. Grover, and Junyoung An, who is also with EECS; and MIT Lincoln Laboratory’s Jeffrey M. Gertler, Thomas M. Hazard, Bethany M. Niedzielski, and Mollie E. Schwartz.

This research was funded, in part, by the U.S. Army Research Office, the U.S. Department of Energy Office of Science, National Quantum Information Science Research Centers, Co-design Center for Quantum Advantage, U.S. Air Force, the U.S. Office of the Director of National Intelligence, and the U.S. National Science Foundation.  

2024-25 EECS Student Award Roundup

This ongoing list of awards and recognitions earned by our students is updated throughout the year, beginning in September.

Soroush Araei received the ISSCC 2024 Jack Kilby Outstanding Student Paper Award at IEEE International Solid-State Circuits Conference 2025.

Yiming Chen ’24 was named a 2025 Rhodes China Scholar.

Ezekiel Daye received the Laya and Jerome B. Wiesner Student Art Award from the MIT Office of the Arts.

Anushka Nair was named a 2025 Rhodes Scholar.

David Oluigbo was named a 2025 Rhodes Scholar.

Lara Ozkan was named a 2025 Marshall Scholar.

Sam Vinu-Srivatsan was named a 2025 Brooke Owens Fellow.

Daniela Rus named to French National Academy of Medicine

Daniela Rus, a distinguished computer scientist and professor at the Massachusetts Institute of Technology (MIT), was inducted into the prestigious Académie Nationale de Médecine (ANM) as a foreign member on January 7, 2025. As the Director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Daniela leads over 1,700 researchers in pioneering innovations to advance computing and improve global well-being.

In her acceptance speech, Daniela highlighted her passion for interdisciplinary collaboration and the transformative potential of artificial intelligence (AI) and robotics in medicine. Her groundbreaking work spans critical areas such as developing AI systems to support surgeons and help prevent mistakes, innovating non-invasive surgical procedures, and democratizing access to proton therapy through robotic solutions.

Daniela also emphasized the importance of international cooperation in medical research. Her leadership in organizing joint colloquia between MIT, ANM, and the Health Data Hub in 2020 and 2022 showcased the shared vision of France and the United States in advancing AI applications in medicine. These collaborations have fostered transatlantic dialogue and innovation, addressing challenges and opportunities in the future of healthcare.

Becoming a member of the ANM represents a significant milestone for Daniela Rus, reflecting her dedication to applications of AI for the good of humanity and her strong ties to France’s scientific community.

Daniela Rus’s recognition underscores the growing intersection of computer science and medicine, and her work continues to inspire global collaboration to shape the future of healthcare innovation.

Other recent awards earned by Rus include the 2024 John Scott Award, and the 2025 Edison Medal from the Institute of Electrical and Electronics Engineers (IEEE). Past recipients of the Edison Medal include Alexander Graham Bell and Nikola Tesla.

How hard is it to prevent recurring blackouts in Puerto Rico?

Researchers at MIT’s Laboratory for Information and Decision Systems (LIDS) have shown that using decision-making software and dynamic monitoring of weather and energy use can significantly improve resiliency in the face of weather-related outages, and can also help to efficiently integrate renewable energy sources into the grid.

The researchers point out that the system they suggest might have prevented or at least lessened the kind of widespread power outage that Puerto Rico experienced last week by providing analysis to guide rerouting of power through different lines and thus limit the spread of the outage.

The computer platform, which the researchers describe as DyMonDS, for Dynamic Monitoring and Decision Systems, can be used to enhance the existing operating and planning practices used in the electric industry. The platform supports interactive information exchange and decision-making between the grid operators and grid-edge users — all the distributed power sources, storage systems and software that contribute to the grid. It also supports optimization of available resources and controllable grid equipment as system conditions vary. It further lends itself to implementing cooperative decision-making by different utility- and non-utility-owned electric power grid users, including portfolios of mixed resources, users, and storage. Operating and planning the interactions of the end-to-end high-voltage transmission grid with local distribution grids and microgrids represents another major potential use of this platform.

This general approach was illustrated using a set of publicly available data on both meteorology and details of electricity production and distribution in Puerto Rico. Extended AC Optimal Power Flow software developed by SmartGridz Inc. was used for system-level optimization of controllable equipment. This provides real-time guidance for deciding how much power, and through which transmission lines, should be channeled by adjusting plant dispatch and voltage-related set points, and in extreme cases, where to reduce or cut power in order to maintain physically implementable service for as many customers as possible. The team found that the use of such a system can help to ensure that the greatest number of critical services maintain power even during a hurricane, and at the same time can lead to a substantial decrease in the need for construction of new power plants thanks to more efficient use of existing resources.
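The flavor of such system-level decisions can be sketched with a deliberately tiny, hypothetical merit-order dispatch. This is not the SmartGridz AC Optimal Power Flow, which also handles voltages and full network physics; all plant names and numbers below are invented.

```python
def dispatch(load_mw, generators):
    """Toy merit-order dispatch: fill the load from the cheapest units
    first, limiting each unit to what it can actually deliver (its
    capacity capped by its transmission-path limit).
    generators: list of (name, capacity_mw, path_limit_mw, cost_per_mwh).
    Returns ({name: output_mw}, unserved_mw)."""
    schedule, remaining = {}, load_mw
    for name, cap, path_limit, _cost in sorted(generators,
                                               key=lambda g: g[3]):
        output = min(cap, path_limit, remaining)
        if output > 0:
            schedule[name] = output
            remaining -= output
    return schedule, remaining

# Hypothetical plants serving a 250 MW load center in the north:
gens = [
    ("south_gas", 300, 150, 40),    # cheap, but behind a weak line north
    ("north_peaker", 120, 120, 95), # expensive, close to the load
    ("west_solar", 80, 80, 5),      # cheapest while the sun shines
]
plan, unserved_mw = dispatch(250, gens)
print(plan, unserved_mw)
```

Even in this caricature, the transmission limit forces the cheap southern plant to hand part of the load to the expensive local peaker, which is exactly the kind of trade-off the optimization software weighs across a real network.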

The findings are described in a paper in the journal Foundations and Trends in Electric Energy Systems, by MIT LIDS researchers Marija Ilic and Laurentiu Anton, along with recent alumna Ramapathi Jaddivada.

“Using this software,” Ilic says, they show that “even during bad weather, if you predict equipment failures, and by using that information exchange, you can localize the effect of equipment failures and still serve a lot of customers, 50 percent of customers, when otherwise things would black out.”

Anton says that “the way many grids today are operated is sub-optimal.” As a result, “we showed how much better they could do even under normal conditions, without any failures, by utilizing this software.” The savings resulting from this optimization, under everyday conditions, could be tens of percent, they say.

The way utility systems plan currently, Ilic says, “usually the standard is that they have to build enough capacity and operate in real time so that if one large piece of equipment fails, like a large generator or transmission line, you still serve customers in an uninterrupted way. That’s what’s called N-minus-1.” Under this policy, if one major component of the system fails, they should be able to maintain service for at least 30 minutes. That system allows utilities to plan for how much reserve generating capacity they need to have on hand. That’s expensive, Ilic points out, because it means maintaining this reserve capacity all the time, even under normal operating conditions when it’s not needed.

In addition, “right now there are no criteria for what I call N-minus-K,” she says. If bad weather causes five pieces of equipment to fail at once, “there is no software to help utilities decide what to schedule” in terms of keeping the most customers, and the most important services such as hospitals and emergency services, provided with power. They showed that even with 50 percent of the infrastructure out of commission, it would still be possible to keep power flowing to a large proportion of customers.
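The scheduling question can be illustrated with a toy priority-based load-shedding sketch. The load names, priorities, and numbers are invented, and real N-minus-K software must also respect network constraints; this only shows the triage principle.

```python
def keep_critical(available_mw, loads):
    """Toy N-minus-K triage: with only available_mw of deliverable supply
    left after K simultaneous failures, serve loads in priority order
    (0 = most critical, e.g. hospitals), shedding whatever no longer fits.
    loads: list of (name, demand_mw, priority)."""
    served, shed, remaining = [], [], available_mw
    for name, demand, _priority in sorted(loads, key=lambda l: l[2]):
        if demand <= remaining:
            served.append(name)
            remaining -= demand
        else:
            shed.append(name)
    return served, shed

# Hypothetical loads after a storm knocks out half the generation:
loads = [
    ("hospital", 20, 0),
    ("water_pumps", 15, 0),
    ("industrial_park", 60, 2),
    ("residential_a", 40, 1),
]
served, shed = keep_critical(80, loads)
print(served, shed)
```

The point is that with an explicit priority ordering, deciding what to keep energized becomes a computation rather than an ad hoc judgment made mid-emergency.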

Their work on analyzing the power situation in Puerto Rico started after the island had been devastated by hurricanes Irma and Maria. Most of the electric generation capacity is in the south, yet the largest loads are in San Juan, in the north, and Mayaguez in the west. When transmission lines get knocked down, a lot of rerouting of power needs to happen quickly.

With the new systems, “the software finds the optimal adjustments for set points,” Anton says. For example, voltages can be adjusted to redirect power through less-congested lines, or raised to lessen power losses.

The software also helps in the long-term planning for the grid. As many fossil-fuel power plants are scheduled to be decommissioned soon in Puerto Rico, as they are in many other places, planning for how to replace that power without having to resort to greenhouse gas-emitting sources is a key to achieving carbon-reduction goals. And by analyzing usage patterns, the software can guide the placement of new renewable power sources where they can most efficiently provide power where and when it’s needed.

As plants are retired or as components are affected by weather, “We wanted to ensure the dispatchability of power when the load changes,” Anton says, “but also when crucial components are lost, to ensure the robustness at each step of the retirement schedule.”

One thing they found was that “if you look at how much generating capacity exists, it’s more than the peak load, even after you retire a few fossil plants,” Ilic says. “But it’s hard to deliver.” Strategic planning of new distribution lines could make a big difference.

Jaddivada, director of innovation at SmartGridz, says that “we evaluated different possible architectures in Puerto Rico, and we showed the ability of this software to ensure uninterrupted electricity service. This is the most important challenge utilities have today. They have to go through a computationally tedious process to make sure the grid functions for any possible outage in the system. And that can be done in a much more efficient way through the software that the company developed.”

The project was a collaborative effort among the MIT LIDS researchers and colleagues at MIT Lincoln Laboratory and the Pacific Northwest National Laboratory, with support from the SmartGridz software.