AI accelerates problem-solving in complex scenarios

While Santa Claus may have a magical sleigh and nine plucky reindeer to help him deliver presents, for companies like FedEx, the optimization problem of efficiently routing holiday packages is so complicated that they often employ specialized software to find a solution.

This software, called a mixed-integer linear programming (MILP) solver, splits a massive optimization problem into smaller pieces and uses generic algorithms to try to find the best solution. However, the solver could take hours — or even days — to arrive at a solution.

The process is so onerous that a company often must stop the software partway through, accepting a solution that is not ideal but the best that could be generated in a set amount of time.

Researchers from MIT and ETH Zurich used machine learning to speed things up.

They identified a key intermediate step in MILP solvers that has so many potential solutions it takes an enormous amount of time to unravel, which slows the entire process. The researchers employed a filtering technique to simplify this step, then used machine learning to find the optimal solution for a specific type of problem.

Their data-driven approach enables a company to use its own data to tailor a general-purpose MILP solver to the problem at hand.

This new technique sped up MILP solvers between 30 and 70 percent, without any drop in accuracy. One could use this method to obtain an optimal solution more quickly or, for especially complex problems, a better solution in a tractable amount of time.

This approach could be used wherever MILP solvers are employed, such as by ride-hailing services, electric grid operators, vaccination distributors, or any entity faced with a thorny resource-allocation problem.

“Sometimes, in a field like optimization, it is very common for folks to think of solutions as either purely machine learning or purely classical. I am a firm believer that we want to get the best of both worlds, and this is a really strong instantiation of that hybrid approach,” says senior author Cathy Wu, the Gilbert W. Winslow Career Development Assistant Professor in Civil and Environmental Engineering (CEE), and a member of the Laboratory for Information and Decision Systems (LIDS) and the Institute for Data, Systems, and Society (IDSS).

Wu wrote the paper with co-lead authors Sirui Li, an IDSS graduate student, and Wenbin Ouyang, a CEE graduate student; as well as Max Paulus, a graduate student at ETH Zurich. The research will be presented at the Conference on Neural Information Processing Systems.

Tough to solve

MILP problems have an exponential number of potential solutions. For instance, say a traveling salesperson wants to find the shortest path to visit several cities and then return to their city of origin. If there are many cities that could be visited in any order, the number of potential solutions might be greater than the number of atoms in the universe.
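To see the scale, fix the starting city and ignore travel direction: a tour of n cities can be ordered in (n - 1)!/2 distinct ways. A quick back-of-the-envelope check in Python:

```python
from math import factorial

def distinct_tours(n_cities: int) -> int:
    """Distinct round trips through n cities: fixing the start city
    and ignoring travel direction leaves (n - 1)! / 2 orderings."""
    return factorial(n_cities - 1) // 2
```

Here `distinct_tours(4)` returns 3, while `distinct_tours(61)` already exceeds 10^80, roughly the estimated number of atoms in the observable universe.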

“These problems are called NP-hard, which means it is very unlikely there is an efficient algorithm to solve them. When the problem is big enough, we can only hope to achieve some suboptimal performance,” Wu explains.

An MILP solver employs an array of techniques and practical tricks that can achieve reasonable solutions in a tractable amount of time.

A typical solver uses a divide-and-conquer approach, first splitting the space of potential solutions into smaller pieces with a technique called branching. Then, the solver employs a technique called cutting to tighten up these smaller pieces so they can be searched faster.
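The branching idea can be illustrated on a toy integer program. The sketch below (a minimal pure-Python example, not the internals of any production solver) runs branch-and-bound on a 0/1 knapsack problem, pruning any subtree whose linear-relaxation bound cannot beat the best solution found so far:

```python
def knapsack_bb(values, weights, capacity):
    """Branch-and-bound for the 0/1 knapsack problem (toy example)."""
    # Sort items by value density so the fractional relaxation is greedy.
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    vals = [values[i] for i in order]
    wts = [weights[i] for i in order]
    best = 0

    def bound(i, cap, acc):
        # LP-relaxation bound: fill greedily, allowing one fractional item.
        b = acc
        while i < len(vals) and wts[i] <= cap:
            cap -= wts[i]
            b += vals[i]
            i += 1
        if i < len(vals):
            b += vals[i] * cap / wts[i]
        return b

    def branch(i, cap, acc):
        nonlocal best
        if acc > best:
            best = acc  # record the incumbent solution
        if i == len(vals) or bound(i, cap, acc) <= best:
            return  # prune: the relaxation cannot beat the incumbent
        if wts[i] <= cap:
            branch(i + 1, cap - wts[i], acc + vals[i])  # take item i
        branch(i + 1, cap, acc)  # skip item i

    branch(0, capacity, 0)
    return best
```

For values [60, 100, 120], weights [10, 20, 30], and capacity 50, the search returns the optimal value 220 while skipping branches that brute-force enumeration would visit.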

Cutting uses a set of rules that tighten the search space without removing any feasible solutions. These rules are generated by a few dozen algorithms, known as separators, that have been created for different kinds of MILP problems. 

Wu and her team found that the process of identifying the ideal combination of separator algorithms to use is, in itself, a problem with an exponential number of solutions.

“Separator management is a core part of every solver, but this is an underappreciated aspect of the problem space. One of the contributions of this work is identifying the problem of separator management as a machine learning task to begin with,” she says.

Shrinking the solution space

She and her collaborators devised a filtering mechanism that reduces this separator search space from more than 130,000 potential combinations to around 20 options. This filtering mechanism draws on the principle of diminishing marginal returns, which says that most of the benefit comes from a small set of algorithms, and adding more brings little extra improvement.
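That filtering step can be sketched as a greedy subset selection that stops once the marginal gain falls below a cutoff. This is an illustrative sketch, not the paper's algorithm; `score` is a hypothetical function estimating how well a subset of separators performs:

```python
def greedy_filter(candidates, score, min_gain=1e-3, max_size=20):
    """Greedily build a small subset of separator algorithms, stopping
    when the marginal improvement drops below min_gain (the
    diminishing-returns cutoff). `score` is assumed to map a subset
    to an estimated solver-performance number."""
    chosen = []
    current = score(chosen)
    while len(chosen) < max_size:
        gains = [(score(chosen + [c]) - current, c)
                 for c in candidates if c not in chosen]
        if not gains:
            break
        gain, best = max(gains)
        if gain < min_gain:
            break  # diminishing returns: further additions add little
        chosen.append(best)
        current += gain
    return chosen
```

With a toy additive score, the routine keeps only the candidates whose marginal contribution clears the cutoff.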

Then they use a machine-learning model to pick the best combination of algorithms from among the 20 remaining options.

This model is trained with a dataset specific to the user’s optimization problem, so it learns to choose algorithms that best suit the user’s particular task. Since a company like FedEx has solved routing problems many times before, using real data gleaned from past experience should lead to better solutions than starting from scratch each time.

The model is trained through an iterative process known as contextual bandits, a form of reinforcement learning: it picks a potential solution, gets feedback on how good it was, and then tries again to find a better one.
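A minimal sketch of that loop, assuming discrete contexts (problem instances) and arms (separator combinations), and using a simple epsilon-greedy strategy rather than whatever bandit algorithm the paper actually employs:

```python
import random

class EpsilonGreedyBandit:
    """Toy contextual bandit: pick an arm for a context, observe a
    reward, and update a running estimate of each (context, arm) pair."""

    def __init__(self, arms, epsilon=0.1, seed=0):
        self.arms = arms
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = {}  # (context, arm) -> number of pulls
        self.means = {}   # (context, arm) -> running mean reward

    def select(self, context):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.arms)  # explore
        # Exploit: arm with the highest estimated reward for this context.
        return max(self.arms,
                   key=lambda a: self.means.get((context, a), 0.0))

    def update(self, context, arm, reward):
        key = (context, arm)
        n = self.counts.get(key, 0) + 1
        self.counts[key] = n
        mean = self.means.get(key, 0.0)
        self.means[key] = mean + (reward - mean) / n  # incremental average
```

After enough pick-and-feedback rounds, the estimates converge on the best arm for each context, mirroring the iterative process described above.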

This data-driven approach accelerated MILP solvers between 30 and 70 percent without any drop in accuracy. Moreover, the speedup was similar when they applied it to a simpler, open-source solver and a more powerful, commercial solver.

In the future, Wu and her collaborators want to apply this approach to even more complex MILP problems, where gathering labeled data to train the model could be especially challenging. Perhaps they can train the model on a smaller dataset and then tweak it to tackle a much larger optimization problem, she says. The researchers are also interested in interpreting the learned model to better understand the effectiveness of different separator algorithms.

This research is supported, in part, by MathWorks, the National Science Foundation (NSF), the MIT Amazon Science Hub, and MIT’s Research Support Committee.

Students pitch transformative ideas in generative AI at MIT Ignite competition

This semester, students and postdocs across MIT were invited to submit ideas for the first-ever MIT Ignite: Generative AI Entrepreneurship Competition. Over 100 teams submitted proposals for startups that utilize generative artificial intelligence technologies to develop solutions across a diverse range of disciplines including human health, climate change, education, and workforce dynamics.

On Oct. 30, 12 finalists pitched their ideas in front of a panel of expert judges and a packed room in Samberg Conference Center.

“MIT has a responsibility to help shape a future of AI innovation that is broadly beneficial — and to do that, we need a lot of great ideas. So, we turned to a pretty reliable source of great ideas: MIT’s highly entrepreneurial students and postdocs,” said MIT President Sally Kornbluth in her opening remarks at the event. 

The MIT Ignite event is part of a broader focus on generative AI at MIT put forth by Kornbluth. This fall, across the Institute, researchers and students are exploring opportunities to contribute their knowledge on generative AI, identifying new applications, minimizing risks, and employing it for the benefit of society. This event — co-organized by the MIT-IBM Watson AI Lab and the Martin Trust Center for MIT Entrepreneurship, and supported by MIT’s School of Engineering and the MIT Sloan School of Management — inspired young researchers to contribute to the dialogue and innovate in generative AI.

Serving as co-chairs for the event were Aude Oliva, MIT director of the MIT-IBM Watson AI Lab and a principal investigator in the Computer Science and Artificial Intelligence Laboratory (CSAIL); Bill Aulet, the Ethernet Inventors Professor of the Practice at the MIT Sloan School of Management and director of the Martin Trust Center; and Dina Katabi, the Thuan (1990) and Nicole Pham Professor in the Department of Electrical Engineering and Computer Science, director of the Center for Wireless Networks and Mobile Computing, and a CSAIL principal investigator.

Twelve teams of students and postdocs competed for a number of prizes, including five MIT Ignite Flagship Prizes of $15,000 each, a special first-year undergraduate student team Flagship Prize, and runner-up prizes. All prizes were provided by the MIT-IBM Watson AI Lab. Teams were judged on their projects’ innovative applications of generative AI, feasibility, potential for real-world impact, and the quality of presentation.

After the 12 teams showcased their technology, its potential to address an issue, and the team’s ability to execute the plan, a panel of judges deliberated. As the audience waited for the results, remarks were made by Mark Gorenberg ’76, chair of the MIT Corporation; Anantha Chandrakasan, dean of the MIT School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science; and David Schmittlein, the John C. Head III Dean and professor of marketing at the MIT Sloan School of Management. The student winners included:

MIT Ignite Flagship Prizes

eMote (Philip Cherner, Julia Sebastien, Caroline Lige Zhang, and Daeun Yoo): Sometimes identifying and expressing emotions is difficult, particularly for those on the alexithymia spectrum; further, therapy can be expensive. eMote’s app allows users to identify their emotions, visualize them as art using the co-creative process of generative AI, and reflect on them through journaling, thereby assisting school counselors and therapists.

LeGT.ai (Julie Shi, Jessica Yuan, and Yubing Cui): Legal processes around immigration can be complicated and costly. LeGT.ai aims to democratize legal knowledge. Using a platform with a large language model, prompt engineering, and semantic search, the team will streamline a chatbot for completion, research, and drafting of documents for firms, as well as improve pre-screening and initial consultations.

Sunona (Emmi Mills, Selin Kocalar, Srihitha Dasari, and Karun Kaushik): About half of a doctor’s day is consumed by medical documentation and clinical notes. To address this, Sunona harnesses audio transcription and a large language model to transform audio from a doctor’s visit into notes and feature extraction, affording providers more time in their day.

UltraNeuro (Mahdi Ramadan, Adam Gosztolai, Alaa Khaddaj, and Samara Khater): For about one in seven adults, spinal cord injury, stroke, or disease will induce motor impairment and/or paralysis. UltraNeuro’s neuroprosthetics will help patients to regain some of their daily abilities without invasive brain implants. Their technology leverages an electroencephalogram, smart sensors, and a multimodal AI system (muscle EMG, computer vision, eye movements) trained on thousands of movements to plan precise limb movements.

UrsaTech (Rui Zhou, Jerry Shan, Kate Wang, Alan He, and Rita Zhang): Education today is marked by disparities and overburdened educators. UrsaTech’s platform uses a multimodal large language model and diffusion models to create lessons, dynamic content, and assessments to assist teachers and learners. The system also offers immersive learning with AI agents to support active learning, both online and offline.

First-Year Undergraduate Student Team MIT Ignite Flagship Prize

Alikorn (April Ren and Ayush Nayak): Drug discovery accounts for significant biotech costs. Alikorn’s large language model-powered platform aims to streamline the process of creating and simulating new molecules, using a generative adversarial network, a Monte Carlo algorithm to vet the most promising candidates, and a physics simulation to determine the chemical properties.

Runner-up Prizes

Autonomous Cyber (James “Patrick” O’Brien, Madeline Linde, Rafael Turner, and Bohdan Volyanyuk): Code security audits require expertise and are expensive. “Fuzzing” code — injecting invalid or unexpected inputs to reveal software vulnerabilities — can make software significantly safer. Autonomous Cyber’s system leverages large language models to automatically integrate “fuzzers” into databases.

Gen EGM (Noah Bagazinski and Kristen Edwards): Making informed socioeconomic development policies requires evidence and data. Gen EGM’s large language model system expedites the process by examining and analyzing literature, and then produces an evidence gap map (EGM), suggesting potential impact areas.

Mattr AI (Leandra Tejedor, Katie Chen, and Eden Adler): Datasets that are used to train AI models often have issues of diversity, equity, and completeness. Mattr AI addresses this by pairing a large language model with stable diffusion models to augment datasets.

Neuroscreen (Andrew Lu, Chonghua Xue, and Grant Robinson): Screening patients to potentially join a dementia clinical trial is costly, often takes years, and most often ends in ineligibility. Neuroscreen employs AI to more quickly assess the causes of patients’ dementia, leading to more successful enrollment in clinical trials and treatment of conditions.

The Data Provenance Initiative (Naana Obeng-Marnu, Jad Kabbara, Shayne Longpre, William Brannon, and Robert Mahari): Datasets that are used to train AI models, particularly large language models, often have missing or incorrect metadata, raising legal and ethical concerns. The Data Provenance Initiative uses AI-assisted annotation to audit datasets, tracking the lineage and legal status of data to improve transparency and address legal and ethical concerns around data.

Theia (Jenny Yao, Hongze Bo, Jin Li, Ao Qu, and Hugo Huang): Scientific research, and online dialogue around it, often occurs in silos. Theia’s platform aims to bring these walls down. Generative AI technology will summarize papers and help to guide research directions, providing a service for scholars as well as the broader scientific community.

After the MIT Ignite competition, all 12 teams selected to present were invited to a networking event as an immediate first step to making their ideas and prototypes a reality. Additionally, they were invited to further develop their ideas with the support of the Martin Trust Center for MIT Entrepreneurship through StartMIT or MIT Fuse and the MIT-IBM Watson AI Lab.

“In the months since I’ve arrived [at MIT], I’ve learned a lot about how MIT folks think about entrepreneurship and how it’s really built into everything that everyone at the Institute does, from first-year students to faculty to alumni — they are really motivated to get their ideas out into the world,” said President Kornbluth. “Entrepreneurship is an essential element for our goal of organizing for positive impact.”

New method uses crowdsourced feedback to help train robots

To teach an AI agent a new task, like how to open a kitchen cabinet, researchers often use reinforcement learning — a trial-and-error process where the agent is rewarded for taking actions that get it closer to the goal.

In many instances, a human expert must carefully design a reward function, which is an incentive mechanism that gives the agent motivation to explore. The human expert must iteratively update that reward function as the agent explores and tries different actions. This can be time-consuming, inefficient, and difficult to scale up, especially when the task is complex and involves many steps.

Researchers from MIT, Harvard University, and the University of Washington have developed a new reinforcement learning approach that doesn’t rely on an expertly designed reward function. Instead, it leverages crowdsourced feedback, gathered from many nonexpert users, to guide the agent as it learns to reach its goal.

While some other methods also attempt to utilize nonexpert feedback, this new approach enables the AI agent to learn more quickly, despite the fact that data crowdsourced from users are often full of errors. These noisy data might cause other methods to fail.

In addition, this new approach allows feedback to be gathered asynchronously, so nonexpert users around the world can contribute to teaching the agent.

“One of the most time-consuming and challenging parts in designing a robotic agent today is engineering the reward function. Today reward functions are designed by expert researchers — a paradigm that is not scalable if we want to teach our robots many different tasks. Our work proposes a way to scale robot learning by crowdsourcing the design of reward function and by making it possible for nonexperts to provide useful feedback,” says Pulkit Agrawal, an assistant professor in the MIT Department of Electrical Engineering and Computer Science (EECS) who leads the Improbable AI Lab in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

In the future, this method could help a robot learn to perform specific tasks in a user’s home quickly, without the owner needing to show the robot physical examples of each task. The robot could explore on its own, with crowdsourced nonexpert feedback guiding its exploration.

“In our method, the reward function guides the agent to what it should explore, instead of telling it exactly what it should do to complete the task. So, even if the human supervision is somewhat inaccurate and noisy, the agent is still able to explore, which helps it learn much better,” explains lead author Marcel Torne ’23, a research assistant in the Improbable AI Lab.

Torne is joined on the paper by his MIT advisor, Agrawal; senior author Abhishek Gupta, assistant professor at the University of Washington; as well as others at the University of Washington and MIT. The research will be presented at the Conference on Neural Information Processing Systems next month.

Noisy feedback

One way to gather user feedback for reinforcement learning is to show a user two photos of states achieved by the agent, and then ask that user which state is closer to a goal. For instance, perhaps a robot’s goal is to open a kitchen cabinet. One image might show that the robot opened the cabinet, while the second might show that it opened the microwave. A user would pick the photo of the “better” state.

Some previous approaches try to use this crowdsourced, binary feedback to optimize a reward function that the agent would use to learn the task. However, because nonexperts are likely to make mistakes, the reward function can become very noisy, so the agent might get stuck and never reach its goal.

“Basically, the agent would take the reward function too seriously. It would try to match the reward function perfectly. So, instead of directly optimizing over the reward function, we just use it to tell the robot which areas it should be exploring,” Torne says.

He and his collaborators decoupled the process into two separate parts, each directed by its own algorithm. They call their new reinforcement learning method HuGE (Human Guided Exploration).

On one side, a goal selector algorithm is continuously updated with crowdsourced human feedback. The feedback is not used as a reward function, but rather to guide the agent’s exploration. In a sense, the nonexpert users drop breadcrumbs that incrementally lead the agent toward its goal.

On the other side, the agent explores on its own, in a self-supervised manner guided by the goal selector. It collects images or videos of actions that it tries, which are then sent to humans and used to update the goal selector.

This narrows down the area for the agent to explore, leading it to more promising areas that are closer to its goal. But if there is no feedback, or if feedback takes a while to arrive, the agent will keep learning on its own, albeit in a slower manner. This enables feedback to be gathered infrequently and asynchronously.
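The goal-selector side can be sketched as a rating system over visited states, updated by binary human comparisons. This toy version is illustrative only (the rating rule and state names are not the paper's): it applies Elo-style updates to preferred states and samples exploration goals via a softmax over the ratings:

```python
import math
import random

class GoalSelector:
    """Toy goal selector in the spirit of HuGE: crowdsourced binary
    comparisons raise the rating of preferred states, and the agent
    samples its next exploration goal from the higher-rated states."""

    def __init__(self, seed=0):
        self.scores = {}  # state -> Elo-style rating
        self.rng = random.Random(seed)

    def record_comparison(self, winner, loser, k=32):
        # Elo-style update: the state a human preferred gains rating.
        ra = self.scores.setdefault(winner, 1000.0)
        rb = self.scores.setdefault(loser, 1000.0)
        expected = 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))
        self.scores[winner] = ra + k * (1.0 - expected)
        self.scores[loser] = rb - k * (1.0 - expected)

    def sample_goal(self, temperature=100.0):
        # Softmax sampling: better-rated states are chosen more often,
        # but lower-rated ones remain reachable, so sparse or noisy
        # feedback only biases exploration rather than dictating it.
        states = list(self.scores)
        if not states:
            return None
        weights = [math.exp(self.scores[s] / temperature) for s in states]
        return self.rng.choices(states, weights=weights)[0]
```

Because feedback only reweights where the agent explores, occasional wrong answers shift the sampling distribution slightly rather than corrupting a reward signal, matching the robustness described above.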

“The exploration loop can keep going autonomously, because it is just going to explore and learn new things. And then when you get some better signal, it is going to explore in more concrete ways. You can just keep them turning at their own pace,” adds Torne.

And because the feedback is just gently guiding the agent’s behavior, it will eventually learn to complete the task even if users provide incorrect answers.

Faster learning

The researchers tested this method on a number of simulated and real-world tasks. In simulation, they used HuGE to effectively learn tasks with long sequences of actions, such as stacking blocks in a particular order or navigating a large maze.

In real-world tests, they utilized HuGE to train robotic arms to draw the letter “U” and pick and place objects. For these tests, they crowdsourced data from 109 nonexpert users in 13 different countries spanning three continents.

In real-world and simulated experiments, HuGE helped agents learn to achieve the goal faster than other methods.

The researchers also found that data crowdsourced from nonexperts yielded better performance than synthetic data, which were produced and labeled by the researchers. For nonexpert users, labeling 30 images or videos took fewer than two minutes.

“This makes it very promising in terms of being able to scale up this method,” Torne adds.

In a related paper, which the researchers presented at the recent Conference on Robot Learning, they enhanced HuGE so an AI agent can learn to perform the task, and then autonomously reset the environment to continue learning. For instance, if the agent learns to open a cabinet, the method also guides the agent to close the cabinet.

“Now we can have it learn completely autonomously without needing human resets,” he says.

The researchers also emphasize that, in this and other learning approaches, it is critical to ensure that AI agents are aligned with human values.

In the future, they want to continue refining HuGE so the agent can learn from other forms of communication, such as natural language and physical interactions with the robot. They are also interested in applying this method to teach multiple agents at once.

This research is funded, in part, by the MIT-IBM Watson AI Lab.

Celebrating five years of MIT.nano

There is vast opportunity for nanoscale innovation to transform the world in positive ways, said MIT.nano Director Vladimir Bulović as he posed two questions to attendees at the start of the inaugural Nano Summit: “Where are we heading? And what is the next big thing we can develop?”

“The answer to that puts into perspective our main purpose — and that is to change the world,” Bulović, the Fariborz Maseeh Professor of Emerging Technologies, told an audience of more than 325 in-person and 150 virtual participants gathered for an exploration of nano-related research at MIT and a celebration of MIT.nano’s fifth anniversary.

Over a decade ago, MIT embarked on a massive project for the ultra-small — building an advanced facility to support research at the nanoscale. Construction of MIT.nano in the heart of MIT’s campus, a process compared to assembling a ship in a bottle, began in 2015, and the facility launched in October 2018.

Fast forward five years: MIT.nano now contains nearly 170 tools and instruments serving more than 1,200 trained researchers. These individuals come from over 300 principal investigator labs, representing more than 50 MIT departments, labs, and centers. The facility also serves external users from industry, other academic institutions, and over 130 startup and multinational companies.

A cross section of these faculty and researchers joined industry partners and MIT community members to kick off the first Nano Summit, which is expected to become an annual flagship event for MIT.nano and its industry consortium. Held on Oct. 24, the inaugural conference was co-hosted by the MIT Industrial Liaison Program.

Six topical sessions highlighted recent developments in quantum science and engineering, materials, advanced electronics, energy, biology, and immersive data technology. The Nano Summit also featured startup ventures and an art exhibition.

Watch the videos here.

Seeing and manipulating at the nanoscale — and beyond

“We need to develop new ways of building the next generation of materials,” said Frances Ross, the TDK Professor in Materials Science and Engineering (DMSE). “We need to use electron microscopy to help us understand not only what the structure is after it’s built, but how it came to be. I think the next few years in this piece of the nano realm are going to be really amazing.”

Speakers in the session “The Next Materials Revolution,” chaired by MIT.nano co-director for Characterization.nano and associate professor in DMSE James LeBeau, highlighted areas in which cutting-edge microscopy provides insights into the behavior of functional materials at the nanoscale, from anti-ferroelectrics to thin-film photovoltaics and 2D materials. They shared images and videos collected using the instruments in MIT.nano’s characterization suites, which were specifically designed and constructed to minimize mechanical-vibrational and electromagnetic interference.

Later, in the “Biology and Human Health” session chaired by Boris Magasanik Professor of Biology Thomas Schwartz, biologists echoed the materials scientists, stressing the importance of the ultra-quiet, low-vibration environment in Characterization.nano to obtain high-resolution images of biological structures.

“Why is MIT.nano important for us?” asked Schwartz. “An important element of biology is to understand the structure of biological macromolecules. We want to get to an atomic resolution of these structures. CryoEM (cryo-electron microscopy) is an excellent method for this. In order to enable the resolution revolution, we had to get these instruments to MIT. For that, MIT.nano was fantastic.”

Seychelle Vos, the Robert A. Swanson (1969) Career Development Professor of Life Sciences, shared CryoEM images from her lab’s work, followed by biology Associate Professor Joey Davis who spoke about image processing. When asked about the next stage for CryoEM, Davis said he’s most excited about in-situ tomography, noting that there are new instruments being designed that will improve the current labor-intensive process.

To chart the future of energy, chemistry associate professor Yogi Surendranath is also using MIT.nano to see what is happening at the nanoscale in his research to use renewable electricity to change carbon dioxide into fuel.

“MIT.nano has played an immense role, not only in facilitating our ability to make nanostructures, but also to understand nanostructures through advanced imaging capabilities,” said Surendranath. “I see a lot of the future of MIT.nano around the question of how nanostructures evolve and change under the conditions that are relevant to their function. The tools at MIT.nano can help us sort that out.”

Tech transfer and quantum computing

The “Advanced Electronics” session chaired by Jesús del Alamo, the Donner Professor of Science in the Department of Electrical Engineering and Computer Science (EECS), brought together industry partners and MIT faculty for a panel discussion on the future of semiconductors and microelectronics. “Excellence in innovation is not enough, we also need to be excellent in transferring these to the marketplace,” said del Alamo. On this point, panelists spoke about strengthening the industry-university connection, as well as the importance of collaborative research environments and of access to advanced facilities, such as MIT.nano, for these environments to thrive.

The session came on the heels of a startup exhibit in which 11 START.nano companies presented their technologies in health, energy, climate, and virtual reality, among other topics. START.nano, MIT.nano’s hard-tech accelerator, provides participants with use of MIT.nano’s facilities at a discounted rate and access to MIT’s startup ecosystem. The program aims to ease hard-tech startups’ transition from the lab to the marketplace, helping them survive the common “valleys of death” as they move from idea to prototype to scaling up.

When asked about the state of quantum computing in the “Quantum Science and Engineering” session, physics professor Aram Harrow related his response to these startup challenges. “There are quite a few valleys to cross — there are the technical valleys, and then also the commercial valleys.” He spoke about scaling superconducting qubits and qubits made of suspended trapped ions, and the need for more scalable architectures, which we have the ingredients for, he said, but putting everything together is quite challenging.

Throughout the session, William Oliver, professor of physics and the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science, asked the panelists how MIT.nano can address challenges in assembly and scalability in quantum science.

“To harness the power of students to innovate, you really need to allow them to get their hands dirty, try new things, try all their crazy ideas, before this goes into a foundry-level process,” responded Kevin O’Brien, associate professor in EECS. “That’s what my group has been working on at MIT.nano, building these superconducting quantum processors using the state-of-the-art fabrication techniques in MIT.nano.”

Connecting the digital to the physical

In his reflections on the semiconductor industry, Douglas Carlson, senior vice president for technology at MACOM, stressed connecting the digital world to real-world application. Later, in the “Immersive Data Technology” session, MIT.nano associate director Brian Anthony explained how, at the MIT.nano Immersion Lab, researchers are doing just that.

“We think about and facilitate work that has the human immersed between hardware, data, and experience,” said Anthony, principal research scientist in mechanical engineering. He spoke about using the capabilities of the Immersion Lab to apply immersive technologies to different areas — health, sports, performance, manufacturing, and education, among others. Speakers in this session gave specific examples in hardware, pediatric health, and opera.

Anthony connected this third pillar of MIT.nano to the fab and characterization facilities, highlighting how the Immersion Lab supports work conducted in other parts of the building. The Immersion Lab’s strength, he said, is taking novel work being developed inside MIT.nano and bringing it up to the human scale to think about applications and uses.

Artworks that are scientifically inspired

The Nano Summit closed with a reception at MIT.nano where guests could explore the facility and gaze through the cleanroom windows, where users were actively conducting research. Attendees were encouraged to visit an exhibition on MIT.nano’s first- and second-floor galleries featuring work by students from the MIT Program in Art, Culture, and Technology (ACT) who were invited to utilize MIT.nano’s tool sets and environments as inspiration for art.

In his closing remarks, Bulović reflected on the community of people who keep MIT.nano running and who are using the tools to advance their research. “Today we are celebrating the facility and all the work that has been done over the last five years to bring it to where it is today. It is there to function not just as a space, but as an essential part of MIT’s mission in research, innovation, and education. I hope that all of us here today take away a deep appreciation and admiration for those who are leading the journey into the nano age.”

Judy Hoyt, a pioneer in semiconductor research, is remembered


Professor Judy Hoyt, a pioneer in semiconductor research, passed away on Aug. 6, 2023. She was 65 years old.

Hoyt is well known for her groundbreaking work with strained silicon semiconductor materials, work that helped greatly decrease the size of integrated circuits. Her most recognized contribution was the first demonstration that incorporating lattice strain can enhance performance in scaled silicon devices, a key concept behind the continuation of the Moore’s Law roadmap for the last 20 years. This contribution has informed virtually every high-performance chip manufactured today, contributing directly to the growth of both the $500 billion semiconductor industry and the multitrillion-dollar electronics market.

Hoyt’s contributions earned her the 2011 IEEE Andrew S. Grove Award (shared with Eugene Fitzgerald) and the 2018 University Research Award from the Semiconductor Industry Association in collaboration with the Semiconductor Research Corporation. 

Born on Jan. 5, 1958, Hoyt was a native of Garden City in Long Island, NY.  She was not only a talented musician (simultaneously leading her high school band and a swing jazz band) but also a dedicated student, who earned the rank of valedictorian before going on to earn her undergraduate degree in Physics and Applied Mathematics at UC Berkeley in 1980, and her MS and PhD degrees in Applied Physics at Stanford University in 1983 and 1987, respectively.

After graduation, she stayed on at Stanford as a research associate and then senior research associate before joining the faculty of the MIT Department of Electrical Engineering and Computer Science (EECS) as a professor in 2000. From 2005 to 2018, she served as an associate director of the Microsystems Technology Laboratories (MTL), where she was a key contributor to the labs’ operations, and she was an effective proponent of and key contributor to the configuration and design of the new MIT.nano building. Throughout her academic career, Hoyt was a dedicated teacher and mentor to her students at both Stanford and MIT, many of whom went on to distinguished careers in the semiconductor industry. 

Outside of MIT, she was an avid cyclist who loved the outdoors and animals; her lifelong love of music sustained her as well. 

All at MIT who knew Hoyt will remember her as a gentle soul and a caring friend whose puckish humor and unassuming demeanor hid a stern wisdom, unimpeachable sense of responsibility, and passionate loyalty to her students and her family. She is survived by sister Barbara, brothers Robert and John, and her father George, as well as longtime close friends and colleagues Conor Rafferty and Dimitri Antoniadis.

Contributions in Hoyt’s memory can be made to St. Jude’s Hospital or the Jimmy Fund in Boston.

Rewarding excellence in open data

The second annual MIT Prize for Open Data, which included a $2,500 cash prize, was recently awarded to 10 individual and group research projects. Presented jointly by the School of Science and the MIT Libraries, the prize highlights the value of open data — research data that is openly accessible and reusable — at the Institute. The prize winners and 12 honorable mention recipients were honored at the Open Data @ MIT event held Oct. 24 at Hayden Library. 

Conceived by Chris Bourg, director of MIT Libraries, and Rebecca Saxe, associate dean of the School of Science and the John W. Jarve (1978) Professor of Brain and Cognitive Sciences, the prize program was launched in 2022. It recognizes MIT-affiliated researchers who use or share open data, create infrastructure for open data sharing, or theorize about open data. Nominations were solicited from across the Institute, with a focus on trainees: undergraduate and graduate students, postdocs, and research staff. 

“The prize is explicitly aimed at early-career researchers,” says Bourg. “Supporting and encouraging the next generation of researchers will help ensure that the future of scholarship is characterized by a norm of open sharing.”

The 2023 awards were presented at a celebratory event held during International Open Access Week. Winners gave five-minute presentations on their projects and the role that open data plays in their research. The program also included remarks from Bourg and Anne White, School of Engineering Distinguished Professor of Engineering, vice provost, and associate vice president for research administration. White reflected on the ways in which MIT has demonstrated its values with the open sharing of research and scholarship and acknowledged the efforts of the honorees and advocates gathered at the event: “Thank you for the active role you’re all playing in building a culture of openness in research,” she said. “It benefits us all.” 

Winners were chosen from more than 80 nominees, representing all five MIT schools, the MIT Schwarzman College of Computing, and several research centers across the Institute. A committee composed of faculty, staff, and graduate students made the selections:

  • Hammaad Adam, graduate student in the Institute for Data, Systems, and Society, accepted on behalf of the team behind Organ Retrieval and Collection of Health Information for Donation (ORCHID), the first ever multi-center dataset dedicated to the organ procurement process. ORCHID provides the first opportunity to quantitatively analyze organ procurement organization decisions and identify operational inefficiencies.
  • Adam Atanas, postdoc in the Department of Brain and Cognitive Sciences (BCS), and Jungsoo Kim, graduate student in BCS, created WormWideWeb.org. The site, allowing researchers to easily browse and download C. elegans whole-brain datasets, will be useful to C. elegans neuroscientists and theoretical/computational neuroscientists.
     
  • Paul Berube, research scientist in the Department of Civil and Environmental Engineering, and Steven Biller, assistant professor of biological sciences at Wellesley College, won for “Unlocking Marine Microbiomes with Open Data.” Open data of genomes and metagenomes for marine ecosystems, with a focus on cyanobacteria, leverage the power of contemporaneous data from GEOTRACES and other long-standing ocean time-series programs to provide underlying information to answer questions about marine ecosystem function.
     
  • Jack Cavanagh, Sarah Kopper, and Diana Horvath of the Abdul Latif Jameel Poverty Action Lab (J-PAL) were recognized for J-PAL’s Data Publication Infrastructure, which includes a trusted repository of open-access datasets, a dedicated team of data curators, and coding tools and training materials to help other teams publish data in an efficient and ethical manner.
     
  • Jerome Patrick Cruz, graduate student in the Department of Political Science, won for OpenAudit, leveraging advances in natural language processing and machine learning to make data in public audit reports more usable for academics and policy researchers, as well as governance practitioners, watchdogs, and reformers. This work was done in collaboration with colleagues at Ateneo de Manila University in the Philippines.
     
  • Undergraduate student Daniel Kurlander created a tool for planetary scientists to rapidly access and filter images of the comet 67P/Churyumov-Gerasimenko. The web-based tool enables searches by location and other properties, does not require a time-intensive download of a massive dataset, allows analysis of the data independent of the speed of one’s computer, and does not require installation of a complex set of programs.
     
  • Halie Olson, postdoc in BCS, was recognized for sharing data from a functional magnetic resonance imaging (fMRI) study on language processing. The study used video clips from “Sesame Street” in which researchers manipulated the comprehensibility of the speech stream, allowing them to isolate a “language response” in the brain.
  • Thomas González Roberts, graduate student in the Department of Aeronautics and Astronautics, won for the International Telecommunication Union Compliance Assessment Monitor. This tool combats the heritage of secrecy in outer space operations by creating human- and machine-readable datasets that succinctly describe the international agreements that govern satellite operations.
     
  • Melissa Kline Struhl, research scientist in BCS, was recognized for Children Helping Science, a free, open-source platform for remote studies with babies and children that makes it possible for researchers at more than 100 institutions to conduct reproducible studies.
     
  • JS Tan, graduate student in the Department of Urban Studies and Planning, developed the Collective Action in Tech Archive in collaboration with Nataliya Nedzhvetskaya of the University of California at Berkeley. It is an open database of all publicly recorded collective actions taken by workers in the global tech industry. 

A complete list of winning projects and honorable mentions, including links to the research data, is available on the MIT Libraries website.

Computational imaging researcher attended a lecture, found her career

Soon after Kristina Monakhova started graduate school, she attended a lecture by Professor Laura Waller ’04, MEng ’05, PhD ’10, director of the University of California at Berkeley’s Computational Imaging Lab, who described a kind of computational microscopy with extremely high image resolution.

“The talk blew me away,” says Monakhova, who is currently an MIT-Boeing Distinguished Postdoctoral Fellow in MIT’s Department of Electrical Engineering and Computer Science. “It definitely changed my trajectory and put me on the path to where I am now. I knew right away that this is what I wanted to do. It was the perfect combination of signal processing, hardware, and algorithms, and I could use it to make more capable imaging sensors for diverse applications.”

Today, Monakhova’s research involves creating cameras and microscopes designed to produce not high-resolution images for human consumption, but rather information-dense images to be used by algorithms. She aspires to combine imaging system physics with deep learning.

She points out that the purpose of cameras has been fundamentally changed by automation. In many contexts, “people don’t look at the images; algorithms do,” she explains.

A good example of when the data in an image is more important than its visual representation or sharpness is in skin cancer diagnosis, where measuring specific light wavelengths using a hyperspectral camera can help determine whether a certain skin lesion is cancerous and, if so, malignant. While hyperspectral cameras generally cost more than $20,000, Monakhova has designed a cheap computational camera that could be adapted for such diagnosis.

Monakhova says she inherited her early academic ambition from her mother, who brought her to this country from Russia when she was 4 years old.

“My mother is my role model and inspiration. She immigrated to the U.S. as a single mother and raised me while completing her PhD in electrical engineering,” Monakhova says. “I remember spending my elementary school holidays sitting in her classes, drawing. She tried to get me excited about math and science as a child — and I guess she succeeded!”

By middle school, Monakhova had discovered her interest in engineering after joining a robotics team. When many years later she started graduate school at UC Berkeley, she chose robotics as her first lab, although Waller’s computational microscopy lecture drew her away to Waller’s lab and to her current field of research.

Starting in the MIT Postdoctoral Fellowship Program for Engineering Excellence in fall 2022, Monakhova experienced another life-changing event.

“My daughter was born on the first day of work at MIT, making for a particularly exciting first day,” she says.

Born four weeks early, the baby required an elaborate system of feeding, a process that took almost two hours — and needed to be repeated in three-hour increments, which left the parents just one out of every three hours to do everything else.

“The first four or five months were a whirlwind of challenges and emotions and doctor appointments,” Monakhova says.

Despite those challenges, the new mother continued with her fellowship. Knowing that a postdoc is often a bridge to a faculty position, she took special advantage of a series of program presentations focused on what it’s like to be a professor and the academic job search process. Although the presentations took place while she was on maternity leave and she wasn’t required to participate, Monakhova still attended via Zoom.

“I could call in and listen while breastfeeding my newborn infant,” she says. “I went on the academic job market, and this series was useful to help me get my job materials together and prepare for my interviews.”


Monakhova says she is “thankful that MIT has a relatively good maternity and family leave policy, as well as crucial resources, such as lactation rooms, back-up daycare, and a fantastic on-campus daycare program with financial aid available. Without these resources and support, I would have had to quit my career. In order to attract and retain women in science and engineering, we need family-friendly policies that don’t penalize women for having babies.”

By June, Monakhova had landed a position as an assistant professor at Cornell University’s Department of Computer Science. Having deferred the appointment, she’ll start in fall 2024.

Referring to her upcoming work as a professor and lab leader at Cornell, Monakhova says, “I’m particularly excited to try to set up a collaborative, friendly lab culture where mental health and work-life balance are prioritized, and failure is seen as an important step in the research process.”

Throughout her academic career, Monakhova says community has been extremely important. The MIT Postdoctoral Fellowship Program for Engineering Excellence, which was designed to develop the next generation of faculty leaders and help guide MIT’s School of Engineering toward supporting more women and others who are underrepresented in engineering, allowed her to explore new research questions in a different area and work with “some amazing MIT students on some exciting projects.”

“I believe it’s important to help each other out and create a welcoming environment where everyone has the support and resources they need to thrive,” says Monakhova, who has an exemplary record of mentoring and giving back. “Research and breakthroughs don’t happen in isolation — they’re the result of teams and communities of people working together.”

Student Spotlight: Isabella Pedraza Piñeros


We’re debuting a new series of short interviews on the MIT EECS site, called Student Spotlights. Each Spotlight will feature a student answering their choice of questions about themselves and life at MIT. Our first subject, Isabella Pedraza Piñeros, is a first-year MEng student in the Department of EECS; she graduated with her bachelor’s degree in Computer Science and Engineering in the spring of 2023.

What’s your favorite building or room within MIT, and what’s special about it to you?

Have you ever been to building 14? No matter the season, day, or time, that’s where I’m at my happiest. The folks at Hayden Cafe are always down for a chat in French, the indoor/outdoor area near the courtyard is a perfect spot for some top-notch daydreaming, and the comfortable couches paired with practical tables make it the best spot to chill with a laptop and maybe get some work done… or just kick back and people-watch!

What’s your favorite food found on, or near, campus?

I absolutely love Beantown Taqueria. I can always walk in and speak Spanish, enjoy their delicious (and free!) café de olla, and have some of the best Mexican food in Boston. By the way, they also give student discounts!

Tell me about one conversation that changed the trajectory of your life.

I never imagined that a conversation I had for the first assignment of a class could change my life – but it did. A class I took called 6.1040 (Software Studio) encouraged us to interview people with perspectives different from our own as a means to understand how to enhance Twitter’s design and user interactions. It was through this assignment that I met Jerry Berrier, the then-vice president of the Visually Impaired and Blind User Group (VIBUG) and an enthusiastic Twitter user. Jerry, who regularly shared his experiences with birding by ear on Twitter, showed me how he used Twitter with a screen reader and the difficulties in doing so. This encounter sparked my passion for accessible software design, and led to my involvement with the MIT Visualization Group, under the mentorship of Arvind Satyanarayan and Jonathan Zong. Today, my current research and thesis are focused on using large language models (LLMs) not only to make data visualizations more accessible, but also to enhance how we interpret and engage with them. The goal is to bridge the gap between complex data sets and the diverse needs of users, ensuring that everyone—regardless of ability—can benefit from the insights that data visualizations can provide. Reflecting on it now, I still can’t believe how a single conversation for a routine assignment evolved into a profound catalyst, significantly altering the direction of my academic and professional path.

If you had to teach a really in-depth class about one niche topic, what would you pick?

It would have to be the chemistry of skincare! I would love to teach others how to research novel compounds, what kinds of ingredients best suit their personal preferences, and how to differentiate between clinically proven claims and false advertising. I always find myself spending hours learning about a product’s molecular structure and chemical composition, and would love to teach others about this too.

Accelerating AI tasks while preserving data security

With the proliferation of computationally intensive machine-learning applications, such as chatbots that perform real-time language translation, device manufacturers often incorporate specialized hardware components to rapidly move and process the massive amounts of data these systems demand.

Choosing the best design for these components, known as deep neural network accelerators, is challenging because they can have an enormous range of design options. This difficult problem becomes even thornier when a designer seeks to add cryptographic operations to keep data safe from attackers.

Now, MIT researchers have developed a search engine that can efficiently identify optimal designs for deep neural network accelerators that preserve data security while boosting performance.

Their search tool, known as SecureLoop, is designed to consider how the addition of data encryption and authentication measures will impact the performance and energy usage of the accelerator chip. An engineer could use this tool to obtain the optimal design of an accelerator tailored to their neural network and machine-learning task.

When compared to conventional scheduling techniques that don’t consider security, SecureLoop can improve performance of accelerator designs while keeping data protected.  

Using SecureLoop could help a user improve the speed and performance of demanding AI applications, such as autonomous driving or medical image classification, while ensuring sensitive user data remains safe from some types of attacks.

“If you are interested in doing a computation where you are going to preserve the security of the data, the rules that we used before for finding the optimal design are now broken. So all of that optimization needs to be customized for this new, more complicated set of constraints. And that is what [lead author] Kyungmi has done in this paper,” says Joel Emer, an MIT professor of the practice in computer science and electrical engineering and co-author of a paper on SecureLoop.

Emer is joined on the paper by lead author Kyungmi Lee, an electrical engineering and computer science graduate student; Mengjia Yan, the Homer A. Burnell Career Development Assistant Professor of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Anantha Chandrakasan, dean of the MIT School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science. The research will be presented at the IEEE/ACM International Symposium on Microarchitecture.

“The community passively accepted that adding cryptographic operations to an accelerator will introduce overhead. They thought it would introduce only a small variance in the design trade-off space. But, this is a misconception. In fact, cryptographic operations can significantly distort the design space of energy-efficient accelerators. Kyungmi did a fantastic job identifying this issue,” Yan adds.

Secure acceleration

A deep neural network consists of many layers of interconnected nodes that process data. Typically, the output of one layer becomes the input of the next layer. Data are grouped into units called tiles for processing and transfer between off-chip memory and the accelerator. Each layer of the neural network can have its own data tiling configuration.

A deep neural network accelerator is a processor with an array of computational units that parallelizes operations, like multiplication, in each layer of the network. The accelerator schedule describes how data are moved and processed.

Since space on an accelerator chip is at a premium, most data are stored in off-chip memory and fetched by the accelerator when needed. But because data are stored off-chip, they are vulnerable to an attacker who could steal information or change some values, causing the neural network to malfunction.

“As a chip manufacturer, you can’t guarantee the security of external devices or the overall operating system,” Lee explains.

Manufacturers can protect data by adding authenticated encryption to the accelerator. Encryption scrambles the data using a secret key. Then authentication cuts the data into uniform chunks and assigns a cryptographic hash to each chunk of data, which is stored along with the data chunk in off-chip memory.
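The encrypt-and-tag workflow described above can be sketched in a few lines. This is a toy illustration, not the accelerator’s actual cryptography: the XOR “keystream” stands in for a real cipher such as AES-CTR, and the chunk size is an arbitrary choice, but the per-chunk tagging and the verify-before-use step mirror the workflow.

```python
import hashlib
import hmac

CHUNK_SIZE = 64  # bytes per authentication chunk (illustrative choice)

def protect(data: bytes, enc_key: bytes, mac_key: bytes):
    """Split data into fixed-size chunks; 'encrypt' each chunk and tag it.

    The XOR keystream below is a stand-in for a real cipher, kept trivial
    so the chunk/tag bookkeeping stays visible.
    """
    protected = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        keystream = hashlib.sha256(enc_key + offset.to_bytes(8, "big")).digest()
        ciphertext = bytes(b ^ keystream[i % 32] for i, b in enumerate(chunk))
        tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
        protected.append((ciphertext, tag))  # both stored off-chip
    return protected

def fetch(protected, index: int, enc_key: bytes, mac_key: bytes) -> bytes:
    """Verify a chunk's tag, then decrypt it -- mirroring what the
    accelerator must do before processing fetched data."""
    ciphertext, tag = protected[index]
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed: off-chip data was tampered with")
    offset = index * CHUNK_SIZE
    keystream = hashlib.sha256(enc_key + offset.to_bytes(8, "big")).digest()
    return bytes(b ^ keystream[i % 32] for i, b in enumerate(ciphertext))
```

Any modification to a stored chunk changes its HMAC, so `fetch` rejects tampered data before it ever reaches the compute units.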

When the accelerator fetches an encrypted chunk of data, known as an authentication block, it uses a secret key to recover and verify the original data before processing it.

But the sizes of authentication blocks and tiles of data don’t match up, so there could be multiple tiles in one block, or a tile could be split between two blocks. The accelerator can’t arbitrarily grab a fraction of an authentication block, so it may end up grabbing extra data, which uses additional energy and slows down computation.
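The cost of this mismatch can be quantified with simple arithmetic. The sketch below (hypothetical helper names, byte-granularity addressing assumed) counts how many whole authentication blocks a tile overlaps and how many extra bytes that forces the accelerator to fetch.

```python
def blocks_touched(tile_start: int, tile_size: int, block_size: int) -> int:
    """Number of authentication blocks a tile overlaps: the accelerator
    must fetch (and verify) every one of them in full."""
    first = tile_start // block_size
    last = (tile_start + tile_size - 1) // block_size
    return last - first + 1

def extra_bytes(tile_start: int, tile_size: int, block_size: int) -> int:
    """Bytes fetched beyond the tile itself, because only whole blocks
    can be verified and decrypted."""
    return blocks_touched(tile_start, tile_size, block_size) * block_size - tile_size

# A 100-byte tile against 64-byte authentication blocks:
# starting at byte 0 it spans 2 blocks (128 bytes fetched, 28 wasted);
# starting at byte 60 it spans 3 blocks (192 bytes fetched, 92 wasted).
```

The same tile can waste very different amounts of traffic depending on where it lands relative to block boundaries, which is exactly the alignment effect a good schedule tries to minimize.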

Plus, the accelerator still must run the cryptographic operation on each authentication block, adding even more computational cost.

An efficient search engine

With SecureLoop, the MIT researchers sought a method that could identify the fastest and most energy efficient accelerator schedule — one that minimizes the number of times the device needs to access off-chip memory to grab extra blocks of data because of encryption and authentication.  

They began by augmenting an existing search engine Emer and his collaborators previously developed, called Timeloop. First, they added a model that could account for the additional computation needed for encryption and authentication.

Then, they reformulated the search problem into a simple mathematical expression, which enables SecureLoop to find the ideal authentication block size in a much more efficient manner than searching through all possible options.

“Depending on how you assign this block, the amount of unnecessary traffic might increase or decrease. If you assign the cryptographic block cleverly, then you can just fetch a small amount of additional data,” Lee says.

Finally, they incorporated a heuristic technique that ensures SecureLoop identifies a schedule which maximizes the performance of the entire deep neural network, rather than only a single layer.

At the end, the search engine outputs an accelerator schedule, which includes the data tiling strategy and the size of the authentication blocks, that provides the best possible speed and energy efficiency for a specific neural network.
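As a rough illustration of what such a search optimizes, the sketch below scores candidate authentication-block sizes under a toy cost model (off-chip bytes moved plus a fixed per-block crypto cost) and brute-forces the minimum. SecureLoop itself replaces this exhaustive scan with a closed-form expression and uses a far richer cost model; all names and numbers here are invented for illustration.

```python
def schedule_cost(tiles, block_size: int, crypto_cost_per_block: int = 8) -> int:
    """Toy cost: whole-block bytes fetched plus a fixed per-block crypto charge.
    `tiles` is a list of (start, size) byte ranges for one layer's schedule."""
    total = 0
    for start, size in tiles:
        first = start // block_size
        last = (start + size - 1) // block_size
        n_blocks = last - first + 1
        total += n_blocks * block_size + n_blocks * crypto_cost_per_block
    return total

def best_block_size(tiles, candidates):
    """Exhaustively score candidate authentication-block sizes and keep the
    cheapest -- a brute-force stand-in for SecureLoop's analytical step."""
    return min(candidates, key=lambda b: schedule_cost(tiles, b))

tiles = [(i * 100, 100) for i in range(16)]  # 100-byte tiles laid out back to back
best = best_block_size(tiles, [16, 32, 64, 128, 256])
```

For this layout the search settles on a middle size: tiny blocks pay the per-block crypto charge too often, while huge blocks drag in many bytes the tiles never needed.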

“The design spaces for these accelerators are huge. What Kyungmi did was figure out some very pragmatic ways to make that search tractable so she could find good solutions without needing to exhaustively search the space,” says Emer.

When tested in a simulator, SecureLoop identified schedules that were up to 33.2 percent faster and exhibited 50.2 percent better energy delay product (a metric related to energy efficiency) than other methods that didn’t consider security.

The researchers also used SecureLoop to explore how the design space for accelerators changes when security is considered. They learned that allocating a bit more of the chip’s area for the cryptographic engine and sacrificing some space for on-chip memory can lead to better performance, Lee says.

In the future, the researchers want to use SecureLoop to find accelerator designs that are resilient to side-channel attacks, which occur when an attacker has access to physical hardware. For instance, an attacker could monitor the power consumption pattern of a device to obtain secret information, even if the data have been encrypted. They are also extending SecureLoop so it could be applied to other kinds of computation.

This work is funded, in part, by Samsung Electronics and the Korea Foundation for Advanced Studies.

Forging climate connections across the Institute

Climate change is the ultimate cross-cutting issue: Not limited to any one discipline, it ranges across science, technology, policy, culture, human behavior, and well beyond. The response to it likewise requires an all-of-MIT effort.

Now, to strengthen such an effort, a new grant program spearheaded by the Climate Nucleus, the faculty committee charged with the oversight and implementation of Fast Forward: MIT’s Climate Action Plan for the Decade, aims to build up MIT’s climate leadership capacity while also supporting innovative scholarship on diverse climate-related topics and forging new connections across the Institute.

Called the Fast Forward Faculty Fund (F^4 for short), the program has named its first cohort of six faculty members after issuing its inaugural call for proposals in April 2023. The cohort will come together throughout the year for climate leadership development programming and networking. The program provides financial support for graduate students who will work with the faculty members on the projects — the students will also participate in leadership-building activities — as well as $50,000 in flexible, discretionary funding to be used to support related activities. 

“Climate change is a crisis that truly touches every single person on the planet,” says Noelle Selin, co-chair of the nucleus and interim director of the Institute for Data, Systems, and Society. “It’s therefore essential that we build capacity for every member of the MIT community to make sense of the problem and help address it. Through the Fast Forward Faculty Fund, our aim is to have a cohort of climate ambassadors who can embed climate everywhere at the Institute.”

F^4 supports both faculty who would like to begin doing climate-related work, as well as faculty members who are interested in deepening their work on climate. The program has the core goal of developing cohorts of F^4 faculty and graduate students who, in addition to conducting their own research, will become climate leaders at MIT, proactively looking for ways to forge new climate connections across schools, departments, and disciplines.

One of the projects, “Climate Crisis and Real Estate: Science-based Mitigation and Adaptation Strategies,” led by Professor Siqi Zheng of the MIT Center for Real Estate in collaboration with colleagues from the MIT Sloan School of Management, focuses on the roughly 40 percent of carbon dioxide emissions that come from the buildings and real estate sector. Zheng notes that this sector has been slow to respond to climate change, but says that is starting to change, thanks in part to the rising awareness of climate risks and new local regulations aimed at reducing emissions from buildings.

Using a data-driven approach, the project seeks to understand the efficient and equitable market incentives, technology solutions, and public policies that are most effective at transforming the real estate industry. Johnattan Ontiveros, a graduate student in the Technology and Policy Program, is working with Zheng on the project.

“We were thrilled at the incredible response we received from the MIT faculty to our call for proposals, which speaks volumes about the depth and breadth of interest in climate at MIT,” says Anne White, nucleus co-chair and vice provost and associate vice president for research. “This program makes good on key commitments of the Fast Forward plan, supporting cutting-edge new work by faculty and graduate students while helping to deepen the bench of climate leaders at MIT.”

During the 2023-24 academic year, the F^4 faculty and graduate student cohorts will come together to discuss their projects, explore opportunities for collaboration, participate in climate leadership development, and think proactively about how to deepen interdisciplinary connections among MIT community members interested in climate change.

The six inaugural F^4 awardees are:

Professor Tristan Brown, History Section: Humanistic Approaches to the Climate Crisis  

With this project, Brown aims to create a new community of practice around narrative-centric approaches to environmental and climate issues. Part of a broader humanities initiative at MIT, it brings together a global working group of interdisciplinary scholars, including Serguei Saavedra (Department of Civil and Environmental Engineering) and Or Porath (Tel Aviv University; Religion), collectively focused on examining the historical and present links between sacred places and biodiversity for the purposes of helping governments and nongovernmental organizations formulate better sustainability goals. Boyd Ruamcharoen, a PhD student in the History, Anthropology, and Science, Technology, and Society (HASTS) program, will work with Brown on this project.

Professor Kerri Cahoy, departments of Aeronautics and Astronautics and Earth, Atmospheric and Planetary Sciences (AeroAstro): Onboard Autonomous AI-driven Satellite Sensor Fusion for Coastal Region Monitoring

The motivation for this project is the need for much better data collection from satellites, where technology can be “20 years behind,” says Cahoy. As part of this project, Cahoy will pursue research in the area of autonomous artificial intelligence-enabled rapid sensor fusion (which combines data from different sensors, such as radar and cameras) onboard satellites to improve understanding of the impacts of climate change, specifically sea-level rise and hurricanes and flooding in coastal regions. Graduate students Madeline Anderson, a PhD student in electrical engineering and computer science (EECS), and Mary Dahl, a PhD student in AeroAstro, will work with Cahoy on this project.

Professor Priya Donti, Department of Electrical Engineering and Computer Science: Robust Reinforcement Learning for High-Renewables Power Grids 

With renewables like wind and solar making up a growing share of electricity generation on power grids, Donti’s project focuses on improving control methods for these distributed sources of electricity. The research will aim to create a realistic representation of the characteristics of power grid operations, and eventually inform scalable operational improvements in power systems. It will “give power systems operators faith that, OK, this conceptually is good, but it also actually works on this grid,” says Donti. PhD candidate Ana Rivera from EECS is the F^4 graduate student on the project.

Professor Jason Jackson, Department of Urban Studies and Planning (DUSP): Political Economy of the Climate Crisis: Institutions, Power and Global Governance

This project takes a political economy approach to the climate crisis, offering a distinct lens to examine, first, the political governance challenge of mobilizing climate action and designing new institutional mechanisms to address the global and intergenerational distributional aspects of climate change; second, the economic challenge of devising new institutional approaches to equitably finance climate action; and third, the cultural challenge — and opportunity — of empowering an adaptive socio-cultural ecology through traditional knowledge and local-level social networks to achieve environmental resilience. Graduate students Chen Chu and Mrinalini Penumaka, both PhD students in DUSP, are working with Jackson on the project.

Professor Haruko Wainwright, departments of Nuclear Science and Engineering (NSE) and Civil and Environmental Engineering: Low-cost Environmental Monitoring Network Technologies in Rural Communities for Addressing Climate Justice 

This project will establish a community-based climate and environmental monitoring network, in addition to a data visualization and analysis infrastructure, in rural marginalized communities to better understand and address climate justice issues. The project team plans to work with rural communities in Alaska to install low-cost air and water quality, weather, and soil sensors. Graduate students Kay Whiteaker, an MS candidate in NSE, and Amandeep Singh, an MS candidate in System Design and Management at Sloan, are working with Wainwright on the project, as is David McGee, professor in earth, atmospheric, and planetary sciences.

Professor Siqi Zheng, MIT Center for Real Estate and DUSP: Climate Crisis and Real Estate: Science-based Mitigation and Adaptation Strategies 

See the text above for the details on this project.