For more open and equitable public discussions on social media, try “meronymity”

Have you ever felt reluctant to share ideas during a meeting because you feared judgment from senior colleagues? You’re not alone. Research has shown this pervasive issue can lead to a lack of diversity in public discourse, especially when junior members of a community don’t speak up because they feel intimidated.

Anonymous communication can alleviate that fear and empower individuals to speak their minds, but anonymity also eliminates important social context and can quickly skew too far in the other direction, leading to toxic or hateful speech.

MIT researchers addressed these issues by designing a framework for identity disclosure in public conversations that falls somewhere in the middle, using a concept called “meronymity.”

Meronymity (from the Greek words for “partial” and “name”) allows people in a public discussion space to selectively reveal only relevant, verified aspects of their identity.

The researchers implemented meronymity in a communication system they built called LiTweeture, which is aimed at helping junior scholars use social media to ask research questions.

In LiTweeture, users can reveal a few professional facts, such as their academic affiliation or expertise in a certain field, which lends credibility to their questions or answers while shielding their exact identity.

Users have the flexibility to choose what they reveal about themselves each time they compose a social media post. They can also leverage existing relationships for endorsements that help queries reach experts they otherwise might be reluctant to contact.

During a monthlong study, junior academics who tested LiTweeture said meronymous communication made them feel more comfortable asking questions and encouraged them to engage with senior scholars on social media.

And while this study focused on academia, meronymous communication could be applied to any community or discussion space, says electrical engineering and computer science graduate student Nouran Soliman.

“With meronymity, we wanted to strike a balance between credibility and social inhibition. How can we make people feel more comfortable contributing and leveraging this rich community while still having some accountability?” says Soliman, lead author of a paper on meronymity.

Soliman wrote the paper with her advisor and senior author David Karger, professor in the Department of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL), as well as others on the Semantic Scholar team at the Allen Institute for AI, the University of Washington, and Carnegie Mellon University. The research will be presented at the ACM Conference on Human Factors in Computing Systems.

Breaking down social barriers

The researchers began by conducting an initial study with 20 scholars to better understand the motivations and social barriers they face when engaging online with other academics.

They found that, while academics consider X (formerly Twitter) and Mastodon key resources when seeking help with research, they are often reluctant to ask for, discuss, or share recommendations.

Many respondents worried that asking for help would make them appear unknowledgeable about a certain subject, or feared public embarrassment if their posts were ignored.

The researchers developed LiTweeture to enable scholars to selectively present relevant facets of their identity when using social media to ask for research help.

But such identity markers, or “meronyms,” only give someone credibility if they are verified. So the researchers connected LiTweeture to Semantic Scholar, a web service that creates verified academic profiles for scholars detailing their education, affiliations, and publication history.

LiTweeture uses someone’s Semantic Scholar profile to automatically generate a set of meronyms they can choose to include with each social media post they compose. A meronym might be something like, “third-year graduate student at a research institution who has five publications at computer science conferences.”
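The mapping from a verified profile to candidate meronyms can be sketched in a few lines of Python. Everything here, from the profile fields to the phrasing templates, is illustrative; the article does not describe LiTweeture’s actual schema.

```python
def generate_meronyms(profile):
    """Build candidate identity facets ("meronyms") from a verified
    scholarly profile. Field names and templates are hypothetical --
    LiTweeture's real schema is not published in this article."""
    meronyms = []
    if profile.get("position") and profile.get("institution_type"):
        meronyms.append(f'{profile["position"]} at a {profile["institution_type"]}')
    pubs = profile.get("publications", [])
    if pubs:
        # One meronym per publication area, with a count for credibility.
        for area in sorted({p["venue_area"] for p in pubs}):
            count = sum(1 for p in pubs if p["venue_area"] == area)
            plural = "s" if count != 1 else ""
            meronyms.append(f"{count} publication{plural} at {area} conferences")
    return meronyms

profile = {
    "position": "third-year graduate student",
    "institution_type": "research institution",
    "publications": [{"venue_area": "computer science"}] * 5,
}
print(generate_meronyms(profile))
```

The user would then pick which of these generated facets to attach to each individual post.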

A user writes a query and chooses the meronyms to appear with this specific post. LiTweeture then posts the query and meronyms to X and Mastodon.

The user can also identify desired responders — perhaps certain researchers with relevant expertise — who will receive the query through a direct social media message or email. Users can personalize their meronyms for these experts, perhaps mentioning common colleagues or similar research projects.

Sharing social capital

Users can also leverage connections by sharing their full identity with individuals who serve as public endorsers, such as an academic advisor or lab mate. Endorsements can encourage experts to respond to the asker’s query.

“The endorsement lets a senior figure donate some of their social capital to people who don’t have as much of it,” Karger says.

In addition, users can recruit close colleagues and peers to be helpers who are willing to repost their query so it reaches a wider audience.

Responders can answer queries using meronyms, which encourages potentially shy academics to offer their expertise, Soliman says.

The researchers tested LiTweeture during a field study with 13 junior academics who were tasked with writing and responding to queries. Participants said meronymous interactions gave them confidence when asking for help and provided high-quality recommendations.

Participants also used meronyms to seek a certain kind of answer. For instance, a user might disclose their publication history to signal that they are not seeking the most basic recommendations. When responding, individuals used identity signals to reflect their level of confidence in a recommendation, for example by disclosing their expertise.

“That implicit signaling was really interesting to see. I was also very excited to see that people wanted to connect with others based on their identity signals. This sense of relation also motivated some responders to make more effort when answering questions,” Soliman says.

Now that they have built a framework around academia, the researchers want to apply meronymity to other online communities and general social media conversations, especially those around issues where there is a lot of conflict, like politics. But to do that, they will need to find an effective, scalable way for people to present verified aspects of their identities.

“I think this is a tool that could be very helpful in many communities. But we have to figure out how to thread the needle on social inhibition. How can we create an environment where everyone feels safe speaking up, but also preserve enough accountability to discourage bad behavior?” says Karger.

“Meronymity is not just a concept; it’s a novel technique that subtly blends aspects of identity and anonymity, creating a platform where credibility and privacy coexist. It changes digital communications by allowing safe engagement without full exposure, addressing the traditional anonymity-accountability trade-off. Its impact reaches beyond academia, fostering inclusivity and trust in digital interactions,” says Saiph Savage, assistant professor and director of the Civic A.I. Lab in the Khoury College of Computer Sciences at Northeastern University, who was not involved with this work.

This research was funded, in part, by Semantic Scholar.

Four MIT faculty named 2023 AAAS Fellows

Four MIT faculty members have been elected as fellows of the American Association for the Advancement of Science (AAAS).

The 2023 class of AAAS Fellows includes 502 scientists, engineers, and innovators across 24 scientific disciplines, who are being recognized for their scientifically and socially distinguished achievements.  

Bevin Engelward initiated her scientific journey at Yale University under the mentorship of Thomas Steitz; following this, she pursued her doctoral studies at the Harvard School of Public Health under Leona Samson. In 1997, she became a faculty member at MIT, contributing to the establishment of the Department of Biological Engineering. Engelward’s research focuses on understanding DNA sequence rearrangements and developing innovative technologies for detecting genomic damage, all aimed at enhancing global public health initiatives.

William Oliver is the Henry Ellis Warren Professor of Electrical Engineering and Computer Science with a joint appointment in the Department of Physics, and was recently a Lincoln Laboratory Fellow. He serves as director of the Center for Quantum Engineering and associate director of the Research Laboratory of Electronics, and is a member of the National Quantum Initiative Advisory Committee. His research spans the materials growth, fabrication, 3D integration, design, control, and measurement of superconducting qubits and their use in small-scale quantum processors. He also develops cryogenic packaging and control electronics involving cryogenic complementary metal-oxide-semiconductors and single-flux quantum digital logic.

Daniel Rothman is a professor of geophysics in the Department of Earth, Atmospheric, and Planetary Sciences and co-director of the MIT Lorenz Center, a privately funded interdisciplinary research center devoted to learning how climate works. As a theoretical scientist, Rothman studies how the organization of the natural world emerges from the interactions of life and the physical environment. Using mathematics and statistical and nonlinear physics, he builds models that predict or explain observational data, contributing to our understanding of the dynamics of the carbon cycle and climate, instabilities and tipping points in the Earth system, and the dynamical organization of the microbial biosphere.

Vladan Vuletić is the Lester Wolfe Professor of Physics. His research areas include ultracold atoms, laser cooling, large-scale quantum entanglement, quantum optics, precision tests of physics beyond the Standard Model, and quantum simulation and computing with trapped neutral atoms. His Experimental Atomic Physics Group is also affiliated with the MIT-Harvard Center for Ultracold Atoms and the Research Laboratory of Electronics. In 2020, his group showed that the precision of current atomic clocks could be improved by entangling the atoms — a quantum phenomenon by which particles are coerced to behave in a collective, highly correlated state. 

To build a better AI helper, start by modeling the irrational behavior of humans

To build AI systems that can collaborate effectively with humans, it helps to have a good model of human behavior to start with. But humans tend to behave suboptimally when making decisions.

This irrationality, which is especially difficult to model, often boils down to computational constraints. A human can’t spend decades thinking about the ideal solution to a single problem.

Researchers at MIT and the University of Washington developed a way to model the behavior of an agent, whether human or machine, that accounts for the unknown computational constraints that may hamper the agent’s problem-solving abilities.

Their model can automatically infer an agent’s computational constraints by seeing just a few traces of its previous actions. The result, the agent’s so-called “inference budget,” can be used to predict that agent’s future behavior.

In a new paper, the researchers demonstrate how their method can be used to infer someone’s navigation goals from prior routes and to predict players’ subsequent moves in chess matches. Their technique matches or outperforms another popular method for modeling this type of decision-making.

Ultimately, this work could help scientists teach AI systems how humans behave, which could enable these systems to respond better to their human collaborators. Being able to understand a human’s behavior, and then to infer their goals from that behavior, could make an AI assistant much more useful, says Athul Paul Jacob, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this technique.

“If we know that a human is about to make a mistake, having seen how they have behaved before, the AI agent could step in and offer a better way to do it. Or the agent could adapt to the weaknesses that its human collaborators have. Being able to model human behavior is an important step toward building an AI agent that can actually help that human,” he says.

Jacob wrote the paper with Abhishek Gupta, assistant professor at the University of Washington, and senior author Jacob Andreas, associate professor in EECS and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the International Conference on Learning Representations.

Modeling behavior

Researchers have been building computational models of human behavior for decades. Many prior approaches try to account for suboptimal decision-making by adding noise to the model. Instead of the agent always choosing the correct option, the model might have that agent make the correct choice 95 percent of the time.

However, these methods can fail to capture the fact that humans do not always behave suboptimally in the same way.

Others at MIT have also studied more effective ways to plan and infer goals in the face of suboptimal decision-making.

To build their model, Jacob and his collaborators drew inspiration from prior studies of chess players. They noticed that players took less time to think before acting when making simple moves and that stronger players tended to spend more time planning than weaker ones in challenging matches.

“At the end of the day, we saw that the depth of the planning, or how long someone thinks about the problem, is a really good proxy of how humans behave,” Jacob says.

They built a framework that could infer an agent’s depth of planning from prior actions and use that information to model the agent’s decision-making process.

The first step in their method involves running an algorithm for a set amount of time to solve the problem being studied. For instance, if they are studying a chess match, they might let the chess-playing algorithm run for a certain number of steps. At the end, the researchers can see the decisions the algorithm made at each step.

Their model compares these decisions to the behaviors of an agent solving the same problem. It will align the agent’s decisions with the algorithm’s decisions and identify the step where the agent stopped planning.

From this, the model can determine the agent’s inference budget, or how long that agent will plan for this problem. It can use the inference budget to predict how that agent would react when solving a similar problem.
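The alignment step described above can be illustrated with a toy maximum-agreement estimator: compare the agent’s observed decisions against what an anytime planner decides after each additional step of search, and pick the depth that best explains the agent. The paper’s actual estimator is probabilistic and more sophisticated; this sketch only conveys the core idea.

```python
def infer_budget(planner_decisions_per_depth, agent_decisions):
    """Estimate an agent's 'inference budget': the planning depth whose
    decisions best agree with the agent's observed actions. A toy
    version of the idea, not the paper's actual method."""
    best_depth, best_score = 0, -1
    for depth, decisions in enumerate(planner_decisions_per_depth):
        score = sum(a == b for a, b in zip(decisions, agent_decisions))
        if score > best_score:
            best_depth, best_score = depth, score
    return best_depth

# Decisions a planner makes on four problems after 0..3 steps of search:
planner = [
    ["left", "left", "up", "up"],        # depth 0: shallow, often wrong
    ["right", "left", "up", "down"],     # depth 1
    ["right", "left", "down", "down"],   # depth 2
    ["right", "right", "down", "down"],  # depth 3: fully optimal
]
agent = ["right", "left", "down", "down"]  # best matches depth-2 planning
print(infer_budget(planner, agent))        # 2
```

Once the budget is known, predicting the agent on a new problem amounts to running the planner for that same number of steps.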

An interpretable solution

This method can be very efficient because the researchers can access the full set of decisions made by the problem-solving algorithm without doing any extra work. This framework could also be applied to any problem that can be solved with a particular class of algorithms.

“For me, the most striking thing was the fact that this inference budget is very interpretable. It is saying tougher problems require more planning or being a strong player means planning for longer. When we first set out to do this, we didn’t think that our algorithm would be able to pick up on those behaviors naturally,” Jacob says.

The researchers tested their approach in three different modeling tasks: inferring navigation goals from previous routes, guessing someone’s communicative intent from their verbal cues, and predicting subsequent moves in human-human chess matches.

Their method either matched or outperformed a popular alternative in each experiment. Moreover, the researchers saw that their model of human behavior matched up well with measures of player skill (in chess matches) and task difficulty.

Moving forward, the researchers want to use this approach to model the planning process in other domains, such as reinforcement learning (a trial-and-error method commonly used in robotics). In the long run, they intend to keep building on this work toward the larger goal of developing more effective AI collaborators.

This work was supported, in part, by the MIT Schwarzman College of Computing Artificial Intelligence for Augmentation and Productivity program and the National Science Foundation.

A crossroads for computing at MIT

On Vassar Street, in the heart of MIT’s campus, the MIT Stephen A. Schwarzman College of Computing recently opened the doors to its new headquarters in Building 45. The building’s central location and welcoming design will help form a new cluster of connectivity at MIT and enable the space to have a multifaceted role. 

“The college has a broad mandate for computing across MIT,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “The building is designed to be the computing crossroads of the campus. It’s a place to bring a mix of people together to connect, engage, and catalyze collaborations in computing, and a home to a related set of computing research groups from multiple departments and labs.”

“Computing is the defining technology of our time and it will continue to be, well into the future,” says MIT President Sally Kornbluth. “As the people of MIT make progress in high-impact fields from AI to climate, this fantastic new building will enable collaboration across computing, engineering, biological science, economics, and countless other fields, encouraging the cross-pollination of ideas that inspires us to generate fresh solutions. The college has opened its doors at just the right time.”

Designed to engage the campus community and the public, the first two floors of the building feature multiple convening areas, including a 60-seat classroom, a 250-seat lecture hall, and an assortment of spaces for studying and social interactions. Photo credit: Dave Burke/SOM

A physical embodiment

An approximately 178,000-square-foot, eight-floor structure, the building is designed to be a physical embodiment of the MIT Schwarzman College of Computing’s three-fold mission: strengthen core computer science and artificial intelligence; infuse the forefront of computing with disciplines across MIT; and advance social, ethical, and policy dimensions of computing.

Oriented for the campus community and the public to come in and engage with the college, the first two floors of the building encompass multiple convening areas, including a 60-seat classroom, a 250-seat lecture hall, and an assortment of spaces for studying and social interactions.

Academic activity has commenced in both the lecture hall and classroom this semester with 13 classes for undergraduate and graduate students. Subjects include 6.C35/6.C85 (Interactive Data Visualization and Society), a class taught by faculty from the departments of Electrical Engineering and Computer Science (EECS) and Urban Studies and Planning. The class was created as part of the Common Ground for Computing Education, a cross-cutting initiative of the college that brings multiple departments together to develop and teach new courses and launch new programs that blend computing with other disciplines.

“The new college building is catering not only to educational and research needs, but also fostering extensive community connections. It has been particularly exciting to see faculty teaching classes in the building and the lobby bustling with students on any given day, engrossed in their studies or just enjoying the space while taking a break,” says Asu Ozdaglar, deputy dean of the MIT Schwarzman College of Computing and head of EECS.

The building will also accommodate 50 computing research groups, which correspond to the number of new faculty the college is hiring — 25 in core computing positions and 25 in shared positions with departments at MIT. These groups bring together a mix of new and existing teams in related research areas spanning floors four through seven of the building.

In mid-January, the initial two dozen research groups moved into the building, including faculty from the departments of EECS; Aeronautics and Astronautics; Brain and Cognitive Sciences; Mechanical Engineering; and Economics who are affiliated with the Computer Science and Artificial Intelligence Laboratory and the Laboratory for Information and Decision Systems. The research groups form a coherent overall cluster in deep learning and generative AI, natural language processing, computer vision, robotics, reinforcement learning, game theoretic methods, and societal impact of AI.

More will follow suit, including some of the 10 faculty who have been hired into shared positions by the college with the departments of Brain and Cognitive Sciences; Chemical Engineering; Comparative Media Studies and Writing; Earth, Atmospheric and Planetary Sciences; Music and Theater Arts; Mechanical Engineering; Nuclear Science and Engineering; Political Science; and the MIT Sloan School of Management.

“I eagerly anticipate the building’s expansion of opportunities, facilitating the development of even deeper connections the college has made so far spanning all five schools,” says Anantha Chandrakasan, chief innovation and strategy officer, dean of the School of Engineering, and the Vannevar Bush Professor of Electrical Engineering and Computer Science.

Other college programs and activities that are being supported in the building include the MIT Quest for Intelligence, Center for Computational Science and Engineering, and MIT-IBM Watson AI Lab. There are also dedicated areas for the dean’s office, as well as for the cross-cutting areas of the college — the Social and Ethical Responsibilities of Computing, Common Ground, and Special Semester Topics in Computing, a new experimental program designed to bring MIT researchers and visitors together in a common space for a semester around areas of interest.

Additional spaces include conference rooms on the third floor that are available for use by any college unit. These rooms are accessible to both residents and nonresidents of the building to host weekly group meetings or other computing-related activities.

For the MIT community at large, the building’s main event space, along with three conference rooms, is available for meetings, events, and conferences. Located eight stories high on the top floor with striking views across Cambridge and Boston and of the Great Dome, the event space is already in demand with bookings through next fall, and has quickly become a popular destination on campus.

The college inaugurated the event space over the January Independent Activities Period, welcoming students, faculty, and visitors to the building for Expanding Horizons in Computing — a weeklong series of bootcamps, workshops, short talks, panels, and roundtable discussions. Organized by various MIT faculty, the 12 sessions in the series delved into exciting areas of computing and AI, with topics ranging from security, intelligence, and deep learning to design, sustainability, and policy.

The MIT Schwarzman College of Computing welcomed students, faculty, and visitors to its new building during the January Independent Activities Period for a weeklong series of bootcamps, workshops, short talks, panels, and roundtable discussions. Held in the building’s main event space on the eighth floor, the sessions, all organized by various MIT faculty, delved into exciting areas of computing and AI, with topics ranging from security, intelligence, and deep learning to design, sustainability, and policy. Photo credit: Gretchen Ertl.

Form and function

Designed by Skidmore, Owings & Merrill, the state-of-the-art space for education, research, and collaboration took shape over four years of design and construction.

“In the design of a new multifunctional building like this, I view my job as the dean being to make sure that the building fulfills the functional needs of the college mission,” says Huttenlocher. “I think what has been most rewarding for me, now that the building is finished, is to see its form supporting its wide range of intended functions.”

In keeping with MIT’s commitment to environmental sustainability, the building is designed to meet Leadership in Energy and Environmental Design (LEED) Gold certification. The final review with the U.S. Green Building Council is tracking toward a Platinum certification.

The glass shingles on the building’s south-facing side serve a dual purpose: they let in abundant natural light, and they form a double-skin façade of interlocking units that create a deep sealed cavity, which is anticipated to notably lower energy consumption.

Other sustainability features include embodied carbon tracking, on-site stormwater management, fixtures that reduce indoor potable water usage, and a large green roof. The building is also the first to utilize heat from a newly completed utilities plant built on top of Building 42, which converted conventional steam-based distribution systems into more efficient hot-water systems. This conversion significantly enhances the building’s capacity to deliver medium-temperature hot water across the entire facility.

Grand unveiling

A dedication ceremony for the building is planned for the spring.

The momentous event will mark the official completion and opening of the new building and celebrate the culmination of hard work, commitment, and collaboration in bringing it to fruition.

It will also celebrate the 2018 foundational gift that established the college from Stephen A. Schwarzman, the chair, CEO, and co-founder of Blackstone, the global asset management and financial services firm. In addition, it will acknowledge Sebastian Man ’79, SM ’80, the first donor to support the building after Schwarzman. Man’s gift will be recognized with the naming of a key space in the building that will enrich the academic and research activities of the MIT Schwarzman College of Computing and the Institute.

Engineering household robots to have a little common sense

From wiping up spills to serving up food, robots are being taught to carry out increasingly complicated household tasks. Many such home-bot trainees are learning through imitation; they are programmed to copy the motions that a human physically guides them through.

It turns out that robots are excellent mimics. But unless engineers also program them to adjust to every possible bump and nudge, robots don’t necessarily know how to handle such disturbances, short of starting their task from the top.

Now MIT engineers are aiming to give robots a bit of common sense when faced with situations that push them off their trained path. They’ve developed a method that connects robot motion data with the “common sense knowledge” of large language models, or LLMs.

Their approach enables a robot to logically parse a given household task into subtasks, and to physically adjust to disruptions within a subtask so that the robot can move on without having to go back and start the task from scratch, and without engineers having to explicitly program fixes for every possible failure along the way.

A robotic hand tries to scoop up red marbles and put them into another bowl while a researcher’s hand frequently disrupts it. The robot eventually succeeds.
Image courtesy of the researchers.

“Imitation learning is a mainstream approach enabling household robots. But if a robot is blindly mimicking a human’s motion trajectories, tiny errors can accumulate and eventually derail the rest of the execution,” says Yanwei Wang, a graduate student in MIT’s Department of Electrical Engineering and Computer Science (EECS). “With our method, a robot can self-correct execution errors and improve overall task success.”

Wang and his colleagues detail their new approach in a study they will present at the International Conference on Learning Representations (ICLR) in May. The study’s co-authors include EECS graduate students Tsun-Hsuan Wang and Jiayuan Mao, Michael Hagenow, a postdoc in MIT’s Department of Aeronautics and Astronautics (AeroAstro), and Julie Shah, the H.N. Slater Professor in Aeronautics and Astronautics at MIT.

Language task

The researchers illustrate their new approach with a simple chore: scooping marbles from one bowl and pouring them into another. To accomplish this task, engineers would typically move a robot through the motions of scooping and pouring — all in one fluid trajectory. They might do this multiple times, to give the robot a number of human demonstrations to mimic.

“But the human demonstration is one long, continuous trajectory,” Wang says.

The team realized that, while a human might demonstrate a single task in one go, that task depends on a sequence of subtasks, or trajectories. For instance, the robot has to first reach into a bowl before it can scoop, and it must scoop up marbles before moving to the empty bowl, and so forth. If a robot is pushed or nudged into making a mistake during any of these subtasks, its only recourse is to stop and start from the beginning, unless engineers were to explicitly label each subtask and program in, or collect new demonstrations of, a recovery behavior for every possible failure so the robot could self-correct in the moment.

“That level of planning is very tedious,” Wang says.

Instead, he and his colleagues found some of this work could be done automatically by LLMs. These deep learning models process immense libraries of text, which they use to establish connections between words, sentences, and paragraphs. Through these connections, an LLM can then generate new sentences based on what it has learned about the kind of word that is likely to follow the last.

For their part, the researchers found that in addition to sentences and paragraphs, an LLM can be prompted to produce a logical list of subtasks that would be involved in a given task. For instance, if queried to list the actions involved in scooping marbles from one bowl into another, an LLM might produce a sequence of verbs such as “reach,” “scoop,” “transport,” and “pour.”
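The decomposition step can be sketched as prompting a model and parsing its numbered list into subtask labels. The prompt wording and the canned response below are invented for illustration, not the researchers’ actual prompt:

```python
def parse_subtasks(llm_response):
    """Parse a numbered subtask list returned by an LLM into lowercase
    labels. The response format assumed here is hypothetical."""
    subtasks = []
    for line in llm_response.strip().splitlines():
        line = line.strip()
        if line and line[0].isdigit():
            # "2. Scoop" -> "scoop"
            subtasks.append(line.split(".", 1)[1].strip().lower())
    return subtasks

prompt = "List the steps involved in scooping marbles from one bowl into another."
# A plausible model response (canned here so the sketch is self-contained):
response = """
1. Reach
2. Scoop
3. Transport
4. Pour
"""
print(parse_subtasks(response))  # ['reach', 'scoop', 'transport', 'pour']
```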

“LLMs have a way to tell you how to do each step of a task, in natural language. A human’s continuous demonstration is the embodiment of those steps, in physical space,” Wang says. “And we wanted to connect the two, so that a robot would automatically know what stage it is in a task, and be able to replan and recover on its own.”

Mapping marbles

For their new approach, the team developed an algorithm to automatically connect an LLM’s natural language label for a particular subtask with a robot’s position in physical space or an image that encodes the robot state. Mapping a robot’s physical coordinates, or an image of the robot state, to a natural language label is known as “grounding.” The team’s new algorithm is designed to learn a grounding “classifier,” meaning that it learns to automatically identify what semantic subtask a robot is in — for example, “reach” versus “scoop” — given its physical coordinates or an image view.
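A minimal version of such a grounding classifier might use nearest-centroid classification over end-effector positions. This is a deliberate simplification: the actual system also uses image features and a learned model, and the 2-D states and labels below are invented for illustration.

```python
import math

class GroundingClassifier:
    """Toy nearest-centroid grounding classifier: maps a robot state
    (here a 2-D end-effector position) to a subtask label."""

    def fit(self, states, labels):
        # Average the demonstration states for each subtask label.
        sums, counts = {}, {}
        for (x, y), label in zip(states, labels):
            sx, sy = sums.get(label, (0.0, 0.0))
            sums[label] = (sx + x, sy + y)
            counts[label] = counts.get(label, 0) + 1
        self.centroids = {label: (sx / counts[label], sy / counts[label])
                          for label, (sx, sy) in sums.items()}
        return self

    def predict(self, state):
        # Label of the closest centroid to the current state.
        return min(self.centroids,
                   key=lambda label: math.dist(state, self.centroids[label]))

# Demonstration trajectory: positions tagged with LLM-provided subtasks.
states = [(0.0, 0.0), (0.1, 0.1), (0.5, 0.0), (0.6, 0.1), (1.0, 0.5), (1.1, 0.6)]
labels = ["reach", "reach", "scoop", "scoop", "transport", "transport"]
clf = GroundingClassifier().fit(states, labels)
print(clf.predict((0.55, 0.05)))  # 'scoop'
```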

“The grounding classifier facilitates this dialogue between what the robot is doing in the physical space and what the LLM knows about the subtasks, and the constraints you have to pay attention to within each subtask,” Wang explains.

The team demonstrated the approach in experiments with a robotic arm that they trained on a marble-scooping task. Experimenters trained the robot by physically guiding it through the task of first reaching into a bowl, scooping up marbles, transporting them over an empty bowl, and pouring them in. After a few demonstrations, the team then used a pretrained LLM and asked the model to list the steps involved in scooping marbles from one bowl to another. The researchers then used their new algorithm to connect the LLM’s defined subtasks with the robot’s motion trajectory data. The algorithm automatically learned to map the robot’s physical coordinates in the trajectories and the corresponding image view to a given subtask.

The team then let the robot carry out the scooping task on its own, using the newly learned grounding classifiers. As the robot moved through the steps of the task, the experimenters pushed and nudged the bot off its path, and knocked marbles off its spoon at various points. Rather than stop and start from the beginning again, or continue blindly with no marbles on its spoon, the bot was able to self-correct, and completed each subtask before moving on to the next. (For instance, it would make sure that it successfully scooped marbles before transporting them to the empty bowl.)
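
The recovery behavior can be sketched as a simple control loop: after each step, the grounding classifier reports which subtask the robot actually appears to be in, and the controller resumes from that stage rather than plowing ahead or restarting from scratch. The scripted observations below simulate a perturbation mid-task; everything here is a toy stand-in for the real perception and control stack.

```python
# Toy sketch of subtask-aware recovery: if a perturbation knocks the robot
# back to an earlier subtask (e.g., marbles knocked off the spoon while
# transporting), the controller replays from that subtask.
PLAN = ["reach", "scoop", "transport", "pour"]

def execute(plan, observe_subtask):
    """observe_subtask(intended) returns the subtask the classifier sees."""
    log = []
    i = 0
    while i < len(plan):
        observed = observe_subtask(plan[i])
        log.append(observed)
        if observed == plan[i]:
            i += 1                    # subtask confirmed, move on
        else:
            i = plan.index(observed)  # perturbed: resume from observed stage
    return log

# Scripted run: while attempting "transport", the classifier sees "scoop"
# (marbles lost), so the robot re-scoops before transporting and pouring.
events = iter(["reach", "scoop", "scoop", "scoop", "transport", "pour"])
log = execute(PLAN, lambda intended: next(events))
print(log)  # -> ['reach', 'scoop', 'scoop', 'scoop', 'transport', 'pour']
```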

“With our method, when the robot is making mistakes, we don’t need to ask humans to program or give extra demonstrations of how to recover from failures,” Wang says. “That’s super exciting because there’s a huge effort now toward training household robots with data collected on teleoperation systems. Our algorithm can now convert that training data into robust robot behavior that can do complex tasks, despite external perturbations.”

New software enables blind and low-vision users to create interactive, accessible charts


A growing number of tools enable users to make online data representations, like charts, that are accessible for people who are blind or have low vision. However, most tools require an existing visual chart that can then be converted into an accessible format.

This creates barriers that prevent blind and low-vision users from building their own custom data representations, and it can limit their ability to explore and analyze important information.

A team of researchers from MIT and University College London (UCL) wants to change the way people think about accessible data representations.

They created a software system called Umwelt (which means “environment” in German) that can enable blind and low-vision users to build customized, multimodal data representations without needing an initial visual chart.

Umwelt, an authoring environment designed for screen-reader users, incorporates an editor that allows someone to upload a dataset and create a customized representation, such as a scatterplot, that can include three modalities: visualization, textual description, and sonification. Sonification involves converting data into nonspeech audio.

Pictured is an example multimodal representation of stock data created with Umwelt. It includes a line chart, a sonification (top right), and a multi-level textual description of various fields. In this example, the user has highlighted “GOOG” for Google, and Umwelt allows them to “hear” the data about Google. Image courtesy of Umwelt.

The system, which can represent a variety of data types, includes a viewer that enables a blind or low-vision user to interactively explore a data representation, seamlessly switching between each modality to interact with data in a different way.

The researchers conducted a study with five expert screen-reader users who found Umwelt to be useful and easy to learn. In addition to offering an interface that empowered them to create data representations — something they said was sorely lacking — the users said Umwelt could facilitate communication between people who rely on different senses.

“We have to remember that blind and low-vision people aren’t isolated. They exist in these contexts where they want to talk to other people about data,” says Jonathan Zong, an electrical engineering and computer science (EECS) graduate student and lead author of a paper introducing Umwelt. “I am hopeful that Umwelt helps shift the way that researchers think about accessible data analysis. Enabling the full participation of blind and low-vision people in data analysis involves seeing visualization as just one piece of this bigger, multisensory puzzle.”

Joining Zong on the paper are fellow EECS graduate students Isabella Pedraza Pineros and Mengzhu “Katie” Chen; Daniel Hajas, a UCL researcher who works with the Global Disability Innovation Hub; and senior author Arvind Satyanarayan, associate professor of computer science at MIT who leads the Visualization Group in the Computer Science and Artificial Intelligence Laboratory. The paper will be presented at the ACM Conference on Human Factors in Computing Systems.

De-centering visualization

The researchers previously developed interactive interfaces that provide a richer experience for screen reader users as they explore accessible data representations. Through that work, they realized most tools for creating such representations involve converting existing visual charts.

Aiming to decenter visual representations in data analysis, Zong and Hajas, who lost his sight at age 16, began co-designing Umwelt more than a year ago.

At the outset, they realized they would need to rethink how to represent the same data using visual, auditory, and textual forms.

“We had to put a common denominator behind the three modalities. By creating this new language for representations, and making the output and input accessible, the whole is greater than the sum of its parts,” says Hajas.

To build Umwelt, they first considered what is unique about the way people use each sense.

For instance, a sighted user can see the overall pattern of a scatterplot and, at the same time, move their eyes to focus on different data points. But for someone listening to a sonification, the experience is linear since data are converted into tones that must be played back one at a time.

“If you are only thinking about directly translating visual features into nonvisual features, then you miss out on the unique strengths and weaknesses of each modality,” Zong adds.

They designed Umwelt to offer flexibility, enabling a user to switch between modalities easily when one would better suit their task at a given time.

To use the editor, one uploads a dataset to Umwelt, which employs heuristics to automatically create default representations in each modality.

If the dataset contains stock prices for companies, Umwelt might generate a multiseries line chart, a textual structure that groups data by ticker symbol and date, and a sonification that uses tone length to represent the price for each date, arranged by ticker symbol.
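
A default sonification like the one described might be generated along these lines: each price is mapped to a tone duration by linear rescaling. The field names and the 0.1 to 1.0 second duration range are assumptions for illustration, not Umwelt's actual heuristics.

```python
# Hypothetical sketch of a sonification default: map each (date, price)
# point to a tone whose duration scales linearly with price.
def sonify(points, min_dur=0.1, max_dur=1.0):
    """points: list of (date, price). Returns (date, duration_seconds)."""
    prices = [p for _, p in points]
    lo, hi = min(prices), max(prices)
    span = (hi - lo) or 1.0  # avoid dividing by zero for constant series
    return [
        (date, round(min_dur + (price - lo) / span * (max_dur - min_dur), 3))
        for date, price in points
    ]

# Invented stock prices for a "GOOG" series.
goog = [("2024-01", 140.0), ("2024-02", 145.0), ("2024-03", 150.0)]
print(sonify(goog))  # -> [('2024-01', 0.1), ('2024-02', 0.55), ('2024-03', 1.0)]
```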

The default heuristics are intended to help the user get started.

“In any kind of creative tool, you have a blank-slate effect where it is hard to know how to begin. That is compounded in a multimodal tool because you have to specify things in three different representations,” Zong says.

The editor links interactions across modalities, so if a user changes the textual description, that information is adjusted in the corresponding sonification. Someone could utilize the editor to build a multimodal representation, switch to the viewer for an initial exploration, then return to the editor to make adjustments.

Helping users communicate about data

To test Umwelt, they created a diverse set of multimodal representations, from scatterplots to multiview charts, to ensure the system could effectively represent different data types. Then they put the tool in the hands of five expert screen reader users.

Study participants mostly found Umwelt to be useful for creating, exploring, and discussing data representations. One user said Umwelt was like an “enabler” that decreased the time it took them to analyze data. The users agreed that Umwelt could help them communicate about data more easily with sighted colleagues.

“What stands out about Umwelt is its core philosophy of de-emphasizing the visual in favor of a balanced, multisensory data experience. Often, nonvisual data representations are relegated to the status of secondary considerations, mere add-ons to their visual counterparts. However, visualization is merely one aspect of data representation. I appreciate their efforts in shifting this perception and embracing a more inclusive approach to data science,” says JooYoung Seo, an assistant professor in the School of Information Sciences at the University of Illinois at Urbana-Champaign, who was not involved with this work.

Moving forward, the researchers plan to create an open-source version of Umwelt that others can build upon. They also want to integrate tactile sensing into the software system as an additional modality, enabling the use of tools like refreshable tactile graphics displays.

“In addition to its impact on end users, I am hoping that Umwelt can be a platform for asking scientific questions around how people use and perceive multimodal representations, and how we can improve the design beyond this initial step,” says Zong.

This work was supported, in part, by the National Science Foundation and the MIT Morningside Academy for Design Fellowship.

AI generates high-quality images 30 times faster in a single step

In our current age of artificial intelligence, computers can generate their own “art” by way of diffusion models, iteratively adding structure to a noisy initial state until a clear image or video emerges. Diffusion models have suddenly grabbed a seat at everyone’s table: Enter a few words and experience instantaneous, dopamine-spiking dreamscapes at the intersection of reality and fantasy. Behind the scenes, it involves a complex, time-intensive process requiring numerous iterations for the algorithm to perfect the image.

MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers have introduced a new framework that simplifies the multi-step process of traditional diffusion models into a single step, addressing previous limitations. This is done through a type of teacher-student model: teaching a new computer model to mimic the behavior of more complicated, original models that generate images. The approach, known as distribution matching distillation (DMD), retains the quality of the generated images and allows for much faster generation. 

“Our work is a novel method that accelerates current diffusion models such as Stable Diffusion and DALLE-3 by 30 times,” says Tianwei Yin, an MIT PhD student in electrical engineering and computer science, CSAIL affiliate, and the lead researcher on the DMD framework. “This advancement not only significantly reduces computational time but also retains, if not surpasses, the quality of the generated visual content. Theoretically, the approach marries the principles of generative adversarial networks (GANs) with those of diffusion models, achieving visual content generation in a single step — a stark contrast to the hundred steps of iterative refinement required by current diffusion models. It could potentially be a new generative modeling method that excels in speed and quality.”

This single-step diffusion model could enhance design tools, enabling quicker content creation and potentially supporting advancements in drug discovery and 3D modeling, where promptness and efficacy are key.

Distribution dreams

DMD has two complementary components. First, it uses a regression loss, which anchors the mapping to ensure a coarse organization of the space of images and makes training more stable. Next, it uses a distribution matching loss, which ensures that the probability of generating a given image with the student model corresponds to its real-world occurrence frequency. To do this, it leverages two diffusion models that act as guides, helping the system understand the difference between real and generated images and making it possible to train the speedy one-step generator.

The system achieves faster generation by training a new network to minimize the distribution divergence between its generated images and those from the training dataset used by traditional diffusion models. “Our key insight is to approximate gradients that guide the improvement of the new model using two diffusion models,” says Yin. “In this way, we distill the knowledge of the original, more complex model into the simpler, faster one, while bypassing the notorious instability and mode collapse issues in GANs.” 
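
The structure of that update can be illustrated with a one-dimensional toy, in which the two guiding "diffusion models" are replaced by the analytic scores of two Gaussians standing in for the real-image distribution and the student's current output distribution. The loss weight, learning rate, and sign conventions here are illustrative choices, not the paper's; the point is only how the regression term and the score-difference term combine.

```python
# Toy 1-D sketch of DMD's two training signals. A student sample is pulled
# toward its paired teacher output (regression term) and nudged along the
# difference of two scores (distribution matching term). Real DMD applies
# this with full diffusion networks to images, not scalars.
def gaussian_score(x, mean, var):
    """Analytic score d/dx log N(x; mean, var), a stand-in for a
    diffusion model's learned score."""
    return (mean - x) / var

def dmd_update(x, teacher_x, real, fake, lam=0.5, lr=0.1):
    """One gradient step on a generated sample x."""
    reg_grad = x - teacher_x  # pull toward the paired teacher output
    # move along real_score - fake_score (descent on the surrogate loss)
    dist_grad = -(gaussian_score(x, *real) - gaussian_score(x, *fake))
    return x - lr * (reg_grad + lam * dist_grad)

real = (2.0, 1.0)  # "real image" distribution: N(2, 1)
fake = (0.0, 4.0)  # student's current sample distribution: N(0, 4)
x = 0.0            # a student sample, paired with teacher output 2.0
for _ in range(50):
    x = dmd_update(x, teacher_x=2.0, real=real, fake=fake)
print(round(x, 2))  # settles near the real-distribution mean of 2
```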

Yin and colleagues used pre-trained networks for the new student model, simplifying the process. By copying and fine-tuning parameters from the original models, the team achieved fast training convergence of the new model, which is capable of producing high-quality images with the same architectural foundation. “This enables combining with other system optimizations based on the original architecture to further accelerate the creation process,” adds Yin. 

When put to the test against the usual methods, using a wide range of benchmarks, DMD showed consistent performance. On the popular benchmark of generating images based on specific classes on ImageNet, DMD is the first one-step diffusion technique that produces images essentially on par with those from the original, more complex models, achieving a Fréchet inception distance (FID) score of just 0.3, an impressive result given that FID judges the quality and diversity of generated images. Furthermore, DMD excels in industrial-scale text-to-image generation and achieves state-of-the-art one-step generation performance. There is still a slight quality gap when tackling trickier text-to-image applications, suggesting some room for improvement down the line. 

Additionally, the performance of the DMD-generated images is intrinsically linked to the capabilities of the teacher model used during the distillation process. In the current form, which uses Stable Diffusion v1.5 as the teacher model, the student inherits limitations such as rendering detailed depictions of text and small faces, suggesting that DMD-generated images could be further enhanced by more advanced teacher models. 

“Decreasing the number of iterations has been the Holy Grail in diffusion models since their inception,” says Fredo Durand, MIT professor of electrical engineering and computer science, CSAIL principal investigator, and a lead author on the paper. “We are very excited to finally enable single-step image generation, which will dramatically reduce compute costs and accelerate the process.” 

“Finally, a paper that successfully combines the versatility and high visual quality of diffusion models with the real-time performance of GANs,” says Alexei Efros, a professor of electrical engineering and computer science at the University of California at Berkeley who was not involved in this study. “I expect this work to open up fantastic possibilities for high-quality real-time visual editing.” 

Yin and Durand’s fellow authors are MIT electrical engineering and computer science professor and CSAIL principal investigator William T. Freeman, as well as Adobe research scientists Michaël Gharbi SM ’15, PhD ’18; Richard Zhang; Eli Shechtman; and Taesung Park. Their work was supported, in part, by U.S. National Science Foundation grants (including one for the Institute for Artificial Intelligence and Fundamental Interactions), the Singapore Defense Science and Technology Agency, and by funding from Gwangju Institute of Science and Technology and Amazon. Their work will be presented at the Conference on Computer Vision and Pattern Recognition in June.

Optimizing nuclear fuels for next-generation reactors

In 2010, when Ericmoore Jossou was attending college in northern Nigeria, the lights would flicker in and out all day, sometimes lasting only for a couple of hours at a time. The frustrating experience reaffirmed Jossou’s realization that the country’s sporadic energy supply was a problem. It was the beginning of his path toward nuclear engineering.

Because of the energy crisis, “I told myself I was going to find myself in a career that allows me to develop energy technologies that can easily be scaled to meet the energy needs of the world, including my own country,” says Jossou, an assistant professor in a shared position between the departments of Nuclear Science and Engineering (NSE), where he is the John Clark Hardwick (1986) Professor, and of Electrical Engineering and Computer Science.

Today, Jossou uses computer simulations for rational materials design: the AI-aided, purposeful development of cladding materials and fuels for next-generation nuclear reactors. As one of the shared faculty hires between the MIT Schwarzman College of Computing and departments across MIT, his appointment recognizes his commitment to computing for climate and the environment.

A well-rounded education in Nigeria

Growing up in Lagos, Jossou knew education was about more than just bookish knowledge, so he was eager to travel and experience other cultures. He would start in his own backyard by traveling across the Niger River and enrolling in Ahmadu Bello University in northern Nigeria. Moving from the south was a cultural education with a different language and different foods. It was here that Jossou got to try and love tuwo shinkafa, a northern Nigerian rice-based specialty, for the first time.

After his undergraduate studies, armed with a bachelor’s degree in chemistry, Jossou was among a small cohort selected for a specialty master’s training program funded by the World Bank Institute and African Development Bank. The program at the African University of Science and Technology in Abuja, Nigeria, is a pan-African venture dedicated to nurturing homegrown science talent on the continent. Visiting professors from around the world taught intensive three-week courses, an experience which felt like drinking from a fire hose. The program widened Jossou’s views and he set his sights on a doctoral program with an emphasis on clean energy systems.

A pivot to nuclear science

While in Nigeria, Jossou learned of Professor Jerzy Szpunar at the University of Saskatchewan in Canada, who was looking for a student researcher to explore fuels and alloys for nuclear reactors. Before then, Jossou was lukewarm on nuclear energy, but the research sounded fascinating. The Fukushima, Japan, incident was recently in the rearview mirror and Jossou remembered his early determination to address his own country’s energy crisis. He was sold on the idea and graduated with a doctoral degree from the University of Saskatchewan on an international dean’s scholarship.

Jossou’s postdoctoral work included a brief stint as a staff scientist at Brookhaven National Laboratory. He leaped at the opportunity to join MIT NSE as a way of realizing his research interests and teaching future engineers. “I would really like to conduct cutting-edge research in nuclear materials design and to pass on my knowledge to the next generation of scientists and engineers and there’s no better place to do that than at MIT,” Jossou says.

Merging material science and computational modeling

Jossou’s doctoral work on designing nuclear fuels for next-generation reactors forms the basis of research his lab is pursuing at MIT NSE. Nuclear reactors that were built in the 1950s and ’60s are getting a makeover in terms of improved accident tolerance. Reactors are not confined to one kind, either: We have micro reactors and are now considering ones using metallic nuclear fuels, Jossou points out. The diversity of options is enough to keep researchers busy testing materials fit for cladding, the lining that prevents corrosion of the fuel and release of radioactive fission products into the surrounding reactor coolant.

The team is also investigating fuels that improve burn-up efficiencies, so they can last longer in the reactor. An intriguing approach has been to immobilize the gas bubbles that arise from the fission process, so they don’t grow and degrade the fuel.

Since joining MIT in July 2023, Jossou is setting up a lab that optimizes the composition of accident-tolerant nuclear fuels. He is leaning on his materials science background and looping computer simulations and artificial intelligence in the mix.

Computer simulations allow the researchers to narrow down the potential field of candidates, optimized for specific parameters, so they can synthesize only the most promising candidates in the lab. And AI’s predictive capabilities guide researchers on which materials composition to consider next. “We no longer depend on serendipity to choose our materials; our lab is based on rational materials design,” Jossou says. “We can rapidly design advanced nuclear fuels.”
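
In spirit, such a screening loop can be as simple as ranking candidates with a cheap surrogate predictor and flagging only the top few for synthesis. The compositions, properties, and figure of merit below are invented for illustration and bear no relation to the lab's actual models.

```python
# Illustrative candidate-screening loop: score fuel candidates with a
# surrogate predictor, then shortlist the top-ranked ones for the lab.
def surrogate_score(candidate):
    """Stand-in for a trained ML property predictor. Here: a made-up
    figure of merit favoring high uranium density and low swelling."""
    return candidate["u_density"] - 2.0 * candidate["swelling"]

# Invented candidate compositions and predicted properties.
candidates = [
    {"name": "A", "u_density": 11.3, "swelling": 0.8},
    {"name": "B", "u_density": 13.5, "swelling": 1.5},
    {"name": "C", "u_density": 12.1, "swelling": 0.5},
]

def shortlist(cands, k=2):
    """Return the names of the k highest-scoring candidates."""
    return [c["name"] for c in sorted(cands, key=surrogate_score, reverse=True)[:k]]

print(shortlist(candidates))  # -> ['C', 'B']
```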

Advancing energy causes in Africa

Now that he is at MIT, Jossou admits the view from the outside is different. He now harbors a different perspective on what Africa needs to address some of its challenges. “The starting point to solve our problems is not money; it needs to start with ideas,” he says, “we need to find highly skilled people who can actually solve problems.” That job involves adding economic value to the rich arrays of raw materials that the continent is blessed with. It frustrates Jossou that Niger, a country rich in raw material for uranium, has no nuclear reactors of its own. It ships most of its ore to France. “The path forward is to find a way to refine these materials in Africa and to be able to power the industries on that continent as well,” Jossou says.

Jossou is determined to do his part to eliminate these roadblocks.

Anchored in mentorship, Jossou’s solution aims to train talent from Africa in his own lab. He has applied for an MIT Global Experiences MISTI grant to facilitate travel and research studies for Ghanaian scientists. “The goal is to conduct research in our facility and perhaps add value to indigenous materials,” Jossou says.

Adding value has been a consistent theme of Jossou’s career. He remembers wanting to become a neurosurgeon after reading “Gifted Hands,” moved by the personal story of the author, Ben Carson. As Jossou grew older, however, he realized that becoming a doctor wasn’t necessarily what he wanted. Instead, he was looking to add value. “What I wanted was really to take on a career that allows me to solve a societal problem.” The societal problem of clean and safe energy for all is precisely what Jossou is working on today.

2024 MacVicar Faculty Fellows named

Four outstanding undergraduate teachers and mentors have been named MacVicar Faculty Fellows: professor of electrical engineering and computer science (EECS) Karl Berggren, professor of political science Andrea Campbell, associate professor of music Emily Richmond Pollock, and professor of EECS Vinod Vaikuntanathan.

For more than 30 years, the MacVicar Faculty Fellows Program has recognized exemplary and sustained contributions to undergraduate education at MIT. The program is named in honor of Margaret MacVicar, MIT’s first dean for undergraduate education and founder of the Undergraduate Research Opportunities Program (UROP).

New fellows are chosen each year through a highly competitive nomination process. They receive an annual stipend and are appointed to a 10-year term. Nominations, including letters of support from colleagues, students, and alumni, are reviewed by an advisory committee led by vice chancellor Ian Waitz with final selections made by provost Cynthia Barnhart.

Role models both in and out of the classroom, Berggren, Campbell, Pollock, and Vaikuntanathan join an elite academy of scholars from across the Institute who are committed to curricular innovation; exceptional teaching; collaboration with colleagues; and supporting students through mentorship, leadership, and advising.

Karl Berggren

“It is a great honor to have been selected for this fellowship. It has particularly made me remember the years of dedicated mentoring and support that I’ve received from my colleagues,” says Karl Berggren. “I’ve also learned a great deal over this period from our students by way of their efforts and thoughtful feedback. MIT continuously strives for excellence in undergraduate education, and I feel very lucky to have been part of that effort.”

Karl Berggren is the Joseph F. and Nancy P. Keithley Professor in the Department of EECS. He received his PhD from Harvard University and his BA in physics from Harvard College. Berggren joined MIT in 1996 as a staff member at Lincoln Laboratory before becoming an assistant professor in 2003. He regularly teaches undergraduate EECS offerings including 6.2000, formerly 6.002 (Electrical Circuits), and 6.3400, formerly 6.02 (Introduction to EECS via Communication Networks).

Sahil Pontula ’23 writes, “Professor Berggren turned 6.002 from a mere course requirement into a truly memorable experience that shaped my current research interests and provided a unique perspective … He is devoted not just to educating the next generation of engineers, but also to imbuing in them interdisciplinary problem-solving perspectives that push the frontiers of science forward.”

MacVicar Fellow and professor of EECS Jeffrey Lang notes, “His lectures are polished, presented with humor, and well-appreciated by his students.” Senior Tiffany Louie adds: “He connects with us, inspires us, and welcomes us to ask questions in class and in the greater electrical engineering field.”

Berggren is also deeply invested in the art and science of teaching. Tomás Palacios, professor of EECS, says, “Professor Berggren is genuinely interested in continuously improving the educational experience of our students. He approaches this in the same methodological and quantitative way we typically approach research. He is well-versed in the most modern theories about learning and he is always happy to share … relevant books and papers on the subject.”

Lang agrees, noting that Berggren “reads articles and books that examine and discuss how learning occurs so that he can become a more effective teacher.” He goes on to recall a conversation in which Berggren explained a new form of homework grading. Instead of reducing grades for errors that did not render an obviously flawed result, he helps students extract key takeaways from their assignments and come to correct solutions on their own. Lang notes that a key benefit of this approach is that it allows graders to “work much more quickly yet carefully” and “provides them more time to spend on giving helpful feedback.”

Adding to his long list of contributions, Berggren also oversees the EECS teaching labs. Since assuming this role, he has pursued changes to ensure that students feel comfortable and confident using the space for both coursework and outside projects, developing their critical thinking and hands-on skills.

Faculty head and professor of electrical engineering Joel Voldman applauds Berggren’s efforts: “Since [he] has taken over, the labs are now a place for projects of all sorts, with students being trained on various processes, parts being easy to obtain, equipment readily available … His fundamental mantra has been to make a space that serves students, meets them where they’re at, and helps them get to where they want to go.”

Andrea Campbell

Andrea Campbell received her BA in social studies from Harvard University and her MA and PhD in political science from the University of California at Berkeley. She joined MIT’s Department of Political Science in 2005 and is currently the Arthur and Ruth Sloan Professor of Political Science and director of undergraduate studies.

Professor Campbell regularly teaches classes 17.30 (Making Public Policy), 17.315 (Health Policy), and 17.317 (U.S. Social Policy). Her research examines the relationships between public policies, public opinion, and political behavior.

A unique aspect of Campbell’s teaching style is the personal approach she brings. In 17.315, Campbell shared her own experiences following a tragic accident in her family, which highlighted the real-life challenges that many face navigating America’s health care system.

According to David Singer, department head and the Raphael Dorman-Helen Starbuck Professor of Political Science, Campbell “weaves personal experience into her teaching in powerful ways … Her openness about her experience permits students to share their own … thereby strengthening their own engagement with the material.”

Singer goes on to say, “In all of her classes, [she] encourages students to examine policymaking not as a technocratic exercise, or an exercise of optimization, but rather as a process infused with politics. In steering her pedagogy in this way, she shows her students how to understand the identity and interests of different groups in society, where their relative power emanates from, and how the rules and institutions of the U.S. political system shape policy outcomes on critical issues like LGBTQ rights, gun control, military intervention, and health care.”

Students say her classes are incredibly impactful, lingering with them for years to come. Her former teaching assistant, now Harvard professor, Justin de Benedictis-Kessner wrote, “Andrea’s talents have been an enormous asset … I have seen how many of her former undergraduate students have gone on to successful careers adjacent to her field of public policy in large part because of her inspiration.” Echoing this sentiment, Julia H. Ginder ’19 writes, “her lessons and mentorship have impacted my day-to-day life and career trajectory even five years after graduation.”

Campbell’s work set the stage for wide-ranging improvements to the Course 17 curriculum and under her leadership, public policy has become the most popular minor in the department. Singer writes, “She ensures that required classes in political institutions, economics, and substantive policy areas are regularly taught, and she proselytizes … to students about the importance of understanding policymaking, especially to [those] in engineering and sciences who might otherwise overlook this critically important domain.”

Campbell is heavily involved with undergraduate advising at the department, school, and Institute levels. She is a popular sponsor of UROPs and attracts many undergraduate researchers each year. Campbell is also co-chair of the Gender Equity Committee in the School of Humanities, Arts, and Social Sciences (SHASS) and the Subcommittee on the Communication Requirement (SOCR).

“It is clear that Andrea takes undergraduate teaching extraordinarily seriously, not just when designing her own classes, but when leading the undergraduate program in our department,” says Adam Berinsky, the Mitsui Professor of Political Science.

Beyond her many pedagogical and curricular accomplishments, Singer notes: “Andrea’s students consistently tout her extraordinary degree of personal engagement. She takes the time to get to know students, to mentor them outside the classroom, and to keep them energized in the classroom. Many express gratitude for Andrea’s willingness to go the extra mile — by staying late after class, holding extra office hours, and even inviting students to her home for Thanksgiving dinner.”

On receiving this award Campbell writes, “I am so grateful to my colleagues and students for taking the time to nominate me and so honored to be selected. Teaching and mentoring MIT students is such a joy. I am well aware that some students come through my door just to fulfill a requirement. Others come with genuine enthusiasm and interest. Either way, I love watching them discover how fascinating political science is and how relevant politics and policymaking are for their lives and their futures.”

Emily Richmond Pollock

“I am truly thrilled to become a MacVicar Faculty Fellow. Working with the undergraduates at MIT is such a gift in itself. When I teach, I can only strive to match the students’ creativity and commitment with my own,” says Emily Richmond Pollock.

Pollock joined MIT’s faculty in 2012. She received her BA in music from Harvard University in 2006 and her MA and PhD in music history and literature from the University of California at Berkeley in 2008 and 2012. She was awarded MIT’s Arthur C. Smith Award for meaningful contributions and devotion to undergraduate student life and learning in 2019 and the James A. and Ruth Levitan Teaching Award from SHASS in 2020. She currently serves on the SOCR and the Subcommittee on the HASS Requirement, and is the inaugural undergraduate chair in SHASS.

Pollock is a dedicated mentor and advisor, and testimonials highlight her commitment to student well-being and intellectual development. “Professor Emily Richmond Pollock is a kind, intentional, and dedicated teacher and advisor,” says senior Katherine Reisig. “By fostering such a welcoming community, she helps the MIT music department be a better place. It is clear … [she] cares deeply about her students, not only that we are doing well academically, but also that we are succeeding in life and doing well mentally.”

MacVicar Faculty Fellow and associate professor of literature Marah Gubar agrees: “Emily has long served as a role model for how to support the ‘whole student’ in ways that build community, right wrongs, and infuse more humanity and beauty into our campus.”

MIT colleagues and students praise Pollock’s extensive contributions to curriculum development at the introductory and advanced levels. She regularly teaches class 21M.011 (Introduction to Western Music) and courses on opera, symphonic repertoire, and the advanced seminar for music majors. Her lectures provide lively learning experiences in which her students are encouraged to think critically about music and culture, dive into unfamiliar operas with curiosity, and compare dramatic elements across time periods.

“I came away from 21M.011 not only with a better understanding of Western music, but with new curiosities and questions about music’s role in the world. Professor Pollock’s teaching made me want to learn more; it encouraged lifelong discovery, curiosity, and education,” Reisig says.

Associate professor of music and MacVicar Faculty Fellow Patricia Tang writes, “Professor Pollock continues to grow as a leader in pedagogical innovation, transforming the music history curriculum and being a true inspiration to her colleagues in her devotion to her students. Though these subjects existed in the course catalog before Pollock’s arrival, in all cases she has radically transformed them, infusing new energy and excitement into the curriculum.”

Pollock also champions issues of diversity, equity, and inclusion in the arts and is dedicated to making classical music and opera more accessible while preserving the intellectual rigor her students applaud. She encourages students to embrace lesser-known works and step outside their comfort zone, even taking students to the opera herself. She has a “strong interest in anti-racist pedagogies and decolonizing music curriculum … [her] pedagogical innovations are numerous,” Tang observes.

About her impact as an advisor, Tang notes: “Professor Pollock genuinely loves getting to know her students … it is really her ‘superpower.’ It is her mission to make sure [they] are not just surviving but thriving in their first year.”

Senior Hana Ro agrees: “Under her guidance, my academic journey has been transformed, and I have gained not only a profound understanding of [this] subject matter but also a sense of belonging and encouragement that has been invaluable during my time at MIT.”

Furthermore, Pollock ensures that students never feel isolated or alone. Graduate student Frederick Ajisafe says, “If she knew that a cohort was taking a demanding class, she would check in with us … In all cases, Emily emphasized her belief in our ability to succeed and her willingness to help us get there.”

Vinod Vaikuntanathan

Vinod Vaikuntanathan is a professor in the Department of Electrical Engineering and Computer Science (EECS). He received his bachelor’s degree in computer science from the Indian Institute of Technology Madras in 2003, and his SM and PhD degrees in computer science from MIT in 2005 and 2009, respectively. Vaikuntanathan joined the faculty in 2013, and in recognition of his contributions to teaching and service to students, he received the Ruth and Joel Spira Award for Distinguished Teaching in 2016 and the Harold E. Edgerton Faculty Achievement Award in 2017.

Vaikuntanathan has taught all three of the EECS department’s undergraduate theoretical computer science subjects: 6.1210, formerly 6.006 (Introduction to Algorithms); 6.1200, formerly 6.042 (Mathematics for Computer Science); and 6.1220, formerly 6.046 (Design and Analysis of Algorithms).

Students say his classes are challenging, yet approachable and inclusive. Helen Propson ’24 writes, “He excels at making complex subjects like cryptography accessible and captivating for all students, creating an atmosphere where every student’s input is highly regarded. He embraces questions and leaves students feeling inspired and motivated to tackle challenging problems, fostering a sense of confidence and a belief in their own abilities.” She goes on to say, “He often describes intricate concepts as ‘magical,’ and his enthusiasm is contagious, making the material come alive in the classroom.”

MIT alumna Anne Kim concurs: “His teaching style is characterized by its clarity, enthusiasm, and a genuine passion for the subject matter. In his classes, he managed to distill complex algorithms into digestible concepts, making the material accessible to students with varying levels of expertise.”

Vaikuntanathan has also made significant contributions to the EECS curriculum. In spring 2022, he, along with fellow professors in the department, led an effort to improve 6.046. According to professor of EECS and MacVicar Fellow Srini Devadas, “designing a new lecture for 6.046 is not easy. Each new lecture is, typically, days of prep work, including preparing to … give the lecture itself and writing and testing problem set questions, quiz questions, and quiz practice questions. Vinod did all this with skill, aplomb, and enthusiasm. His contributions have had a permanent and beneficial effect on 6.046.”

Widely known for his work in cryptography, including homomorphic encryption and computational complexity, Vaikuntanathan became the lecturer-in-charge of the department’s first theoretical cryptography offering, 6.875. In addition, as the fields of quantum and post-quantum cryptography have grown, “Vinod has added relevant modules to the syllabus, taking the place of topics which had grown obsolete,” Devadas remarks. “Some professors might see teaching the same class multiple times as a chance to save themselves work by reusing the same materials. Vinod sees teaching 6.875 every fall as an opportunity to keep improving the class.”

Vinod Vaikuntanathan is also a devoted mentor and advisor, assisting with first-year UROPs and encouraging students to take advantage of his “open-door” policy. Kim writes that Vaikuntanathan’s mentorship continues to benefit her career, as it “extends beyond the classroom through his research,” and that he “has mentored and advised dozens of [her] friends in the cryptography space.”

“His encouraging demeanor sets a remarkable example of the kind of teacher every student hopes to encounter during their academic career,” says Propson.

On becoming a MacVicar Faculty Fellow, Vaikuntanathan writes, “It is humbling to be in the company of such amazing teachers and mentors, many of whom I have come to think of as my role models. Many thanks to my colleagues and my students for considering me worthy of this honor.”

Priya Donti named AI2050 Early Career Fellow


Assistant Professor Priya Donti has been named an AI2050 Early Career Fellow by Schmidt Sciences, a philanthropic initiative of Eric and Wendy Schmidt aimed at helping to solve hard problems in AI.

Priya Donti joined the Department of EECS as an assistant professor in September 2023. Her work focuses on physics-informed deep learning for forecasting, optimization, and control in high-renewables power grids; additionally, she is the co-founder of Climate Change AI, a global nonprofit initiative to catalyze impactful work at the intersection of climate change and machine learning. Donti earned her bachelor’s degree in computer science and mathematics from Harvey Mudd College and her PhD in computer science and public policy from Carnegie Mellon University. She was honored with the MIT Technology Review Innovators Under 35 Award and the ACM SIGEnergy Doctoral Dissertation Award. She has also been honored as a U.S. Department of Energy Computational Science Graduate Fellow, Siebel Scholar, NSF Graduate Research Fellow, and Thomas J. Watson Fellow.

Conceived and co-chaired by Eric Schmidt and James Manyika, AI2050 stems in part from issues raised in the bestselling book The Age of AI: And Our Human Future, co-authored by Schmidt, Henry Kissinger, and Schwarzman College of Computing Dean Dan Huttenlocher. The initiative is grounded in the following motivating question: “It’s 2050. AI has turned out to be hugely beneficial to society. What happened? What are the most important problems we solved and the opportunities and possibilities we realized to ensure this outcome?” AI2050 aims to support exceptional people working on key opportunities and hard problems that are critical to get right for society to benefit from AI.