The Thriving Stars of AI

A panel of early-career researchers joins Asu Ozdaglar, Department Head of MIT EECS, and Professor Carol Espy-Wilson of the University of Maryland at College Park for the Thriving Stars of AI summit. Photo credit: David Sella.

From fake news on social media to bias in facial recognition software, the negative effects of technology are hard to ignore. But these effects aren’t inevitable. Researchers in artificial intelligence (AI) and machine learning (ML) are actively tackling these problems, putting social outcomes at the forefront of their research. 

On May 13, Thriving Stars hosted a research summit on the social implications of AI and ML, featuring talks from four early-career researchers: Sarah Cen, Irene Chen, Danielle Olson-Getzen, and Shibani Santurkar. The speakers discussed how AI can be both helpful and detrimental in different contexts, from healthcare to social media to video games. Following the talks, Professor Carol Espy-Wilson of the University of Maryland at College Park joined the researchers on a panel, moderated by Professor Asu Ozdaglar, MIT EECS Department Head. 

Aude Oliva, director of strategic industry engagement in the MIT Schwarzman College of Computing, served as the afternoon’s host. Photo credit: David Sella.

Shibani Santurkar, who earned her PhD at MIT last year and is now a postdoc at Stanford University, is tracking down the culprits behind AI’s negative effects. To do this, she’s breaking down how machine learning models, or systems, are developed. “It’s not that these models have some kind of bug in them,” she said. “It’s more systemic.” She examines each step of the development pipeline, or process – which includes collecting example data for a particular application, training models to make predictions from the data, and evaluating whether the models are performing properly – and then pinpoints places that need improvement. Now, Santurkar is “rethinking the machine learning pipeline” by figuring out how to “bake” social values, such as privacy and fairness, into these models. “We really need to rethink not only how these models are making their decisions but also how we want them to make their decisions,” she said.

Irene Chen, a current MIT PhD student, also grapples with systemic problems in machine learning applications in healthcare. In this space, additional challenges arise. “Medical data is super messy,” she said. Patient data is often incomplete: visits may be spaced far apart in time, and even patient histories can have holes. With incomplete data, researchers struggle to train machine learning models that can accurately diagnose patients or devise treatment plans. Moreover, by using data from an already flawed system, these models can inadvertently “create and magnify bias in healthcare,” Chen said. 

Irene Chen interspersed her talk with lively personal anecdotes from her time at MIT. Photo credit: David Sella.

For example, models can skew towards demographics that appear more often in the training data, such as patients with better health insurance who may seek healthcare more frequently. “I often feel like doing machine learning for healthcare is doing machine learning on hard mode,” she said. Through her research, Chen is working to concretely understand these challenges to help make ethical and equitable AI for healthcare.

The summit’s speakers also discussed how they’re addressing social concerns with AI through interdisciplinary research. Sarah Cen, a current MIT PhD student, is combining strategies in engineering, economics, and public policy to develop protections for users of online platforms, such as social media. 

“We need guardrails in place,” said Sarah Cen of the social media platforms on which millions of users rely. Photo credit: David Sella.

With so many stakeholders on these platforms, ranging from users to advertisers to the platforms themselves, it’s a juggling act to balance everyone’s demands. But users are often the ones who take the hit, suffering poor mental health or being led astray by misinformation. “The problem right now is that we just don’t have the infrastructure necessary [to protect users],” Cen said. “[We need] guardrails in place.” She is currently tackling this problem with a two-pronged approach, looking for technological and legislative solutions.

Danielle Olson-Getzen, who earned her PhD at MIT in spring 2021, draws upon her background in computer science and journalism to build people-centric technology. During her PhD, she worked to improve the diversity of avatars used in video games and virtual reality. To do this, she developed a design framework to help developers be mindful about how their algorithms generate an avatar’s race. “Representations within the stories around us have a tremendous impact on the way we live, work, and play,” she said. In fact, her experience growing up without a relatable STEM role model almost led her to pursue an entirely different career. 

Danielle Olson-Getzen, the first ever AI/ML human factors researcher at Apple, shared her perspective on the power of storytelling. Photo credit: David Sella.

Now, Olson-Getzen is the first AI/ML human factors researcher at Apple, where she researches people’s experiences with AI and uses their stories to inform AI design. “By listening to stories, we can better center humans in the AI development process,” she said.  

The summit’s speakers also stressed that despite the hazards of AI, the technology can lead to positive social outcomes. For example, in healthcare, AI can help identify victims of domestic violence who might otherwise be afraid to speak up. By analyzing radiology scans, AI can alert clinicians when a patient’s old injuries indicate a history of such violence, Chen said. Santurkar also provided a more meta example, in which researchers can “use machine learning algorithms as a tool to understand the bias in our data” for training AI. 

The Thriving Stars of AI research summit was open to the public, attracting attendees both within and outside the MIT community. Many attendees were researchers in fields outside of AI who wanted a glimpse into their colleagues’ thoughts. “[I was curious] how they see the world and solve problems,” said Lakshita Boora, a PhD student in organizational behavior at Michigan State University, who added an MIT visit to her Boston vacation when she realized she could attend the summit. 

Shibani Santurkar is working to “bake in” human values such as fairness and equity into machine learning models. Photo credit: David Sella.

In the venue’s cozy atmosphere, audience members felt connected with the speakers, especially when they shared their personal stories. For Sara Pidò, a visiting PhD student in computer science at MIT, a memorable part of the summit was hearing the speakers’ challenges with balancing their professional and personal obligations. “I learned I’m not alone in this,” she said. Whenever the speakers shared their achievements – such as when Chen announced her upcoming job as an assistant professor at the University of California at Berkeley – the room immediately exploded with applause and cheers. “It felt good to see people succeeding and reaching their goals,” said Katia Oussar, an MS/PhD student in computer science at the University of Massachusetts Lowell.

The Thriving Stars of AI research summit is part of the Thriving Stars initiative to improve gender representation in the PhD program in EECS. Through this summit, researchers of underrepresented genders gained a platform to share their research and perspectives on a critical societal issue. In this way, the summit furthers the Thriving Stars mission “to ensure that the field of computing and information science fully represents the spectrum of humanity and is sensitive to our needs, our differing perspectives, and our very many shared challenges,” Ozdaglar said. Thriving Stars plans to host more research summits in the future to tackle other important technological challenges in our society.

As the four AI summit speakers move forward in their careers, they plan to carefully consider the double-edged social implications of AI in their research. “Technology has this ability to magnify what’s already there,” Chen said. “It can make people even more productive; it can also exacerbate existing biases.” With researchers being mindful of the hazards of AI and actively tackling its negative effects, the future for AI is hopeful.