Laboratory for Information and Decision Systems (LIDS)

Tracking gene expression changes through cell lineage progression with PORCELAN

February 19, 2025

A new method for detecting gene-expression patterns linked to lineage progression, providing a powerful tool for studying cell state memory across biological systems.

Algorithms and AI for a better world

January 16, 2025

Assistant Professor Manish Raghavan wants computational techniques to help solve societal problems.

How hard is it to prevent recurring blackouts in Puerto Rico?

January 10, 2025

Using the island as a model, researchers demonstrate that the “DyMonDS” framework can improve resilience to extreme weather and ease the integration of new resources.

Researchers reduce bias in AI models while preserving or improving accuracy

December 11, 2024

A new technique identifies and removes the training examples that contribute most to a machine-learning model’s failures.

Beery, Farina, Ghassemi, Kim named AI2050 Early Career Fellows

December 10, 2024

The new cohort of AI2050 Early Career Fellows was announced Dec. 10.

MIT researchers develop an efficient way to train more reliable AI agents

December 6, 2024

The technique could make AI systems better at complex tasks that involve variability.

Improving health, one machine learning system at a time

November 26, 2024

Marzyeh Ghassemi works to ensure health-care models are trained to be robust and fair.

A causal theory for studying the cause-and-effect relationships of genes

November 12, 2024

By sidestepping the need for costly interventions, a new method could potentially reveal gene regulatory programs, paving the way for targeted treatments.

Despite its impressive output, generative AI doesn’t have a coherent understanding of the world

November 6, 2024

Researchers show that even the best-performing large language models don’t form a true model of the world and its rules, and can thus fail unexpectedly on similar tasks.

Study: AI could lead to inconsistent outcomes in home surveillance

October 2, 2024

Researchers find large language models make inconsistent decisions about whether to call the police when analyzing surveillance videos.