
This new metric for measuring uncertainty could flag hallucinations and help users know whether to trust an AI model.

Department of EECS Announces 2025 Promotions
The Department is delighted to announce the following promotions to Associate and Full Professor.

Personalization features can make LLMs more agreeable
Over long-term conversations, accumulated context can cause an LLM to begin mirroring the user’s viewpoints, possibly reducing accuracy or creating a virtual echo chamber.

New control system teaches soft robots the art of staying safe
MIT CSAIL and LIDS researchers developed a mathematically grounded control system that lets soft robots deform, adapt, and interact with people and objects without violating safety limits.

New research detects hidden evidence of mistaken correlations — and provides a method to improve accuracy.

3 Questions: How AI could optimize the power grid
While the growing energy demands of AI are worrying, some techniques can also help make power grids cleaner and more efficient.

New method improves the reliability of statistical estimations
The technique can help scientists in economics, public health, and other fields understand whether to trust the results of their experiments.

Prognostic tool could help clinicians identify high-risk cancer patients
Using a versatile problem-solving framework, researchers show how early relapse in lymphoma patients influences their chance for survival.

With insect-like speed and agility, the tiny robot could someday aid in search-and-rescue missions.

Researchers discover a shortcoming that makes LLMs less reliable
Large language models can learn to mistakenly link certain sentence patterns with specific topics — and may then repeat these patterns instead of reasoning.