
Researchers discover a shortcoming that makes LLMs less reliable
Large language models can learn to mistakenly link certain sentence patterns with specific topics — and may then repeat these patterns instead of reasoning.

Charting the future of AI, from safer answers to faster thinking
MIT PhD students who interned with the MIT-IBM Watson AI Lab Summer Program are pushing AI tools to be more flexible, efficient, and grounded in truth.

This is your brain without sleep
New research shows attention lapses due to sleep deprivation coincide with a flushing of fluid from the brain — a process that normally occurs during sleep.

Researchers find nonclinical information in patient messages — like typos, extra white space, and colorful language — reduces the accuracy of an AI model.

Words like “no” and “not” can cause this popular class of AI models to fail unexpectedly in high-stakes settings, such as medical diagnosis.

AI in health should be regulated, but don’t forget about the algorithms, researchers say
In a recent commentary, a team from MIT, Equality AI, and Boston University highlights the gaps in regulation for AI models and non-AI algorithms in health care.

A new technique identifies and removes the training examples that contribute most to a machine-learning model’s failures.

Beery, Farina, Ghassemi, Kim named AI2050 Early Career Fellows
The new crop of AI2050 Early Career Fellows was announced Dec. 10.

Improving health, one machine learning system at a time
Marzyeh Ghassemi works to ensure health care models are trained to be robust and fair.

3 Questions: Leveraging insights to enable clinical outcomes
Thomas Heldt, associate director of IMES, describes how he collaborates closely with MIT colleagues and others at Boston-area hospitals.