large language models

Personalization features can make LLMs more agreeable

February 18, 2026

The context of long-term conversations can cause an LLM to begin mirroring the user’s viewpoints, possibly reducing accuracy or creating a virtual echo chamber.

Enabling small language models to solve complex reasoning tasks

December 16, 2025

The “self-steering” DisCIPL system directs small models to work together on tasks with constraints, like itinerary planning and budgeting.

Researchers discover a shortcoming that makes LLMs less reliable

December 1, 2025

Large language models can learn to mistakenly link certain sentence patterns with specific topics — and may then repeat these patterns instead of reasoning.

Teaching large language models how to absorb new knowledge

November 13, 2025

With a new method developed at MIT, an LLM behaves more like a student, writing notes that it studies to memorize new information.

MIT researchers propose a new model for legible, modular software

November 12, 2025

The coding framework uses modular concepts and simple synchronization rules to make software clearer, safer, and easier for LLMs to generate.

How to build AI scaling laws for efficient LLM training and budget maximization

September 17, 2025

MIT-IBM Watson AI Lab researchers have developed a universal guide for estimating how large language models will perform based on smaller models in the same family.

The unique, mathematical shortcuts language models use to predict dynamic scenarios

July 21, 2025

Language models follow changing situations using clever arithmetic instead of sequential tracking. By controlling when these shortcuts are used, engineers could improve the systems’ capabilities.

LLMs factor in unrelated information when recommending medical treatments

June 25, 2025

Researchers find nonclinical information in patient messages — like typos, extra white space, and colorful language — reduces the accuracy of an AI model.

Unpacking the bias of large language models

June 20, 2025

In a new study, researchers discover the root cause of a type of bias in LLMs, paving the way for more accurate and reliable AI systems.

Study shows vision-language models can’t handle queries with negation words

May 14, 2025

Words like “no” and “not” can cause this popular class of AI models to fail unexpectedly in high-stakes settings, such as medical diagnosis.