Unpacking the bias of large language models

June 20, 2025

In a new study, researchers discover the root cause of a type of bias in LLMs, paving the way for more accurate and reliable AI systems.

Study reveals AI chatbots can detect race, but racial bias reduces response empathy

January 6, 2025

Researchers at MIT, NYU, and UCLA develop an approach to help evaluate whether large language models like GPT-4 are equitable enough to be clinically viable for mental health support.

Despite its impressive output, generative AI doesn’t have a coherent understanding of the world

November 6, 2024

Researchers show that even the best-performing large language models don’t form a true model of the world and its rules, and can thus fail unexpectedly on similar tasks.

Study: AI could lead to inconsistent outcomes in home surveillance

October 2, 2024

Researchers find large language models make inconsistent decisions about whether to call the police when analyzing surveillance videos.