Despite its impressive output, generative AI doesn’t have a coherent understanding of the world
Researchers show that even the best-performing large language models don’t form a true model of the world and its rules, and can thus fail unexpectedly on similar tasks.
Making it easier to verify an AI model’s responses
By allowing users to clearly see data referenced by a large language model, this tool speeds manual validation to help users spot AI errors.
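The underlying idea is to let generated text carry explicit pointers back to the source data, so each claim can be checked in one glance. Below is a minimal sketch of that idea; the `[ref:key]` tag format and the `resolve_references` helper are assumptions for illustration, not the tool's actual syntax.

```python
# Sketch of the verification idea: generated text carries inline references
# to source records, so a reader can jump straight to the cited data.
# The "[ref:key]" tag format is an assumption, not the tool's real syntax.

import re

source_data = {
    "pop_2020": "Boston population, 2020 census: 675,647",
    "area": "Boston land area: 48.4 sq mi",
}

generation = ("Boston had 675,647 residents in 2020 [ref:pop_2020] "
              "across 48.4 square miles [ref:area].")

def resolve_references(text: str, data: dict) -> str:
    """Replace each [ref:key] tag with the source record it points to."""
    return re.sub(r"\[ref:(\w+)\]",
                  lambda m: f"(source: {data[m.group(1)]})", text)

print(resolve_references(generation, source_data))
```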
Enhancing LLM collaboration for smarter, more efficient solutions
“Co-LLM” algorithm helps a general-purpose AI model collaborate with an expert large language model by combining the best parts of both answers, leading to more factual responses.
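Co-LLM's token-level handoff can be pictured with a small sketch. What follows is a simplified illustration, not the published algorithm: a fixed confidence threshold stands in for Co-LLM's learned gate, and the two "models" are toy lookup tables.

```python
# Hypothetical sketch of token-level collaboration in the spirit of Co-LLM:
# a gate decides, token by token, whether to keep the base model's token or
# defer to an expert model. Both "models" here are toy lookup tables.

from typing import Callable, List, Tuple

NextToken = Callable[[List[str]], Tuple[str, float]]  # (token, confidence)

def generate_collaboratively(base: NextToken, expert: NextToken,
                             prompt: List[str], max_tokens: int = 10,
                             defer_below: float = 0.5) -> List[str]:
    """Interleave tokens from two models. Co-LLM learns this gate;
    a fixed confidence threshold is used here for simplicity."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        token, confidence = base(tokens)
        if confidence < defer_below:      # base model is unsure: defer
            token, _ = expert(tokens)
        if token == "<eos>":
            break
        tokens.append(token)
    return tokens[len(prompt):]

def base_model(ctx):    # knows grammar, not the fact
    table = {"The": ("capital", 0.9), "capital": ("of", 0.9),
             "of": ("France", 0.8), "France": ("is", 0.9),
             "is": ("umm", 0.1)}          # low confidence on the fact
    return table.get(ctx[-1], ("<eos>", 1.0))

def expert_model(ctx):  # knows the fact
    return ("Paris", 0.99) if ctx[-1] == "is" else ("<eos>", 1.0)

print(generate_collaboratively(base_model, expert_model, ["The"]))
# -> ['capital', 'of', 'France', 'is', 'Paris']
```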
Method prevents an AI model from being overconfident about wrong answers
More efficient than other approaches, the “Thermometer” technique could help someone know when they should trust a large language model.
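Thermometer builds on temperature scaling, a standard calibration trick in which a model's raw scores are divided by a temperature before the softmax. The sketch below shows the scaling step itself; in Thermometer an auxiliary model predicts the temperature for a new task, whereas here the temperature is simply supplied.

```python
# Illustrative temperature scaling, the calibration idea underlying
# Thermometer (which trains an auxiliary model to *predict* a good
# temperature for a new task; here the temperature is given directly).

import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max()            # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def calibrate(logits: np.ndarray, temperature: float) -> np.ndarray:
    """Divide logits by a temperature > 1 to soften overconfident outputs."""
    return softmax(logits / temperature)

logits = np.array([4.0, 1.0, 0.5])                  # raw scores for 3 answers
print(softmax(logits).round(3))                     # [0.926 0.046 0.028] -- peaked
print(calibrate(logits, temperature=2.0).round(3))  # [0.716 0.16  0.124] -- softer
```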
In controlled experiments, MIT CSAIL researchers discover simulations of reality developing deep within LLMs, indicating an understanding of language beyond simple mimicry.
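Findings like this typically rest on probing: training small classifiers on a model's hidden activations to test whether they encode the state of the world the text describes. The sketch below illustrates the probing idea on synthetic data; the "hidden states" and their linear encoding are assumptions made for illustration, not the researchers' setup.

```python
# Probing-classifier sketch: train a small linear probe on "hidden states"
# to test whether they encode a world-state variable. Synthetic activations
# stand in for real LLM states -- an assumption for illustration only.

import numpy as np

rng = np.random.default_rng(0)

# 500 fake hidden states (dim 64) that linearly encode a binary world-state
# bit, plus Gaussian noise.
n, d = 500, 64
direction = rng.normal(size=d)                 # hypothetical encoding direction
labels = rng.integers(0, 2, size=n)            # the world-state bit
hidden = rng.normal(size=(n, d)) + np.outer(2 * labels - 1, direction)

# Train a linear probe (logistic regression via plain gradient descent).
w = np.zeros(d)
for _ in range(200):
    p = 1 / (1 + np.exp(-hidden @ w))
    w -= 0.1 * hidden.T @ (p - labels) / n

acc = ((hidden @ w > 0) == labels).mean()
print(f"probe accuracy: {acc:.2f}")  # high accuracy => state is decodable
```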
The approach can detect anomalies in data recorded over time, without the need for any training.
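For a sense of what "no training" can mean in this setting, the sketch below flags points that deviate sharply from a rolling median using a robust z-score. It is a generic statistical stand-in chosen for illustration, not the method described in the article.

```python
# A minimal, training-free anomaly detector for time-series data: flag points
# that deviate strongly from a rolling median. A generic stand-in to
# illustrate "no training required," not the article's method.

import numpy as np

def rolling_zscore_anomalies(series: np.ndarray, window: int = 20,
                             threshold: float = 3.5) -> np.ndarray:
    """Return indices whose modified z-score against the preceding
    `window` points exceeds `threshold` (MAD-based, robust to outliers)."""
    anomalies = []
    for i in range(window, len(series)):
        ctx = series[i - window:i]
        median = np.median(ctx)
        mad = np.median(np.abs(ctx - median)) or 1e-9   # robust spread
        score = 0.6745 * abs(series[i] - median) / mad  # modified z-score
        if score > threshold:
            anomalies.append(i)
    return np.array(anomalies)

# Smooth sine wave with one injected spike at index 150.
t = np.linspace(0, 10, 300)
signal = np.sin(t)
signal[150] += 4.0
print(rolling_zscore_anomalies(signal))  # -> [150]
```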
Associate Professor Jonathan Ragan-Kelley optimizes how computer graphics and images are processed for the hardware of today and tomorrow.
Three neurosymbolic methods help language models find better abstractions within natural language, then use those representations to execute complex tasks.
Engineering household robots to have a little common sense
With help from a large language model, MIT engineers enabled robots to self-correct after missteps and carry on with their chores.
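The approach can be caricatured as a resume-from-failure loop: when a subtask goes wrong, a language model maps the misstep to a recovery plan, and the robot continues from the failed subtask instead of restarting the whole chore. In this toy sketch, both the executor and the "LLM" are stubs, assumptions made purely for illustration.

```python
# Hypothetical sketch of LLM-guided self-correction: a chore is split into
# subtasks; after a misstep, the robot resumes from the failed subtask
# rather than starting over. The "LLM" here is a stub that maps an observed
# failure to a recovery action -- an assumption for illustration.

import random

SUBTASKS = ["reach scoop", "scoop marbles", "carry to bowl", "pour into bowl"]

def execute(subtask: str) -> bool:
    """Toy executor: occasionally fails (e.g., the robot gets nudged)."""
    return random.random() > 0.3

def llm_recovery(subtask: str) -> str:
    """Stub standing in for a large language model proposing a fix."""
    return f"re-attempt '{subtask}' from current state"

def run_task(subtasks, max_retries: int = 5) -> bool:
    for subtask in subtasks:
        for _ in range(max_retries):
            if execute(subtask):
                print(f"done: {subtask}")
                break
            print(f"misstep during '{subtask}'; plan: {llm_recovery(subtask)}")
        else:
            return False  # gave up on this subtask
    return True

random.seed(1)
print("task succeeded:", run_task(SUBTASKS))
```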
Student Spotlight: Isabella Pedraza Piñeros
Our first subject, Isabella Pedraza Piñeros, is a first-year MEng student in the Department of EECS; she earned her bachelor's degree in Computer Science and Engineering in spring 2023.