Building tomorrow's robots to handle changing environments and unknowns will require designing systems that constantly calculate their own uncertainty. Leslie Kaelbling and Tomas Lozano-Perez, EECS professors and principal investigators with the Computer Science and Artificial Intelligence Laboratory (CSAIL), recently had their work in this area of artificial intelligence accepted for publication in the International Journal of Robotics Research. [Image left, courtesy Allegra Boverman and Christine Daniloff, MIT News]
Read more in the March 27, 2013 MIT News Office article by Helen Knight titled "Knowing the unknown: Researchers work to build robots’ awareness of their own limitations," also posted below in its entirety.
Robot butlers that tidy your house or cook you a meal have long been the dream of science-fiction writers and artificial intelligence researchers alike.
But if robots are ever going to move effectively around our constantly changing homes or workspaces performing such complex tasks, they will need to be more aware of their own limitations, according to researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).
Most successful robots today tend to be used either in fixed, carefully controlled environments, such as manufacturing plants, or for performing fairly simple tasks such as vacuuming a room, says Leslie Pack Kaelbling, the Panasonic Professor of Computer Science and Engineering at MIT.
Carrying out complicated sequences of actions in a cluttered, dynamic environment such as a home will require robots to be more aware of what they do not know, and therefore need to find out, Kaelbling says. That is because a robot cannot simply look around the kitchen and determine where all the containers are stored, for example, or what you would prefer to eat for dinner. To find these things out, it needs to open the cupboards and look inside, or ask a question.
“I would like to make a robot that could go into your kitchen for the first time, having been in other kitchens before but not yours, and put the groceries away,” Kaelbling says.
And in a paper recently accepted for publication in the International Journal of Robotics Research, she and CSAIL colleague Tomas Lozano-Perez describe a system designed to do just that, by constantly calculating the robot’s level of uncertainty about a given task, such as the whereabouts of an object, or its own location within the room.
The system is based on a module called the state estimation component, which calculates the probability of any given object being what or where the robot thinks it is. In this way, if the robot is not sufficiently certain that an object is the one it is looking for, because the probability of it being that object is too low, it knows it needs to gather more information before taking any action, Kaelbling says.
So, for example, if the robot were trying to pick up a box of cereal from a shelf, it might decide its uncertainty about the position of the object was too high to attempt grasping it. Instead, it would first take a closer look at the object, in order to get a better idea of its exact location, Kaelbling says. “It’s thinking always about its own belief about the world, and how to change its belief, by taking actions that will either gather more information or change the state of the world.”
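The decision Kaelbling describes can be sketched in miniature. The following is an illustrative sketch, not the authors' actual system: the threshold, likelihood values, and action names are all hypothetical. The robot holds a belief (a probability that the object is the cereal box it seeks), takes an information-gathering "look" action whenever that belief is too low, and updates the belief with a simple Bayesian rule until it is confident enough to grasp.

```python
# Hypothetical sketch of belief-driven action selection.
# All constants and action names are invented for illustration.

GRASP_THRESHOLD = 0.95  # assumed confidence required before acting on the world

def bayes_update(prior, likelihood_match, likelihood_nonmatch=0.2):
    """Update the belief after an observation consistent with the target object."""
    numerator = likelihood_match * prior
    return numerator / (numerator + likelihood_nonmatch * (1 - prior))

def decide(belief):
    """Choose between changing the world (grasp) and gathering information (look)."""
    return "grasp" if belief >= GRASP_THRESHOLD else "look_closer"

belief = 0.6  # initial uncertainty about the object's identity and position
actions = []
while decide(belief) == "look_closer":
    actions.append("look_closer")
    belief = bayes_update(belief, likelihood_match=0.9)
actions.append("grasp")
# Starting from belief 0.6, two looks raise the belief past the threshold,
# so the action sequence is: look_closer, look_closer, grasp.
```

The point of the sketch is the control loop, not the numbers: actions that gather information change the robot's belief, and only a sufficiently confident belief triggers an action that changes the state of the world.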
The system also simplifies the process of developing a strategy for performing a given task by making up its plan in stages as it goes along, using what the team calls "hierarchical planning in the now."
“There is this idea in AI that we’re very worried about having an optimal plan, so we’re going to compute very hard for a long time, to ensure we have a complete strategy formulated before we begin execution,” Kaelbling says.
But in many cases, particularly if the environment is new to the robot, it cannot know enough about the area to make such a detailed plan in advance, she says.
So instead the system makes a plan for the first stage of its task and begins executing this before it has come up with a strategy for the rest of the exercise. That means that instead of one big complicated strategy, which consumes a considerable amount of computing power and time, the robot can make many smaller plans as it goes along.
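This staged approach can be illustrated with a toy example. The sketch below is hypothetical (the task, step names, and refinement table are invented, and the real planner is far richer): rather than expanding an entire low-level plan before moving, the robot commits only to the next abstract step, refines it into primitive actions, and executes those before planning the rest.

```python
# Hypothetical miniature of "hierarchical planning in the now":
# refine and execute one abstract step at a time instead of
# computing a complete primitive-level plan up front.

ABSTRACT_PLAN = ["fetch_plate", "clear_table_spot", "place_plate"]

# Invented refinements of abstract steps into primitive actions.
REFINEMENTS = {
    "fetch_plate": ["move_to_cupboard", "grasp_plate"],
    "clear_table_spot": ["move_to_table", "push_aside_clutter"],
    "place_plate": ["put_down_plate"],
}

def plan_in_the_now(abstract_plan):
    """Interleave refinement and execution: expand only the next abstract step,
    run its primitives, then return to the (still abstract) remainder."""
    executed = []
    remaining = list(abstract_plan)
    while remaining:
        step = remaining.pop(0)            # commit only to the first step
        for primitive in REFINEMENTS[step]:
            executed.append(primitive)     # "execute" before planning the rest
    return executed
```

Because later steps stay abstract until they are needed, each planning episode is small and cheap; the trade-off, as the article notes, is that early commitments can turn out to be suboptimal.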
The drawback to this process is that it can lead the robot into making silly mistakes, such as picking up a plate and moving it over to the table without realizing that it first needs to clear some room to put it down, Kaelbling says.
But such small mistakes may be a price worth paying for more capable robots, she says: “As we try to get robots to do bigger and more complicated things in more variable environments, we will have to settle for some amount of suboptimality.”
In addition to household robots, the system could also be used to build more flexible industrial devices, or in disaster relief, Kaelbling says.
Ronald Parr, an associate professor of computer science at Duke University, says much existing work on robot planning tends to be fragmented into different groups working on particular, specialized problems. In contrast, the work of Kaelbling and Lozano-Perez breaks down the walls that exist between these subgroups, and uses hierarchical planning to address the computational challenges that arise when attempting to develop a more general-purpose, problem-solving system. “What’s more, it is demonstrated on a practical, general-purpose robotic platform that could be used for domestic or factory work,” Parr says.