MIT, Harvard Researchers Examine Cognitive Theories to Clarify Robot Explanations

Researchers analyzed dozens of research papers to better understand how humans interact with robots.

Getty Images


Researchers will be presenting their work next week at the IEEE Conference on Human-Robot Interaction.

A new report by researchers at the Massachusetts Institute of Technology and Harvard University describes how cognitive science and psychology could help humans collaborate with robots more effectively.

The researchers examined 35 research papers that focused on humans teaching robots new behaviors. In their analysis, they kept two theories in mind: “analogical transfer theory” and the “variation theory of learning.”

The “analogical transfer theory” suggests that humans learn by analogy. When humans interact with a new domain or concept, they implicitly look for something familiar they can use to understand the new entity.

The “variation theory of learning,” on the other hand, suggests that strategic variation can reveal concepts that might be difficult for a person to discern otherwise. People go through a four-step process when they interact with a new concept—repetition, contrast, generalization, and variation, according to the theory.

Using examples from these works, the researchers aimed to show how the theories can help people form conceptual models of robots more quickly, accurately, and flexibly. This, in turn, could improve their understanding of a robot’s behavior.

Humans who build more accurate mental models of a robot are often better collaborators, which is especially important when humans and robots work together in high-stakes environments, said Serena Booth, a graduate student in the Interactive Robotics Group of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and lead author of the paper. This is particularly true in the manufacturing and health care industries. 

“Whether or not we try to help people build conceptual models of robots, they will build them anyway,” she said. “And those conceptual models could be wrong. This can put people in serious danger. It is important that we use everything we can to give that person the best mental model they can build.”

Theoretical approaches should be deliberate

While many research papers incorporated partial elements of one theory, this was most likely due to happenstance, Booth said. Had the researchers consulted these theories at the outset of their work, they may have been able to design more effective experiments.

For instance, when teaching people to interact with a robot, researchers often show many examples of the robot performing the same task. But to build an accurate mental model of that robot, the variation theory suggests that subjects need to see an array of examples of the robot performing the task in different environments, and they also need to see it make mistakes.

“It is very rare in the human-robot interaction literature because it is counterintuitive, but people also need to see negative examples to understand what the robot is not,” Booth said.

These cognitive science theories could also improve physical robot design. If a robotic arm resembles a human arm but moves in ways that are different from human motion, people will struggle to build accurate mental models of the robot, Booth explained.

As suggested by the analogical transfer theory, because people map what they know—a human arm—to the robotic arm, if the movement doesn’t match, people can be confused and have difficulty learning to interact with the robot.

Researchers seek to clarify explanations

Booth and her collaborators also examined how theories of human concept learning can improve the explanations given to people dealing with unfamiliar robots. Better explanations will help build trust in new technologies, she said.

“In explainability, we have a really big problem of confirmation bias,” Booth said. “There are not usually standards around what an explanation is and how a person should use it. As researchers, we often design an explanation method, it looks good to us, and we ship it.”

Instead, the MIT and Harvard scientists suggested that researchers use the learning theories to think about how people will use explanations, which robots often generate to communicate the policies they use to make decisions. A curriculum that helps users understand what an explanation method means, when to use it, and where it does not apply would build a stronger understanding of a robot’s behavior, Booth said.

Based on their analysis, the team offered a number of recommendations on how research on human-robot teaching can be improved. For instance, they suggested that researchers incorporate analogical transfer theory by guiding people to make appropriate comparisons when they learn to work with a new robot. Providing guidance can ensure that people use fitting analogies, so they aren’t surprised or confused by the robot’s actions, Booth said.

They also suggested that including positive and negative examples of robot behavior, and exposing users to how strategic variations of parameters in a robot’s “policy” affect its behavior, eventually across strategically varied environments, can help humans learn better and faster. The robot’s policy is a mathematical function that assigns probabilities to each action the robot can take.
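The notion of a policy as a function assigning probabilities to actions can be sketched in a few lines of code. The sketch below is purely illustrative and not from the paper: the states, actions, and probabilities are hypothetical, and the policy is a simple lookup table rather than a learned function. It shows how the same robot can mostly take one action in one context (a positive example for the user) and mostly avoid it in another (the kind of negative, varied example the variation theory recommends showing).

```python
import random

# Hypothetical tabular stochastic policy: each state maps to a
# probability distribution over the actions the robot can take.
policy = {
    "object_in_reach": {"grasp": 0.8, "wait": 0.15, "retract": 0.05},
    "object_far":      {"grasp": 0.1, "wait": 0.3,  "retract": 0.6},
}

def sample_action(state):
    """Sample one action according to the policy's probabilities."""
    actions, probs = zip(*policy[state].items())
    return random.choices(actions, weights=probs, k=1)[0]

def most_likely_action(state):
    """The action a user would most often observe in this state."""
    return max(policy[state], key=policy[state].get)

# Positive example: the behavior a user usually sees.
print(most_likely_action("object_in_reach"))  # grasp
# Negative example: in a different context the same robot
# mostly does NOT grasp, which users also need to observe.
print(most_likely_action("object_far"))       # retract
```

Showing demonstrations sampled from such a policy across strategically varied states, rather than repeating one state, is the kind of experimental design the variation theory motivates.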

“We’ve been running user studies for years, but we’ve been shooting from the hip in terms of our own intuition as far as what would or would not be helpful to show the human,” said Elena Glassman, an assistant professor of computer science at Harvard’s John A. Paulson School of Engineering and Applied Sciences. “The next step would be to be more rigorous about grounding this work in theories of human cognition.”

Now that this initial literature review using cognitive science theories is complete, Booth said her team plans to test its recommendations by rebuilding some of the experiments she studied and seeing if the theories actually improve human learning.

Scientists to present findings

Booth and her advisor, Julie Shah, an MIT professor of aeronautics and astronautics and the director of the Interactive Robotics Group, co-authored this paper in collaboration with researchers from Harvard. Glassman was the primary advisor on the project.

Harvard co-authors also included graduate student Sanjana Sharma and research assistant Sarah Chung. Their work was supported, in part, by the National Science Foundation.

The research will be presented at the IEEE Conference on Human-Robot Interaction from March 7 to 10 online.

