Abstract
Manually programming robots is difficult, impeding more widespread use of robotic systems. In response, efforts are being made to develop robots that use imitation learning. With such systems, a robot learns by watching humans perform tasks. However, most imitation learning systems replicate a demonstrator's actions rather than obtaining a deeper understanding of why those actions occurred. Here we introduce an imitation learning framework based on causal reasoning that infers a demonstrator's intentions. As with imitation learning in people, our approach constructs an explanation for a demonstrator's actions, and generates a plan based on this explanation to carry out the same goals rather than trying to faithfully reproduce the demonstrator's precise motor actions. This enables generalization to new situations. We present novel causal inference algorithms for imitation learning and establish their soundness, completeness, and complexity characteristics. Our approach is validated using a physical robot, which successfully learns and generalizes skills involving bimanual manipulation. Human performance on similar skills is reported. Computer experiments using the Monroe Plan Corpus further validate our approach. These results suggest that causal reasoning is an effective unifying principle for imitation learning. Our system provides a platform for exploring neural implementations of this principle in future work.
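The abstract's keyword list points to parsimonious covering theory (PCT) as the abductive machinery behind intention inference: among all sets of candidate intentions whose known effects cover the observed actions, prefer the smallest. The following is a minimal sketch of that idea only; the causal knowledge base, task names, and the brute-force search are all illustrative assumptions, not the paper's actual algorithms.

```python
from itertools import combinations

# Hypothetical causal knowledge: each candidate intention (cause)
# maps to the observable actions (effects) it can explain.
# All task and action names here are illustrative, not from the paper.
CAUSES = {
    "make_tea":    {"boil_water", "fetch_cup", "add_teabag"},
    "make_coffee": {"boil_water", "fetch_cup", "grind_beans"},
    "wash_cup":    {"fetch_cup", "turn_on_tap"},
}

def parsimonious_covers(observed):
    """Return all minimum-cardinality sets of causes whose combined
    effects cover every observed action (set-cover-style abduction)."""
    names = list(CAUSES)
    for size in range(1, len(names) + 1):
        covers = [set(combo) for combo in combinations(names, size)
                  if observed <= set().union(*(CAUSES[c] for c in combo))]
        if covers:
            return covers  # smallest explanations first, per parsimony
    return []  # no combination of known causes explains the observations

print(parsimonious_covers({"boil_water", "fetch_cup", "add_teabag"}))
# → [{'make_tea'}]
```

Having recovered an intention such as `make_tea`, an imitator in this framework would then plan its own actions to achieve that goal rather than replaying the demonstrator's exact motions, which is what allows transfer to situations the demonstration never covered.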
Original language | English (US) |
---|---|
Pages (from-to) | 177-193 |
Number of pages | 17 |
Journal | IEEE Transactions on Cognitive and Developmental Systems |
Volume | 10 |
Issue number | 2 |
DOIs | |
State | Published - Jun 2018 |
Externally published | Yes |
Keywords
- Abduction
- artificial intelligence (AI)
- cause-effect reasoning
- cognitive robotics
- imitation learning
- parsimonious covering theory (PCT)
ASJC Scopus subject areas
- Software
- Artificial Intelligence