Intelligent task-level grasp mapping for robot control

Comas Jordà, Josep Maria
In the future, robots will enter our everyday lives to help us with various tasks. For complete integration and cooperation with humans, these robots need to be able to acquire new skills. Sensor capabilities for navigation in real human environments and intelligent interaction with humans are some of the key challenges. Learning by demonstration systems focus on the problem of human-robot interaction and let the human teach the robot by demonstrating the task using their own hands. In this thesis, we present a solution to a subproblem within the learning by demonstration field, namely human-robot grasp mapping. Robot grasping of objects in a home or office environment is a challenging problem. Programming by demonstration systems can provide important skills for aiding the robot in the grasping task. The thesis presents two techniques for human-robot grasp mapping: direct robot imitation from a human demonstrator and intelligent grasp imitation. In intelligent grasp mapping, the robot takes the size and shape of the object into consideration, while for direct mapping, only the pose of the human hand is available. Both techniques are evaluated in a simulated environment on several robot platforms. The results show that knowing the object's shape and size for a grasping task improves the robot's precision and performance.
This document is licensed under a Creative Commons: Attribution - NonCommercial - NoDerivs (by-nc-nd) 3.0 license.