Intelligent task-level grasp mapping for robot control
dc.contributor
dc.contributor.author
dc.contributor.other
dc.date.accessioned
2013-02-28T10:06:22Z
dc.date.available
2013-02-28T10:06:22Z
dc.date.issued
2006-06
dc.identifier.uri
dc.description.abstract
In the future, robots will enter our everyday lives to help us with various tasks.
For complete integration and cooperation with humans, these robots need
to be able to acquire new skills. Sensing for navigation in real human
environments and intelligent interaction with humans are among the key
challenges.
Learning by demonstration systems focus on the problem of human-robot
interaction and let the human teach the robot by demonstrating the task with
his or her own hands. In this thesis, we present a solution to a subproblem
within the learning by demonstration field, namely human-robot grasp mapping.
Robot grasping of objects in a home or office environment is a challenging
problem. Programming by demonstration systems can give the robot important
skills that aid it in the grasping task.
The thesis presents two techniques for human-robot grasp mapping: direct
robot imitation from a human demonstrator and intelligent grasp mapping. With
intelligent grasp mapping, the robot takes the size and shape of the object
into consideration, while with direct mapping, only the pose of the human
hand is available.
Both techniques are evaluated in a simulated environment on several robot
platforms. The results show that knowing the object's shape and size in a
grasping task improves the robot's precision and performance.
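As a hypothetical illustration of the distinction drawn above (not code from the thesis), the following Python sketch contrasts the two strategies: direct mapping reproduces only the demonstrated hand pose, while intelligent mapping also consults the object's size and shape to choose a grasp type. All names (HandPose, ObjectModel, the grasp-type labels and thresholds) are assumptions made for illustration.

```python
# Hypothetical sketch of the two grasp-mapping strategies described in the
# abstract; the types, thresholds, and grasp labels are illustrative
# assumptions, not the thesis's actual implementation.
from dataclasses import dataclass


@dataclass
class HandPose:
    position: tuple      # (x, y, z) of the demonstrator's hand
    orientation: tuple   # quaternion (x, y, z, w)


@dataclass
class ObjectModel:
    shape: str           # e.g. "cylinder", "box", "sphere"
    width_mm: float      # characteristic object size


def direct_mapping(hand: HandPose) -> dict:
    """Direct imitation: only the human hand pose is available,
    so the robot hand simply reproduces it."""
    return {"grasp_type": "mimic_hand_pose",
            "position": hand.position,
            "orientation": hand.orientation}


def intelligent_mapping(hand: HandPose, obj: ObjectModel) -> dict:
    """Intelligent mapping: object size and shape refine the grasp choice."""
    if obj.shape == "cylinder" and obj.width_mm > 60:
        grasp = "power_wrap"        # large cylinder: wrap the whole hand
    elif obj.width_mm <= 30:
        grasp = "precision_pinch"   # small object: fingertip pinch
    else:
        grasp = "parallel_grip"     # default two-finger grip
    return {"grasp_type": grasp,
            "position": hand.position,
            "orientation": hand.orientation}


if __name__ == "__main__":
    hand = HandPose(position=(0.4, 0.1, 0.9), orientation=(0, 0, 0, 1))
    mug = ObjectModel(shape="cylinder", width_mm=80)
    print(direct_mapping(hand))            # pose only
    print(intelligent_mapping(hand, mug))  # pose plus object knowledge
```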
dc.format.mimetype
application/pdf
application/zip
dc.language.iso
eng
dc.relation.ispartofseries
Enginyeria Tècnica. Informàtica de Sistemes (ETIS)
dc.rights
Attribution-NonCommercial-NoDerivs 3.0 Spain
dc.rights.uri
dc.title
Intelligent task-level grasp mapping for robot control
dc.type
info:eu-repo/semantics/bachelorThesis
dc.rights.accessRights
info:eu-repo/semantics/openAccess
dc.type.version
info:eu-repo/semantics/draft