Deep Learning on Meshes for Grasp Segmentation and Quality Assessment
In recent years, the research community has paid a great deal of attention to deep learning, mostly in LLMs (Large Language Models) and in the vision domain using natural and medical images. However, relatively little research has been conducted on deep learning for polygonal meshes. Polygonal meshes are an effective representation of 3D shapes: they explicitly express both the surface and the topology of a shape, and because they are not restricted to uniform sampling, they can portray both large flat expanses and sharp, fine details. Polygonal meshes therefore play a crucial role in robotic applications such as grasping. Robotic grasping has been an active area of research for decades, with many successful approaches relying on hand-designed heuristics and feature engineering. Unfortunately, generating suitable grasps and assessing grasp quality at scale with existing methods is expensive.
Hence, this master’s thesis implements an end-to-end pipeline for producing optimal grasps for robotic arms by segmenting mesh edges with a deep learning network, MeshCNN. ShapeNetCore, a well-known 3D dataset, is used for this purpose. Although the ShapeNet dataset contains fifty-five categories, this thesis develops a training algorithm for five categories considered graspable by a parallel-jaw gripper. Prior to training, the 3D models are converted into watertight 2-manifold meshes using a robust surface-reconstruction approach. For data annotation, a grasp sampler algorithm is developed. Finally, the performance of the implemented pipeline is evaluated quantitatively and qualitatively.
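The abstract does not detail the grasp sampler, but samplers for parallel-jaw grippers commonly rely on an antipodal contact test under a Coulomb friction model. The following is a minimal sketch of such a test (function names and the friction coefficient `mu` are illustrative assumptions, not taken from the thesis):

```python
import math

def _normalize(v):
    """Return v scaled to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def is_antipodal(p1, n1, p2, n2, mu=0.5):
    """Check whether two surface contacts form an antipodal grasp.

    p1, p2 : contact points on the mesh surface
    n1, n2 : outward surface normals at those points
    mu     : assumed Coulomb friction coefficient

    The gripper closes along the axis from p1 to p2; the grasp is
    antipodal if that axis lies inside the friction cone at both
    contacts (cone half-angle = atan(mu) around the inward normal).
    """
    d = _normalize(tuple(b - a for a, b in zip(p1, p2)))
    inward1 = tuple(-x for x in _normalize(n1))
    inward2 = tuple(-x for x in _normalize(n2))
    cone = math.atan(mu)
    # Angle between closing direction and inward normal at each contact.
    a1 = math.acos(max(-1.0, min(1.0, sum(x * y for x, y in zip(d, inward1)))))
    a2 = math.acos(max(-1.0, min(1.0, sum(-x * y for x, y in zip(d, inward2)))))
    return a1 <= cone and a2 <= cone
```

For example, two contacts on opposite faces of a box (normals pointing apart along the closing axis) pass the test, while a contact pair on perpendicular faces fails, since the closing axis falls outside one friction cone.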