Automatic Visual Bag-of-Words for Online Robot Navigation and Mapping

Detecting already-visited regions based on their visual appearance helps reduce drift and position uncertainty in robot navigation and mapping. Inspired by content-based image retrieval, an efficient approach is to use visual vocabularies to measure similarity between images, so that images corresponding to the same scene region can be associated. State-of-the-art proposals that address this topic use prebuilt vocabularies, which generally require a priori knowledge of the environment. We propose a novel method for appearance-based navigation and mapping in which the visual vocabularies are built online, thus eliminating the need for prebuilt data. We also show that the proposed technique allows efficient loop-closure detection even at small vocabulary sizes, resulting in higher computational efficiency.
All rights reserved
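The abstract's core idea — quantizing image descriptors against a vocabulary that grows online, then comparing bag-of-words histograms to detect loop closures — can be illustrated with a minimal sketch. This is not the paper's algorithm: the incremental clustering rule (add a new word whenever a descriptor is farther than a fixed `radius` from every existing word), the toy 2-D descriptors, and all names here are illustrative assumptions.

```python
import math


class OnlineVocabulary:
    """Toy online visual vocabulary: a new word is created whenever a
    descriptor lies farther than `radius` from every existing word.
    (Illustrative stand-in for online vocabulary construction.)"""

    def __init__(self, radius):
        self.radius = radius
        self.words = []  # list of descriptor vectors acting as visual words

    def quantize(self, descriptor):
        """Return the index of the nearest word, adding a new one if needed."""
        if self.words:
            i, d = min(
                ((i, math.dist(w, descriptor)) for i, w in enumerate(self.words)),
                key=lambda t: t[1],
            )
            if d <= self.radius:
                return i
        self.words.append(descriptor)
        return len(self.words) - 1


def bow_histogram(vocab, descriptors):
    """Bag-of-words histogram: count how often each word fires in an image."""
    hist = {}
    for d in descriptors:
        w = vocab.quantize(d)
        hist[w] = hist.get(w, 0) + 1
    return hist


def cosine_similarity(h1, h2):
    """Cosine similarity between two sparse histograms (0 = disjoint)."""
    dot = sum(v * h2.get(k, 0) for k, v in h1.items())
    n1 = math.sqrt(sum(v * v for v in h1.values()))
    n2 = math.sqrt(sum(v * v for v in h2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0


# Toy 2-D "descriptors" for three images; a and b view the same region.
vocab = OnlineVocabulary(radius=0.5)
h_a = bow_histogram(vocab, [(0.0, 0.0), (1.0, 1.0), (0.1, 0.1)])
h_b = bow_histogram(vocab, [(0.05, 0.05), (1.05, 0.95)])
h_c = bow_histogram(vocab, [(5.0, 5.0), (6.0, 6.0)])

sim_ab = cosine_similarity(h_a, h_b)  # same region: high similarity
sim_ac = cosine_similarity(h_a, h_c)  # different region: zero overlap
```

A loop-closure detector would flag image pairs whose similarity exceeds a threshold; here `sim_ab` is high while `sim_ac` is zero, since images of a new region only fire newly created words.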