Automatic Visual Bag-of-Words for Online Robot Navigation and Mapping

Full Text: AutomaticVisualBag.pdf (embargoed access)
Detecting already-visited regions based on their visual appearance helps reduce drift and position uncertainty in robot navigation and mapping. Inspired by content-based image retrieval, an efficient approach is the use of visual vocabularies to measure similarity between images. This way, images corresponding to the same scene region can be associated. State-of-the-art proposals that address this topic use prebuilt vocabularies, which generally require a priori knowledge of the environment. We propose a novel method for appearance-based navigation and mapping in which the visual vocabularies are built online, thus eliminating the need for prebuilt data. We also show that the proposed technique allows efficient loop-closure detection, even at small vocabulary sizes, resulting in higher computational efficiency.
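The core idea of an online visual vocabulary can be illustrated with a minimal sketch: each new feature descriptor either joins its nearest existing visual word or, if it is too far from all of them, founds a new word; images are then compared as bag-of-words histograms. The class name, distance threshold, and cosine-similarity scoring below are illustrative assumptions, not the paper's actual method.

```python
import numpy as np


class OnlineVocabulary:
    """Illustrative online visual vocabulary (not the paper's algorithm).

    A descriptor is assigned to its nearest visual word; if it lies
    farther than `threshold` from every word, it becomes a new word.
    """

    def __init__(self, threshold=0.5):
        self.threshold = threshold  # assumed distance threshold
        self.words = []             # visual-word centroids, grown online

    def quantize(self, descriptor):
        """Return the word index of a descriptor, adding a word if needed."""
        d = np.asarray(descriptor, dtype=float)
        if self.words:
            dists = [np.linalg.norm(d - w) for w in self.words]
            idx = int(np.argmin(dists))
            if dists[idx] <= self.threshold:
                return idx
        self.words.append(d)
        return len(self.words) - 1

    def bow(self, descriptors):
        """Build a normalized bag-of-words histogram for one image."""
        indices = [self.quantize(d) for d in descriptors]
        hist = np.zeros(len(self.words))
        for i in indices:
            hist[i] += 1.0
        return hist / hist.sum()


def similarity(h1, h2):
    """Cosine similarity between two histograms (zero-padded to align)."""
    n = max(len(h1), len(h2))
    a, b = np.zeros(n), np.zeros(n)
    a[: len(h1)] = h1
    b[: len(h2)] = h2
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

A loop-closure candidate would then be an image pair whose histogram similarity exceeds some acceptance score; because the vocabulary grows only as new appearance is observed, no prebuilt training data is needed.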
All rights reserved