N-QGN: Navigation Map from a Monocular Camera using Quadtree Generating Networks

Conference paper, 2022

Abstract

Monocular depth estimation has been a popular area of research for several years, especially since self-supervised networks have shown increasingly good results in bridging the gap with supervised and stereo methods. However, these approaches focus on dense 3D reconstruction and sometimes on tiny details that are superfluous for autonomous navigation. In this paper, we propose to address this issue by estimating the navigation map under a quadtree representation. The objective is to produce an adaptive depth map prediction that only extracts the details that are essential for obstacle avoidance, while regions of 3D space that leave large room for navigation are given only approximate distances. Experiments on the KITTI dataset show that our method significantly reduces the amount of output information without a major loss of accuracy.
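To illustrate the quadtree representation described in the abstract, the sketch below adaptively subdivides a dense depth map so that only cells with large depth variation (typically around obstacles) are refined, while free space is summarized by a single approximate distance per cell. This is a minimal, hypothetical example using NumPy with an assumed range-based splitting threshold; the function and parameter names are illustrative and this is not the authors' Quadtree Generating Network.

# Illustrative sketch only: adaptive quadtree decomposition of a dense depth map,
# mirroring the idea of refining detail only where depth varies sharply (e.g., near
# obstacles) and keeping coarse cells elsewhere. The threshold-based splitting rule
# and all names are assumptions, not the authors' learned network.
import numpy as np

def quadtree_cells(depth, x, y, size, max_range_m=1.0, min_size=4):
    """Recursively split a square patch of the depth map.

    A cell stays a single leaf (one approximate distance) when the depth values
    inside it span less than `max_range_m`; otherwise it is split into four
    children until `min_size` pixels is reached.
    """
    patch = depth[y:y + size, x:x + size]
    if size <= min_size or patch.max() - patch.min() <= max_range_m:
        # Coarse leaf: represent the whole cell by its mean depth.
        return [(x, y, size, float(patch.mean()))]
    half = size // 2
    cells = []
    for dy in (0, half):
        for dx in (0, half):
            cells += quadtree_cells(depth, x + dx, y + dy, half,
                                    max_range_m, min_size)
    return cells

if __name__ == "__main__":
    # Toy 64x64 depth map: a flat far plane with a near obstacle in one corner.
    depth = np.full((64, 64), 20.0)
    depth[8:24, 8:24] = 2.0
    leaves = quadtree_cells(depth, 0, 0, 64)
    print(f"{len(leaves)} quadtree leaves instead of {64 * 64} pixels")

On this toy example the obstacle region is refined down to small cells while the uniform background collapses to a handful of large leaves, which is the kind of output-size reduction the paper targets.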
Main file: ICRA_2022-4.pdf (3.67 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03588304, version 1 (24-02-2022)

Identifiers

HAL Id: hal-03588304
DOI: 10.1109/ICRA46639.2022.9812362

Cite

Daniel Braun, Olivier Morel, Pascal Vasseur, Cédric Demonceaux. N-QGN: Navigation Map from a Monocular Camera using Quadtree Generating Networks. IEEE International Conference on Robotics and Automation (ICRA 2022), May 2022, Philadelphia, United States. ⟨10.1109/ICRA46639.2022.9812362⟩. ⟨hal-03588304⟩