V4RL Code Releases
- Code: Event-based Feature Tracking
- Code: COVINS
- Code: Multi-robot Coordination for Autonomous Navigation in Partially Unknown Environments
- Aerial Single-view Depth Completion: Code + Datasets + Simulator
- Code: CCM-SLAM
- Code: Real-time Mesh-based Scene Estimation
- Code: Visual-Inertial Relative Pose Estimation for Aerial Vehicles
V4RL Setup Release
Wide-baseline Place Recognition Dataset
This dataset provides sequences from two synthetic and two real outdoor scenes with visual, inertial, and ground-truth information, recorded specifically for place recognition tasks. The real datasets exhibit not only significant viewpoint changes, but also illumination and situational changes. The synthetic datasets are recorded on scenes generated via photogrammetric reconstruction, with camera trajectories produced by a physics simulator of a small aircraft to yield realistic aerial sequences exhibiting viewpoint changes that are at times very strong (i.e. wide-baseline). To the best of our knowledge, these synthetic datasets are the first to isolate the challenge of viewpoint changes from all others (i.e. variations in scale, scene dynamics, and illumination), aiding the benchmarking of place recognition methods with respect to viewpoint tolerance.
The real datasets were recorded with a side-looking camera from both aerial and handheld setups in order to revisit the same scene from very different viewpoints. All real sequences were recorded with a high-quality visual-inertial sensor providing monocular, grayscale, global-shutter images at 20 Hz together with time-synchronized inertial measurements. The synthetic datasets contain visual and inertial measurements reproducing the same sensor setup as the real datasets. This dataset is freely available and can be downloaded here.
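When working with such visual-inertial data, the 20 Hz images must typically be associated with the higher-rate inertial stream by timestamp. The dataset's actual file layout is not specified here, so the following is only a minimal sketch of nearest-neighbour timestamp association between two sorted stamp lists; the function name and the example rates are illustrative assumptions, not part of the dataset's API.

```python
import bisect

def associate(image_stamps, imu_stamps):
    """For each image timestamp, return the index of the closest IMU timestamp.

    Both input lists are assumed to be sorted in ascending order,
    with timestamps in seconds (hypothetical format).
    """
    matches = []
    for t in image_stamps:
        i = bisect.bisect_left(imu_stamps, t)
        # Consider the two neighbouring IMU stamps and keep the nearer one.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(imu_stamps)]
        matches.append(min(candidates, key=lambda j: abs(imu_stamps[j] - t)))
    return matches

# Example: images at 20 Hz (50 ms apart), a hypothetical IMU at 200 Hz (5 ms apart).
image_stamps = [0.000, 0.050, 0.100]
imu_stamps = [k * 0.005 for k in range(41)]
print(associate(image_stamps, imu_stamps))  # -> [0, 10, 20]
```

In practice one would interpolate (or pre-integrate) the inertial measurements between image timestamps rather than picking a single nearest sample, but the association step above is the common starting point.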
Users of this dataset are asked to cite the following paper, where it was originally made publicly available:
Fabiola Maffra, Lucas Teixeira, Zetao Chen and Margarita Chli, "Real-time Wide-baseline Place Recognition using Depth Completion", in IEEE Robotics and Automation Letters, 2019.