Code: COVINS-G
COVINS-G: A Generic Framework for Collaborative Visual-Inertial SLAM and Multi-Agent 3D Mapping
Version 2.0 (Support for COVINS-G: A Generic Back-end for Collaborative Visual-Inertial SLAM)
COVINS is an accurate, scalable, and versatile collaborative visual-inertial SLAM system that enables a group of agents to simultaneously co-localize and jointly map an environment. COVINS provides a server back-end for collaborative SLAM, running on a local machine or a remote cloud instance, which generates collaborative estimates from map data contributed by the different agents, each running Visual-Inertial Odometry (VIO) and sharing its map with the back-end.
With the COVINS-G release, we make the server back-end flexible, enabling compatibility with any VIO or stereo front-end, including, for example, off-the-shelf cameras with odometry capabilities, such as the Intel RealSense T265.
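To give an idea of the kind of generic agent-to-server exchange this implies, below is a minimal conceptual sketch in Python. It is not the actual COVINS-G interface (which is implemented in C++/ROS): the message fields, the JSON-over-TCP framing, and the address and port are all assumptions made for illustration, and the snippet expects a back-end listening at the given address.

```python
import json
import socket
import struct

# Hypothetical agent-side message: one keyframe's worth of data that a
# generic VIO/stereo front-end could share with a collaborative back-end.
def make_keyframe_msg(kf_id, pose, keypoints, descriptors):
    return {
        "id": kf_id,
        "pose": pose,               # 4x4 camera-to-world transform as nested lists
        "keypoints": keypoints,     # [[u, v], ...] pixel coordinates
        "descriptors": descriptors, # one descriptor (list of ints) per keypoint
    }

def send_keyframe(sock, msg):
    # Length-prefixed JSON framing so the server can split the byte stream
    # back into individual messages.
    payload = json.dumps(msg).encode("utf-8")
    sock.sendall(struct.pack("!I", len(payload)) + payload)

if __name__ == "__main__":
    # The back-end address is an assumption; COVINS reads it from a config file.
    with socket.create_connection(("127.0.0.1", 12345)) as sock:
        msg = make_keyframe_msg(
            kf_id=0,
            pose=[[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]],
            keypoints=[[100.0, 200.0]],
            descriptors=[[0] * 32],
        )
        send_keyframe(sock, msg)
```

Because the back-end only consumes such generic keyframe data rather than front-end-internal state, any odometry source that can produce poses, keypoints, and descriptors can participate.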
We provide guidance and examples of how to run COVINS and COVINS-G on the EuRoC dataset, as well as information beyond basic deployment, for example how the COVINS back-end can be deployed on a remote cloud-computing instance. We also provide instructions for running COVINS-G with different cameras, such as the Intel RealSense D455 and the T265 tracking camera, and with different front-ends, such as VINS-Fusion, ORB-SLAM3, and SVO Pro.
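As a rough illustration of what remote deployment changes on the agent side, the sketch below loads a communication config and points it at a cloud instance instead of the local machine; the back-end itself runs unchanged on the remote host. The key names (`sys.server_ip`, `sys.port`) mirror the style of COVINS's communication settings but are assumptions here, as is the example IP address.

```python
import yaml  # pip install pyyaml

# Hypothetical communication config; the exact file and key names in the
# actual COVINS release may differ.
CONFIG = """
sys:
  server_ip: "127.0.0.1"   # local back-end
  port: "12345"
"""

cfg = yaml.safe_load(CONFIG)

# To use a remote cloud instance, agents only need the server's public IP;
# 203.0.113.7 is a documentation address (RFC 5737), not a real server.
cfg["sys"]["server_ip"] = "203.0.113.7"

print(cfg["sys"])
```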
The COVINS-G software is publicly available and can be accessed via this link.
Users of this software are asked to cite the following article, where it was introduced:
Manthan Patel, Marco Karrer, Philipp Bänninger and Margarita Chli. "COVINS-G: A Generic Back-end for Collaborative Visual-Inertial SLAM". IEEE International Conference on Robotics and Automation (ICRA), 2023.
All the related papers leading to this implementation are detailed here.
The video below shows example experiments presented in this work.