Code: COVINS
COVINS: A Framework for Collaborative Visual-Inertial SLAM and Multi-Agent 3D Mapping
COVINS is an accurate, scalable, and versatile collaborative visual-inertial SLAM system that enables a group of robotic agents (e.g. small drones) equipped with visual and inertial sensing to simultaneously co-localize and jointly map an environment. In this framework, each agent runs a Visual-Inertial Odometry (VIO) algorithm onboard and shares newly acquired map information with the server through the COVINS communication interface. The centralized server back-end accumulates and maintains the data contributed by all agents, and fuses any overlapping map data into a shared global SLAM estimate online during the mission. The server back-end can run on a local PC or on a remote cloud server; its communication module allows agents to join the collaborative SLAM estimation process dynamically, and also makes it possible to interface custom keyframe-based VIO systems with COVINS to generate collaborative estimates.
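
The sketch below illustrates, in simplified C++, the data flow described above: an agent's VIO front-end produces keyframes, hands them to the communication layer, and the centralized back-end accumulates per-agent map data and checks for overlaps to fuse into a global estimate. All type and function names used here (KeyframeMsg, ServerBackend, OnKeyframeReceived, DetectOverlap, MergeAndOptimize) are hypothetical placeholders for illustration only and do not correspond to the actual COVINS API.

#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <map>
#include <mutex>
#include <vector>

// Hypothetical keyframe message that an agent's VIO front-end would share with
// the server. Real keyframes also carry IMU data, feature observations and
// map-point references; this is deliberately reduced.
struct KeyframeMsg {
    int agent_id = 0;                // which agent produced this keyframe
    uint64_t keyframe_id = 0;        // id within that agent's local map
    double timestamp = 0.0;          // acquisition time [s]
    std::vector<float> descriptors;  // visual features used for place recognition
    double pose[7] = {0, 0, 0, 0, 0, 0, 1};  // position (x,y,z) + quaternion (x,y,z,w)
};

// Hypothetical centralized back-end: accumulates keyframes from all agents and
// fuses overlapping map segments into one shared global SLAM estimate.
class ServerBackend {
public:
    void OnKeyframeReceived(const KeyframeMsg& kf) {
        std::lock_guard<std::mutex> lock(mutex_);  // agents may connect concurrently
        maps_[kf.agent_id].push_back(kf);
        // Query place recognition against keyframes from other agents; if an
        // overlap is found, merge the maps and run a joint optimization.
        if (DetectOverlap(kf)) {
            MergeAndOptimize(kf.agent_id);
        }
    }

    std::size_t NumKeyframes(int agent_id) const {
        std::lock_guard<std::mutex> lock(mutex_);
        auto it = maps_.find(agent_id);
        return it == maps_.end() ? 0 : it->second.size();
    }

private:
    bool DetectOverlap(const KeyframeMsg& /*kf*/) const {
        // Placeholder: a real system would query a place-recognition database
        // (e.g. a visual vocabulary) here.
        return false;
    }
    void MergeAndOptimize(int /*agent_id*/) {
        // Placeholder for map fusion and global bundle adjustment.
    }

    mutable std::mutex mutex_;
    std::map<int, std::vector<KeyframeMsg>> maps_;  // per-agent map data
};

int main() {
    ServerBackend server;

    // One agent hands a freshly created keyframe to the communication layer,
    // which forwards it to the server back-end.
    KeyframeMsg kf;
    kf.agent_id = 0;
    kf.keyframe_id = 1;
    kf.timestamp = 0.05;
    server.OnKeyframeReceived(kf);

    std::printf("keyframes stored for agent 0: %zu\n", server.NumKeyframes(0));
    return 0;
}
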
The COVINS software is publicly available and can be accessed via this link.
Users of this software are asked to cite the following article, in which it was introduced:
Patrik Schmuck, Thomas Ziegler, Marco Karrer, Jonathan Perraudin and Margarita Chli, "COVINS: Visual-Inertial SLAM for Centralized Collaboration," in Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2021.
(Research Collection)
The video below shows example experiments presented in this work.