Code: CCM-SLAM
Centralized Collaborative Monocular SLAM for Robotic Teams
CCM-SLAM is a centralized collaborative SLAM framework for a team of robots, each equipped with a monocular camera. Using CCM-SLAM, the robots co-localize and build a 3D map of their surroundings in real time, collaboratively, by sharing their experiences of the environment with each other. Without any assumptions on the robots' prior constellation of poses, CCM-SLAM merges the maps of any robots that experience overlap and enables the sharing of information amongst them. The framework has built-in tolerance to network delays, and in the worst-case scenario of a total communication loss, each robot can still run monocular odometry onboard its own processor. The software for CCM-SLAM is publicly available and can be accessed via the link.
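
To make the centralized pattern described above concrete, the following is a minimal, self-contained C++ sketch (not code from the CCM-SLAM release): agents stream keyframes to a server, the server keeps one map per agent and merges maps when an overlap is detected, and an agent that loses its connection simply continues with onboard odometry. The Keyframe and Server types, the 2D poses, and the place-tag overlap test are hypothetical simplifications of the real place recognition and map fusion.

// Minimal sketch of a centralized collaborative SLAM back-end (illustrative only).
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct Keyframe {
    int agent_id;
    double x, y;           // simplified 2D pose instead of a full SE(3) estimate
    std::string place_tag; // stand-in for a place-recognition descriptor
};

class Server {
public:
    // Receive a keyframe from an agent and check whether it closes a loop
    // with another agent's map.
    void OnKeyframe(const Keyframe& kf) {
        maps_[kf.agent_id].push_back(kf);
        CheckForOverlap(kf);
    }

private:
    // Hypothetical overlap test: two agents observed the same place tag.
    void CheckForOverlap(const Keyframe& kf) {
        for (const auto& [other_id, kfs] : maps_) {
            if (other_id == kf.agent_id) continue;
            for (const auto& other : kfs) {
                if (other.place_tag == kf.place_tag) {
                    MergeMaps(kf.agent_id, other_id);
                    return;
                }
            }
        }
    }

    void MergeMaps(int a, int b) {
        std::cout << "Overlap found: merging maps of agent " << a
                  << " and agent " << b << "\n";
        // A real system would align the two maps (e.g. via a relative transform
        // from the matched keyframes) and run a joint optimization here.
    }

    std::map<int, std::vector<Keyframe>> maps_; // one map per agent until merged
};

int main() {
    Server server;
    // Agent 0 visits places A and B; agent 1 visits C and then B,
    // which triggers a map merge on the server.
    std::vector<Keyframe> stream = {
        {0, 0.0, 0.0, "A"}, {0, 1.0, 0.0, "B"},
        {1, 5.0, 5.0, "C"}, {1, 4.0, 5.0, "B"},
    };
    for (const auto& kf : stream) {
        // If the connection drops, an agent keeps running its local odometry
        // and forwards its keyframes once communication resumes; here every
        // keyframe is delivered.
        server.OnKeyframe(kf);
    }
    return 0;
}
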
Users of this software are kindly asked to cite at least one of the following papers, in which it was introduced:
Patrik Schmuck and Margarita Chli, "Multi-UAV Collaborative Monocular SLAM" in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2017. DOI Research Collection
Patrik Schmuck and Margarita Chli, "CCM-SLAM: Robust and Efficient Centralized Collaborative Monocular SLAM for Robotic Teams" in Journal of Field Robotics (JFR), 2019. DOI Research Collection