At the Vision for Robotics Lab (V4RL), led by Prof. Dr. Margarita Chli, we research and develop vision-based perception capabilities for robots, drawing inspiration from disciplines such as Computer Vision, Machine Learning, and Neuroscience. We focus on small Unmanned Aerial Vehicles (UAVs), as they are among the most agile and challenging platforms.
Given the limited computational, power, and weight budgets of such platforms, the real challenge we face is choosing sensors and algorithms that can run onboard and in real time. The reward is the promise of high impact when such systems are deployed in real tasks, such as archaeological site digitisation, search and rescue, and industrial inspection.
By studying collaboration among different robots and analysing the image stream and all other sensor cues as they are captured, we aim to achieve a timely and sufficiently rich understanding of the robots' surroundings, providing them with the information necessary to automate their navigation and their interaction with the environment.
The V4RL team has achieved several world firsts in robotic perception, such as the first vision-based autonomous flight of a small helicopter and the demonstration of collaborative perception for a small swarm of drones. For many of these works, open-source software and datasets are available for you to try out. The full list of publications can be accessed here, and all V4RL videos here.