Patrik Schmuck
Search Results
Publications 1 - 6 of 6
- COVINS: Visual-Inertial SLAM for Centralized Collaboration
  Item type: Conference Paper
  2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)
  Schmuck, Patrik; Ziegler, Thomas; Karrer, Marco; et al. (2021)
  Collaborative SLAM enables a group of agents to simultaneously co-localize and jointly map an environment, thus paving the way to wide-ranging applications of multi-robot perception and multi-user AR experiences by eliminating the need for external infrastructure or pre-built maps. This article presents COVINS, a novel collaborative SLAM system that enables multi-agent, scalable SLAM in large environments and for large teams of more than 10 agents. The paradigm here is that each agent runs visual-inertial odometry independently onboard in order to ensure its autonomy, while sharing map information with the COVINS server back-end running on a powerful local PC or a remote cloud server. The server back-end establishes an accurate collaborative global estimate from the contributed data, refining the joint estimate by means of place recognition, global optimization, and removal of redundant data, in order to ensure an accurate but also efficient SLAM process. A thorough evaluation of COVINS reveals increased accuracy of the collaborative SLAM estimates, as well as efficiency in both removing redundant information and reducing the coordination overhead, and demonstrates successful operation in a large-scale mission with 12 agents jointly performing SLAM.
- Learning Deep Descriptors with Scale-Aware Triplet Networks
  Item type: Conference Paper
  2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  Keller, Michel; Chen, Zetao; Maffra, Fabiola; et al. (2018)
  Research on learning suitable feature descriptors for computer vision has recently shifted to deep learning, where the biggest challenge lies in the formulation of appropriate loss functions, especially since the descriptors to be learned are not known at training time. While approaches such as Siamese and triplet losses have been applied with success, it is still not well understood what makes a good loss function. In this spirit, this work demonstrates that many commonly used losses suffer from a range of problems. Based on this analysis, we introduce mixed-context losses and scale-aware sampling, two methods that, when combined, enable networks to learn consistently scaled descriptors for the first time.
- CVI-SLAM — Collaborative Visual-Inertial SLAM
  Item type: Journal Article
  IEEE Robotics and Automation Letters
  Karrer, Marco; Schmuck, Patrik; Chli, Margarita (2018)
  With robotic perception constituting the biggest impediment before robots are ready for employment in real missions, the promise of more efficient and robust robotic perception in multi-agent, collaborative missions can have a great impact on many robotic applications. Employing a ubiquitous and well-established visual-inertial setup onboard each agent, in this letter we propose CVI-SLAM, a novel visual-inertial framework for centralized collaborative simultaneous localization and mapping (SLAM). Sharing all information with a central server, each agent outsources computationally expensive tasks, such as global map optimization, to relieve onboard resources, and passes on measurements to other participating agents, while running visual-inertial odometry onboard to ensure autonomy throughout the mission. Thoroughly analyzing CVI-SLAM, we attest to its accuracy and the improvements arising from the collaboration, and evaluate its scalability in the number of participating agents and its applicability in terms of network requirements.
- Collaborative Vision-based Simultaneous Localization And Mapping for Robotic Teams
  Item type: Doctoral Thesis
  Schmuck, Patrik (2020)
  Simultaneous Localization And Mapping (SLAM) constitutes one of the most fundamental problems in robotics, since ego-motion estimation and map-building are key to enabling autonomous navigation. Allowing robots to perform tasks independently, relying only on onboard sensing systems, SLAM forms the basic building block of many applications for mobile robots, such as autonomous vacuum cleaning or package delivery. While most state-of-the-art navigation solutions are designed for a single robot only, robotic collaboration promises increased robustness and efficiency of missions, with great potential in applications such as search-and-rescue or agriculture.
  Beyond that, algorithms for autonomous navigation form the basis of contemporary Augmented and Virtual Reality applications on mobile devices such as smartphones or head-mounted displays. Multiple robots and other vision-equipped devices, considered agents in a multi-agent system, collaborating either in a decentralized fashion or centrally by communicating through a server, can substantially boost the efficiency of a mission by dividing up tasks, for example the load of mapping an environment. Furthermore, co-localization of the agents is key for collaborative tasks, such as carrying a load. Sharing information in a robotic team avoids duplicate work, and access to more than one's own experiences promises better accuracy of estimates at runtime. With the increasing maturity and robustness of single-agent SLAM, multi-robot systems have gained growing popularity in recent years. However, such systems are still in their infancy: only a few works tackle the SLAM problem in a collaborative fashion, and these often make assumptions that impose substantial limitations on the system, such as a perfect network connection or a known initial configuration of the agents, or are tested only on pre-recorded datasets or with limited data exchange. The main challenges in collaborative SLAM lie in maintaining a consistent estimate with experiences from multiple contributors, in efficient data management to handle the large amount of data collected by all participating agents, and in transparency of information throughout the system to exploit the full potential of collaboration. Furthermore, robust data exchange and efficient algorithm design are necessary to enable the applicability of multi-agent SLAM systems in real-world missions. To this end, this doctoral thesis investigates collaborative SLAM, focusing on centralized topologies in a vision-based sensor setup.
  In particular, effective system architectures for centralized collaborative SLAM have been studied such that data transparency, efficiency, and practicality can be maximized. This research has led to the development of two multi-agent SLAM systems, allowing robotic agents to collaboratively perceive their workspace by sharing their experiences of the environment with each other through a central entity. The evaluation on simulated datasets as well as in real experiments reveals not only more efficient SLAM estimates in the collaborative case compared to the single-agent case, as expected, but also more accurate estimates for all agents at runtime, as each agent has access to more than its own data at each instant. Moreover, the computational bottleneck arising from an increasing number of agents contributing data to the SLAM estimation process is studied, assessing the impact on the scalability of collaborative SLAM. Following this investigation, the last contribution of this thesis aims to mitigate this explosion of data in the system by employing well-studied notions from Shannon's information theory to assess the extent of redundancy of information within the SLAM graph, before identifying and removing parts of this graph that consume precious computational resources at diminishing returns to the accuracy of the SLAM estimate. The algorithms and systems proposed in this thesis are evaluated in challenging real-world applications as well as on state-of-the-art benchmarking datasets, attesting to their practicality and accuracy. While experiments are presented mainly on small aerial robots, a particularly challenging platform for vision-based algorithms, the principles researched in this thesis are applicable to a wide range of platforms equipped with vision sensors.
  This research opens up new possibilities for multi-agent collaboration and for mission control coordinating multiple agents, enabling the team to make the most of each participating agent, for example in a heterogeneous robotic team with agile drones overlooking the scene and ground robots with limited mobility but higher payload and processing power than the airborne robots. Altogether, the contributions of this thesis add to the efficiency, practicality, and accuracy of multi-agent SLAM, bringing collaborative SLAM one step closer to real-world deployment.
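The abstract above does not spell out the thesis's actual redundancy criterion, but the idea of scoring SLAM-graph elements by their information content can be illustrated with a deliberately simplified toy. In this hypothetical sketch (the function name, data layout, and scoring rule are all assumptions, not the thesis's method), a keyframe whose landmarks are already observed by many other keyframes contributes little information, so removing it costs little accuracy:

```python
import math

def keyframe_redundancy_scores(observations):
    """Toy information-gain score per keyframe.

    observations: dict mapping keyframe id -> set of landmark ids it observes.
    A landmark seen n times loses roughly log(n/(n-1)) nats of support when
    one of its observing keyframes is dropped; a landmark seen only once
    would be lost entirely (infinite cost). Keyframes with the LOWEST score
    are the most redundant candidates for removal.
    """
    # Count how many keyframes observe each landmark.
    counts = {}
    for landmarks in observations.values():
        for lm in landmarks:
            counts[lm] = counts.get(lm, 0) + 1
    # Sum the per-landmark cost of dropping each keyframe.
    scores = {}
    for kf, landmarks in observations.items():
        scores[kf] = sum(
            math.log(counts[lm] / (counts[lm] - 1)) if counts[lm] > 1
            else float("inf")
            for lm in landmarks
        )
    return scores
```

A pruning loop would then repeatedly remove the lowest-scoring keyframe and recompute the counts; the real system additionally has to respect graph connectivity and the actual estimation covariances, which this toy ignores.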
- CCM-SLAM: Robust and efficient centralized collaborative monocular simultaneous localization and mapping for robotic teams
  Item type: Journal Article
  Journal of Field Robotics
  Schmuck, Patrik; Chli, Margarita (2019)
- Distributed Formation Estimation Via Pairwise Distance Measurements
  Item type: Journal Article
  IEEE Robotics and Automation Letters
  Ziegler, Thomas; Karrer, Marco; Schmuck, Patrik; et al. (2021)
  Continuously and reliably estimating the relative configuration of robotic swarms in real time constitutes a core functionality when pursuing the autonomy of such a swarm. Relying on external positioning systems, such as GPS or motion-tracking systems, can provide the required information, but significantly limits the generality of an approach. In this letter, we target formation estimation for autonomous flights of swarms of small UAVs, as they pose particularly challenging restrictions on onboard resources while opening up a large variety of practical scenarios for a multi-robot setup. While the state of the art has addressed efficient formation estimation, scalability remains limited to only very few agents that can be handled in real time, with the workload of each agent depending on the total number of agents in the swarm. Aiming for scalable multi-robot systems, here we propose a distributed formation-estimation approach in which the computational load of each agent is decoupled from the swarm size. This approach is implemented in a setup with minimal communication effort, requiring only ego-motion estimates from each agent and pairwise distance measurements between them, constraining their configuration globally. Evaluations on swarms of up to 49 UAVs demonstrate the power of our approach to handle large swarms, while keeping the computational load bounded for individual agents and requiring only little data exchange between two robots.
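The core constraint in the abstract above, pairwise distance measurements between agents, can be sketched as a small least-squares problem. This is a simplified, centralized toy (the function name and the plain gradient-descent solver are assumptions, not the letter's distributed algorithm, which also fuses ego-motion estimates): it refines 2D agent positions so that the inter-agent distances match the measurements, with one agent held fixed because distances alone leave the global translation and rotation of the formation undetermined:

```python
import numpy as np

def refine_positions(p_init, dist_meas, iters=500, lr=0.05):
    """Gradient descent on sum over (i,j) of (||p_i - p_j|| - d_ij)^2.

    p_init:    (N, 2) array of initial 2D position guesses.
    dist_meas: dict mapping agent pair (i, j) -> measured distance d_ij.
    Agent 0 is held fixed as a gauge anchor, since pairwise distances
    constrain only the shape of the formation, not its absolute pose.
    """
    p = p_init.astype(float).copy()
    for _ in range(iters):
        grad = np.zeros_like(p)
        for (i, j), d in dist_meas.items():
            diff = p[i] - p[j]
            r = np.linalg.norm(diff)
            if r < 1e-9:  # avoid dividing by zero for coincident guesses
                continue
            g = 2.0 * (r - d) * diff / r  # gradient of the squared residual
            grad[i] += g
            grad[j] -= g
        grad[0] = 0.0  # anchor agent 0
        p -= lr * grad
    return p
```

Even after anchoring, the formation can still rotate or reflect about the anchor, so a sensible check is on the recovered inter-agent distances rather than the absolute coordinates; the letter's distributed formulation additionally bounds each agent's workload independently of the swarm size, which this centralized toy does not attempt.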