Journal: IEEE Robotics and Automation Letters


Publisher: IEEE
ISSN: 2377-3766
Search Results

Publications 1 - 10 of 286
  • Burri, Jan T.; Vogler, Hannes; Munglani, Gautam; et al. (2019)
    IEEE Robotics and Automation Letters
  • Grandia, Ruben; Pardo, Diego; Buchli, Jonas (2018)
    IEEE Robotics and Automation Letters
    In this work we present a new formulation for learning the dynamics of legged robots performing locomotion tasks. Using sensor data, we learn error terms at the level of rigid body dynamics and actuation dynamics. The learning framework deals with the hybrid nature of legged systems given by different contact configurations: we use the projection of the rigid body dynamics into a subspace consistent with the contact constraints. The equations of motion in this subspace do not depend on the contact forces, allowing us to formulate a learning problem in which force sensor data is not required. Additionally, we propose to use the columns of end-effector Jacobians as basis vectors, obtaining a model that generalizes across contact configurations. Both Locally Weighted Projection Regression and Sparse Gaussian Process Regression are used as supervised learning techniques. As an application of the learned model, an inverse dynamics control method is extended. Hardware experiments with a quadruped robot show reduced RMS tracking error and a significant reduction in RMS feedback effort during base-only, walking, and trotting motions.
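    The contact-consistent projection mentioned in this abstract is a standard construction: premultiplying the rigid-body equations M(q)q̈ + h = Sᵀτ + J_cᵀλ by a projector P with P J_cᵀ = 0 eliminates the contact forces λ. A minimal numpy sketch of that projector (illustrative only, not the authors' implementation; `Jc` is assumed to have full row rank) is:

    ```python
    import numpy as np

    def contact_consistent_projector(Jc):
        """Return P = I - pinv(Jc) @ Jc, which satisfies P @ Jc.T == 0.

        Premultiplying M(q) qdd + h = S.T tau + Jc.T lam by P removes the
        contact-force term, so the projected dynamics can be learned
        without force sensor data.
        """
        n = Jc.shape[1]  # number of generalized coordinates
        return np.eye(n) - np.linalg.pinv(Jc) @ Jc

    # Small demo: a 2-row contact Jacobian in a 3-DoF system.
    Jc = np.array([[1.0, 0.0, 0.5],
                   [0.0, 1.0, -0.2]])
    P = contact_consistent_projector(Jc)
    # P annihilates the contact-force direction and is idempotent.
    print(np.allclose(P @ Jc.T, 0.0), np.allclose(P @ P, P))
    ```

    For full-row-rank `Jc`, pinv(Jc) = Jcᵀ(Jc Jcᵀ)⁻¹, so P Jcᵀ = Jcᵀ − Jcᵀ = 0 term by term.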
  • Winkler, Alexander W.; Bellicoso, C. Dario; Hutter, Marco; et al. (2018)
    IEEE Robotics and Automation Letters
    We present a single Trajectory Optimization formulation for legged locomotion that automatically determines the gait sequence, step timings, footholds, swing-leg motions and 6D body motion over non-flat terrain, without any additional modules. Our phase-based parameterization of feet motion and forces allows optimizing over the discrete gait sequence using only continuous decision variables. The system is represented using a simplified Centroidal dynamics model that is influenced by the feet's locations and forces. We explicitly enforce friction cone constraints, depending on the shape of the terrain. The NLP solver efficiently generates highly dynamic motion plans with full flight phases for a variety of legged systems with arbitrary morphologies. We validate the feasibility of the generated plans in simulation and on the real quadruped robot ANYmal. Additionally, the entire solver software TOWR used to generate these motions is made freely available.
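    The friction cone constraint this abstract enforces has a simple closed form: with surface normal n (a unit vector) and friction coefficient μ, a contact force f is admissible when its normal component is non-negative and its tangential component satisfies ‖f_t‖ ≤ μ (f·n). A hedged numpy sketch of that feasibility check (illustrative, not the TOWR implementation, which imposes this as an NLP constraint rather than a boolean test):

    ```python
    import numpy as np

    def in_friction_cone(f, n, mu):
        """True if contact force f lies inside the friction cone
        defined by unit surface normal n and friction coefficient mu."""
        fn = f @ n            # normal component (must push, not pull)
        ft = f - fn * n       # tangential component
        return bool(fn >= 0.0 and np.linalg.norm(ft) <= mu * fn)

    # Flat ground: n = +z. A mostly-vertical force is fine,
    # a strongly shearing force is not.
    n = np.array([0.0, 0.0, 1.0])
    print(in_friction_cone(np.array([0.1, 0.0, 1.0]), n, 0.5))  # True
    print(in_friction_cone(np.array([1.0, 0.0, 1.0]), n, 0.5))  # False
    ```

    On sloped terrain only `n` changes, which is why the abstract notes the constraint "depending on the shape of the terrain".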
  • Geckeler, Christian; Aucone, Emanuele; Schnider, Yannick; et al. (2024)
    IEEE Robotics and Automation Letters
    Covering over a third of all terrestrial land area, forests are crucial environments: as ecosystems, for farming, and for human leisure. However, they are challenging to access for environmental monitoring, for agricultural uses, and for search and rescue applications. To enter, aerial robots need to fly through dense vegetation, where foliage can be pushed aside but occluded branches pose critical obstacles. Therefore, we propose pixel-wise depth regression of occluded branches using three different U-Net inspired architectures. Given RGB-D input of trees with partially occluded branches, the models estimate depth values of only the wooden parts of the tree. A large photorealistic simulation dataset comprising around 44 K images of nine different tree species is generated, on which the models are trained. Extensive evaluation and analysis of the models on this dataset is shown. To improve network generalization to real-world data, different data augmentation and transformation techniques are performed. The approaches are then also successfully demonstrated on real-world data of broadleaf trees from Swiss temperate forests and a tropical Masoala Rainforest. This work showcases the previously unexplored task of frame-by-frame pixel-based occluded branch depth reconstruction to facilitate robot traversal of forest environments.
  • Kolvenbach, Hendrik; Bärtschi, Christian; Wellhausen, Lorenz; et al. (2019)
    IEEE Robotics and Automation Letters
    Planetary exploration robots encounter challenging terrain during operation. Vision-based approaches have failed to reliably predict soil characteristics in the past, making it necessary to probe the terrain tactilely. We present a robust, haptic inspection approach for a variety of fine, granular media, which are representative of Martian soil. In our approach, the robot uses one limb to perform an impact trajectory, while supporting the main body with the remaining three legs. The resulting vibration, which is recorded by sensors placed in the foot, is decomposed using the discrete wavelet transform and assigned a soil class by a Support Vector Machine. We tested two foot designs and validated the robustness of this approach through the extensive use of an open-source dataset, which we recorded on a specially designed single-foot testbed. A remarkable overall classification accuracy of more than 98% could be achieved despite various introduced disturbances. The contributions of the different sensors to the classification performance are evaluated. Finally, we test the generalization performance on unknown soils and show that the interaction behavior can be anticipated.
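    The vibration pipeline in this abstract (discrete wavelet transform, then a Support Vector Machine on the resulting features) is a classic pattern. A minimal numpy sketch of the first stage, using the Haar wavelet and per-band log-energies as the feature vector (an illustrative stand-in for the authors' pipeline; signal length is assumed divisible by 2^levels):

    ```python
    import numpy as np

    def haar_dwt_features(signal, levels=3):
        """Multi-level Haar DWT of a vibration signal.

        Returns the log-energy of each detail band plus the final
        approximation band: a compact feature vector of length
        levels + 1 suitable as input to an SVM-style soil classifier.
        """
        x = np.asarray(signal, dtype=float)
        feats = []
        for _ in range(levels):
            approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass half-band
            detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass half-band
            feats.append(np.log1p(np.sum(detail ** 2)))   # detail-band energy
            x = approx                                     # recurse on approximation
        feats.append(np.log1p(np.sum(x ** 2)))            # final approximation energy
        return np.array(feats)

    # A constant signal has no high-frequency content, so every
    # detail-band energy is zero.
    print(haar_dwt_features(np.ones(8), levels=3))
    ```

    In the full pipeline these feature vectors, one per impact, would be fed to a trained SVM to assign a soil class.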
  • Huang, Hen-Wei; Chao, Qianwen; Sakar, Mahmut Selman; et al. (2017)
    IEEE Robotics and Automation Letters
  • Schmid, Lukas; Ni, Chao; Zhong, Yuliang; et al. (2022)
    IEEE Robotics and Automation Letters
    Exploration is a fundamental problem in robotics. While sampling-based planners have shown high performance and robustness, they are oftentimes compute intensive and can exhibit high variance. To this end, we propose to learn both components of sampling-based exploration. We present a method to directly learn an underlying informed distribution of views based on the spatial context in the robot's map, and further explore a variety of methods to also learn the information gain of each sample. We show in thorough experimental evaluation that our proposed system improves exploration performance by up to 28% over classical methods, and find that learning the gains in addition to the sampling distribution can provide favorable performance vs. compute trade-offs for compute-constrained systems. We demonstrate in simulation and on a low-cost mobile robot that our system generalizes well to varying environments.
  • Lefkopoulos, Vasileios; Menner, Marcel; Domahidi, Alexander; et al. (2021)
    IEEE Robotics and Automation Letters
  • Faessler, Matthias; Franchi, Antonio; Scaramuzza, Davide (2018)
    IEEE Robotics and Automation Letters
  • Patil, Vaishakh; Liniger, Alexander; Dai, Dengxin; et al. (2022)
    IEEE Robotics and Automation Letters
    Accurate scene depth is fundamental for robot scene understanding as it adds spatial reasoning. However, accurate scene depth often comes at the cost of expensive additional depth sensors. In this letter, we propose to use map-based depth data as an additional input instead of expensive depth sensors. Such an approach is especially appealing in autonomous driving since map-based depth is commonly available from high-definition maps. To validate this approach, we propose a mapping method that works with common autonomous driving datasets and allows for precise localization using a mix of GNSS-INS and image-based techniques. Furthermore, we propose an entirely learnable three-stage network that handles foreground-background mismatches between the map-based prior depth and the actual scene. Finally, we validate the performance of our method in comparison to several baseline methods and SOTA depth completion methods receiving map-based depth as an input. Our method significantly outperforms these methods both in quantitative and qualitative results. Moreover, our method achieves better metric-scale predictions compared to image-only approaches.