Davide Plozza


Last Name

Plozza

First Name

Davide

Organisational unit

01225 - D-ITET Zentr. f. projektbasiertes Lernen / D-ITET Center for Project-Based Learning

Search Results

Publications 1 - 4 of 4
  • Cai, Yuke; Plozza, Davide; Marty, Steven; et al. (2024)
    2024 IEEE International Conference on Omni-layer Intelligent Systems (COINS)
    Time of Flight (ToF) cameras, renowned for their ability to capture real-time 3D information, have become indispensable for agile mobile robotics. These cameras utilize light signals to accurately measure distances, enabling robots to navigate complex environments with precision. Innovative depth cameras, characterized by their compact size and lightweight design, such as the recently released PMD Flexx2, are particularly suited for mobile robots. Capable of achieving high frame rates while capturing depth information, this innovative sensor is suitable for tasks such as robot navigation and terrain mapping. Operating on the ToF measurement principle, the sensor offers multiple benefits over classic stereo-based depth cameras. However, the depth images produced by the camera are subject to noise from multiple sources, complicating their simulation. This paper proposes an accurate quantification and modeling of the non-systematic noise of the PMD Flexx2. We propose models for both axial and lateral noise across various camera modes, assuming Gaussian distributions. Axial noise, modeled as a function of distance and incidence angle, demonstrated a low average Kullback-Leibler (KL) divergence of 0.015 nats, reflecting precise noise characterization. Lateral noise, deviating from a Gaussian distribution, was modeled conservatively, yielding a satisfactory KL divergence of 0.868 nats. These results validate our noise models, crucial for accurately simulating sensor behavior in virtual environments and reducing the sim-to-real gap in learning-based control approaches.
  • Plozza, Davide; Marty, Steven; Scherrer, Cyril; et al. (2025)
    2025 10th IEEE International Workshop on Advances in Sensors and Interfaces (IWASI)
    This paper presents a fully embedded real-time person tracking pipeline for assistive quadrupedal robots supporting safe navigation for visually impaired users. Our approach combines a deep learning-based 2D LiDAR person detector with a lightweight multi-object tracker and integrates it into a Guide Dog Robot (GDR) navigation framework. A novel detection post-processing scheme is proposed, reducing detector latency by 52.38% compared to state-of-the-art voting-based methods while preserving accuracy. The improved latency enables the entire pipeline to operate reliably at 20 Hz on a resource-constrained mobile robotic embedded platform based on the NVIDIA Jetson Xavier NX. The experimental setup shows that our system tracks dynamic obstacles and continuously localizes the user holding the robot’s handle, enabling a dynamic safety footprint for proactive collision avoidance. Under the tested setup, the optimal configuration achieves a MOTA of 83.27% and a user tracking RMSE below 0.2 m on two custom datasets recorded with motion-capture ground truth. Real-world navigation experiments in indoor environments demonstrate effective collision prevention and smooth corrective maneuvers when the user drifts from the default following position. The modular design of the detection, tracking, and planning components ensures flexibility and ease of integration into other robotic platforms. This work contributes a scalable and efficient tracking and navigation solution for human-aware mobile robots operating in dynamic environments, supporting safer human-robot interaction in assistive contexts.
  • Plozza, Davide; Marty, Steven; Scherrer, Cyril; et al. (2024)
    2024 IEEE Sensors Applications Symposium (SAS)
In the rapidly evolving landscape of autonomous mobile robots, the emphasis on seamless human-robot interactions has shifted towards autonomous decision-making. This paper delves into the intricate challenges associated with robotic autonomy, focusing on navigation in dynamic environments shared with humans. It introduces an embedded real-time tracking pipeline, integrated into a navigation planning framework for effective person tracking and avoidance, adapting a state-of-the-art 2D LiDAR-based human detection network and an efficient multi-object tracker. By addressing the key components of detection, tracking, and planning separately, the proposed approach highlights the modularity and transferability of each component to other applications. Our tracking approach is validated on a quadruped robot equipped with a 270° 2D LiDAR against motion capture system data, with the preferred configuration achieving an average MOTA of 85.45% in three newly recorded datasets, while reliably running in real-time at 20 Hz on the NVIDIA Jetson Xavier NX embedded GPU-accelerated platform. Furthermore, the integrated tracking and avoidance system is evaluated in real-world navigation experiments, demonstrating how accurate person tracking benefits the planner in optimizing the generated trajectories, enhancing its collision avoidance capabilities. This paper contributes to safer human-robot cohabitation, blending recent advances in human detection with responsive planning to navigate shared spaces effectively and securely.
  • Joseph, Paul David; Plozza, Davide; Pascarella, Luca; et al. (2024)
    2024 IEEE Sensors Applications Symposium (SAS)
Robotic automation represents mankind's next leap. While the industry has already embraced robotization, robot-driven domestic aid is only at the beginning of the revolution. Dealing with unconstrained daily environments is more challenging than automating manufacturing production lines. The growing diffusion of semi-autonomous assistive quadrupedal robots that can handle obstacles triggered the change. Nonetheless, robot control still requires active human supervision, which can be tedious in ordinary surroundings or even impossible in clinical circumstances. This paper tackles the robot control challenge and introduces the deployment of gaze-controlled semi-autonomous assistive quadrupedal robots. We focus on noninvasive gaze-tracking performed on smart glasses to guide a semi-autonomous assistive quadrupedal robot toward the user's focused remote object. Specifically, we propose (i) a preliminary setup to exploit the potential of gaze-tracking in quadrupedal robot control with the prospect of enabling delocalized object grasping and (ii) an assessment of two state-of-the-art image-matching algorithms, considering both accuracy and power footprint. Results show that the proposed solution enables piloting of quadrupedal robots toward a target highlighted by the user gaze with an accuracy of less than 20 cm and an inference time of ~200 ms running the algorithms on an on-board embedded computational unit.
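The ToF noise-modeling abstract above reports Kullback-Leibler divergences in nats between fitted Gaussian noise models and measured distributions. As a hedged illustration (not the paper's actual code), the closed-form KL divergence between two univariate Gaussians can be computed as:

```python
import math

def gaussian_kl(mu_p, sigma_p, mu_q, sigma_q):
    """Closed-form KL(P || Q) in nats between univariate Gaussians
    P = N(mu_p, sigma_p^2) and Q = N(mu_q, sigma_q^2)."""
    return (math.log(sigma_q / sigma_p)
            + (sigma_p ** 2 + (mu_p - mu_q) ** 2) / (2 * sigma_q ** 2)
            - 0.5)

# Identical distributions have zero divergence.
print(gaussian_kl(0.0, 1.0, 0.0, 1.0))  # 0.0
```

A low value such as the reported 0.015 nats indicates the fitted Gaussian closely matches the empirical noise distribution; the function names and example parameters here are illustrative only.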
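The two tracking papers above report MOTA (Multi-Object Tracking Accuracy). As a rough sketch of the standard metric (the per-frame counts below are hypothetical, not taken from the papers), MOTA aggregates false negatives, false positives, and identity switches over all frames relative to the number of ground-truth objects:

```python
def mota(false_negatives, false_positives, id_switches, num_ground_truth):
    """Multi-Object Tracking Accuracy:
    MOTA = 1 - (sum FN + sum FP + sum IDSW) / sum GT over all frames."""
    errors = (sum(false_negatives)
              + sum(false_positives)
              + sum(id_switches))
    return 1.0 - errors / sum(num_ground_truth)

# Toy three-frame example with hypothetical counts.
print(mota([0, 1, 0], [1, 0, 0], [0, 0, 1], [10, 10, 10]))  # 0.9
```

Note that MOTA can be negative when errors exceed the number of ground-truth objects, which is why values in the 80–90% range, as reported above, indicate strong tracking performance.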