Daniele Palossi


Last Name

Palossi

First Name

Daniele

Organisational unit

03996 - Benini, Luca

Search Results

Publications 1 - 10 of 50
  • Cereda, Elia; Bonato, Stefano; Nava, Mirko; et al. (2024)
    Journal of Intelligent & Robotic Systems
    Vision-based deep learning perception fulfills a paramount role in robotics, facilitating solutions to many challenging scenarios, such as acrobatic maneuvers of autonomous unmanned aerial vehicles (UAVs) and robot-assisted high-precision surgery. Control-oriented end-to-end perception approaches, which directly output control variables for the robot, commonly take advantage of the robot's state estimation as an auxiliary input. When intermediate outputs are estimated and fed to a lower-level controller, i.e., mediated approaches, the robot's state is commonly used as an input only for egocentric tasks, which estimate physical properties of the robot itself. In this work, we propose to apply a similar approach, for the first time (to the best of our knowledge), to non-egocentric mediated tasks, where the estimated outputs refer to an external subject. We prove how our general methodology improves the regression performance of deep convolutional neural networks (CNNs) on a broad class of non-egocentric 3D pose estimation problems, with minimal computational cost. By analyzing three highly different use cases, spanning from grasping with a robotic arm to following a human subject with a pocket-sized UAV, our results consistently improve the R² regression metric, up to +0.51, compared to their stateless baselines. Finally, we validate the in-field performance of a closed-loop autonomous cm-scale UAV on the human pose estimation task. Our results show a significant reduction, i.e., 24% on average, in the mean absolute error of our stateful CNN, compared to a State-of-the-Art stateless counterpart.
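
As a rough illustration of the stateful-input idea described in this abstract, the sketch below fuses a CNN's image features with the robot's state estimate before regressing a non-egocentric 3D pose. The layer sizes, the 6D state vector, and all names are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch (not the paper's code): a CNN regressor that takes the
# robot's state estimate as an auxiliary input, as the abstract describes.
import torch
import torch.nn as nn

class StatefulPoseCNN(nn.Module):
    def __init__(self, state_dim=6, pose_dim=3):
        super().__init__()
        # Small convolutional backbone for a low-resolution grayscale frame.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fuse image features with the robot's state estimate, then regress
        # the 3D pose of the external subject (non-egocentric output).
        self.head = nn.Sequential(
            nn.Linear(32 + state_dim, 64), nn.ReLU(),
            nn.Linear(64, pose_dim),
        )

    def forward(self, image, state):
        feats = self.backbone(image)
        return self.head(torch.cat([feats, state], dim=1))

# Usage: one grayscale frame plus an assumed 6D state (e.g., attitude + velocity).
model = StatefulPoseCNN()
pose = model(torch.randn(1, 1, 96, 96), torch.randn(1, 6))
```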
  • Conti, Francesco; Palossi, Daniele; Andri, Renzo; et al. (2017)
    IEEE Transactions on Human-Machine Systems
  • Crupi, Luca; Butera, Luca; Ferrante, Alberto; et al. (2025)
    Journal on Autonomous Transportation Systems
    Efficient crop production requires early detection of pest outbreaks and timely treatments; we consider a solution based on a fleet of multiple autonomous miniaturized unmanned aerial vehicles (nano-UAVs) to visually detect pests and a single slower heavy vehicle that visits the detected outbreaks to deliver treatments. To cope with the extreme limitations aboard nano-UAVs, e.g., low-resolution sensors and a sub-100 mW computational power budget, we design, fine-tune, and optimize a tiny image-based convolutional neural network (CNN) for pest detection. Despite the small size of our CNN (i.e., 0.58 GOps/inference), on our dataset it scores a mean average precision (mAP) of 0.79 in detecting harmful bugs, i.e., 14% lower mAP but 32× fewer operations than the best-performing CNN in the literature. Our CNN runs in real-time at 6.8 frame/s, requiring 33 mW on a GWT GAP9 System-on-Chip aboard a Crazyflie nano-UAV. Then, to cope with unexpected in-field obstacles, we leverage a global+local path planner based on the A* algorithm. The global path planner determines the best route for the nano-UAV to sweep the entire area, while the local one runs at up to 50 Hz aboard our nano-UAV and prevents collisions by adjusting the short-distance path. Finally, we demonstrate with in-simulator experiments that once a fleet of 25 nano-UAVs has combed a 200 × 200 m vineyard, the collected information can be used to plan the best path for the tractor, visiting all and only the required hotspots. In this scenario, our efficient transportation system, compared to a traditional single ground vehicle performing both inspection and treatment, can save up to 20 h of working time.
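
The local replanner described in this abstract is based on A*. The following is a minimal grid-based A* sketch under assumed conventions (2D occupancy grid, 4-connected moves, Manhattan heuristic), not the paper's onboard implementation.

```python
# Minimal A* sketch over a 2D occupancy grid; grid[y][x] == 1 marks an obstacle.
import heapq

def astar(grid, start, goal):
    """Cells are (x, y) tuples; returns a list of cells or None if unreachable."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), start)]
    came_from = {start: None}
    g_cost = {start: 0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:
            path = []
            while cur is not None:        # walk parent links back to the start
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and not grid[ny][nx]:
                ng = g_cost[cur] + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None

grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]
print(astar(grid, (0, 0), (0, 2)))  # detours around the obstacle row
```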
  • Palossi, Daniele; Loquercio, Antonio; Conti, Francesco; et al. (2019)
    IEEE Internet of Things Journal
  • Niculescu, Vlad; Palossi, Daniele; Magno, Michele; et al. (2023)
    IEEE Internet of Things Journal
    Smart interaction between autonomous centimeter-scale unmanned aerial vehicles (i.e., nano-UAVs) and Internet of Things (IoT) sensor nodes is an upcoming high-impact scenario. This work tackles precise 3-D localization of indoor edge nodes with an autonomous nano-UAV without prior knowledge of their position. We employ ultrawideband (UWB) and wake-up radio (WUR) technologies: we perform UWB-based ranging and data exchange between the nano-UAV and the nodes, while the WUR minimizes the sensors' power consumption. UWB-based precise localization requires addressing multiple sources of error, such as UWB-ranging noise and the UWB antennas' uneven radiation pattern. The limited computational resources aboard a nano-UAV further complicate this scenario, requiring real-time execution of the localization algorithm within a microcontroller unit (MCU). We propose a novel UWB-based localization system for nano-UAVs, composed of: 1) a lightweight localization algorithm; 2) an optimal flight strategy; and 3) a ranging-error-correction model. Our 3-D flight policy requires only five UWB measurements to feed the localization algorithm, which bounds the localization error within 28 cm and runs in 1.2 ms on a Cortex-M4 MCU. Localization accuracy is improved by an additional 25% thanks to a novel error-correction model. Leveraging the WUR, the entire localization/data-exchange cycle costs only 24 mJ at the sensor node, which is 50 times more energy efficient than the state of the art with comparable localization accuracy.
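
To make the ranging-based localization step concrete, here is a minimal multilateration sketch that estimates a node's 3D position from five range measurements via nonlinear least squares. The anchor coordinates, noise level, and use of SciPy are assumptions for illustration; the paper's algorithm runs on a Cortex-M4 and is necessarily lighter-weight.

```python
# Minimal sketch: recover a sensor node's 3D position from five noisy UWB
# ranges taken at known UAV positions (all numbers here are synthetic).
import numpy as np
from scipy.optimize import least_squares

uav_positions = np.array(
    [[0, 0, 1], [2, 0, 1], [0, 2, 1], [2, 2, 1], [1, 1, 2.5]], dtype=float
)
node_true = np.array([1.0, 1.5, 0.2])
ranges = np.linalg.norm(uav_positions - node_true, axis=1) \
    + np.random.normal(0, 0.05, 5)  # simulated ranging noise

def residuals(p):
    # Difference between predicted distances to candidate p and measured ranges.
    return np.linalg.norm(uav_positions - p, axis=1) - ranges

estimate = least_squares(residuals, x0=np.array([1.0, 1.0, 1.0])).x
print("estimated node position:", estimate)
```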
  • Niculescu, Vlad; Lamberti, Lorenzo; Palossi, Daniele; et al. (2021)
    2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS)
    Convolutional neural networks (CNNs) are fueling the advancement of autonomous palm-sized drones, i.e., nano-drones, despite their limited power envelope and onboard processing capabilities. Computationally lighter than traditional geometrical approaches, CNNs are the ideal candidates to predict end-to-end signals directly from the sensor inputs to feed to the onboard flight controller. However, these sophisticated CNNs require significant complexity reduction and fine-grained tuning to be successfully deployed aboard a flying nano-drone. To date, these optimizations are mostly hand-crafted and require error-prone, labor-intensive iterative development flows. This work discusses methodologies and software tools to streamline and automate all the deployment stages on a low-power commercial multicore System-on-Chip. We investigate both an industrial closed-source and an academic open-source tool-set with a field-proven state-of-the-art CNN for autonomous driving. Our results show a 2× reduction of the memory footprint and a 1.6× speedup in the inference time, compared to the original hand-crafted CNN, with the same prediction accuracy.
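
One core step that such deployment flows automate is 8-bit post-training quantization. Below is a minimal symmetric per-tensor quantization sketch in NumPy; it is a generic illustration, not the behavior of either tool-set evaluated in the paper.

```python
# Minimal sketch of symmetric per-tensor int8 quantization, the kind of
# transformation deployment tool-sets apply to shrink CNN weights.
import numpy as np

def quantize_int8(w):
    scale = np.abs(w).max() / 127.0                     # one scale per tensor
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(32, 16, 3, 3).astype(np.float32)    # a conv weight tensor
q, s = quantize_int8(w)
print("max abs reconstruction error:", np.abs(dequantize(q, s) - w).max())
```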
  • Lamberti, Lorenzo; Cereda, Elia; Abbate, Gabriele; et al. (2024)
    IEEE Robotics and Automation Letters
  • Crupi, Luca; Giusti, Alessandro; Palossi, Daniele (2024)
    2024 IEEE International Conference on Robotics and Automation (ICRA)
    Relative drone-to-drone localization is a fundamental building block for any swarm operation. We address this task in the context of miniaturized nano-drones, i.e., ~10 cm in diameter, which show an ever-growing interest due to novel use cases enabled by their reduced form factor. The price for their versatility comes with limited onboard resources, i.e., sensors, processing units, and memory, which limits the complexity of the onboard algorithms. A traditional solution to overcome these limitations is represented by lightweight deep learning models directly deployed aboard nano-drones. This work tackles the challenging relative pose estimation between nano-drones using only a gray-scale low-resolution camera and an ultra-low-power System-on-Chip (SoC) hosted onboard. We present a vertically integrated system based on a novel vision-based fully convolutional neural network (FCNN), which runs at 39 Hz within 101 mW onboard a Crazyflie nano-drone extended with the GWT GAP8 SoC. We compare our FCNN against three State-of-the-Art (SoA) systems. Considering the best-performing SoA approach, our model results in an R² improvement from 32% to 47% on the horizontal image coordinate and from 18% to 55% on the vertical image coordinate, on a real-world dataset of ~30 k images. Finally, our in-field tests show a reduction of the average tracking error of 37% compared to a previous SoA work and an endurance performance up to the entire battery lifetime of 4 min.
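
As a rough sketch of how a fully convolutional network's output can be decoded into drone image coordinates, the snippet below runs a toy FCNN on a grayscale frame and takes the argmax of its response map. The architecture, resolutions, and stride are illustrative assumptions, not the paper's model.

```python
# Minimal sketch: a toy fully convolutional network whose single-channel
# output map is decoded into the peer drone's (u, v) image coordinates.
import torch
import torch.nn as nn

fcnn = nn.Sequential(                      # grayscale frame -> 1-channel map
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 1),
)

frame = torch.randn(1, 1, 160, 160)        # assumed low-resolution input
heatmap = fcnn(frame)[0, 0]                # (40, 40) response map
idx = torch.argmax(heatmap)                # flattened index of the peak
v, u = divmod(idx.item(), heatmap.shape[1])
print("peer drone at image coords (u, v):", u * 4, v * 4)  # undo total stride 4
```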
  • Palossi, Daniele; Furci, Michele; Naldi, Roberto; et al. (2016)
    CF '16 Proceedings of the ACM International Conference on Computing Frontiers
  • Crupi, Luca; Butera, Luca; Ferrante, Alberto; et al. (2024)
    2024 20th International Conference on Distributed Computing in Smart Systems and the Internet of Things (DCOSS-IoT)
    Smart farming and precision agriculture represent game-changer technologies for efficient and sustainable agribusiness. Miniaturized palm-sized drones can act as flexible smart sensors inspecting crops, looking for early signs of potential pest outbreaks. However, achieving such an ambitious goal requires hardware-software co-design to develop accurate deep learning (DL) detection models while keeping memory and computational needs under an ultra-tight budget, i.e., a few MB of on-chip memory and a power envelope of a few 100s mW. This work presents a novel vertically integrated solution featuring two ultra-low-power Systems-on-Chip (SoCs), i.e., the dual-core STM32H74 and a multi-core GWT GAP9, running two State-of-the-Art DL models for detecting the Popillia japonica bug. We fine-tune both models for our image-based detection task, quantize them to 8-bit integers, and deploy them on the two SoCs. On the STM32H74, we deploy a FOMO-MobileNetV2 model, achieving a mean average precision (mAP) of 0.66 and running at 16.1 frame/s within 498 mW. On the GAP9 SoC, we deploy a more complex SSDLite-MobileNetV3, which scores an mAP of 0.79 and peaks at 6.8 frame/s within 33 mW. Compared to a top-notch RetinaNet-ResNet101-FPN full-precision baseline, which requires 14.9× more memory and 300× more operations per inference, our best model drops only 15% in mAP, paving the way toward autonomous palm-sized drones capable of lightweight and precise pest detection.
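
Underlying the mAP figures quoted in this abstract is an intersection-over-union (IoU) match test between predicted and ground-truth boxes. A minimal sketch follows, with the 0.5 threshold taken as a common convention rather than the paper's exact evaluation setting.

```python
# Minimal sketch of the IoU test behind mAP scoring; boxes are (x1, y1, x2, y2).
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

pred, truth = (10, 10, 50, 50), (12, 8, 48, 52)
print("match" if iou(pred, truth) >= 0.5 else "miss")
```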