Michele Magno

Last Name: Magno
First Name: Michele
Organisational unit: 01225 - D-ITET Zentr. f. projektbasiertes Lernen / D-ITET Center for Project-Based Learning

Search Results

Publications 1 - 10 of 156
  • Joseph, Paul David; Plozza, Davide; Pascarella, Luca; et al. (2024)
    2024 IEEE Sensors Applications Symposium (SAS)
    Robotic automation represents mankind's next leap. While the industry has already embraced robotization, robot-driven domestic aid is only at the beginning of the revolution. Dealing with unconstrained daily environments is more challenging than automating manufacturing production lines. The growing diffusion of semi-autonomous assistive quadrupedal robots that can handle obstacles triggered the change. Nonetheless, robot control still requires active human supervision, which can be tedious in ordinary surroundings or even impossible in clinical circumstances. This paper tackles the robot control challenge and introduces the deployment of gaze-controlled semi-autonomous assistive quadrupedal robots. We focus on noninvasive gaze-tracking performed on smart glasses to guide a semi-autonomous assistive quadrupedal robot toward the user's focused remote object. Specifically, we propose (i) a preliminary setup to exploit the potential of gaze-tracking in quadrupedal robot control with the prospect of enabling delocalized object grasping and (ii) an assessment of two state-of-the-art image-matching algorithms, considering both accuracy and power footprint. Results show that the proposed solution enables piloting of quadrupedal robots toward a target highlighted by the user gaze with an accuracy of less than 20 cm and an inference time of ~200 ms running the algorithms on an on-board embedded computational unit.
  • Bian, Sizhen; Liu, Mengxi; Zhou, Bo; et al. (2024)
    Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
    Since roughly sixty percent of the human body is composed of water, the human body is inherently a conductive object, able to, firstly, form an inherent electric field from the body to the surroundings and, secondly, deform the distribution of an existing electric field near the body. Body-area capacitive sensing, also called body-area electric field sensing, is becoming a promising alternative for wearable devices to accomplish certain tasks in human activity recognition (HAR) and human-computer interaction (HCI). Over the last decade, researchers have explored plentiful novel sensing systems backed by the body-area electric field, like ring-form smart devices for sign language recognition, room-size capacitive grids for indoor positioning, etc. On the other hand, despite the pervasive exploration of the body-area electric field, a comprehensive survey does not exist as an enlightening guideline. Moreover, the varied hardware implementations, applied algorithms, and targeted applications make a systematic overview of the subject challenging. This paper aims to fill the gap by comprehensively summarizing the existing works on body-area capacitive sensing so that researchers can have a better view of the current exploration status. To this end, we first sorted the explorations into three domains according to the involved body forms: body-part electric field, whole-body electric field, and body-to-body electric field, and enumerated the state-of-the-art works in these domains with a detailed survey of the underlying sensing tricks and targeted applications. We then summarized the three types of sensing frontends in circuit design, which is the most critical part of body-area capacitive sensing, and analyzed the data processing pipeline, categorized into three kinds of approaches. The outcome will benefit researchers in further body-area electric field explorations. Finally, we described the challenges and outlooks of body-area electric sensing, followed by a conclusion, aiming to encourage researchers to pursue further investigations considering the pervasive and promising usage scenarios backed by body-area capacitive sensing.
  • Dikici, Onur; Ghignone, Edoardo; Hu, Cheng; et al. (2025)
    IEEE Robotics and Automation Letters
    Accurate tire modeling is crucial for optimizing autonomous racing vehicles, as State-of-the-Art (SotA) model-based techniques rely on precise knowledge of the vehicle's parameters, yet system identification in dynamic racing conditions is challenging due to varying track and tire conditions. Traditional methods require extensive operational ranges, often impractical in racing scenarios. Machine Learning (ML)-based methods, while improving performance, struggle with generalization and depend on accurate initialization. This paper introduces a novel on-track system identification algorithm, incorporating a Neural Network (NN) for error correction, which is then employed for traditional system identification with virtually generated data. Crucially, the process is iteratively reapplied, with tire parameters updated at each cycle, leading to notable improvements in accuracy in tests on a scaled vehicle. Experiments show that it is possible to learn a tire model without prior knowledge with only 30 seconds of driving data, and 3 seconds of training time. This method demonstrates greater one-step prediction accuracy than the baseline Nonlinear Least Squares (NLS) method under noisy conditions, achieving a 3.3x lower Root Mean Square Error (RMSE), and yields tire models with comparable accuracy to traditional steady-state system identification. Furthermore, unlike steady-state methods requiring large spaces and specific experimental setups, the proposed approach identifies tire parameters directly on a race track in dynamic racing environments.
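    The full pipeline above (NN error correction feeding iterative system identification) is not reproduced here, but the underlying parameter-fitting step can be illustrated with a minimal sketch: fitting a simplified Pacejka "magic formula" lateral tire model to noisy synthetic driving data by coarse grid search over the sum of squared residuals. All parameter values, grids, and the noise level are illustrative assumptions, not values from the paper.

    ```python
    import math
    import random

    def magic_formula(alpha, B, C, D):
        # Simplified Pacejka lateral tire model: F_y = D * sin(C * atan(B * alpha))
        return D * math.sin(C * math.atan(B * alpha))

    # Synthetic "driving data": slip angles with noisy lateral forces.
    # True parameters are illustrative, not from the paper.
    random.seed(0)
    B_true, C_true, D_true = 10.0, 1.5, 3000.0
    alphas = [-0.15 + 0.01 * i for i in range(31)]
    forces = [magic_formula(a, B_true, C_true, D_true) + random.gauss(0, 30)
              for a in alphas]

    def fit_by_grid_search(alphas, forces):
        # Coarse grid search minimizing the sum of squared residuals --
        # a stand-in for the nonlinear least-squares step in the paper.
        best, best_sse = None, float("inf")
        for B in (8.0, 9.0, 10.0, 11.0, 12.0):
            for C in (1.3, 1.4, 1.5, 1.6, 1.7):
                for D in (2800.0, 2900.0, 3000.0, 3100.0, 3200.0):
                    sse = sum((f - magic_formula(a, B, C, D)) ** 2
                              for a, f in zip(alphas, forces))
                    if sse < best_sse:
                        best, best_sse = (B, C, D), sse
        return best

    B_fit, C_fit, D_fit = fit_by_grid_search(alphas, forces)
    print(B_fit, C_fit, D_fit)
    ```

    With enough samples relative to the noise level, the grid point nearest the true parameters wins; a real identification loop would replace the grid with a gradient-based nonlinear least-squares solver.
    
    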
  • Mächler, Tobias; Müller, Hanna; Wiese, Philip; et al. (2025)
    2025 IEEE Sensors Applications Symposium (SAS)
    The rapid advancement of Internet of Things (IoT) technologies is driving a paradigm shift in smart agriculture. Spectrometers play a crucial role in modern agriculture and, when integrated with robotic mobile platforms such as nano-drones, enable flexible, precise, and cost-effective monitoring of crops and greenhouses. However, until recently, spectrometers were bulky and heavy, preventing their integration into small flying platforms due to weight constraints. In this work, we leverage the latest generation of mini-spectrometers to introduce, for the first time, a nano Unmanned Aerial Vehicle (UAV)-based smart sensor node capable of real-time spectral data sampling and processing. To achieve this, we designed a lightweight sensor shield featuring the C12880MA mini-spectrometer from Hamamatsu Photonics (2.53 cm³, spectral range 340–850 nm), compatible with a small and lightweight (27 g, 10 cm diameter) commercially available nano-drone from Bitcraze. With a total flight mass of 46.26 g, the system remains highly maneuverable and particularly well-suited for confined spaces. Our results show that spectral data can be sampled at 14.32 Hz with a power consumption of approximately 94 mW. The system was validated for detection of green/red portions during flight, simulating a method to distinguish zones within a greenhouse with ripe and unripe tomatoes (which can be extended to other use-case scenarios), achieving an accuracy of above 95% while providing about 6 minutes of flight time.
  • Bonazzi, Pietro; Vogt, Christian; Jost, Michael; et al. (2025)
    2025 International Joint Conference on Neural Networks (IJCNN)
    Ensuring robust and real-time obstacle avoidance is critical for the safe operation of autonomous robots in dynamic, real-world environments. This paper proposes a neural network framework for predicting the time and collision position of an unmanned aerial vehicle with a dynamic object, using RGB and event-based vision sensors. The proposed architecture consists of two separate encoder branches, one for each modality, followed by fusion by self-attention to improve prediction accuracy. To facilitate benchmarking, we leverage the ABCD [8] dataset, which enables detailed comparisons of single-modality and fusion-based approaches. At the same prediction throughput of 50 Hz, the experimental results show that the fusion-based model offers an improvement in prediction accuracy over single-modality approaches of 1% on average and 10% for distances beyond 0.5 m, but comes at the cost of +71% in memory and +105% in FLOPs. Notably, the event-based model outperforms the RGB model by 4% for position and 26% for time error at a similar computational cost, making it a competitive alternative. Additionally, we evaluate quantized versions of the event-based models, applying 1- to 8-bit quantization to assess the trade-offs between predictive performance and computational efficiency. These findings highlight the trade-offs of multi-modal perception using RGB and event-based cameras in robotic applications.
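    The self-attention fusion of the two encoder branches can be sketched as a minimal single-head attention over one token per modality. The feature vectors and the use of identity projections (real models learn W_q, W_k, W_v matrices) are simplifying assumptions, not details from the paper.

    ```python
    import math

    def softmax(xs):
        # Numerically stable softmax over a list of scores.
        m = max(xs)
        exps = [math.exp(x - m) for x in xs]
        s = sum(exps)
        return [e / s for e in exps]

    def attention_fuse(tokens, d):
        # Minimal single-head self-attention over modality tokens:
        # each output is a convex combination of the input tokens,
        # weighted by scaled dot-product similarity.
        fused = []
        for q in tokens:
            scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                      for k in tokens]
            w = softmax(scores)
            fused.append([sum(wi * v[j] for wi, v in zip(w, tokens))
                          for j in range(d)])
        return fused

    # Hypothetical 4-dim feature vectors from an RGB and an event encoder.
    rgb_feat = [0.9, 0.1, 0.3, 0.5]
    event_feat = [0.2, 0.8, 0.4, 0.1]
    fused = attention_fuse([rgb_feat, event_feat], d=4)
    print(fused)
    ```

    Because the attention weights sum to one, each fused feature lies between the corresponding RGB and event features, which is one way the network can trade the two modalities off per feature.
    
    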
  • Schulthess, Lukas; Ingolfsson, Thorir Mar; Nölke, Marc; et al. (2023)
    2023 IEEE Biomedical Circuits and Systems Conference (BioCAS)
    In ski jumping, low repetition rates of jumps limit the effectiveness of training. Thus, increasing learning rate within every single jump is key to success. A critical element of athlete training is motor learning, which has been shown to be accelerated by feedback methods. In particular, a fine-grained control of the center of gravity in the in-run is essential. This is because the actual takeoff occurs within a blink of an eye (∼300 ms), thus any unbalanced body posture during the in-run will affect flight. This paper presents a smart, compact, and energy-efficient wireless sensor system for real-time performance analysis and biofeedback during ski jumping. The system operates by gauging foot pressures at three distinct points on the insoles of the ski boot at 100 Hz. Foot pressure data can either be directly sent to coaches to improve their feedback, or fed into a Machine Learning (ML) model to give athletes instantaneous in-action feedback using a vibration motor in the ski boot. In the biofeedback scenario, foot pressures act as input variables for an optimized XGBoost model. We achieve a high predictive accuracy of 92.7% for center of mass predictions (dorsal shift, neutral stand, ventral shift). Subsequently, we parallelized and fine-tuned our XGBoost model for a RISC-V based low power parallel processor (GAP9), based on the Parallel Ultra-Low Power (PULP) architecture. We demonstrate real-time detection and feedback (0.0109 ms/inference) using our on-chip deployment. The proposed smart system is unobtrusive with a slim form factor (13 mm baseboard, 3.2 mm antenna) and a lightweight build (26 g). Power consumption analysis reveals that the system's energy-efficient design enables sustained operation over multiple days (up to 300 hours) without requiring recharge.
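    The classifier's interface (three insole pressure readings in, one of three center-of-mass classes out) can be illustrated with a toy stand-in. The paper uses an optimized XGBoost model; the hand-written rule and thresholds below are purely hypothetical and only show the input/output shape of the task.

    ```python
    def classify_com(heel, mid, toe):
        """Toy stand-in for the paper's XGBoost classifier: maps three
        insole pressure readings (arbitrary units) to a center-of-mass
        class. The 0.15 share threshold is illustrative, not from the paper."""
        total = heel + mid + toe
        if total == 0:
            return "neutral stand"
        toe_share = toe / total
        heel_share = heel / total
        if toe_share - heel_share > 0.15:
            return "ventral shift"  # weight shifted toward the toes
        if heel_share - toe_share > 0.15:
            return "dorsal shift"   # weight shifted toward the heel
        return "neutral stand"

    # Hypothetical (heel, mid, toe) pressure samples.
    samples = [(80, 60, 20), (20, 60, 80), (50, 60, 52)]
    labels = [classify_com(*s) for s in samples]
    print(labels)
    ```
    
    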
  • Gruel, Amélie; Di Mauro, Alfio; Hunziker, Robin; et al. (2023)
    2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS)
    Neuromorphic computing has been identified as an ideal candidate to exploit the potential of event-based cameras, a promising sensor for embedded computer vision. However, state-of-the-art neuromorphic models try to maximize the model performance on large platforms rather than a trade-off between memory requirements and performance. We present the first deployment of an embedded neuromorphic algorithm on Kraken, a low-power RISC-V-based SoC prototype including a neuromorphic spiking neural network (SNN) accelerator. In addition, the model employed in this paper was designed to achieve visual attention detection on event data while minimizing the neuronal populations’ size and the inference latency. Experimental results show that it is possible to achieve saliency detection in event data with a delay of 32ms, maintains classification accuracy of 84.51% and consumes only 3.85mJ per second of processed input data, achieving all of this while processing input data 10 times faster than real-time. This trade-off between decision latency, power consumption, accuracy, and run time significantly outperforms those achieved by previous implementations on CPU and neuromorphic hardware.
  • Hu, Cheng; Xie, Yangyang; Xie, Lei; et al. (2025)
    Transportation Research Part C: Emerging Technologies
    Vehicle safety is paramount in autonomous driving, particularly when managing vehicles at extreme side-slip angles, a challenge often overlooked by conventional controllers. Recent studies have focused on vehicle drift control under such extreme conditions. However, tracking complex trajectories while drifting is challenging, especially in the presence of a model mismatch. This paper proposes a two-layer model predictive controller based on sparse variational Gaussian processes. The first layer is responsible for computing the optimal drift equilibrium points, while the second layer is tasked with tracking these points. A variational free energy-based Gaussian process is utilized to compensate for errors in the upper-layer drift equilibrium point calculations and mismatches in the lower-layer controller model. Moreover, the vehicle's state is determined to be either in transit drift or deep drift based on whether the slip angle and steering angle have reached critical values. Gaussian models are established for each state to enhance prediction accuracy. The effectiveness of the controller is demonstrated through joint simulations on MATLAB and CarSim platforms. First, the proposed two-layer model predictive controller was compared with three state-of-the-art drift controllers, demonstrating at least a 48.64% reduction in average lateral error when tracking trajectories with varying curvature. Second, when combined with sparse Gaussian processes, the controller's learning ability was validated in scenarios with a 5% to 20% friction coefficient mismatch. Specifically, in the scenario with a 20% friction coefficient mismatch, its average lateral error was reduced by 95.09% after model error learning. Additionally, the controller was compared with both Fully Independent Training Conditional (FITC) GP-based MPC and Full GP-based MPC controllers, demonstrating better trajectory tracking capability and model error learning ability.
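    The transit-drift vs. deep-drift state logic described above reduces to a simple threshold check on the slip angle and steering angle. The critical values below are hypothetical placeholders, not from the paper; only the structure of the decision is taken from the abstract.

    ```python
    def drift_state(slip_angle, steering_angle,
                    slip_crit=0.4, steer_crit=0.3):
        """Sketch of the state logic from the abstract: the vehicle is in
        'deep drift' once both the slip angle and the steering angle have
        reached critical values, otherwise it is in 'transit drift'.
        Critical values (radians) are hypothetical, not from the paper."""
        if abs(slip_angle) >= slip_crit and abs(steering_angle) >= steer_crit:
            return "deep drift"
        return "transit drift"

    print(drift_state(0.5, 0.35))
    print(drift_state(0.2, 0.35))
    ```

    A separate Gaussian process model would then be selected per state, as the paper does, to keep each model's training data within one dynamic regime.
    
    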
  • Bian, Sizhen; Magno, Michele (2023)
    ISWC '23: Proceedings of the 2023 ACM International Symposium on Wearable Computers
    Energy efficiency and low latency are crucial requirements for designing wearable AI-empowered human activity recognition systems, due to the hard constraints of battery operation and closed-loop feedback. While neural network models have been extensively compressed to match the stringent edge requirements, spiking neural networks and event-based sensing are recently emerging as promising solutions to further improve performance due to their inherent energy efficiency and capacity to process spatiotemporal data with very low latency. This work aims to evaluate the effectiveness of spiking neural networks on neuromorphic processors in human activity recognition for wearable applications. The case of workout recognition with wrist-worn wearable motion sensors is used as a study. A multi-threshold delta modulation approach is utilized for encoding the input sensor data into spike trains to move the pipeline into the event-based approach. The spike trains are then fed to a spiking neural network with direct-event training, and the trained model is deployed on Loihi, the research neuromorphic platform from Intel, to evaluate energy and latency efficiency. Test results show that the spike-based workout recognition system can achieve an accuracy (87.5%) comparable to the popular milliwatt RISC-V based multi-core processor GAP8 with a traditional neural network (88.1%) while achieving a two times better energy-delay product (0.66 vs. 1.32).
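    The multi-threshold delta modulation encoding named above can be sketched as follows: a spike is emitted whenever the change since the last reconstructed value crosses one of several thresholds, carrying the sample index, the threshold level, and the sign of the change. The threshold values and the example trace are illustrative assumptions, not from the paper.

    ```python
    def delta_modulation_spikes(signal, thresholds=(0.1, 0.3)):
        """Multi-threshold delta modulation: emit a (sample_index,
        threshold_level, sign) spike whenever the change relative to the
        last reference value crosses a threshold. Threshold values are
        illustrative, not from the paper."""
        spikes = []
        ref = signal[0]
        for i, x in enumerate(signal[1:], start=1):
            delta = x - ref
            for level, th in enumerate(sorted(thresholds)):
                if abs(delta) >= th:
                    spikes.append((i, level, 1 if delta > 0 else -1))
            if abs(delta) >= min(thresholds):
                ref = x  # reset the reference after any spike fired
        return spikes

    # Hypothetical one-axis motion-sensor trace.
    trace = [0.0, 0.05, 0.2, 0.6, 0.55, 0.1]
    print(delta_modulation_spikes(trace))
    ```

    Large, fast changes fire spikes on both threshold levels while small fluctuations fire none, which is what makes the encoding sparse and event-driven.
    
    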
  • Schärer, Nicolas; Mikhaylov, Denis; Sievi, Cédric; et al. (2025)
    IEEE Access
    Wind power generation plays a key role in transitioning away from fossil fuel-dependent energy sources, contributing significantly to the mitigation of climate change. Monitoring and evaluating the aerodynamics of large wind turbine rotors is crucial to enable more wind energy deployment. This is necessary to achieve the European climate goal of a reduction in net greenhouse gas emissions by at least 55% by 2030, compared to 1990 levels. This paper presents a comparison between two measurement systems for evaluating the aerodynamic performance of wind turbine rotor blades in a full-scale wind tunnel test. One system uses an array of ten commercial compact ultra-low power micro-electromechanical systems (MEMS) pressure sensors placed on the blade surface, while the other employs high-accuracy lab-based pressure scanners embedded in the airfoil. The tests are conducted at a Reynolds number of 3.5×10⁶, which represents typical operating conditions for wind turbines. MEMS sensors are of particular interest, as they can enable real-time monitoring which would be impossible with the ground truth system. This work provides an accurate quantification of the impact of the MEMS system on the blade aerodynamics and its measurement accuracy. Our results indicate that MEMS sensors, with a total sensing power below 1.6 mW, can measure key aerodynamic parameters like Angle of Attack (AoA) and flow separation with a precision of 1°. Although there are minor differences in measurements due to sensor encapsulation, the MEMS system does not significantly compromise blade aerodynamics, with a maximum shift in the angle of attack for flow separation of only 1°. These findings indicate that surface and low-power MEMS sensor systems are a promising approach for efficient and sustainable wind turbine monitoring using self-sustaining Internet of Things devices and wireless sensor networks.
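    As a back-of-the-envelope check on the quoted test condition, the Reynolds number is Re = ρUc/μ. Standard sea-level air properties are used below; the flow speed and chord length are hypothetical choices picked only to show the arithmetic, not values from the paper.

    ```python
    # Back-of-the-envelope Reynolds number: Re = rho * U * c / mu.
    rho = 1.225    # air density, kg/m^3 (standard sea level)
    mu = 1.81e-5   # dynamic viscosity of air, Pa*s
    U = 51.7       # flow speed, m/s (assumed)
    c = 1.0        # chord length, m (assumed)
    Re = rho * U * c / mu
    print(f"Re = {Re:.3e}")
    ```

    With these assumed values the result lands near the 3.5×10⁶ regime quoted in the abstract; in practice the tunnel speed and model chord are chosen together to hit the target Reynolds number.
    
    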