Journal: IEEE Internet of Things Journal

Publisher

IEEE

ISSN

2327-4662

Search Results

Publications 1 - 10 of 37
  • Zhang, Jindi; Zhang, Yifan; Lu, Kejie; et al. (2021)
    IEEE Internet of Things Journal
    For autonomous driving, an essential task is to detect surrounding objects accurately. To this end, most existing systems use optical devices, including cameras and light detection and ranging (LiDAR) sensors, to collect environment data in real time. In recent years, many researchers have developed advanced machine learning models to detect surrounding objects. Nevertheless, the aforementioned optical devices are vulnerable to optical signal attacks, which could compromise the accuracy of object detection. To address this critical issue, we propose a framework to detect and identify sensors that are under attack. Specifically, we first develop a new technique to detect attacks on a system that consists of three sensors. Our main idea is to: 1) use data from three sensors to obtain two versions of depth maps (i.e., disparity) and 2) detect attacks by analyzing the distribution of disparity errors. In our study, we use real data sets and the state-of-the-art machine learning model to evaluate our attack detection scheme and the results confirm the effectiveness of our detection method. Based on the detection scheme, we further develop an identification model that is capable of identifying up to n-2 attacked sensors in a system with one LiDAR and n cameras. We prove the correctness of our identification scheme and conduct experiments to show the accuracy of our identification method. Finally, we investigate the overall sensitivity of our framework.
  • Li, Changling; Li, Ying (2025)
    IEEE Internet of Things Journal
    Multiagent reinforcement learning (MARL) has shown wide applicability in collaborative systems, such as autonomous driving and smart cities, for its ability to learn through interaction. With the recent development of drone networks, researchers have also applied MARL to trajectory planning problems. However, the dynamic environment and the limited battery capacity still make it challenging to use MARL for efficient collaborative task execution. In this article, we propose an energy-aware MARL model as an attempt to tackle these challenges, leveraging deep Q-networks (DQNs) with individual reward functions driven by the task execution progress and the remaining battery of drones. We conduct a set of simulation studies for the proposed model and compare it with the shared-reward MARL (Li et al., 2022) to explore the impact of credit assignment in MARL. The results indicate that our proposed model can achieve at least an 80% success rate regardless of the task locations and lengths. Similar to the shared-reward model, the individual-reward model achieves a better success rate when the task density is high, reaching nearly a 100% success rate when the task density gets close to 40%. The true advantage of our proposed model with individual rewards is revealed when scaling up the environment. The comparison with the shared-reward MARL shows that our proposed model is more robust to changes in the environment size and the number of agents. It achieves a higher success rate with fewer steps because the goal is clearer, which also improves energy efficiency.
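    A hedged sketch of what an individual, energy-aware reward of the kind this abstract describes might look like (the weights, shaping terms, and bonus are assumptions, not the authors' exact function): each drone is rewarded for its own task progress and penalized as its battery depletes.

```python
def individual_reward(progress_delta, battery_frac, done,
                      w_progress=1.0, w_energy=0.5, bonus=10.0):
    """Per-drone reward: own task progress minus an energy penalty.

    progress_delta: fraction of this drone's task completed this step.
    battery_frac:   remaining battery in [0, 1].
    done:           True once this drone finishes its assigned task.
    """
    r = w_progress * progress_delta - w_energy * (1.0 - battery_frac)
    if done:
        r += bonus  # terminal bonus for completing the assigned task
    return r

print(individual_reward(0.1, 0.9, False))  # small positive shaping reward
print(individual_reward(0.0, 0.2, False))  # negative: energy penalty dominates
```

    Per-drone (rather than shared) rewards like this are one conventional way to make credit assignment explicit, which is the comparison the abstract explores.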
  • Palossi, Daniele; Loquercio, Antonio; Conti, Francesco; et al. (2019)
    IEEE Internet of Things Journal
  • Niculescu, Vlad; Palossi, Daniele; Magno, Michele; et al. (2023)
    IEEE Internet of Things Journal
    Smart interaction between autonomous centimeter-scale unmanned aerial vehicles (i.e., nano-UAVs) and Internet of Things (IoT) sensor nodes is an upcoming high-impact scenario. This work tackles precise 3-D localization of indoor edge nodes with an autonomous nano-UAV without prior knowledge of their position. We employ ultrawideband (UWB) and wake-up radio (WUR) technologies: we perform UWB-based ranging and data exchange between the nano-UAV and the nodes, while the WUR minimizes the sensors' power consumption. UWB-based precise localization requires addressing multiple sources of error, such as UWB-ranging noise and the UWB antennas' uneven radiation pattern. The limited computational resources aboard a nano-UAV further complicate this scenario, requiring real-time execution of the localization algorithm within a microcontroller unit (MCU). We propose a novel UWB-based localization system for nano-UAVs, composed of: 1) a lightweight localization algorithm; 2) an optimal flight strategy; and 3) a ranging-error-correction model. Our 3-D flight policy requires only five UWB measurements to feed the localization algorithm, which bounds the localization error within 28 cm and runs in 1.2 ms on a Cortex-M4 MCU. Localization accuracy is improved by an additional 25% thanks to a novel error-correction model. Leveraging the WUR, the entire localization/data-exchange cycle costs only 24 mJ at the sensor node, which is 50 times more energy efficient than the state of the art with comparable localization accuracy.
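    The five-measurement localization step can be illustrated with a standard linearized least-squares multilateration (a generic sketch, not the paper's lightweight algorithm; the coordinates below are made up): subtracting one range equation from the others turns the quadratic problem into a small linear system of the kind an MCU-class device could solve.

```python
import numpy as np

def locate_node(drone_positions, ranges):
    """Least-squares 3-D multilateration.

    drone_positions: (k, 3) known UAV positions where ranges were taken.
    ranges:          (k,) UWB distances from each position to the node.
    Linearize by subtracting the first range equation from the rest,
    giving a (k-1) x 3 linear system solved via least squares.
    """
    p = np.asarray(drone_positions, float)
    r = np.asarray(ranges, float)
    A = 2.0 * (p[1:] - p[0])
    b = (r[0] ** 2 - r[1:] ** 2) + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol

# Five measurement positions (meters) and exact ranges to a node at (2, 1, 0.5).
node = np.array([2.0, 1.0, 0.5])
pos = np.array([[0, 0, 1], [4, 0, 1], [0, 4, 1], [4, 4, 1], [2, 2, 2]], float)
rng = np.linalg.norm(pos - node, axis=1)
print(locate_node(pos, rng))  # ≈ [2.0, 1.0, 0.5]
```

    With noisy ranges the same least-squares step averages out part of the error, which is why a handful of well-placed measurements can already bound the position estimate.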
  • Rusci, Manuele; Rossi, Davide; Farella, Elisabetta; et al. (2017)
    IEEE Internet of Things Journal
  • Nitti, M.; Atzori, Luigi; Cvijikj, Irena P. (2014)
    IEEE Internet of Things Journal
  • Hu, Youbing; Li, Zhijun; Chen, YongRui; et al. (2023)
    IEEE Internet of Things Journal
    Many intelligent applications based on deep neural networks (DNNs) are increasingly running on Internet of Things (IoT) devices. Unfortunately, the computing resources of these IoT devices are limited, which will seriously hinder the widespread deployment of various smart applications. A popular solution is to offload part of the computation tasks from the IoT device to the cloud by way of device-cloud collaboration. However, existing collaboration approaches may suffer from long network transmission delay or degraded accuracy due to the large amount of intermediate results, bringing enormous challenges to tasks, such as object detection, that require massive computing resources. In this article, we propose an efficient device-cloud collaborative inference (DCCI) object detection framework, which dynamically adjusts the amount of transferred data according to the content of input images. Specifically, a content-aware hard-case discriminator is proposed to automatically classify the input images as hard cases or simple cases; hard cases are uploaded to the cloud and processed by a deployed heavyweight model, while simple cases are processed by a lightweight model deployed on the IoT device, where the lightweight model is automatically compressed via reinforcement learning according to the resource constraints of the IoT device. Furthermore, a collaborative scheduler based on the runtime load and network transmission capability of IoT devices is proposed to optimize the collaborative computation between IoT devices and the cloud. Extensive experimental evaluations show that compared to the device-only approach, DCCI can reduce the memory footprint and compute resources of IoT devices by more than 90.0% and 30.87%, respectively. Compared to the cloud-centric approach, DCCI can save 2.0× of network bandwidth. In addition, compared with the state-of-the-art DNN partitioning method, DCCI can save 1.2× of inference latency and 1.3× of IoT device energy consumption under the same accuracy constraint.
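    A minimal sketch of the routing decision this abstract implies (the discriminator, models, and threshold below are placeholders, not the DCCI implementation): a cheap content-aware score decides whether an image is processed by the on-device model or offloaded to the cloud.

```python
def route_inference(image, hard_case_score, edge_model, cloud_submit, threshold=0.5):
    """Route an input to the on-device model or the cloud.

    hard_case_score: callable returning a difficulty score in [0, 1];
    edge_model:      lightweight local detector (callable);
    cloud_submit:    function that uploads the image and returns cloud results.
    """
    if hard_case_score(image) >= threshold:
        return "cloud", cloud_submit(image)   # hard case: heavyweight cloud model
    return "edge", edge_model(image)          # simple case: stays on the device

# Toy demo with stand-in callables.
where, result = route_inference(
    image={"complexity": 0.2},
    hard_case_score=lambda img: img["complexity"],
    edge_model=lambda img: ["car"],
    cloud_submit=lambda img: ["car", "pedestrian"],
)
print(where, result)  # edge ['car']
```

    The interesting engineering is in making `hard_case_score` accurate yet far cheaper than the detector itself; otherwise the discriminator eats the savings it is meant to create.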
  • Camilleri, Patrick; Mossayebi, Shahram; Paterson, Kenneth G.; et al. (2024)
    IEEE Internet of Things Journal
    Physically unclonable functions (PUFs) in silicon have received significant attention since their initial proposal, with a wide variety of different designs available. This is largely due to their ability to provide device-specific outputs which can be used in diverse applications, such as authentication, attestation, and cryptographic key generation. Existing designs for silicon PUFs exploit manufacturing variation in delay lines or in the behavior of memory cells to provide device-unique outputs. This article proposes a new kind of PUF which we refer to as a gate tunneling PUF. Our PUF design exploits the effect of manufacturing variation on quantum gate tunneling current to generate unique, reproducible, yet unpredictable PUF outputs. A significant benefit of our design is its realizability in standard complementary metal oxide semiconductor technology, leading to easy integration of our gate tunneling PUF with other security and general system functions in single-chip designs. We have prototyped our gate tunneling PUF design, producing test devices in arrays of 3584 output bits. Initial results are very promising, showing good randomness, uniqueness, reproducibility, and temperature stability. These properties suggest that our devices' outputs require minimal post-processing to be used for cryptographic keys.
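    The uniqueness and reproducibility claims above are conventionally quantified with inter-device and intra-device fractional Hamming distances. A generic sketch of those two metrics on synthetic 3584-bit responses (the bitstrings are random stand-ins, not measured PUF data):

```python
import numpy as np

def fractional_hd(a, b):
    # Fraction of differing bits between two equal-length responses.
    a, b = np.asarray(a), np.asarray(b)
    return np.mean(a != b)

rng = np.random.default_rng(0)
dev_a = rng.integers(0, 2, 3584)          # response of device A
dev_b = rng.integers(0, 2, 3584)          # response of device B
noisy_a = dev_a.copy()
flips = rng.choice(3584, size=36, replace=False)  # ~1% bit noise on a re-read of A
noisy_a[flips] ^= 1

print(fractional_hd(dev_a, dev_b))    # inter-device: ideally near 0.5 (uniqueness)
print(fractional_hd(dev_a, noisy_a))  # intra-device: near 0 (reproducibility)
```

    An inter-device distance near 0.5 with an intra-device distance near 0 is what lets a key-derivation stage tolerate noise with little or no error correction, matching the abstract's "minimal post-processing" claim.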
  • Kalenberg, Konstantin; Müller, Hanna; Polonelli, Tommaso; et al. (2024)
    IEEE Internet of Things Journal
    Autonomously navigating robots need to perceive and interpret their surroundings. Currently, cameras are among the most used sensors due to their high resolution and frame rates at relatively low energy consumption and cost. In recent years, cutting-edge sensors, such as miniaturized depth cameras, have demonstrated strong potential, specifically for nano-size unmanned aerial vehicles (UAVs), where low-power consumption, lightweight hardware, and low computational demand are essential. However, cameras are limited to working under good lighting conditions, while depth cameras have a limited range. To maximize robustness, we propose to fuse a millimeter-form-factor 64-pixel depth sensor and a low-resolution grayscale camera. In this work, a nano-UAV learns to detect and fly through a gate with a lightweight autonomous navigation system based on two tinyML convolutional neural network models trained in simulation, running entirely onboard in 7.6 ms and with an accuracy above 91%. Field tests are based on the Crazyflie 2.1, featuring a total mass of 39 g. We demonstrate the robustness and potential of our navigation policy in multiple application scenarios, with a failure probability down to 1.2 × 10⁻³ crashes per meter, experiencing only two crashes on a cumulative flight distance of 1.7 km.
  • Lu, Anqi; Cheng, Yun; Hu, Youbing; et al. (2023)
    IEEE Internet of Things Journal
    Recently, buoyed by advances in the space industry, low Earth orbit (LEO) satellites have become an important part of the Internet of Things (IoT). As LEO satellites enter the era of big data links with the IoT, how to handle the data from the satellite IoT is a problem worth considering. The conventional object detection approach in optical remote sensing simply transmits the raw data to the ground. However, it ignores the properties of the images and their connection with the downstream task. To obtain efficient data transmission and accurate object detection, we propose a task-inspired satellite-terrestrial collaborative object detection framework called STCOD. It detects regions of interest (ROIs) and adopts a block-based adaptive sampling method to compress the background (BG) in optical remote sensing images by introducing satellite edge computing (SEC) on satellites. The STCOD framework also sets the transmission priority of image blocks according to their contributions to the task and uses fountain codes to ensure the reliable transmission of important image blocks. We build a complete software simulation framework to validate our method, including the satellite module, the transmission module, and the terrestrial module. Extensive experimental results show that the STCOD framework reduces the amount of downlink data by 50.04% while losing only 0.54% of detection accuracy. In our simulated satellite-terrestrial link, the STCOD framework can reduce the number of satellite-to-terrestrial transmissions by half. When the packet loss rate is between 5% and 20%, only 0.05% to 0.5% of detection accuracy is lost.
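    A hedged sketch of the block-prioritization idea this abstract describes (the block size and scoring are invented for illustration; STCOD's actual scheme is not reproduced): blocks overlapping detected regions of interest get top transmission priority, while pure-background blocks become candidates for aggressive subsampling.

```python
import numpy as np

def prioritize_blocks(roi_mask, block=8):
    """Rank image blocks by their overlap with detected ROIs.

    roi_mask: 2-D boolean array, True where an ROI was detected.
    Returns (priority, row, col) tuples, highest priority first; blocks
    with zero ROI coverage would be downsampled or sent last.
    """
    h, w = roi_mask.shape
    scored = []
    for r in range(0, h, block):
        for c in range(0, w, block):
            cover = float(roi_mask[r:r + block, c:c + block].mean())
            scored.append((cover, r, c))
    return sorted(scored, reverse=True)

mask = np.zeros((16, 16), bool)
mask[0:8, 8:16] = True                 # one ROI in the top-right block
order = prioritize_blocks(mask)
print(order[0])  # (1.0, 0, 8) — the ROI block is transmitted first
```

    Feeding such a priority order into a rateless fountain code is one way to make the important blocks the most likely to survive a lossy satellite-to-ground link.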