Journal: CAAI Transactions on Intelligence Technology
Abbreviation

CAAI Trans. Intell. Technol.

Publisher

Institution of Engineering and Technology

ISSN

2468-2322

Search Results

Publications 1 - 3 of 3
  • Li, Yidi; Wang, Guoquan; Chen, Zhan; et al. (2023)
    CAAI Transactions on Intelligence Technology
Audio-visual wake word spotting is a challenging multi-modal task that exploits visual information of lip motion patterns to supplement acoustic speech and improve overall detection performance. However, most audio-visual wake word spotting models are only suitable for simple single-speaker scenarios and have high computational complexity. Further development is hindered by complex multi-person scenarios and computational limitations in mobile environments. In this paper, a novel audio-visual model is proposed for on-device multi-person wake word spotting. Firstly, an attention-based audio-visual voice activity detection module is presented, which generates an attention score matrix of audio and visual representations to derive an active speaker representation. Secondly, a knowledge distillation method is introduced to transfer knowledge from the large model to the on-device model to control the size of our model. Moreover, a new audio-visual dataset, PKU-KWS, is collected for sentence-level multi-person wake word spotting. Experimental results on the PKU-KWS dataset show that this approach outperforms previous state-of-the-art methods.
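The attention-based active-speaker step described in this abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the dot-product scoring, and the single-query simplification (one audio window against n visible speakers) are all assumptions for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def active_speaker_representation(audio_feat, visual_feats):
    """Cross-modal attention: score each candidate speaker's visual
    (lip-region) embedding against the audio embedding, then pool.

    audio_feat:   (d,)    acoustic embedding of the current window
    visual_feats: (n, d)  one visual embedding per visible speaker
    Returns the attention-weighted speaker representation and weights.
    """
    d = audio_feat.shape[0]
    # scaled dot-product scores (one audio query vs. n speakers)
    scores = visual_feats @ audio_feat / np.sqrt(d)
    weights = softmax(scores)
    # the weighted sum emphasises the speaker whose lip motion
    # best matches the audio, i.e. the active speaker
    return weights @ visual_feats, weights
```

In the multi-person setting, pooling in this way lets the downstream wake word detector attend to a single derived representation rather than all visible faces.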
  • Lin, Han; Han, Jiatong; Wu, Pingping; et al. (2024)
    CAAI Transactions on Intelligence Technology
As human-machine interaction (HMI) in healthcare continues to evolve, the issue of trust in HMI in healthcare has been raised and explored. It is critical for the development and safety of healthcare that humans have proper trust in medical machines. Intelligent machines that apply machine learning (ML) technologies continue to penetrate deeper into the medical environment, which also places higher demands on intelligent healthcare. In order to make machines play a more effective role in HMI in healthcare and make human-machine cooperation more harmonious, the authors need to build good human-machine trust (HMT) in healthcare. This article provides a systematic overview of the prominent research on ML and HMT in healthcare. In addition, this study explores and analyses ML and three important factors that influence HMT in healthcare, and then proposes an HMT model in healthcare. Finally, general trends are summarised and issues to address in future research on HMT in healthcare are identified.
  • Li, Yidi; Ren, Jiale; Wang, Yawei; et al. (2024)
    CAAI Transactions on Intelligence Technology
As one of the most effective methods to improve the accuracy and robustness of speech tasks, the audio–visual fusion approach has recently been introduced into the field of Keyword Spotting (KWS). However, existing audio–visual keyword spotting models are limited to detecting isolated words, while keyword spotting for unconstrained speech is still a challenging problem. To this end, an Audio–Visual Keyword Transformer (AVKT) network is proposed to spot keywords in unconstrained video clips. The authors present a transformer classifier with learnable CLS tokens to extract distinctive keyword features from the variable‐length audio and visual inputs. The outputs of the audio and visual branches are combined in a decision fusion module. As humans can easily notice whether a keyword appears in a sentence or not, our AVKT network can detect whether a video clip with a spoken sentence contains a pre‐specified keyword. Moreover, the position of the keyword is localised in the attention map without additional position labels. Experimental results on the LRS2‐KWS dataset and our newly collected PKU‐KWS dataset show that the accuracy of AVKT exceeded 99% in clean scenes and 85% in extremely noisy conditions. The code is available at https://github.com/jialeren/AVKT.
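The two ideas named in this abstract — a learnable CLS token summarising a variable-length sequence, and decision-level fusion of the two branches — can be sketched as below. This is an illustrative sketch only: the function names and the fusion weight of 0.6 are assumptions, not values from the paper (the released code at the linked repository is the authoritative implementation).

```python
import numpy as np

def prepend_cls(tokens, cls_token):
    """Prepend a learnable CLS token to a variable-length token sequence.
    After the transformer layers, the output at the CLS position serves
    as a fixed-size summary of the whole clip, whatever its length.

    tokens:    (n, d)  per-frame audio or visual embeddings
    cls_token: (1, d)  learnable parameter vector
    """
    return np.vstack([cls_token, tokens])

def decision_fusion(p_audio, p_visual, w_audio=0.6):
    """Late (decision-level) fusion: combine the per-branch keyword
    probabilities into one score. The 0.6 audio weight is purely
    illustrative; in practice it would be learned or validated."""
    return w_audio * p_audio + (1.0 - w_audio) * p_visual
```

Decision fusion keeps the two branches independent until the final score, which makes the model robust when one modality degrades, e.g. the audio branch in extremely noisy conditions.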