Micro Gesture Recognition with Multi-Dimensional Feature Fusion and CQ-MobileNetV3 Using FMCW Radar
Summary
This summary is machine-generated. This study introduces a radar-based micro gesture recognition system that combines multi-dimensional feature fusion with a lightweight neural network, achieving high recognition accuracy for contactless human-computer interaction at low computational cost.
Area Of Science
- Human-Computer Interaction
- Radar Signal Processing
- Deep Learning
Background
- Contactless human-computer interaction (HCI) is advancing, with radar-based gesture recognition gaining traction.
- Recognizing micro gestures presents challenges due to small motion amplitudes and short durations, impacting feature extraction and accuracy.
- Balancing high recognition accuracy with low computational and storage demands is crucial for practical applications.
Purpose Of The Study
- To develop an efficient and accurate micro gesture recognition method for radar-based HCI.
- To address the challenges of feature extraction and computational complexity in micro gesture recognition.
- To propose a novel approach combining multi-dimensional feature fusion with a lightweight deep learning network.
Main Methods
- Constructing range-time, velocity-time, and angle-time maps from radar data.
- Refining these maps through normalization and adaptive filtering, then fusing them into a comprehensive range-velocity-angle-time map.
- Designing and implementing a lightweight CQ-MobileNetV3 network, optimizing MobileNetV3 with attention modules (CBAM, SA) for enhanced accuracy and reduced complexity.
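The map-construction and fusion steps above follow standard FMCW processing: a range FFT along fast time, a Doppler FFT along slow time, then normalization and channel-wise stacking. The sketch below illustrates this generic pipeline with numpy; the array shapes, the min-max normalization, and the two-channel fusion are illustrative assumptions, not the authors' exact implementation (which also includes an angle-time map and adaptive filtering).

```python
import numpy as np

def radar_feature_maps(adc_cube: np.ndarray) -> dict:
    """Build range-time and velocity-time maps from a raw FMCW ADC cube.

    adc_cube: complex array of shape (frames, chirps, samples).
    Generic FMCW processing for illustration, not the paper's pipeline.
    """
    # Range FFT along the fast-time (sample) axis: one range profile per chirp.
    range_profiles = np.fft.fft(adc_cube, axis=2)
    # Range-time map: magnitude averaged over the chirps within each frame.
    rt_map = np.abs(range_profiles).mean(axis=1)               # (frames, range bins)
    # Doppler FFT along the slow-time (chirp) axis gives velocity content.
    doppler = np.fft.fftshift(np.fft.fft(range_profiles, axis=1), axes=1)
    # Velocity-time map: collapse the range axis by taking the peak energy.
    vt_map = np.abs(doppler).max(axis=2)                       # (frames, velocity bins)
    return {"range_time": rt_map, "velocity_time": vt_map}

def normalize(m: np.ndarray) -> np.ndarray:
    """Min-max normalize a map to [0, 1] before fusion."""
    return (m - m.min()) / (m.max() - m.min() + 1e-12)

# Toy usage: 32 frames, 64 chirps, 64 samples (chirps == samples so the
# normalized maps can be stacked directly into a multi-channel tensor).
rng = np.random.default_rng(0)
cube = rng.standard_normal((32, 64, 64)) + 1j * rng.standard_normal((32, 64, 64))
maps = radar_feature_maps(cube)
fused = np.stack([normalize(maps["range_time"]),
                  normalize(maps["velocity_time"])], axis=0)   # (2, 32, 64)
```

In practice each map would have its own bin count, so a resize step would precede stacking; the equal-sized toy cube sidesteps that detail.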
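The CBAM module named above gates feature channels using both average- and max-pooled descriptors passed through a shared MLP. The following is a minimal numpy sketch of that channel-attention branch only; the weight matrices, reduction ratio, and shapes are hypothetical stand-ins, not the paper's trained CQ-MobileNetV3 layers.

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """CBAM-style channel attention on a feature map x of shape (C, H, W).

    w1 (C//r, C) and w2 (C, C//r) are the shared MLP's weights with
    reduction ratio r; both pooled descriptors pass through the same MLP.
    Illustrative sketch, not the authors' implementation.
    """
    avg = x.mean(axis=(1, 2))                       # (C,) global average pooling
    mx = x.max(axis=(1, 2))                         # (C,) global max pooling
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)    # ReLU hidden layer
    gate = sigmoid(mlp(avg) + mlp(mx))              # (C,) per-channel weight in (0, 1)
    return x * gate[:, None, None]                  # reweight channels, shape kept

# Toy usage: 8 channels, reduction ratio 4, random weights.
rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 16, 16))
w1 = rng.standard_normal((2, 8)) * 0.1
w2 = rng.standard_normal((8, 2)) * 0.1
out = channel_attention(feat, w1, w2)
```

Because the gate is a sigmoid, each channel is scaled by a value in (0, 1): informative channels are preserved, weak ones suppressed, which is how such modules raise accuracy without adding many parameters.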
Main Results
- The proposed method achieved a 97.16% recognition accuracy for 14 distinct micro gestures.
- The CQ-MobileNetV3 network demonstrated remarkable efficiency with only 0.207 M parameters and 0.027 GFLOPs computational complexity.
- Experiments with a 77 GHz FMCW radar validated its superior performance relative to other deep neural networks.
Conclusions
- The developed multi-dimensional feature fusion and CQ-MobileNetV3 network offer a highly accurate and computationally efficient solution for radar-based micro gesture recognition.
- This approach effectively captures micro gesture characteristics, overcoming limitations in feature extraction and enabling practical contactless HCI.
- The study highlights the potential of lightweight deep learning models and advanced feature fusion techniques in advancing radar-based interaction systems.

