AutoTriNet-YOLO triple-attention framework for robust traffic sign detection
View abstract on PubMed
Summary
This summary is machine-generated. This study introduces AutoTriNet-YOLO, a traffic sign detection framework that integrates triple attention to improve accuracy and efficiency, achieving state-of-the-art results for intelligent transportation systems.
Area Of Science
- Computer Vision
- Artificial Intelligence
- Intelligent Transportation Systems
Background
- Traffic sign detection (TSD) is crucial for intelligent transportation systems but faces challenges like small targets and environmental variability.
- Existing deep learning methods, including YOLO, struggle to balance accuracy, efficiency, and robustness in diverse scenarios.
Purpose Of The Study
- To propose AutoTriNet-YOLO, a novel framework enhancing TSD by integrating triple-attention, dynamic feature fusion, and adaptive computation.
- To address limitations in current TSD methods regarding accuracy, computational efficiency, and robustness.
Main Methods
- Introduced the TriplePathBlock module, which runs the Convolutional Block Attention Module (CBAM), Non-local Blocks, and a Lite Transformer in parallel to capture multi-scale contextual dependencies.
- Implemented a Dynamic Fusion Gate for adaptive attention path weighting and a Selective Insert mechanism for pruning redundant operations.
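To make the Dynamic Fusion Gate concrete, the following is a minimal NumPy sketch of one plausible gating scheme: each attention path's output is pooled into a channel descriptor, projected to a scalar logit, and a softmax over paths yields the adaptive fusion weights. The function and parameter names (`dynamic_fusion_gate`, `gate_weights`, `gate_bias`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dynamic_fusion_gate(paths, gate_weights, gate_bias):
    """Fuse parallel attention-path outputs with learned softmax weights.

    paths        : list of arrays, each (C, H, W) -- outputs of the three
                   branches (e.g. CBAM, Non-local, Lite Transformer).
    gate_weights : (num_paths, C) per-path channel projection (hypothetical).
    gate_bias    : (num_paths,) per-path bias (hypothetical).
    Returns an array of shape (C, H, W).
    """
    # Global average pooling of each path's feature map -> (num_paths, C)
    descriptors = np.stack([p.mean(axis=(1, 2)) for p in paths])
    # One scalar logit per path, then softmax over paths
    logits = (descriptors * gate_weights).sum(axis=1) + gate_bias
    alpha = softmax(logits)  # adaptive path weights, sum to 1
    # Convex combination of the path outputs
    return sum(a * p for a, p in zip(alpha, paths))
```

Because the weights are softmax-normalized, the gate interpolates between the three attention paths rather than summing them unchecked, which matches the paper's described goal of adaptive attention-path weighting.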
Main Results
- Achieved state-of-the-art performance on a traffic sign dataset with 86.6% mAP@50 and 65.3% mAP@50-95.
- Outperformed existing methods such as TSD-YOLO and EDN-YOLO, with ablation studies confirming each component's contribution, especially CBAM's role in capturing local features.
Conclusions
- AutoTriNet-YOLO offers a scalable, computationally optimized architecture for robust TSD by unifying diverse attention mechanisms.
- The framework's real-time efficiency makes it suitable for edge deployment in autonomous driving systems.

