ADM-SLAM: Accurate and Fast Dynamic Visual SLAM with Adaptive Feature Point Extraction, Deeplabv3pro, and Multi-View Geometry
Summary
This summary is machine-generated. This study introduces ADM-SLAM, a visual SLAM system for dynamic environments. It significantly reduces trajectory errors by efficiently handling moving objects, thereby improving robot navigation accuracy.
Area Of Science
- Robotics
- Computer Vision
- Artificial Intelligence
Background
- Visual Simultaneous Localization and Mapping (V-SLAM) is essential for autonomous systems.
- Dynamic environments pose significant challenges for V-SLAM accuracy.
- Current deep learning methods for dynamic object recognition are computationally intensive.
Purpose Of The Study
- To develop an efficient V-SLAM system for dynamic environments.
- To overcome the computational limitations of existing dynamic object recognition models.
- To improve the accuracy and real-time performance of V-SLAM in challenging scenarios.
Main Methods
- Proposed the ADM-SLAM system, built upon ORB-SLAM2.
- Integrated adaptive feature point extraction with homogenization (spatially uniform distribution of feature points).
- Employed a lightweight deep-learning semantic segmentation network based on an improved DeepLabv3 (Deeplabv3pro).
- Utilized multi-view geometric segmentation to detect the motion state of feature points.
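A common form of the multi-view geometric check described above flags a matched feature point as dynamic when it violates the epipolar constraint, i.e., when its distance to the epipolar line induced by the fundamental matrix exceeds a threshold. The following is a minimal illustrative sketch of that idea (function names and the threshold are assumptions, not the paper's exact implementation):

```python
# Hypothetical sketch of an epipolar-constraint test for dynamic points.
# A static point observed in two frames should lie on (or near) the
# epipolar line; a large distance suggests the point itself moved.
import numpy as np

def epipolar_distance(pt1, pt2, F):
    """Distance of pt2 = (x, y) from the epipolar line F @ [pt1, 1]."""
    p1 = np.append(np.asarray(pt1, float), 1.0)
    p2 = np.append(np.asarray(pt2, float), 1.0)
    line = F @ p1  # epipolar line in image 2: a*x + b*y + c = 0
    return abs(p2 @ line) / np.hypot(line[0], line[1])

def flag_dynamic_points(matches, F, threshold=1.0):
    """Boolean mask: True where a match violates the epipolar constraint
    by more than `threshold` pixels (threshold is an assumed value)."""
    return np.array(
        [epipolar_distance(p1, p2, F) > threshold for p1, p2 in matches]
    )
```

In a full pipeline, points flagged here (together with points inside segmented dynamic-object masks) would be excluded from pose estimation and mapping.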
Main Results
- ADM-SLAM significantly outperforms ORB-SLAM2 in dynamic environments.
- Achieved up to a 97% reduction in Absolute Trajectory Error (ATE) in high-dynamic scenes.
- Demonstrated superior real-time performance and accuracy compared to DS-SLAM and DynaSLAM.
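The ATE figure above is conventionally computed as the root-mean-square of the translational differences between estimated and ground-truth camera poses. A minimal sketch of that metric (omitting the usual trajectory alignment step, and with illustrative names) is:

```python
# Minimal sketch of the Absolute Trajectory Error (ATE) RMSE metric.
# The standard evaluation first aligns trajectories (e.g., via a
# least-squares rigid transform); that step is omitted here for brevity.
import numpy as np

def ate_rmse(est_xyz, gt_xyz):
    """RMSE of per-pose translational error between two aligned
    trajectories, each an (N, 3) array of camera positions."""
    diff = np.asarray(est_xyz, float) - np.asarray(gt_xyz, float)
    return float(np.sqrt(np.mean(np.sum(diff**2, axis=1))))
```

A 97% ATE reduction means this RMSE value for ADM-SLAM is roughly 3% of the corresponding ORB-SLAM2 value on the same high-dynamic sequence.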
Conclusions
- ADM-SLAM effectively eliminates dynamic interference points.
- The system shows excellent adaptability and robustness in highly dynamic environments.
- Offers a viable solution for real-time V-SLAM in challenging conditions.