Ensemble of vision transformer architectures for efficient Alzheimer's Disease classification
Summary
This summary is machine-generated. This study introduces an ensemble of Vision Transformers (VTs) for efficient Alzheimer's Disease (AD) classification. The proposed VT framework significantly improves accuracy over traditional machine learning (ML) and CNN models, offering a promising tool for early AD detection.
Area Of Science
- Computer Vision
- Artificial Intelligence
- Neuroscience
Background
- Vision Transformers (VTs) have emerged as state-of-the-art in computer vision, excelling at capturing long-range dependencies and handling class imbalance.
- Alzheimer's Disease (AD) classification remains a challenge, particularly under imbalanced and data-scarce conditions.
Purpose Of The Study
- To propose and evaluate an ensemble framework of VTs for efficient and accurate classification of Alzheimer's Disease (AD).
- To assess the model's performance on imbalanced and data-scarce AD datasets.
Main Methods
- An ensemble framework comprising four vanilla Vision Transformers (VTs), combined via hard- and soft-voting approaches.
- Model testing and validation on the OASIS and ADNI Alzheimer's Disease datasets.
- Comparative analysis against state-of-the-art Convolutional Neural Network (CNN) and Machine Learning (ML) models.
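The hard- and soft-voting schemes mentioned above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual implementation: it assumes each of the four VTs outputs per-class probabilities for a batch of scans, and the function names (`soft_vote`, `hard_vote`) and toy data are invented for demonstration.

```python
import numpy as np

def soft_vote(prob_list):
    """Soft voting: average each model's class probabilities, then take argmax."""
    avg = np.mean(np.stack(prob_list, axis=0), axis=0)  # shape: (batch, classes)
    return np.argmax(avg, axis=1)

def hard_vote(prob_list):
    """Hard voting: each model casts one vote (its argmax); majority class wins."""
    votes = np.stack([np.argmax(p, axis=1) for p in prob_list], axis=0)  # (models, batch)
    n_classes = prob_list[0].shape[1]
    # Count votes per class for each sample; ties resolve to the lower class index.
    counts = np.apply_along_axis(np.bincount, 0, votes, minlength=n_classes)  # (classes, batch)
    return np.argmax(counts, axis=0)

# Toy example: four "models", batch of 2 samples, 3 classes (e.g. CN / MCI / AD).
probs = [np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]]),
         np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]]),
         np.array([[0.4, 0.5, 0.1], [0.2, 0.2, 0.6]]),
         np.array([[0.7, 0.2, 0.1], [0.3, 0.3, 0.4]])]
print(soft_vote(probs))  # averaged probabilities decide the label
print(hard_vote(probs))  # majority of per-model argmaxes decides the label
```

Soft voting retains each model's confidence, while hard voting weights all models equally regardless of how certain they are; ensembles often report both, as this study does.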
Main Results
- The ensemble VT model achieved a 2% improvement over individual VT models.
- Demonstrated superior performance with accuracy gains of 4.14% over ML models and 4.72% over CNN models.
- Effective performance under imbalanced and data-scarce conditions using the ADNI dataset.
Conclusions
- The proposed ensemble VT framework offers a significant advancement in Alzheimer's Disease classification accuracy.
- The approach shows promise for efficient AD detection, especially in challenging data scenarios.
- Identified limitations and future research directions are discussed, with code made publicly available.

