Abstract
Accurate classification of FAMACHA© scores is essential for assessing anemia in small ruminants and for optimizing parasite management strategies in livestock production. The FAMACHA© system grades anemia severity on a scale from 1 to 5: scores 1 and 2 indicate healthy animals, score 3 a borderline condition, and scores 4 and 5 severe anemia. Traditional FAMACHA© assessment relies on subjective visual examination, which is labor-intensive and susceptible to observer bias. To address this limitation, this study applied machine learning to automate FAMACHA© classification, using Support Vector Machine (SVM), Backpropagation Neural Network (BPNN), and Convolutional Neural Network (CNN) models. A dataset of 4700 images of the lower eye conjunctiva of young male goats was collected weekly over six months with a Samsung A54 smartphone. The three models were compared using precision, recall, F1-score, and accuracy. The CNN achieved the highest classification accuracy (97.8 %), outperforming both the BPNN and the SVM. The SVM reached a mean accuracy of 84.6 %, performing strongly on severe-anemia detection but poorly on the intermediate classes. The BPNN attained an overall accuracy of 84 %, offering a balanced trade-off between precision and recall. The CNN's superior performance is attributed to its ability to learn spatial and contextual patterns from images, yielding robust classification across all FAMACHA© categories. These findings underscore the CNN's potential as a reliable, scalable solution for automated anemia detection in livestock, enabling early intervention and improved herd health management. Future research should explore ensemble learning approaches and integration with mobile applications for real-time deployment by both commercial and resource-limited livestock producers.
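The model comparison rests on standard multi-class metrics. As an illustrative sketch only (not the study's code), per-class precision, recall, and F1-score for the five FAMACHA© classes, together with overall accuracy, can be computed from true and predicted labels as follows; the label arrays below are hypothetical and do not come from the study's dataset:

```python
def per_class_metrics(y_true, y_pred, classes):
    """Per-class precision/recall/F1 plus overall accuracy for multi-class labels."""
    metrics = {}
    for c in classes:
        # True positives, false positives, false negatives for class c
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if (precision + recall) else 0.0)
        metrics[c] = {"precision": precision, "recall": recall, "f1": f1}
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return metrics, accuracy

# Hypothetical FAMACHA scores (1-5) for ten animals
y_true = [1, 2, 3, 4, 5, 3, 2, 1, 4, 5]
y_pred = [1, 2, 3, 4, 5, 2, 2, 1, 4, 4]
m, acc = per_class_metrics(y_true, y_pred, classes=[1, 2, 3, 4, 5])
print(f"accuracy = {acc:.2f}")  # → accuracy = 0.80
```

In practice such metrics are reported per class precisely because, as the abstract notes for the SVM, a model can score well on the extreme classes (1-2, 4-5) while misclassifying the borderline class 3, a weakness that overall accuracy alone would hide.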