Differences in the annotation between facial images and videos for training an artificial intelligence for skin type determination
Summary
This summary is machine-generated. Videos may offer a more sensitive assessment of dynamic skin features, such as wrinkles, than static images. The subjectivity of skin analysis highlights the need for consistent rater training and cross-validation in AI development.
Area Of Science
- Dermatology
- Artificial Intelligence
- Medical Imaging
Background
- The Grand-AID project aims to develop an AI-powered digital tool for personalized skin analysis and care routines.
- Training this AI necessitates accurate annotation of skin features from facial imagery.
- A key research question is whether video analysis is superior to static images for assessing dynamic skin parameters.
Purpose Of The Study
- To compare the effectiveness of video versus static image analysis for annotating eight distinct skin features.
- To evaluate inter-rater reliability in assessing these features across different modalities.
Main Methods
- 25 healthy volunteers provided standardized image sequences and video recordings with facial expressions.
- Four expert dermatology raters annotated eight skin features using semi-quantitative and linear rating scales.
- A cross-over design was employed to compare image and video modalities and assess rater differences.
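The summary mentions inter-rater reliability but does not name the statistic used. One common choice for continuous or linear rating scales with a fixed panel of raters is the intraclass correlation coefficient ICC(2,1) (two-way random effects, single rater). The sketch below is illustrative only and is not taken from the study:

```python
# Illustrative sketch (assumption: ICC(2,1) as the reliability metric;
# the study does not specify which statistic was used).
def icc2_1(scores):
    """ICC(2,1) for a matrix `scores` indexed as [subject][rater]."""
    n = len(scores)        # number of subjects
    k = len(scores[0])     # number of raters
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]

    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)  # subjects
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)  # raters
    sse = sum((scores[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))                               # residual

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

For example, two raters who score three subjects identically yield an ICC of 1.0, while a constant offset between raters lowers it, reflecting the rater variability reported in the results.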
Main Results
- Most skin parameters received higher scores in video analysis than in static images; some differences were statistically significant.
- Significant variability was observed among the expert raters' assessments.
Conclusions
- Significant differences exist between image and video analysis for skin feature assessment.
- Subjectivity inherent in skin analysis underscores the importance of rigorous rater training and cross-validation for AI model development.