Comparison of ChatGPT knowledge against 2020 consensus statement on ankyloglossia in children
Summary
This summary is machine-generated. ChatGPT shows good accuracy on ankyloglossia (tongue-tie) information, but caution is advised for topics lacking clinical consensus. Further research is needed before AI can be safely integrated into healthcare.
Area Of Science
- Medical Informatics
- Artificial Intelligence in Healthcare
- Congenital Oral Conditions
Background
- Ankyloglossia (tongue-tie) is a congenital oral condition with evolving clinical consensus.
- Patients increasingly use AI tools like ChatGPT for medical information.
- Evaluating AI accuracy in healthcare is critical for patient safety.
Purpose Of The Study
- To assess ChatGPT's accuracy and consistency regarding ankyloglossia information.
- To compare ChatGPT responses against expert consensus on ankyloglossia.
- To explore implications of AI use for patients seeking medical advice.
Main Methods
- ChatGPT was presented with statements from the 2020 clinical consensus statement on ankyloglossia.
- Responses were scored using a 9-point Likert scale.
- Mean scores and standard deviations were analyzed to assess alignment with expert consensus.
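The comparison described above can be sketched as a small script: compute ChatGPT's mean Likert score and standard deviation per statement, then flag any statement whose mean deviates from the expert consensus mean by 2.0 points or more. The statement IDs, scores, and consensus values below are invented for illustration; the threshold of 2.0 is taken from the results summary.

```python
# Hypothetical sketch of the scoring analysis (illustrative data only):
# compare ChatGPT's mean Likert scores (9-point scale) per statement
# against expert consensus means and flag deviations >= 2.0 points.
from statistics import mean, stdev

# (statement id, repeated ChatGPT scores, expert consensus mean) -- invented examples
ratings = [
    ("S1", [8, 9, 8], 8.2),
    ("S2", [4, 5, 4], 7.1),
    ("S3", [6, 6, 7], 6.5),
]

DEVIATION_THRESHOLD = 2.0  # cutoff for a "significant deviation", per the summary

for stmt_id, scores, consensus in ratings:
    m = mean(scores)                 # ChatGPT's mean score for this statement
    sd = stdev(scores)               # variability across repeated prompts
    deviation = abs(m - consensus)   # absolute gap from expert consensus
    status = "deviates" if deviation >= DEVIATION_THRESHOLD else "aligned"
    print(f"{stmt_id}: mean={m:.2f} sd={sd:.2f} |gap|={deviation:.2f} -> {status}")
```

Here only S2 would be flagged, since its mean of about 4.33 sits more than 2.0 points below the consensus mean of 7.1.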
Main Results
- 67% of ChatGPT responses closely aligned with expert consensus mean scores.
- 17% of responses deviated substantially (≥ 2.0 points on the 9-point scale) from expert consensus means.
- These discrepancies highlight the potential for AI to disseminate uncertain or debated medical information.
Conclusions
- ChatGPT demonstrates reasonable accuracy on ankyloglossia but requires caution for non-consensus information.
- Refining AI models and correcting inaccuracies are crucial for safe integration into medical practice.
- Ongoing evaluation of AI's role in health equity and information access is necessary.

