ChatGPT-4.0 as a Tool for Automated Review of Ethics and Transparency in Biomedical Literature
Summary
This summary is machine-generated. ChatGPT-4.0 effectively identifies missing ethics statements in public health journal articles, showing high sensitivity. However, its precision varies, so human oversight remains necessary for robust editorial checks and for maintaining ethical standards.
Area Of Science
- Public Health
- Bibliometrics
- Artificial Intelligence in Publishing
Background
- Growing interest in AI, specifically large language models (LLMs), for streamlining editorial processes.
- Focus on ethical and transparency reporting in public health journals.
- Need to evaluate AI capabilities in manuscript assessment.
Purpose Of The Study
- Evaluate ChatGPT-4.0's accuracy in detecting missing ethical and transparency statements.
- Compare performance between high-ranked (Q1) and low-ranked (Q4) public health journals.
Main Methods
- Analysis of articles from Q1 and Q4 public health journals using ChatGPT-4.0.
- Assessment for essential ethical components: ethics approval, informed consent, animal ethics, conflicts of interest, funding, and data sharing.
- Calculation of performance metrics: sensitivity (recall) and precision.
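As a point of reference, the metrics listed above are computed from screening counts. This is a minimal sketch, not the study's code; the counts are hypothetical, and note that sensitivity and recall are the same quantity.

```python
# Sketch of the screening metrics (hypothetical counts, not the paper's data).

def sensitivity(tp: int, fn: int) -> float:
    """Sensitivity (recall): share of truly missing statements that were flagged."""
    return tp / (tp + fn)

def precision(tp: int, fp: int) -> float:
    """Precision: share of flagged statements that were truly missing."""
    return tp / (tp + fp)

# Hypothetical counts for one ethics component:
tp, fp, fn = 45, 5, 3  # true positives, false positives, false negatives
print(round(sensitivity(tp, fn), 2))  # 0.94
print(round(precision(tp, fp), 2))    # 0.9
```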
Main Results
- ChatGPT-4.0 demonstrated high sensitivity (recall) for all evaluated ethical components.
- Precision varied widely: high for data-availability statements (0.96) but low for funding statements (0.16).
- Q4 journals had significantly more missing statements than Q1 journals, especially for open data sharing, ethics approval, and informed consent.
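To make the precision gap above concrete: at precision p, each correctly flagged statement comes with (1 - p) / p false flags on average. This is an illustrative calculation using the reported precision values, not an analysis from the study.

```python
# Illustrative: expected false flags per true flag at a given precision.

def false_flags_per_true_flag(p: float) -> float:
    """At precision p, ratio of false positives to true positives."""
    return (1 - p) / p

print(round(false_flags_per_true_flag(0.96), 2))  # data availability: 0.04
print(round(false_flags_per_true_flag(0.16), 2))  # funding statements: 5.25
```

At a precision of 0.16, roughly five of every six funding-statement flags would be spurious, which is why the study recommends human review of flagged items rather than fully automated rejection.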
Conclusions
- ChatGPT-4.0 shows promise for preliminary screening of missing ethics statements, with high sensitivity.
- Precision limitations necessitate complementary human review.
- Recommends balanced integration of AI and human judgment to enhance editorial checks and uphold ethical standards.

