Not All EPAs Are Created Equal: Fixing Sampling Bias With Utility Modeling
Summary
This summary is machine-generated. Entrustable professional activity (EPA) assessments are completed unevenly, which introduces sampling biases. Our EPA assessment utility modeling corrects for these biases and guides faculty toward completing the most impactful assessments.
Area Of Science
- Medical Education
- Health Professions Education
- Competency-Based Medical Education
Background
- Entrustable professional activities (EPAs) are crucial for assessing resident readiness for practice.
- Manual initiation of EPA assessments leads to uneven completion rates and introduces biases.
- Variability in EPA assessment exists across individuals, specialties, and institutions.
Purpose Of The Study
- Introduce EPA assessment utility modeling to address biases in assessment completion.
- Provide a data-driven approach to correct for and avoid biases in EPA assessments.
- Inform faculty on the usefulness of EPA assessment opportunities and identify when they are most needed.
Main Methods
- Longitudinal analysis of general surgery EPA assessments across 37 institutions using an EHR-integrable platform.
- Power law curve fitting to measure skewing in EPA assessment counts.
- Bayesian network modeling and Monte Carlo simulations to quantify assessment impact and develop an assessment utility score.
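The power-law curve fitting described above can be sketched as a least-squares fit in log-log rank-frequency space. The counts below are hypothetical stand-ins for the study's per-faculty assessment data, and the fitting approach is an illustrative assumption, not the paper's exact procedure:

```python
import numpy as np

# Hypothetical per-faculty EPA assessment counts, sorted descending.
# (The study's real data spanned 37 institutions.)
counts = np.array([120, 85, 60, 41, 30, 22, 15, 11, 8, 5], dtype=float)
ranks = np.arange(1, len(counts) + 1, dtype=float)

# Fit count ~ C * rank^(-alpha) via least squares on log-transformed data.
slope, intercept = np.polyfit(np.log(ranks), np.log(counts), 1)
alpha = -slope  # a larger alpha means assessments concentrate in fewer assessors
print(f"power-law exponent alpha = {alpha:.2f}")
```

A steeper fitted exponent signals heavier skew, i.e., a small subset of faculty, specialties, or EPA types accounting for a disproportionate share of completions.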
Main Results
- EPA assessment counts exhibited significant skewing across EPA type, faculty, specialty, and resident.
- Top 4 EPA types accounted for 52.8% of assessments; top 15 faculty provided 33.5%.
- Top 2 specialties contributed 31.0% of assessments; top 20 residents received 20.1%.
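The concentration figures above (e.g., the top 4 EPA types covering 52.8% of assessments) are top-k shares of the total count. A minimal sketch of that computation, using made-up counts rather than the study's data:

```python
import numpy as np

# Toy assessment counts per EPA type (hypothetical values).
counts = np.array([300, 250, 200, 150, 60, 50, 40, 30, 20, 10])

def top_k_share(counts, k):
    """Fraction of all assessments contributed by the k largest sources."""
    c = np.sort(counts)[::-1]  # sort descending
    return c[:k].sum() / c.sum()

print(f"top-4 share: {top_k_share(counts, 4):.1%}")
```

The same function applies across any grouping dimension: EPA type, faculty member, specialty, or resident.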
Conclusions
- EPA assessments are heavily skewed, leading to biased representation of entrustment levels.
- An assessment utility framework is proposed to optimize EPA assessment timing, assessor selection, and prioritization.
- This data-driven approach aims to improve the measurement of competency-based medical education.
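One way to picture an assessment utility score is as the expected information gained from one more assessment of a given resident. The sketch below uses a simple Beta-Bernoulli entrustment model and Monte Carlo simulation of the next outcome; this is an illustrative assumption, not the paper's actual Bayesian network or scoring formula:

```python
import numpy as np

rng = np.random.default_rng(0)

def assessment_utility(n_prior_obs, n_sims=10_000):
    """Illustrative utility: expected drop in posterior variance of a
    resident's entrustment rate from one additional assessment."""
    # Symmetric Beta posterior standing in for prior data (assumption).
    a = b = 1 + n_prior_obs / 2
    var_now = (a * b) / ((a + b) ** 2 * (a + b + 1))
    # Monte Carlo over possible outcomes of the next assessment.
    p = rng.beta(a, b, n_sims)
    y = rng.random(n_sims) < p          # simulated entrusted / not-entrusted
    a_new, b_new = a + y, b + (1 - y)
    var_new = (a_new * b_new) / ((a_new + b_new) ** 2 * (a_new + b_new + 1))
    return var_now - var_new.mean()

# A resident with few prior assessments gains more per new assessment,
# which is the intuition behind prioritizing under-assessed residents.
print(assessment_utility(2), assessment_utility(50))
```

Under this toy model, utility decays as prior observations accumulate, so ranking assessment opportunities by such a score naturally steers faculty effort away from already-saturated residents and EPA types.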

