Bridging big data in the ENIGMA consortium to combine non-equivalent cognitive measures
Summary
This summary is machine-generated. Researchers harmonized auditory verbal learning task (AVLT) scores across 53 studies, totaling 10,505 individuals. This Big Data approach improved the reliability of memory scores and placed them on a universal scale, enabling better cross-study comparisons.
Area Of Science
- Neuroscience
- Behavioral Science
- Data Science
Background
- Replication and reliability issues in neuroscience research are often addressed by increasing sample sizes.
- Integrating multisite and multi-instrument data presents significant challenges.
- Auditory verbal learning tasks (AVLTs) are widely used but lack score standardization across different versions and study sites.
Purpose Of The Study
- To develop a method for linking and harmonizing scores from various auditory verbal learning tasks (AVLTs).
- To create a unified scale for latent verbal learning ability.
- To improve the reproducibility and comparability of memory test data across studies.
Main Methods
- An international secondary analysis aggregated raw data from 53 studies (N=10,505) involving AVLTs.
- The ComBat-GAM algorithm was employed to remove site-specific effects while retaining instrument-specific variations.
- A continuous item response theory model estimated latent verbal learning ability, enabling score conversion across different AVLTs.
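The study's ComBat-GAM harmonization additionally models nonlinear covariate effects; as a minimal sketch of the core idea, the hypothetical `harmonize_sites` function below removes only per-site shifts in mean and variance, aligning each site's score distribution to the pooled distribution.

```python
import numpy as np

def harmonize_sites(scores, sites):
    """Align each site's score distribution to the pooled mean and SD.

    Simplified location-scale harmonization: z-score within each site,
    then rescale to the pooled distribution. (Assumption: the real
    ComBat-GAM pipeline also preserves covariate effects such as age.)
    """
    scores = np.asarray(scores, dtype=float)
    sites = np.asarray(sites)
    pooled_mean, pooled_sd = scores.mean(), scores.std()
    out = np.empty_like(scores)
    for s in np.unique(sites):
        mask = sites == s
        site_mean, site_sd = scores[mask].mean(), scores[mask].std()
        out[mask] = (scores[mask] - site_mean) / site_sd * pooled_sd + pooled_mean
    return out

# Example: two sites testing comparable cohorts, but site B's scoring
# is shifted and more spread out (a spurious site effect)
rng = np.random.default_rng(0)
site_a = rng.normal(50, 10, 200)
site_b = rng.normal(58, 14, 200)
scores = np.concatenate([site_a, site_b])
sites = np.array(["A"] * 200 + ["B"] * 200)
adj = harmonize_sites(scores, sites)
```

After harmonization the two sites share the same mean and spread, so cross-site variance is removed while within-site rank ordering is untouched.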
Main Results
- Harmonization reduced total cross-site score variance by 37%, preserving meaningful memory effects.
- Age had the largest effect on scores (-11.4%), whereas race/ethnicity showed no significant effect.
- Validated conversion tools were developed, allowing researchers and clinicians to standardize AVLT scores.
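The published tools convert scores between AVLT versions via a continuous IRT model of latent verbal learning ability. As a hedged illustration of the underlying idea, the hypothetical `equipercentile_link` function below links two instruments by matching percentile ranks in reference samples, a simpler stand-in for the paper's latent-trait conversion.

```python
import numpy as np

def equipercentile_link(score_a, sample_a, sample_b):
    """Map a score on instrument A to instrument B's scale by matching
    percentile ranks in the two reference samples (simplified linking;
    the actual study estimates a shared latent ability via IRT)."""
    sample_a = np.sort(np.asarray(sample_a, dtype=float))
    sample_b = np.sort(np.asarray(sample_b, dtype=float))
    # percentile rank of score_a within A's reference sample
    rank = np.searchsorted(sample_a, score_a, side="right") / len(sample_a)
    # return the equivalent quantile of B's reference sample
    return float(np.quantile(sample_b, min(rank, 1.0)))

# Example: instrument B scores the same cohort on a rescaled metric
ref_a = np.arange(100.0)      # reference scores on instrument A
ref_b = 2 * ref_a + 5         # same cohort rescored on instrument B
converted = equipercentile_link(50, ref_a, ref_b)
```

Because the mapping is rank-based, it recovers the rescaling here (a score of 50 on A converts to roughly 105 on B) and preserves the ordering of examinees.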
Conclusions
- Global harmonization initiatives can effectively address reproducibility challenges in the behavioral sciences.
- The developed methods and online tools facilitate score conversion across different AVLT instruments.
- This work establishes a foundation for more reliable and comparable memory research using Big Data.