A continuous aggregated accumulation model of recognition judgments
View abstract on PubMed
Summary
This summary is machine-generated. A new model explains recognition memory by integrating the time courses of familiarity and recollection. It resolves the paradox that familiarity is faster than recollection, yet 'know' responses are often slower than 'remember' responses in memory tasks.
Area Of Science
- Cognitive Psychology
- Neuroscience
- Memory Research
Background
- Recognition memory involves two processes: recollection (remember) and familiarity (know).
- Existing single-process and dual-process models struggle to explain the timing of these memory responses.
- A paradox exists: familiarity is faster than recollection, yet 'remember' responses are often faster than 'know' responses.
Purpose Of The Study
- To propose and test a new model of recognition memory that accounts for the time course of familiarity and recollection.
- To resolve the paradoxical timing differences between remember and know responses in recognition tasks.
- To quantitatively model the interplay of confidence, accuracy, and response type on reaction time.
Main Methods
- Developed a novel dual-process model analyzing the detailed time course of recollection and familiarity.
- Incorporated factors like item familiarity, recollection strength, and response criteria into the model.
- Validated the model using a 12-parameter quantitative approach, comparing predicted and observed reaction times.
Main Results
- The proposed model successfully explains why average 'know' response times can be slower than 'remember' response times.
- It accounts for 'know' responses driven by high familiarity (fast) and those with low familiarity/recollection (slow).
- The 12-parameter model demonstrated the best fit between expected and observed reaction times across tested models.
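The mixture mechanism described above can be illustrated with a toy simulation. This is a minimal sketch, not the authors' 12-parameter model: the familiarity criterion, the response-time distributions, and the recollection probability below are all assumed values chosen only to show how averaging fast high-familiarity 'know' responses with slow low-familiarity 'know' responses can make the mean 'know' RT exceed the mean 'remember' RT, even though familiarity itself is the faster process.

```python
import random

random.seed(42)

def simulate_trial():
    # Assumed toy parameters: familiarity signal arrives early;
    # recollection, when it succeeds, completes later.
    familiarity = random.gauss(0.5, 0.2)   # item familiarity strength
    recollected = random.random() < 0.5    # recollection succeeds on ~half of trials

    if familiarity > 0.7:
        # High familiarity crosses the 'know' criterion quickly -> fast 'know'.
        return ("know", random.gauss(600, 50))
    if recollected:
        # Recollection completes after fast familiarity, yielding 'remember'.
        return ("remember", random.gauss(800, 80))
    # Low familiarity and no recollection: categorized as 'know'
    # only after an extended assessment -> slow 'know'.
    return ("know", random.gauss(1100, 120))

trials = [simulate_trial() for _ in range(20000)]
know_rts = [rt for resp, rt in trials if resp == "know"]
rem_rts = [rt for resp, rt in trials if resp == "remember"]

mean_know = sum(know_rts) / len(know_rts)
mean_rem = sum(rem_rts) / len(rem_rts)
print(f"mean 'know' RT:     {mean_know:.0f} ms")
print(f"mean 'remember' RT: {mean_rem:.0f} ms")
```

Because the slow residual 'know' path contributes heavily to the average, the simulated mean 'know' RT ends up above the mean 'remember' RT, even though the fastest responses in the set are 'know' responses driven by high familiarity.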
Conclusions
- The new model provides a more comprehensive understanding of recognition memory by integrating temporal dynamics.
- It resolves the familiarity-recollection timing paradox: items with low familiarity and slow or absent recollection are ultimately categorized as 'know' responses, which inflates the average 'know' response time.
- This framework offers a robust quantitative approach to studying recognition memory processes and their timing.