Abstract
The Wason selection task is one of the most prominent paradigms in the psychology of reasoning, with hundreds of published investigations in the last fifty-odd years. But despite its central role in reasoning research, there has been little to no attempt to make sense of the data in a way that allows us to discard potential theoretical accounts. In fact, theories have been allowed to proliferate without any comprehensive evaluation of their relative performance. In an attempt to address this problem, Ragni et al. (Psychological Bulletin, 144, 779–796, 2018) reported a meta-analysis of 228 experiments using the Wason selection task. This data corpus was used to evaluate sixteen different theories on the basis of three predictions: (1) the occurrence of canonical selections, (2) dependencies in selections, and (3) the effect of counter-example salience. Ragni et al. argued that all three effects cull the number of candidate theories down to only two, which they subsequently compared in a model-selection analysis. The present paper argues against the diagnostic value attributed to some of these predictions. Moreover, we revisit Ragni et al.'s model-selection analysis and show that the model they propose is non-identifiable and often fails to account for the data. Altogether, the problems discussed here suggest that we are still far from a much-needed theoretical winnowing.
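The abstract does not reproduce Ragni et al.'s model, but the general notion of non-identifiability it invokes is easy to illustrate: whenever two parameters enter a model's predictions only through some combination (for example, their product), no amount of data can pin down their individual values. The sketch below is a minimal, hypothetical example under that assumption, using a generic Bernoulli model of card selections; the parameter names, logistic link, and simulated data are illustrative assumptions, not the model evaluated in the paper.

```python
# Minimal, generic illustration of parameter non-identifiability --
# NOT Ragni et al.'s model. Two hypothetical parameters (a, b) enter
# the predicted selection probability only through their product, so
# distinct (a, b) pairs yield identical predictions and likelihoods.

import numpy as np

def predicted_selection_prob(a, b):
    """Hypothetical model: selection probability depends only on a * b,
    mapped into (0, 1) via a logistic link."""
    return 1.0 / (1.0 + np.exp(-(a * b)))

def log_likelihood(a, b, selections):
    """Bernoulli log-likelihood of observed 0/1 card selections."""
    p = predicted_selection_prob(a, b)
    return np.sum(selections * np.log(p) + (1 - selections) * np.log(1 - p))

rng = np.random.default_rng(0)
selections = rng.binomial(1, 0.7, size=200)  # fabricated 0/1 data, for illustration only

# Two different parameter settings with the same product a * b = 1.0
print(log_likelihood(2.0, 0.5, selections))   # identical log-likelihood ...
print(log_likelihood(0.25, 4.0, selections))  # ... so (a, b) cannot be recovered from data
```

Because every point on the ridge a * b = constant fits the data equally well, fitted parameter values in such a model carry no interpretable meaning, which is the substance of the non-identifiability concern.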
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 341–353 |
| Number of pages | 13 |
| Journal | Computational Brain and Behavior |
| Volume | 3 |
| Issue number | 3 |
| DOIs | |
| State | Published - Sep 2020 |
Keywords
- Hypothesis testing
- Mental models
- Reasoning
- Selection task
ASJC Scopus subject areas
- Developmental and Educational Psychology
- Neuropsychology and Physiological Psychology