Ragni and Johnson-Laird ask: “Explanation or modeling?”

In the latest issue of Computational Brain & Behavior, Marco Ragni and Phil Johnson-Laird respond to a recent critique by Kellen and Klauer (in the same issue) of a meta-analysis of studies on the Wason selection task. Kellen and Klauer’s central points are summarized in their abstract, here:

The Wason selection task is one of the most prominent paradigms in the psychology of reasoning, with hundreds of published investigations in the last fifty-odd years. But despite its central role in reasoning research, there has been little to no attempt to make sense of the data in a way that allows us to discard potential theoretical accounts. In fact, theories have been allowed to proliferate without any comprehensive evaluation of their relative performance. In an attempt to address this problem, Ragni, Kola, and Johnson-Laird (2018) reported a meta-analysis of 228 experiments using the Wason selection task. This data corpus was used to evaluate sixteen different theories on the basis of three predictions: 1) the occurrence of canonical selections, 2) dependencies in selections, and 3) the effect of counter-example salience. Ragni et al. argued that all three effects cull the number of candidate theories down to only two, which are subsequently compared in a model-selection analysis. The present paper argues against the diagnostic value attributed to some of these predictions. Moreover, we revisit Ragni et al.’s model-selection analysis and show that the model they propose is non-identifiable and often fails to account for the data. Altogether, the problems discussed here suggest that we are still far from a much-needed theoretical winnowing.

Ragni and Johnson-Laird’s response to the critique can be downloaded here, and here is the abstract:

In Wason’s “selection” task, individuals often overlook potential counterexamples in selecting evidence to test hypotheses. Our recent meta-analysis of 228 experiments corroborated the main predictions of the task’s original theory, which aimed to explain the testing of hypotheses. Our meta-analysis also eliminated all but 1 of the 15 later theories. The one survivor was the inference-guessing theory of Klauer et al., but it uses more free parameters to model the data. Kellen and Klauer (this issue) dissent. They defend the goal of a model of the frequencies of all 16 possible selections in Wason’s task, including “guesses” that occur less often than chance, such as not selecting any evidence. But an explanation of hypothesis testing is not much advanced by modeling such guesses with independent free parameters. The task’s original theory implies that individuals tend to choose items of evidence that are dependent on one another, and the inference-guessing theory concurs for those selections that are inferred. Kellen and Klauer argue against correlations as a way to assess dependencies. But our meta-analysis did not use them; it used Shannon’s measure of information to establish dependencies. Their modeling goal has led them to defend a “purposely vague” theory. Our explanatory goal has led us to defend a “purposely clear” algorithm and to retrieve long-standing evidence that refutes the inference-guessing theory. Individuals can be rational in testing a hypothesis: in repeated tests, they search for some examples of it, and then exhaustively for counterexamples.
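
The dispute over “dependencies in selections” is easier to follow with a toy example. The task has four cards (p, not-p, q, and not-q), and since each card is either selected or not, there are 2^4 = 16 possible selection patterns. The Python sketch below is purely illustrative: the frequency counts and function names are made up for the example and are not taken from the meta-analysis. It simply shows how an information-theoretic measure (here, pairwise Shannon mutual information) can quantify whether selecting one card depends on selecting another.

# Illustrative sketch only -- not Ragni et al.'s analysis code.
# Enumerate the 16 possible selection patterns in the Wason task and
# compute Shannon mutual information between pairs of card selections
# from a hypothetical frequency table.

from itertools import product
from math import log2

CARDS = ("p", "not-p", "q", "not-q")

# All 2^4 = 16 possible selection patterns (1 = card selected, 0 = not).
patterns = list(product((0, 1), repeat=4))

# Hypothetical counts per pattern (made-up numbers for illustration only):
# most participants select {p} or {p, q}, a few select {p, not-q}.
counts = {pat: 1 for pat in patterns}   # small baseline for every pattern
counts[(1, 0, 0, 0)] = 40               # {p}
counts[(1, 0, 1, 0)] = 80               # {p, q}
counts[(1, 0, 0, 1)] = 10               # {p, not-q}

total = sum(counts.values())
prob = {pat: n / total for pat, n in counts.items()}

def marginal(i, value):
    """P(card i's selection status == value)."""
    return sum(p for pat, p in prob.items() if pat[i] == value)

def joint(i, j, vi, vj):
    """P(card i == vi and card j == vj)."""
    return sum(p for pat, p in prob.items() if pat[i] == vi and pat[j] == vj)

def mutual_information(i, j):
    """Shannon mutual information (in bits) between selections of cards i and j."""
    mi = 0.0
    for vi, vj in product((0, 1), repeat=2):
        pij = joint(i, j, vi, vj)
        if pij > 0:
            mi += pij * log2(pij / (marginal(i, vi) * marginal(j, vj)))
    return mi

for i in range(4):
    for j in range(i + 1, 4):
        print(f"I({CARDS[i]}; {CARDS[j]}) = {mutual_information(i, j):.3f} bits")

If selections were statistically independent, every pairwise value would be close to zero; clearly positive values signal the kind of dependency the two sides disagree about how to interpret. The analyses in the papers themselves are, of course, more involved than this toy computation.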
