Evaluating the Robustness of Parameter Estimates in Cognitive Models: A Meta-Analytic Review of Multinomial Processing Tree Models Across the Multiverse of Estimation Methods

Henrik Singmann, Daniel W. Heck, Marius Barth, Edgar Erdfelder, Nina R. Arnold, Frederik Aust, Jimmy Calanchini, Fabian E. Gümüsdagli, Sebastian S. Horn, David Kellen, Karl C. Klauer, Dora Matzke, Franziska Meissner, Martha Michalkiewicz, Marie Luisa Schaper, Christoph Stahl, Beatrice G. Kuhlmann, Julia Groß

Research output: Contribution to journal › Review article › peer-review

Abstract

Researchers have become increasingly aware that data-analysis decisions affect results. Here, we examine this issue systematically for multinomial processing tree (MPT) models, a popular class of cognitive models for categorical data. Specifically, we examine the robustness of MPT model parameter estimates that arise from two important decisions: the level of data aggregation (complete-pooling, no-pooling, or partial-pooling) and the statistical framework (frequentist or Bayesian). These decisions span a multiverse of estimation methods. We synthesized the data from 13,956 participants (164 published data sets) with a meta-analytic strategy and analyzed the magnitude of divergence between estimation methods for the parameters of nine popular MPT models in psychology (e.g., process-dissociation, source monitoring). We further examined moderators as potential sources of divergence. We found that the absolute divergence between estimation methods was small on average (<.04; with MPT parameters ranging between 0 and 1); in some cases, however, divergence amounted to nearly the maximum possible range (.97). Divergence was partly explained by a few moderators (e.g., the specific MPT model parameter, uncertainty in parameter estimation), but not by other plausible candidate moderators (e.g., parameter trade-offs, parameter correlations) or their interactions. Partial-pooling methods showed the smallest divergence within and across levels of pooling and thus seem to be an appropriate default method. Using MPT models as an example, we show how transparency and robustness can be increased in the field of cognitive modeling.

Original language: English (US)
Pages (from-to): 965-1003
Number of pages: 39
Journal: Psychological Bulletin
Volume: 150
Issue number: 8
DOIs
State: Published - Jun 27, 2024

Keywords

  • cognitive modeling
  • multinomial processing tree models
  • multiverse analysis
  • parameter estimation
  • transparency

ASJC Scopus subject areas

  • General Psychology
