Beyond maximal random effects for logistic regression: Moving past convergence problems

Abstract

Mixed effects models are widespread in language science because they allow researchers to incorporate participant and item effects into their regression models. These models can be robust, useful, and statistically valid when used appropriately. However, mixed effects regressions are fit with iterative algorithms, which may fail to converge on a solution. When convergence fails, researchers may be forced to abandon a model that matches their theoretical assumptions in favor of one that converges. We argue that the current standard practice of simplifying models in response to convergence errors is not grounded in sound statistical practice, and we show that it can lead to incorrect conclusions. We propose implementing mixed effects models in a Bayesian framework instead. We present two studies in which the maximal mixed effects models justified by the design do not converge, but fully specified Bayesian models with weakly informative priors do. We conclude that a Bayesian framework offers a practical and, critically, statistically valid solution to the problem of convergence errors.
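
The manuscript's own implementation is not shown on this page. As a rough sketch of the proposed approach, a fully specified Bayesian mixed effects logistic regression with weakly informative priors can be fit in a few lines, here using the Python library bambi; the data file, column names, and model formula below are hypothetical and stand in for whatever design the maximal model is justified by.

    import arviz as az
    import bambi as bmb
    import pandas as pd

    # Hypothetical trial-level data: one row per trial with a binary
    # accuracy outcome (0/1), a condition predictor, and subject/item IDs.
    trials = pd.read_csv("trials.csv")

    # Maximal model justified by the design: by-subject and by-item
    # random intercepts and random slopes for condition.
    model = bmb.Model(
        "accuracy ~ condition + (condition | subject) + (condition | item)",
        trials,
        family="bernoulli",
    )

    # bambi assigns weakly informative default priors, which regularize
    # the random-effect variances and correlations rather than letting an
    # optimizer collapse onto degenerate boundary solutions.
    idata = model.fit(draws=2000, chains=4)
    print(az.summary(idata))

In R, the same model would typically be written with the brms package, which accepts the same lme4-style formula syntax.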

Publication
Unpublished Manuscript