Is the following approach appropriate for Bayesian mixed models, or could it be problematic?
I’m trying to determine the random-effects structure that is best supported by my data, using the default Bayes factors from the BayesFactor R package.
TL;DR:
I’m using the BayesFactor R package to compare mixed models by first selecting the best random-effects structure (via Bayes factors) and then, with that structure fixed, comparing fixed effects. Is this two-step approach valid, or could it introduce problems?
My approach:
- Step 1 (Random effects):
  - I start by defining a full model with all possible random slopes I want to test.
  - My R code then generates all candidate model formulas by keeping the fixed effects constant while varying the random slopes (including the option of no random slopes).
  - I compare these models using Bayes factors to select the best random-effects structure.
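A minimal sketch of Step 1, assuming a data frame `d` with outcome `rt`, fixed factors `A` and `B`, and a grouping factor `participant` (all names are placeholders for my actual variables):

```r
library(BayesFactor)

# Full model: fixed effects A and B, random intercepts for participant,
# and BayesFactor-style "random slopes" coded as IV:participant terms.
full <- lmBF(rt ~ A + B + participant + A:participant + B:participant,
             data = d,
             whichRandom = c("participant", "A:participant", "B:participant"))

# Two of the candidate random-effects structures, fixed effects held constant:
no_slopes <- lmBF(rt ~ A + B + participant,
                  data = d, whichRandom = "participant")
slope_A   <- lmBF(rt ~ A + B + participant + A:participant,
                  data = d, whichRandom = c("participant", "A:participant"))

# Each lmBF call returns a Bayes factor against the same denominator model,
# so dividing two objects compares the candidate structures directly:
full / no_slopes
full / slope_A
```

My actual code generates the full set of such formulas programmatically; this just shows the pattern of the comparison.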
- Step 2 (Fixed effects):
  - With the random-effects structure fixed to the one selected in Step 1, I then perform model comparisons on the fixed effects.
  - I generate all possible fixed-effect combinations and use Bayes factor model comparison (or calculate inclusion Bayes factors) to find the best fixed-effects structure.
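Step 2 looks roughly like this (again with placeholder names; here I assume Step 1 selected random intercepts plus an `A:participant` slope term). `generalTestBF()` enumerates the fixed-effect submodels, and `neverExclude` keeps the random-effects terms in every model:

```r
library(BayesFactor)

bfs <- generalTestBF(rt ~ A * B + participant + A:participant,
                     data = d,
                     whichRandom  = c("participant", "A:participant"),
                     neverExclude = c("^participant$", "^A:participant$"),
                     whichModels  = "withmain")

# Inclusion BF for A, assembled by hand: posterior odds of the models
# containing A over those without it, divided by the prior odds
# (uniform prior over the enumerated models). The regex is illustrative.
bf_tab <- extractBF(bfs)                 # BFs vs. the common denominator
post   <- bf_tab$bf / sum(bf_tab$bf)     # posterior model probabilities
has_A  <- grepl("(^|\\+ )A( \\+|$)", rownames(bf_tab))
incl_BF_A <- (sum(post[has_A]) / sum(post[!has_A])) /
             (mean(has_A) / mean(!has_A))
```

(As I understand it, `bayestestR::bf_inclusion()` can do this bookkeeping automatically on a `BFBayesFactor` object, if you prefer not to roll it by hand.)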
Note:
The BayesFactor package does not implement “traditional” (lme4-style) random slopes. Instead, a slope-like random effect is specified as an interaction term of the form `IV:participant`, which is declared random via the `whichRandom` argument, alongside `participant` itself (which gives true random intercepts).
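For concreteness, the correspondence described in the note, with placeholder names (the lme4 formula in the comment is for comparison only):

```r
library(BayesFactor)

# lme4 would write a by-participant random slope for A as:
#   rt ~ A + (1 + A | participant)
# In BayesFactor, the analogous model codes the slope as an A:participant
# interaction that is declared random, next to true random intercepts:
m <- lmBF(rt ~ A + participant + A:participant,
          data = d,
          whichRandom = c("participant", "A:participant"))
```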
Does this two-step approach (selecting random effects first, then fixed effects) make sense for mixed models compared with Bayes factors? Could it be problematic in any way? I’m happy to share the code if that helps.