Baws factors for model selection
Hi,
I'm computing baws factors from a linear model with categorical and continuous variables, fitted with generalTestBF. I'm using the baws factors as a way to select the variables for the model that I then interpret, but I'm curious as to whether this makes sense. Bayes is new territory for me and I'm wary of making errors.
Code that replicates these analyses is here.
My data have the following structure:
- Estimate: A parameter from our cognitive model
- crowded: A two-level categorical variable
- eccentricity: a continuous variable
- ID: Observer ID. Each observer has observations from all combinations of eccentricity and crowded, but in this dataset some data have been excluded due to problems with the cognitive model.
To do this, I run generalTestBF, predicting Estimate from crowded and eccentricity, with random intercepts and slopes by ID:
generalTestBF(Estimate ~ eccentricity * crowded * ID, data = theseData, whichRandom = c('ID', 'eccentricity:ID', 'crowded:ID', 'eccentricity:crowded:ID'), progress = TRUE)
I then calculate the baws factors and get these values:
- crowded = 0.63
- eccentricity = 2909.50
- eccentricity:crowded = 0.32
- ID = 1405.25
- eccentricity:ID = 0.11
- crowded:ID = 0.03
- eccentricity:crowded:ID = 0.0006
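For context, here is a minimal sketch of how I understand an inclusion-style ("baws") factor to be computed from a generalTestBF fit: average the evidence for all models that contain a term and divide by the average evidence for all models that don't. The helper name `bawsFactor` and the exact averaging rule are my assumptions, not necessarily the precise recipe intended.

```r
# Sketch only: inclusion-style ("baws") factor for one term,
# computed by averaging model evidence with vs. without the term.
library(BayesFactor)

bf <- generalTestBF(
  Estimate ~ eccentricity * crowded * ID,
  data = theseData,
  whichRandom = c("ID", "eccentricity:ID", "crowded:ID",
                  "eccentricity:crowded:ID"),
  progress = TRUE
)

bawsFactor <- function(bf, term) {
  tab <- extractBF(bf)  # BF of every model vs. the intercept-only model
  inModel <- sapply(
    strsplit(rownames(tab), " + ", fixed = TRUE),
    function(terms) term %in% terms
  )
  # Average evidence for models containing the term,
  # relative to average evidence for models without it
  mean(tab$bf[inModel]) / mean(tab$bf[!inModel])
}

bawsFactor(bf, "eccentricity")
```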
The only terms with evidence in their favour are ID (random intercepts) and eccentricity. So I then interpret the model Estimate ~ eccentricity + ID. I sample from the posterior and see that the model's fixed effect for eccentricity predicts that estimates will increase with eccentricity.
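Concretely, that follow-up step looks something like this: a sketch (assuming the same BayesFactor package) of fitting the reduced model directly with lmBF and sampling its posterior to inspect the eccentricity slope.

```r
# Sketch: fit the reduced model the baws factors point to,
# then sample its posterior to inspect the fixed effect.
library(BayesFactor)

winning <- lmBF(
  Estimate ~ eccentricity + ID,
  data = theseData,
  whichRandom = "ID"
)

post <- posterior(winning, iterations = 10000)
summary(post[, "eccentricity"])  # posterior for the eccentricity slope
```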
Is this a valid way to use baws factors, or have I made some horrible inferential error?
Thanks very much
Comments
Dear cludowici,
This is a tricky one for me to answer, as I mostly work with JASP, and Richard has reservations about model averaging. But I'll bring him in nonetheless; maybe he can at least speak to the general setup of the analysis.
Cheers,
E.J.