Linear regression analyses: 1. Robustness check / 2. Interpretation (models include covariates)
Hi everybody,
I am currently conducting six regression analyses with 4 IVs and 6 different DVs. Furthermore, I include 4 covariates in the models. Following the paper by van Doorn et al. (The JASP guidelines for conducting and reporting a Bayesian analysis), I want to assess the robustness of the results. I used the default prior option because prior knowledge is absent in my case (so: r scale = 0.354). I would like to check manually whether my results are robust (by setting different priors). My first question is: how do I choose my other priors? Does it make sense to use r scale = 0.1, 0.5, and 0.7? Or are there "classical" / recommended priors I should use to compare my results against the default prior?
My second question relates to the interpretation of the robustness check. I saw this analysis: https://osf.io/wae57 (a mixed ANOVA without covariates). The authors stated: "Since we do not have random effects or covariates in the model, only the hyperparameter for fixed effects is relevant." Now I am wondering how to interpret the results of my regression models, because I included covariates and conducted a regression rather than an ANOVA. Which indices do I use to compare my results across different priors?
I hope that my questions are clear and that somebody can help me out. I have been looking in the literature, but maybe I overlooked some information.
Thank you very much!!
Comments
Hi Alexa,
Sorry for the tardy response. My gut says to either halve or double the default. But I'll contact the team for a more informed opinion...
Cheers,
E.J.
Hey E. J.,
That would be perfect. I still have not resolved these issues. Thanks a lot!
Alexa
Hi @alexa ,
To add to EJ's comments: I would not use values that are too extreme in the robustness check, since at some point the prior becomes so narrow or so wide that the model just becomes nonsensical. EJ's suggestion to halve and double the default value works in that regard (but your suggestion to use a prior width of 0.1 would be too narrow, I think).
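Just to make the idea concrete: under the JZS prior, the regression Bayes factor can be written as an integral over the prior on g (Liang et al., 2008; Rouder & Morey, 2012), and the robustness check simply re-evaluates it with the r scale halved and doubled. The Python below is only a rough sketch of that, not what JASP runs internally: the n, p, and R² values are made up, and it only compares the full model against an intercept-only null (your models with covariates involve more model comparisons), so for a real analysis you would just rerun the analysis in JASP with each r scale.

```python
# Illustrative sketch of a JZS regression Bayes factor robustness check.
# Formula follows Liang et al. (2008) / Rouder & Morey (2012); not a JASP API.
import numpy as np
from scipy import integrate, special


def jzs_regression_bf10(n, p, r2, rscale):
    """Rough BF10 for a regression with p predictors, n observations, and observed
    R^2, under a JZS prior with width `rscale` (full model vs. intercept-only null)."""
    a = 0.5                      # shape of the inverse-gamma prior on g
    b = n * rscale ** 2 / 2.0    # scale of the inverse-gamma prior on g

    def integrand(g):
        log_prior = (a * np.log(b) - special.gammaln(a)
                     - (a + 1.0) * np.log(g) - b / g)
        log_lik_ratio = ((n - p - 1) / 2.0 * np.log1p(g)
                         - (n - 1) / 2.0 * np.log1p(g * (1.0 - r2)))
        return np.exp(log_prior + log_lik_ratio)

    bf10, _ = integrate.quad(integrand, 0.0, np.inf)
    return bf10


# Robustness check in the spirit of "halve and double the default" (0.354):
for r in (0.354 / 2, 0.354, 0.354 * 2):
    bf = jzs_regression_bf10(n=100, p=4, r2=0.15, rscale=r)  # made-up data summary
    print(f"r scale = {r:.3f}: BF10 ~ {bf:.1f}")
```

If the conclusion (e.g., BF10 clearly above or below 1) stays the same across the three r scales, the result is robust to the prior width.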
With regard to the mixed ANOVA example: what we meant there is that we only have fixed effects in the model, so that is the only prior we vary. When other effects are included, there are more relevant priors to assess robustness for. When this is the case, you can run the various permutations (e.g., 0.2 fixed / 0.5 random; 0.2 fixed / 0.7 random; etc.).
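For what that grid of permutations looks like in practice, here is a trivial sketch (the candidate widths are only illustrative; each combination corresponds to one rerun of the model in JASP):

```python
# Illustrative only: enumerate the prior-width combinations to rerun in JASP.
from itertools import product

fixed_widths = (0.2, 0.5, 1.0)    # candidate r scales for fixed effects (made up)
random_widths = (0.5, 0.7, 1.0)   # candidate r scales for random effects (made up)

for f, r in product(fixed_widths, random_widths):
    print(f"rerun model with fixed r scale = {f}, random r scale = {r}")
```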
Kind regards
Johnny
Hi Johnny,
Perfect, that helps a lot. Thank you very much!
Best
Alexa