Medium, Wide, and Ultrawide Priors for Bayesian ANOVA
Hey all,
I analyzed my data with JASP, submitted it, and now the reviewers are asking me to redo the analyses with different priors to check the robustness of my Bayesian ANOVAs. I know this is easily done with Bayesian t-tests, but I could not find how to do it for the ANOVAs. I think it would be sufficient to run the ANOVAs again with different priors (medium, wide, and ultrawide). I see that in the advanced options under 'prior' I can adjust 'r-scale fixed effects' and 'r-scale random effects'. I assume these have something to do with what I am trying to achieve, but I am not sure which numbers to fill in to get medium/wide/ultrawide priors. Some help would be much appreciated!
Kevin
Comments
Hi Kevin,
You can run this past Richard Morey to be sure, but if you compare the scale for the ANOVA to that of the t-test you see that they differ by a factor of 1/2. So if you take the t-test scales and divide by 2 you should be good. Again, you can compare by trying this out on two-group data and executing a t-test as well as an ANOVA. I would adjust only the priors for the fixed effects, as these are most likely the ones you are interested in.
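(A quick way to see where the divide-by-two rule comes from, as a hedged sketch rather than JASP's actual internals: with sum-to-zero coding, a two-group ANOVA gives the groups effects +a and -a, so the standardized mean difference is d = 2a. A Cauchy prior with scale r on a therefore implies a Cauchy prior with scale 2r on d, which is what the t-test's r-scale refers to. The simulation below checks that the two priors match when the ANOVA scale is the t-test scale divided by 2.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup (not JASP's code): two-group ANOVA with sum-to-zero
# coding, so group effects are +a and -a and Cohen's d equals 2*a.
r_ttest = 0.707          # JASP's default "medium" t-test scale
r_anova = r_ttest / 2    # candidate matching ANOVA fixed-effects scale

n = 1_000_000
a = r_anova * rng.standard_cauchy(n)          # ANOVA prior draws on the effect
d_implied = 2 * a                             # implied prior on Cohen's d
d_direct = r_ttest * rng.standard_cauchy(n)   # t-test prior on d directly

# The interquartile ranges of the two priors should agree closely
# (for a Cauchy with scale s, the IQR is 2*s).
iqr = lambda x: np.subtract(*np.percentile(x, [75, 25]))
print(iqr(d_implied), iqr(d_direct))
```

Both interquartile ranges come out near 2 × 0.707 ≈ 1.41, consistent with the two priors being the same distribution. As E.J. says, the safest check is still to run a t-test and a two-group ANOVA on the same data in JASP and confirm the Bayes factors agree.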
Cheers,
E.J.
Hi E.J.,
Thanks for your quick reply! Do you mean I should send Richard Morey a direct email with this question?
Kind regards,
Kevin
That would be your best bet. You could add my reply and ask whether he agrees.
Hi EJ,
Thanks for providing some insight on this... it doesn't seem to be discussed anywhere!
I'm still confused about how to choose an r value. I have some pilot data where I ran a paired t-test between two conditions, so I have a difference in means and the effect size in Cohen's d units.
I am now looking to run an ANOVA that includes one more group than the pilot. How do I use these data to inform my r prior?
Cheers
I would not use those data to inform your prior. Ideally, the prior is determined before you see any data. Using the data to inform the prior and then using the same prior to model the data is a sort of statistical double dipping. I would simply stick to the defaults for the ANOVA, and do (informed) t-tests on the comparisons of interest.
Cheers,
E.J.