Reviewer wants justification for the default prior

Hi,
I have conducted a repeated-measures Bayesian ANOVA and reported the default priors (r scale fixed effects = 0.5; r scale random effects = 1). I have also read and cited the relevant articles (e.g., Rouder et al., 2012), but the reviewer wants a specific justification. I do not expect a big effect of the experimental condition when I compare the models.
Any idea how to formulate this justification without going into the math? The point is that these are default priors that should fit most cases in experimental psychology, but the reviewer wants more than that.

Best regards,
Ester

Comments

  • Dear Ester,

    Perhaps you only have fixed effects, in which case I'd just report those. The ANOVA priors were proposed by analogy to the t-test; if you conduct a between-subjects t-test with the default r = .707 setting, you ought to get the same result as for a one-way ANOVA with two levels. You could conduct a robustness analysis and examine the extent to which you get qualitatively similar results if you change the settings somewhat. I think this is more compelling than a philosophical argument.
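
    For what it's worth, below is a minimal Python sketch of such a robustness check. It assumes the two-sample JZS Bayes factor integral from Rouder et al. (2009); the observed t value, the group sizes, and the grid of r scales are invented for illustration, and JASP and the BayesFactor package use more general implementations.

    # Hypothetical robustness check for a two-sample t-test: recompute the
    # JZS Bayes factor (Rouder et al., 2009) for several Cauchy prior scales r.
    # The observed t, the group sizes, and the grid of r values are made up.
    import numpy as np
    from scipy import integrate

    def jzs_bf10(t, n1, n2, r=np.sqrt(2) / 2):
        """Two-sample JZS Bayes factor BF10 with a Cauchy(0, r) prior on effect size."""
        nu = n1 + n2 - 2               # degrees of freedom
        n_eff = n1 * n2 / (n1 + n2)    # effective sample size

        def integrand(g):
            # marginal likelihood under H1; g ~ InverseGamma(1/2, r^2/2),
            # which corresponds to a Cauchy(0, r) prior on the effect size delta
            return ((1 + n_eff * g) ** -0.5
                    * (1 + t ** 2 / ((1 + n_eff * g) * nu)) ** (-(nu + 1) / 2)
                    * r / np.sqrt(2 * np.pi) * g ** -1.5 * np.exp(-r ** 2 / (2 * g)))

        numerator, _ = integrate.quad(integrand, 0, np.inf)
        denominator = (1 + t ** 2 / nu) ** (-(nu + 1) / 2)   # likelihood under H0
        return numerator / denominator

    t_obs, n1, n2 = 2.3, 25, 25                              # made-up example data
    for r in (0.5, np.sqrt(2) / 2, 1.0, np.sqrt(2)):
        print(f"r = {r:.3f}  BF10 = {jzs_bf10(t_obs, n1, n2, r):.2f}")

    If the qualitative conclusion holds across that grid, reporting the grid is usually the most convincing reply to a reviewer.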

    Cheers
    E.J.

  • We have the same problem.
    This is not a very satisfying answer :-(

  • Hi Pete,

    Some additional thoughts:
    1. The default priors were chosen to meet formal desiderata; see Bayarri, M. J., Berger, J. O., Forte, A., & García-Donato, G. (2012). Criteria for Bayesian model choice with application to variable selection. The Annals of Statistics, 40, 1550-1577.
    2. One can promote subjective or informed priors, and that's fine and useful, but (a) default priors still provide a good reference point; (b) for complicated models the subjective approach becomes practically very difficult.
    3. A robustness analysis is always useful.
    4. If others have more information they can use a different prior -- as long as the data are available then anyone is free to apply any model they like.
    5. At APS, Julia Haaf presented work that showed how one can add theoretically motivated order constraints (a generic sketch of the idea follows below this list). This is not yet possible in JASP, but it's on our agenda.
    6. It is always a good idea to acknowledge uncertainty; there is no "ultimate" or "correct" prior distribution. But the defaults seem to work well enough for the applications we have encountered so far, and they provide a useful alternative to the "p < .05" summary.
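
    For concreteness, the sketch below illustrates the general idea behind point 5 with the encompassing-prior approach of Klugkist and colleagues: the Bayes factor for an order-constrained model against the unconstrained model is the posterior probability of the constraint divided by its prior probability. The toy normal model, the prior scale, and the simulated data are all invented; this is not Julia Haaf's model and not a JASP feature.

    # Hypothetical illustration: Bayes factor for the order constraint mu_A < mu_B
    # via the encompassing-prior approach; everything below is a made-up toy example.
    import numpy as np

    rng = np.random.default_rng(1)

    # simulated data: two independent conditions with known observation sd = 1
    y_a = rng.normal(0.2, 1.0, size=40)
    y_b = rng.normal(0.6, 1.0, size=40)

    prior_sd = 1.0      # N(0, prior_sd^2) prior on each condition mean (assumption)
    n_draws = 200_000

    def posterior_draws(y, prior_sd, size):
        """Conjugate normal posterior for a mean, unit observation variance."""
        post_var = 1.0 / (len(y) + 1.0 / prior_sd ** 2)
        return rng.normal(post_var * y.sum(), np.sqrt(post_var), size=size)

    # draws of (mu_A, mu_B) under the unconstrained prior and posterior
    prior_a = rng.normal(0.0, prior_sd, n_draws)
    prior_b = rng.normal(0.0, prior_sd, n_draws)
    post_a = posterior_draws(y_a, prior_sd, n_draws)
    post_b = posterior_draws(y_b, prior_sd, n_draws)

    prior_prob = np.mean(prior_a < prior_b)   # about 0.5 by symmetry
    post_prob = np.mean(post_a < post_b)
    print(f"BF(constrained vs unconstrained) ~ {post_prob / prior_prob:.2f}")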

    Hope this helps.
    Cheers,
    E.J.
