Evaluating and reporting the quality of Bayes analyses reported by JASP

Hello. Could one of the JASP team please explain how to check whether the output from a Bayesian analysis in JASP (such as independent-samples t-tests, repeated measures ANOVA, or correlation pairs) is based on MCMC analyses that have converged and is otherwise of satisfactory quality?

My understanding is that if there is a lack of convergence, the BFs etc. may not be reliable. Do JASP's Bayesian estimation routines automatically check for convergence in some way and give an error message if there is a problem? For researchers (especially those, like me, who are new to the Bayesian approach), how can the quality of the reported Bayesian estimates most easily be judged, and how should this be reported in published research based on JASP to satisfy journal reviewers?

Another question: How does one interpret the reported error percentage in the output tables?

Thank you!

Comments

  • Hi KenC,

    We try to avoid MCMC as much as we can. For many of our tests, we have analytic solutions, or only require a one-dimensional integral. When we do use numerical methods, the error percentage gives an indication of the accuracy of the approximation (running the same analysis multiple times will of course also give you an idea about that). A small illustrative sketch of such a one-dimensional integral follows after this thread.

    Cheers,

    E.J.

  • Thank you, E.J., for your prompt and helpful response.

    I have since done some further reading in various JASP-related papers by you and others on Bayesian ANOVA, t-tests, etc., which further clarified the issue. Another post on this forum, "interpreting error percentage", was also illuminating.

    Many thanks for your help and for JASP!

    KenC
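
For readers who want a concrete feel for the one-dimensional integral and the error estimate E.J. describes above, here is a minimal sketch in Python. It is an illustration only, not JASP's actual implementation: it computes the default (JZS) Bayes factor for an independent-samples t-test by integrating the noncentral-t likelihood over a Cauchy(0, √2/2) prior on the effect size (following Rouder et al., 2009), and reports the quadrature routine's own error estimate as a rough analogue of the "error %" column in JASP's output tables. The function name and the example numbers (t = 2.3, n1 = n2 = 25) are made up for this illustration.

```python
# Minimal sketch (not JASP's code): the JZS Bayes factor for an
# independent-samples t-test reduces to a one-dimensional integral over
# the effect size delta (Rouder et al., 2009).

import numpy as np
from scipy import integrate, stats


def jzs_bf10_two_sample(t, n1, n2, r_scale=np.sqrt(2) / 2):
    """BF10 for an independent-samples t-test, Cauchy(0, r) prior on delta."""
    df = n1 + n2 - 2
    n_eff = n1 * n2 / (n1 + n2)  # effective sample size for the noncentrality

    # Marginal likelihood of the observed t under H1: integrate the
    # noncentral-t likelihood over the Cauchy prior on the effect size.
    def integrand(delta):
        return (stats.nct.pdf(t, df, delta * np.sqrt(n_eff))
                * stats.cauchy.pdf(delta, loc=0, scale=r_scale))

    m1, quad_err = integrate.quad(integrand, -np.inf, np.inf)

    # Marginal likelihood under H0 (delta = 0): a central t density.
    m0 = stats.t.pdf(t, df)

    bf10 = m1 / m0
    # Rough relative error of the numerical approximation, analogous in
    # spirit to the "error %" column in JASP's output tables.
    error_pct = 100 * quad_err / m1
    return bf10, error_pct


if __name__ == "__main__":
    bf10, err = jzs_bf10_two_sample(t=2.3, n1=25, n2=25)
    print(f"BF10 ~ {bf10:.3f} (numerical error ~ {err:.2e}%)")
```

Note that with deterministic quadrature like this, re-running the script gives identical results, so the quoted error estimate is the relevant accuracy check; E.J.'s suggestion to rerun the analysis is most informative when the approximation is stochastic (e.g., Monte Carlo sampling), where run-to-run variation in the Bayes factor directly shows the size of the numerical error.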
