Bayesian t-test - same data different BF10 each time

I was working with the Stereograms t-test data set using the Bayesian Mann-Whitney test while reading the reporting guidelines preprint. While flipping between different hypotheses, when I returned to Group 1 > Group 2 I got a completely different BF10 even though all the other settings stayed the same. The Mann-Whitney W stays at 973 and the credible intervals are always the same, but BF10 has varied between 10.44 and 5.7 (see attached screenshots).
Any idea why?

Mark

Comments

  • Hi Mark,

    Thanks for sharing this. The underlying algorithm introduces some degree of variation: there is Gibbs sampling both for dealing with the rank data and for sampling from the posterior distribution of delta. I have just implemented some extra stability and clarification for the Bayesian Mann-Whitney test. It now includes a footnote about the algorithm, it runs multiple chains and bases the Bayes factor on those (I might also let the user specify that number themselves), and it reports the R-hat statistic (i.e., the Gelman-Rubin diagnostic), which measures convergence of the MCMC chains.
    This variation is especially prevalent when either the sample size or the number of MCMC samples is low. For now, it may help to increase the number of samples to the maximum; the sketch below this reply illustrates the idea. The updated code will be in JASP 0.9.3, in any case!

    Kind regards
    Johnny
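
To make the run-to-run variation concrete, here is a minimal Python sketch, not JASP's actual implementation: a crude Metropolis sampler stands in for the Gibbs sampler on the latent ranks, BF10 is estimated with a Savage-Dickey density ratio under an assumed Cauchy(0, 0.707) prior on the effect size delta, and a basic R-hat is computed over four chains. The data and every function name here are made up for illustration; the point is only that per-chain BF10 estimates wobble, the pooled estimate is steadier, and the wobble shrinks as the number of MCMC samples grows.

```python
# Sketch only: illustrates why a sampling-based Bayes factor fluctuates
# from run to run, and how pooling several chains plus an R-hat check
# stabilises it. Not the JASP / Bayesian Mann-Whitney code.
import numpy as np
from scipy import stats

rng = np.random.default_rng()

def sample_delta_posterior(x, y, n_samples, rng):
    """Very rough Metropolis sampler for the effect size delta.
    Stands in for the Gibbs sampler used for the rank-based test."""
    n1, n2 = len(x), len(y)
    pooled_sd = np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)
    obs_d = (x.mean() - y.mean()) / pooled_sd        # observed standardised difference
    se = np.sqrt(1 / n1 + 1 / n2)

    def log_post(delta):
        # normal likelihood for the observed difference + Cauchy(0, 0.707) prior
        return (stats.norm.logpdf(obs_d, loc=delta, scale=se)
                + stats.cauchy.logpdf(delta, scale=0.707))

    delta, out = 0.0, np.empty(n_samples)
    for i in range(n_samples):
        prop = delta + rng.normal(scale=0.3)
        if np.log(rng.uniform()) < log_post(prop) - log_post(delta):
            delta = prop
        out[i] = delta
    return out

def savage_dickey_bf10(delta_samples):
    """BF10 via the Savage-Dickey ratio: prior density at delta = 0
    divided by the posterior density at 0, estimated with a KDE."""
    post_at_zero = stats.gaussian_kde(delta_samples)(0.0)[0]
    prior_at_zero = stats.cauchy.pdf(0.0, scale=0.707)
    return prior_at_zero / post_at_zero

def rhat(chains):
    """Basic Gelman-Rubin diagnostic over a (n_chains, n_samples) array."""
    m, n = chains.shape
    W = chains.var(axis=1, ddof=1).mean()        # within-chain variance
    B = n * chains.mean(axis=1).var(ddof=1)      # between-chain variance
    var_hat = (n - 1) / n * W + B / n
    return np.sqrt(var_hat / W)

# Fake data standing in for the two Stereograms groups.
x = rng.normal(1.0, 2.0, size=25)
y = rng.normal(0.0, 2.0, size=25)

for n_samples in (1_000, 10_000):   # more MCMC samples -> less run-to-run spread
    chains = np.array([sample_delta_posterior(x, y, n_samples, rng)
                       for _ in range(4)])
    per_chain = [savage_dickey_bf10(c) for c in chains]
    pooled = savage_dickey_bf10(chains.ravel())
    print(f"n={n_samples}: per-chain BF10 {np.round(per_chain, 2)}, "
          f"pooled BF10 {pooled:.2f}, R-hat {rhat(chains):.3f}")
```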
