
Problems understanding Bayesian repeated measures ANOVA

Hi all,

I have some problems understanding parts of the output of a Bayesian repeated measures ANOVA. In my study, I have a between-subjects variable (condition) and a within-subjects variable (face condition). In the standard analysis I obtain a main effect of face condition; the main effect of condition and the interaction between the factors were not significant.

In the Bayesian repeated measures ANOVA I obtained the following results:

As you can see, BF10 for the variable face condition is 1.000. What strikes me is that if I change the Bayes factor to BF01, I obtain the same value.

I'm not sure how to interpret this, and it doesn't look right to me, especially when I compare these values with the null model... Any ideas?



  • Hi Ajestudillo,

    The BF10/BF01 compares all models to the best-performing model (this is a setting you can change to "compare to null model"). This means that the best-performing model is at the top of your table - in this case the model with only the main effect of face condition performs best, which is in line with your other analysis. Changing the subscript of the BF only changes the interpretation: instead of saying that the data are 0.2637 times more likely under the model with both main effects, compared to the model with only the main effect of face condition, you can say that the data are 3.7928 times more likely under the latter model than under the former. These Bayes factors carry the same information, since BF01 is just equal to 1/BF10. Sometimes one is nicer for interpretation, sometimes the other.
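    That reciprocal relationship is easy to verify with a few lines (a minimal sketch; 0.2637 is the rounded BF10 from the table above, so the result differs from JASP's 3.7928 only by display rounding):

    ```python
    # BF01 is simply the reciprocal of BF10: the same evidence, read the other way.
    bf10 = 0.2637       # data ~0.26 times as likely under "both main effects"
    bf01 = 1 / bf10     # data ~3.79 times as likely under "face condition only"

    print(bf01)         # roughly 3.79; JASP shows 3.7928 because it inverts before rounding
    ```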

    Your BF is 1 because in the top row you are comparing the model with the main effect of face condition to itself: the best model against the best model always gives a Bayes factor of 1. The new JASP version has improved help files, where you will also find more information about the different columns.

    You can also tick the box "Effects" to get results per factor in your model, averaged across all the possible models. So instead of the model comparison you are doing above, you are doing effect comparison, which compares the performance of all models with a particular effect to the performance of all models without that effect.
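    As a rough sketch of what that effect-level comparison computes (the model probabilities below are made-up illustrative numbers, not from this analysis): the inclusion Bayes factor for an effect is the posterior odds of the models containing the effect divided by the prior odds of those models.

    ```python
    # Hypothetical prior and posterior probabilities over four candidate models,
    # labeled by which effects they contain (numbers are illustrative only).
    prior     = {"null": 0.25, "face": 0.25, "cond": 0.25, "face+cond": 0.25}
    posterior = {"null": 0.05, "face": 0.70, "cond": 0.05, "face+cond": 0.20}

    def inclusion_bf(effect, prior, posterior):
        """Posterior inclusion odds over prior inclusion odds for one effect."""
        post_in = sum(p for m, p in posterior.items() if effect in m)
        pri_in  = sum(p for m, p in prior.items() if effect in m)
        return (post_in / (1 - post_in)) / (pri_in / (1 - pri_in))

    print(inclusion_bf("face", prior, posterior))  # about 9.0 with these made-up numbers
    ```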

    I hope this helps!


  • Hi Johnny,

    That makes perfect sense. I was expecting a comparison with the null model, so I didn't understand the results.

    Thanks a lot!

