Interpreting Analysis of Effects in Bayesian RM-ANOVA output

edited June 2016 in JASP & BayesFactor

Hi JASP-team,

I have conducted a simple reaction time experiment with congruency (congruent vs. incongruent trials) as a two-level within-subject factor and the method of stimulus presentation (Source: Laptop vs. Tablet) as a between-subject factor.

The first table below seems clear: the data suggest that the model with the two main effects is the most preferred, compared to the null model. Moreover, there is moderate evidence against an interaction effect between the two factors, from comparing the (A + B + A*B) model against the (A + B) model: BF01(A + B + A*B) / BF01(A + B). Based on this, I can conclude, as expected, that there is a congruency effect and a main effect of presentation method, but that the congruency effect does not differ between the two presentation methods. So far, so good. (Please correct me, though, if I'm making any errors here.)

Now, when I tick the Effects box, the second table is more confusing, and I could not find any documentation on how to interpret this output. Why does the prior P change from 0.200 to 0.600? Based on the third column, can I interpret the main effect of Source as stronger than the congruency effect? Or how should this be interpreted? And, if I read it correctly, is the posterior probability of an interaction effect lower than its prior probability?

Any pointers or documentation are very much appreciated!

Cheers,
Laurent

ps. Awesome work with the software! After dreadful years working with SPSS JASP has certainly succeeded in making statistics fun again.

Comments

  • EJ

    Hi Laurent,

    We are working on a paper that explains the output. Basically, the "Effects" output averages across the models. With a prior probability of 0.2 for each of the five models, the factor "Source" occurs in three of them, giving a summed prior inclusion probability of 0.2 + 0.2 + 0.2 = 0.600. The same can be done with the posterior model probabilities [P(M|data)] to obtain a posterior inclusion probability. The change from the prior inclusion odds to the posterior inclusion odds is then the Bayes factor for including that factor. This functionality is particularly useful when there are many models and factors, so that multiple candidate models are reasonable to compare.

    Cheers,
    E.J.
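    The averaging E.J. describes can be sketched numerically. Below is a minimal Python sketch; only the five candidate models and the 0.2 prior probabilities come from the thread, while the posterior model probabilities are made-up placeholders for illustration:

    ```python
    # Sketch: prior/posterior inclusion probabilities and the inclusion
    # Bayes factor, computed by summing over the models that contain a
    # given factor. Posterior values below are hypothetical.
    models = {
        "null":        {"prior": 0.2, "post": 0.01},
        "A":           {"prior": 0.2, "post": 0.05},
        "B":           {"prior": 0.2, "post": 0.10},
        "A + B":       {"prior": 0.2, "post": 0.70},
        "A + B + A*B": {"prior": 0.2, "post": 0.14},
    }

    def inclusion_stats(factor, models):
        """Summed inclusion probabilities and the inclusion Bayes factor."""
        prior_inc = sum(m["prior"] for name, m in models.items() if factor in name)
        post_inc  = sum(m["post"]  for name, m in models.items() if factor in name)
        prior_odds = prior_inc / (1 - prior_inc)   # prior inclusion odds
        post_odds  = post_inc  / (1 - post_inc)    # posterior inclusion odds
        return prior_inc, post_inc, post_odds / prior_odds

    # Factor "B" (e.g. Source) appears in three of the five models,
    # so its prior inclusion probability is 0.2 + 0.2 + 0.2 = 0.6.
    prior_p, post_p, bf_inclusion = inclusion_stats("B", models)
    ```

    The inclusion Bayes factor answers: how much did the data shift the odds in favor of models containing this factor? A value below 1 (posterior inclusion probability lower than the prior, as you noticed for the interaction) is evidence against including that effect.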
