
[open] Bayesian ANOVA and nuisance variables in JASP

R_A Posts: 4
edited March 2016 in JASP & BayesFactor

After several years of sticking to SPSS-based ANOVA, I'm just starting to grapple with Bayes (using JASP) so apologies for possibly silly questions below...

Within a multi-factorial design, I have a factor A that gives a significant main effect (e.g. p = .009) in the traditional repeated measures ANOVA, but a BF10 in favour of the null (e.g. .31) in the Bayesian version using JASP. When I label one of the other (strongly supported) factors in the design, factor B, as 'nuisance', factor A is then supported (e.g. BF10 = 6.15). The factor I'm labeling as nuisance is of less interest than factor A, but it isn't of zero interest, so I'm not sure if and when it's appropriate to do so.
It would also be useful to understand more clearly why factor B is influencing the comparison of the factor A model against the null (when the two don't interact). This makes me less confident about interpreting any factor's model in the context of a multi-factorial design if it's potentially going to be obscured by other factors in the design.

I've seen this with other data sets too, so any help appreciated!

Comments

  • EJ Posts: 408

    Hi R_A,

    thanks for that question. It has me scratching my head. Perhaps Richard knows more. Let me look into this and get back to you.

    Cheers.
    E.J.

  • EJ Posts: 408

    OK, after talking to Richard it is now clear to me. Consider a situation with factors A and B. BTW, a specific example is always appreciated -- note that you can upload annotated .jasp files to the OSF and everybody can view the output.

    Anyway, consider the original analysis where nothing is nuisance. You have a BF10 for A+B over the null (the two main effects model; BF10(A+B)) and you have a BF10 for B over the null, BF10(B). Let's use transitivity to compute the evidence for "adding A with B already in hand": BF10(A+B)/BF10(B).

    If you tick "B" as nuisance, you are comparing the "null model with B in hand" to a model that adds A. The number you get should be identical to the one you get from the computation outlined above that did not involve ticking B as nuisance (the BF10(A+B)/BF10(B) operation).

    You are right that BF10(A) [where you look at the support for adding A over having nothing] is not the same as BF10(A+B)/BF10(B) [where you look at the support for adding A with B in hand]. This is what it is -- maybe Richard can say more.
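
    In R, using the BayesFactor package (the engine underneath JASP's Bayesian ANOVA), the same computation looks roughly like the sketch below; the data frame dat and the column names y, A, B, and subject are hypothetical placeholders.

    ```r
    # Sketch only: 'dat' and its columns y, A, B, and subject are hypothetical
    # placeholders; A, B, and subject should be factors.
    library(BayesFactor)

    # Model with both main effects (plus subject as a random factor),
    # compared against an intercept-only null
    bf_AB <- lmBF(y ~ A + B + subject, data = dat, whichRandom = "subject")

    # Model with B only (plus subject), against the same null
    bf_B <- lmBF(y ~ B + subject, data = dat, whichRandom = "subject")

    # Transitivity: evidence for adding A with B already in hand.
    # This ratio should match (up to numerical error) the BF JASP reports
    # after ticking B as nuisance.
    bf_AB / bf_B
    ```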

    Of course you may wonder what analysis to report. There are at least three options: (1) be transparent and report all comparisons; (2) include the factors for which there is good support, and then see whether the factor of interest adds more -- so if there is good support for including B, include it first and then look at the support for adding A as well; (3) do an "effects" analysis where you don't focus on specific models but average across all of them to identify the overall inclusion probability for the factor of interest.

    Cheers,
    E.J.

  • R_A Posts: 4

    Thanks for the quick and helpful response! I did try both the transitive and nuisance approaches and they give very similar outputs, as you say. I'll try to upload the output image from the basic analysis if that will help others. The prior discussion I'd seen concerning interpretation of main effect outputs tended to imply that we just take the BF10 for each factor, when in fact the genuine BF could be obscured by other factors in the design, as seems to be the case here.

    On a general note, I am finding JASP useful and easy to use so far. My main issues have been with output interpretation (though this is proving useful in learning about the underlying mechanics). I have also had an issue with the slight variation in outputs that JASP gives between running the same analysis on the same data multiple times, but there are approaches to dealing with this. Are there any plans to expand the guidance/help sections directly associated with JASP, in order to help relative beginners overcome such bumps?

    Cheers, RA.

  • R_A Posts: 4

    [image: JASP output from the basic analysis]

  • R_A Posts: 4

    And with Factor B labelled as nuisance...

    [image: JASP output with Factor B labelled as nuisance]

  • EJ Posts: 408

    Hi RA,

    I am working on a JASP manual, and on a JASP article that describes how to interpret the JASP output for ANOVA designs. You can also check out the ANOVA paper on my website, http://www.ejwagenmakers.com/inpress/RouderEtAlinpressANOVAPM.pdf

    Cheers,
    E.J.

  • vins Posts: 4

    Hi,
    I analyzed my data with a repeated measures ANOVA in SPSS (I have 4 RM factors: Canale (8 levels), Emisfero (2 levels), Sequenza (4 levels), and Task (3 levels)).
    After revision, the reviewer said "....the authors should then run Bayesian statistics to show that the null hypothesis is indeed true".
    I tried JASP's Bayesian RM ANOVA; the table reports the RM ANOVA results and BF inclusion.

    Could someone help me to interpret the BF inclusion?

    [image attachment: image.png]
  • EJ Posts: 408

    The BF inclusion averages across all models under consideration. It pits all models that include the factor of interest against all models that exclude that factor. You then look at the change from the prior inclusion odds (summed prior probability of all models that include the factor versus summed prior probability of all models that exclude the factor) to the posterior inclusion odds. This change is the inclusion BF. The topic has been discussed on the forum several times, so a search will turn up more information.
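
    As a toy illustration of that prior-to-posterior change (the model probabilities below are made up for the example, not taken from your output), take a two-factor design with models null, A, B, and A + B, and ask about inclusion of factor A:

    ```r
    # Made-up prior and posterior model probabilities; purely illustrative.
    prior <- c(null = 0.25, A = 0.25, B = 0.25, AB = 0.25)
    post  <- c(null = 0.05, A = 0.10, B = 0.35, AB = 0.50)

    with_A    <- c("A", "AB")    # models that include factor A
    without_A <- c("null", "B")  # models that exclude factor A

    prior_inclusion_odds <- sum(prior[with_A]) / sum(prior[without_A])  # 1.0
    post_inclusion_odds  <- sum(post[with_A])  / sum(post[without_A])   # 1.5

    # Inclusion BF = change from prior to posterior inclusion odds
    post_inclusion_odds / prior_inclusion_odds                          # 1.5
    ```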

    Cheers,
    E.J.

    Thanked by: vins
  • vins Posts: 4

    Thank you very much
    best
    Vins

  • vins Posts: 4

    Hi EJ,

    The estimated inclusion Bayes factor for the effect of "task" indicated that the data were infinitely more in favour of the alternative hypothesis (relative to all alternative models)... is that correct?

  • vins Posts: 4

    Hi,
    I have run a Bayesian RM ANOVA, and in the post hoc comparisons I see "BF10, U". What does the U mean?
    Could you help me?

  • EJ Posts: 408

    U stands for "uncorrected", so it does not include the post-hoc correction term that comes from the prior model probability.

  • EJ Posts: 408

    We'll clarify this in the table heading for the next release; I've made it an issue on our GitHub page.
