Bayesian RM ANOVA: Interpretation of interaction model and BFs that are "anecdotal"
Hi there,
I just started working with JASP and have some problems interpreting the output of the Bayesian RM ANOVA. Any help would be greatly appreciated!
In my current experiment I tested two groups of participants. I'm using the BF to check whether there is a difference between two of my conditions. If there is no difference, I would like to collapse across these two conditions.
I did a Bayesian RM ANOVA with Group (Groups 1 & 2) as between-subjects factor and Condition (Levels A & B) as within-subjects factor. Now I think I should look at the interaction model (right?). I read in a previous post (http://www.cogsci.nl/forum/index.php?p=/discussion/1716/interpretation-of-bayes-repeated-mesures-anova/p1) that I would have to divide the BF10 for the model with the two main effects by the BF10 for the model with the main effects and the interaction, to isolate the interaction and see what it adds to the model. For my data that results in 2.1.
Now I'm a bit confused as to what that means. It looks like including any of the factors does not really add anything to explain the data (all BFs < 1), so how can the value for the isolated interaction be 2? Also, none of the BFs is below 0.3, so it's only "anecdotal" evidence for the null. I'm not sure what to make of that. Can I collapse across my conditions or not? Or is it just up to me whether I do?
Thanks for any help in advance. I really appreciate it!
Comments
http://img.cogsci.nl/uploads/573975dde1460.jpg
Sorry, my image got lost. Please click the link above. Thanks!
Hi Lori,
First, let's assume that looking at the interaction term is a good idea. Then yes, the BF is 2.1 against including the interaction (over and above the two-main-effects model). You can get the same result more easily if you use the model specification option and assign both main effects as "nuisance": this makes them part of the null model, so you don't need to do the computation yourself.
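If it helps to see the arithmetic spelled out, here is a minimal sketch with made-up BF10 values (the individual numbers are hypothetical; only the 2.1 ratio is from your table):

```python
# Hypothetical BF10 values as read from a JASP model-comparison table.
# Each BF10 compares a model against the null model; these numbers are
# made up for illustration and are not the values from the actual table.
bf10_main_effects = 0.63    # model: Group + Condition
bf10_main_plus_int = 0.30   # model: Group + Condition + Group:Condition

# BF in favour of adding the interaction, over and above the main effects:
bf_for_interaction = bf10_main_plus_int / bf10_main_effects
print(round(bf_for_interaction, 2))       # ~0.48

# Its reciprocal is the BF *against* the interaction (the 2.1 in this thread):
bf_against_interaction = bf10_main_effects / bf10_main_plus_int
print(round(bf_against_interaction, 2))   # ~2.1
```

Declaring both main effects as nuisance builds this comparison into the table directly, because the main-effects model then plays the role of the null model.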
Secondly, I am not sure whether that interaction term tells you what you want to know. Suppose that, on average, group 1 scores 11 and 15 in the two conditions, and group 2 scores 111 and 115. There is no interaction, but do you really want to collapse across the two conditions? Probably not.
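Put in numbers, a quick sketch using just the made-up means above:

```python
# Made-up cell means from the example above: (condition A, condition B).
group_1 = (11, 15)
group_2 = (111, 115)

# The interaction asks: does the condition effect differ between the groups?
effect_g1 = group_1[1] - group_1[0]   # 4
effect_g2 = group_2[1] - group_2[0]   # 4
print(effect_g1 - effect_g2)          # 0 -> no interaction at all

# Yet the condition effect itself is clearly not zero (it is 4 in both groups),
# so the absence of an interaction by itself does not license collapsing
# across the two conditions.
print((effect_g1 + effect_g2) / 2)    # 4.0
```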
Instead, it seems to me that you might want to decide whether or not to add condition as a covariate. For concreteness, suppose you want to test ADHD children against controls. Each child does a task with letters or with numbers, and you want to see whether you can collapse across those, because they are not really what you care about. The same question arises when you have counterbalanced some factor, for instance across two halves of the experiment. Should one include the counterbalancing factor in the analysis? It depends -- if there is no effect then you are better off without that factor, because you pay for it with one degree of freedom; if there is an effect then you are better off including it. At least this is my quick assessment; other people may have different ideas.
Basically, I am saying that the question "do the groups differ in the effect of condition?" (the interaction term) does not tell you whether you can collapse across condition.
Cheers,
E.J.
Thanks for the quick reply, EJ. That makes a lot of sense! I still don't fully understand why the value 2.1 is evidence against including the interaction, though.
Thanks also for the comment on the usefulness of the interaction term here. I am now wondering whether that makes sense at all, too.
The issue is that I actually have four levels in Condition, but levels A & B are both control conditions, so I just wanted to average across them to get a single baseline. I first did a normal RM ANOVA and p is non-significant for these two levels. But since I'm aware that accepting the null is not possible with p-values, I wanted to move on to Bayes. So, for the concrete example you gave: if I had letters, digits, colours, and shapes, and I'm only interested in colours and shapes while digits and letters are controls, can I include these as covariates?
Thanks so much for this detailed explanation - this was very helpful!
Cheers
Hi Lori,
About the 2.1 being evidence against the interaction: the table indicated that the two-main-effects model didn't do so well against the null model, but adding the interaction didn't help. In fact, it made things even worse: the model with the main effects and the interaction predicted the data 2.1 times worse than the model with only the main effects.
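As for the "anecdotal" label: the usual verbal categories (e.g., Lee & Wagenmakers, 2013) treat a BF between 1 and 3 as anecdotal and one between 3 and 10 as moderate, so 2.1 leans toward the simpler model, but only weakly. A rough sketch of that rule of thumb (the thresholds are conventions, not hard cutoffs):

```python
def evidence_label(bf):
    """Rough verbal label for a Bayes factor (Lee & Wagenmakers, 2013).
    The thresholds are conventions, not hard rules."""
    if bf < 1:
        bf = 1 / bf  # describe the evidence for the other model instead
    if bf < 3:
        return "anecdotal"
    if bf < 10:
        return "moderate"
    if bf < 30:
        return "strong"
    if bf < 100:
        return "very strong"
    return "extreme"

print(evidence_label(2.1))   # 'anecdotal' evidence for leaving the interaction out
```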
About your other question: ah, the design is a bit more complicated than I had thought, and I am just not 100% sure here. My comment about the covariate was just one idea. Regardless, I think you would need to report the analysis without collapsing anyway. So my suggestion is to report the analysis you had originally planned (without collapsing) and then present the other one as well. If they lead to very different results, I would be cautious in my conclusions.
Cheers,
E.J.
Great! That makes sense. Thanks so much for your advice - this was very helpful!