Discrepancy between frequentist and Bayesian ANOVA
Hello,
I conducted an experiment with one between-subjects factor (Group) and two within-subjects factors (MemoryCue, Source). When analysing my results, a mixed ANOVA gives two main effects (MemoryCue and Source) and a significant interaction between the two, F(1, 128) = 11.92, p < .001.
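For reference, this kind of mixed ANOVA can be specified in base R roughly as follows (a sketch only: it assumes the same data frame and column names as the Bayesian analysis below and that Subject is coded as a factor, and may not match the software actually used for the frequentist analysis):
# Frequentist mixed ANOVA: Group between subjects, MemoryCue and Source within subjects
freq_aov <- aov(SourceRecall ~ Group * MemoryCue * Source +
                  Error(Subject / (MemoryCue * Source)),
                data = data_final)
summary(freq_aov)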
When I conduct a Bayesian ANOVA using the BayesFactor package I get the following:
Input:
# Bayesian ANOVA with Subject as a random factor
library(BayesFactor)
FR_bf = anovaBF(SourceRecall ~ Group * MemoryCue * Source + Subject,
                data = data_final, whichRandom = "Subject")
FR_bf = sort(FR_bf, decreasing = TRUE)  # sort models by Bayes factor
FR_bf
The output looks as follows:
# Output
Bayes factor analysis
--------------
[1] MemoryCue + Source + MemoryCue:Source + Subject : 2.071683e+111 ±3.53%
[2] MemoryCue + Source + Subject : 1.827539e+111 ±2.5%
[3] Group + MemoryCue + Source + MemoryCue:Source + Subject : 1.630972e+110 ±12.65%
[4] Group + MemoryCue + Source + Subject : 1.317137e+110 ±2.54%
[5] Group + MemoryCue + Group:MemoryCue + Source + MemoryCue:Source + Subject : 7.646137e+107 ±4.55%
[6] Group + MemoryCue + Group:MemoryCue + Source + Subject : 7.20429e+107 ±8.43%
[7] Group + MemoryCue + Source + Group:Source + MemoryCue:Source + Subject : 1.263392e+107 ±2.86%
[8] Group + MemoryCue + Source + Group:Source + Subject : 1.154352e+107 ±2.84%
[9] MemoryCue + Subject : 3.594377e+105 ±2.23%
[10] Group + MemoryCue + Group:MemoryCue + Source + Group:Source + MemoryCue:Source + Subject : 6.333031e+104 ±2.96%
[11] Group + MemoryCue + Group:MemoryCue + Source + Group:Source + Subject : 5.811523e+104 ±3.46%
[12] Group + MemoryCue + Subject : 2.818147e+104 ±9.25%
[13] Group + MemoryCue + Group:MemoryCue + Source + Group:Source + MemoryCue:Source + Group:MemoryCue:Source + Subject : 1.253457e+102 ±2.71%
[14] Group + MemoryCue + Group:MemoryCue + Subject : 1.249367e+102 ±2.94%
[15] Source + Subject : 145095.3 ±2.37%
[16] Group + Source + Subject : 9694.612 ±9.93%
[17] Group + Source + Group:Source + Subject : 7.090814 ±1.43%
[18] Group + Subject : 0.06024572 ±2.97%
Against denominator:
SourceRecall ~ Subject
---
Bayes factor type: BFlinearModel, JZS
When comparing model 1 (including the interaction) with model 2 (only the two main effects) I get the following:
> FR_bf[1]/FR_bf[2]
Bayes factor analysis
--------------
[1] MemoryCue + Source + MemoryCue:Source + Subject : 1.133592 ±4.32%
Against denominator:
SourceRecall ~ MemoryCue + Source + Subject
---
Bayes factor type: BFlinearModel, JZS
Thus, there is no evidence for the interaction (BF ≈ 1.13).
So, why is there such a huge discrepancy between the frequentist and the Bayesian ANOVA results regarding the interaction?
Thanks for your help.
Cheers,
Ivan
Comments
Hi Ivan,
It's a little hard to say without the data (descriptives). I could assist more easily if the analysis were done in JASP, but Richard is the expert anyway so I'll bring this post to his attention. I'd be pretty impressed if he knows what's going on without looking at the data :-)
Cheers,
E.J.
Hey E.J.,
Thanks for your quick reply. I reran the analysis using JASP and the same result emerges.
The frequentist ANOVA gives a highly significant interaction between Source and MemoryCue:
But for the Bayesian ANOVA there is only "anecdotal" evidence, if any, for the interaction. In the following, I checked Source and MemoryCue as nuisance factors to add them to the base model and compared the interaction term against that model:
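In BayesFactor code, that comparison corresponds roughly to the sketch below (column names as in the anovaBF call above; JASP performs this model comparison internally):
# Both main effects as "nuisance" terms present in both models;
# the Bayes factor for the interaction is the ratio of the two model BFs.
m_main = lmBF(SourceRecall ~ MemoryCue + Source + Subject,
              data = data_final, whichRandom = "Subject")
m_int  = lmBF(SourceRecall ~ MemoryCue + Source + MemoryCue:Source + Subject,
              data = data_final, whichRandom = "Subject")
m_int / m_main  # evidence for adding the interaction over the two main effects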
Best,
Ivan
I'm wondering if there are interactions with participant here: that is, effects that are not included in your model (variance between subjects in how big the effect is), and so they are being picked up as noise. There are two things that would be helpful in interpreting the output: 1) the corresponding lmer analysis, and 2) a plot of the data with all subjects shown.
If there are (unmodelled) interactions with participants, these will be interpreted by the Bayes factor as noise, and hence the effects will "appear" smaller.
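In lme4 terms, such interactions with participant correspond to by-subject random slopes; a sketch of how one could check for them (column names assumed from the anovaBF call above, Group omitted for brevity):
library(lme4)
# Random intercept only vs. by-subject random slopes for the within-subject effects
m_intercept = lmer(SourceRecall ~ MemoryCue * Source + (1 | Subject),
                   data = data_final, REML = FALSE)
m_slopes    = lmer(SourceRecall ~ MemoryCue * Source + (MemoryCue * Source | Subject),
                   data = data_final, REML = FALSE)
anova(m_intercept, m_slopes)  # do the effects vary across subjects?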
Thanks for the quick reply. Here is a plot with all the data shown:
I have never conducted an lmer analysis, but I will look into it.
Best,
Ivan
I have now analyzed the data using a linear mixed-effects model. Again, as I have never used these models before, I am not sure if I did it right:
First I fitted a baseline model without the interaction of interest:
Then a model with the interaction:
Finally I compared both models. Here there seems to be evidence for the interaction:
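In lme4 syntax, such a comparison looks roughly like this sketch (a by-subject random intercept is assumed; the exact model specifications used above are not reproduced here):
library(lme4)
# Baseline model without the interaction of interest
m_base = lmer(SourceRecall ~ Group + MemoryCue + Source + (1 | Subject),
              data = data_final, REML = FALSE)
# Model with the MemoryCue:Source interaction added
m_int  = lmer(SourceRecall ~ Group + MemoryCue + Source + MemoryCue:Source + (1 | Subject),
              data = data_final, REML = FALSE)
anova(m_base, m_int)  # likelihood-ratio test for the interaction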
So, I am not sure what to conclude, as this analysis also seems to tell a different story than the Bayesian ANOVA.
Ivan
Hi Ivan,
Well, I am not so sure there is a big conflict. The BF for including the interaction is about 2.7, right? And the p value for your last analysis is about .01. That discrepancy is a little larger than what I usually see, but then again, this is an interaction effect.
Cheers,
E.J.
Hi E.J.,
The BF for including only the Source*MemoryCue interaction is 1.543 if I look at the BF10 column (see above), which compares the model with the interaction against the nuisance model that has both main effects. Or am I looking at the wrong column? Should I be reporting the BF10 of the model that includes the main effect of Group and the interaction as well? As the frequentist analysis gives a significant main effect of Group, that would be reasonable, right?
If that is the case, then yes, the BF10 for the interaction with Group in the model is 2.689.
For the last analysis the p-value for the Source*MemoryCue interaction was <.001.
Thanks again for your help.
Best,
Ivan
Hi Ivan,
Yes, you are right, it's 1.543. The choice of whether or not to look at the other effects depends on theoretical considerations I guess. The BF of 2.7 is obtained by comparing prior odds to posterior odds, so this includes the other models under consideration. In general I am in favor of reporting many different results so the reader can have access to most of the information. You could add the complete tables to an online appendix.
In general, if you look at this table it is clear that no single effect received overwhelming evidence. So I think the classical-Bayesian conflict will remain. The .01 I was referring to was for your mixed-model analysis (or did I interpret that incorrectly?)
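For reference, that kind of inclusion Bayes factor can be computed from the anovaBF output in the first post roughly as follows (a sketch assuming equal prior probabilities for all models, with the Subject-only denominator added at BF = 1; the exact value JASP reports may differ depending on which models it includes):
# Change from prior to posterior odds for models containing MemoryCue:Source
bfs     = c(extractBF(FR_bf)$bf, 1)                 # append the Subject-only null
models  = c(rownames(extractBF(FR_bf)), "Subject only")
has_int = grepl("MemoryCue:Source", models, fixed = TRUE)
post    = bfs / sum(bfs)                            # posterior model probabilities
prior   = rep(1 / length(bfs), length(bfs))         # equal prior probabilities
(sum(post[has_int]) / sum(post[!has_int])) /
  (sum(prior[has_int]) / sum(prior[!has_int]))      # inclusion Bayes factor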
Cheers,
E.J.
Hey E.J.
Ok, I see. You are right about the .01 if you were referring to the mixed-model analysis. So I will be transparent about the analysis and report all models in a table in the appendix.
Thanks for your help.
Best,
Ivan