
Performing a Bayesian Repeated Measures ANCOVA

Dear forum,


I have a few questions regarding Bayesian Repeated Measures ANCOVA. To start with some context, we are comparing three measurement conditions/methods:

- Participants performed three trials (within-participant, repeated measures)

- The same data were gathered in each trial with the same instrumentation (5 outcome variables)

- The trials were supposed to be mechanically identical, but they turned out not to be (hence the confounder)

- We are mainly interested in evidence for H0 (i.e., that the conditions are equivalent)


The first thing I looked into was using Intraclass Correlations to examine the similarity or lack thereof between the different trials. However, the analysis would be more sensible if we also included the effect of the confounding factor (mainly from a theoretical perspective).
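For reference, what I had in mind for the ICC step looks roughly like this (a sketch in Python with the pingouin package and made-up numbers, just to show the setup; participants are the "targets" and the three trials are the "raters"):

```python
import pandas as pd
import pingouin as pg

# Made-up long-format data: one row per participant x trial, for one outcome variable.
long = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "trial": ["cond1", "cond2", "cond3"] * 4,
    "outcome": [10.2, 10.4, 10.1, 11.5, 11.1, 11.6,
                9.8, 9.9, 9.7, 10.9, 11.2, 10.8],
})

# Agreement between the three trials, treating trials as "raters"
# and participants as "targets".
icc = pg.intraclass_corr(data=long, targets="participant",
                         raters="trial", ratings="outcome")
print(icc[["Type", "ICC", "CI95%"]])
```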


Previous work in the literature typically used a regular RM-ANOVA and concluded "p > 0.05, so the results between conditions are identical", which doesn't seem like a conclusion you can actually draw with frequentist statistics to me. So here we are, and now I'm looking into a Bayesian RM-ANCOVA with JASP (which looks great, by the way).


Ideally I would compare the three trials in a single RM-ANCOVA and do post-hoc comparisons. However, the confounding variable was different for every trial, and the data in JASP are in wide format. So instead I ran three pairwise RM-ANCOVAs per outcome variable (cond1 vs cond2, cond1 vs cond3, cond2 vs cond3). Since the data are in wide format, I figured I could compute a proxy variable for the confounder in each comparison (e.g. for cond1 vs cond2): confounder1_2 = confounder1 - confounder2.
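To make that proxy-variable step concrete, this is essentially what I'm computing (sketched here in Python/pandas with made-up numbers; in JASP I do the same thing with a computed column on the wide-format sheet):

```python
import pandas as pd

# Made-up wide-format data: one row per participant, one confounder column per trial.
df = pd.DataFrame({
    "participant": [1, 2, 3, 4],
    "confounder_cond1": [0.52, 0.47, 0.55, 0.50],
    "confounder_cond2": [0.49, 0.51, 0.53, 0.48],
    "confounder_cond3": [0.50, 0.46, 0.56, 0.51],
})

# One difference-score covariate per pairwise comparison,
# e.g. confounder1_2 = confounder1 - confounder2.
pairs = [("cond1", "cond2"), ("cond1", "cond3"), ("cond2", "cond3")]
for a, b in pairs:
    df[f"confounder_{a}_{b}"] = df[f"confounder_{a}"] - df[f"confounder_{b}"]

print(df)
```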

Now I have a single confounder column for each comparison, which appears to work. Does this sound at all feasible to you? Am I violating some statistical rules that will lead to the end of humanity? Are there corrections I could/should make for multiple testing?


Next, the BF10 does not appear to change very much between the condition model and the condition + confounder model. However, it does become lower, which indicates to me that after correcting for the confounder the null hypothesis appears more likely, which is what I would expect. Should I do a model comparison? And if I do, should I compare the BF10s or the BF01s, seeing as I'm interested in a null result and the 'better' model would be the one with the lower BF10? That seemed a little odd to me.
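For reference, the relationships I have in mind here (assuming I understand Bayes factors correctly) are that BF01 is just the reciprocal of BF10, and that two models tested against the same null can be compared by dividing their BF10s:

```latex
\mathrm{BF}_{01} = \frac{1}{\mathrm{BF}_{10}},
\qquad
\mathrm{BF}_{\text{cond vs.\ cond+conf}}
  = \frac{\mathrm{BF}_{10}(\text{cond})}{\mathrm{BF}_{10}(\text{cond+conf})}
```

So, if that's right, ranking the models by lowest BF10 should give the same ordering as ranking them by highest BF01.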


Finally, if there is any reading material you could recommend, that would be great. I've been scouring the internet, but what I've found is either 99% formulas with little explanation or limited to simple t-tests.


Best,

Rick
