Sequential Bayes Factor Design, Updating Priors and rmANOVA
Dear all,
I am currently conducting a study that uses Bayes factors as the inference criterion. For my first hypothesis I am running an rmANOVA with a 2x2x4 design, and for my second hypothesis I am running a Bayesian correlation analysis. As someone who is relatively new to Bayesian statistics, I've run into some uncertainties that I'd like to resolve.
- To assess the risk of generating misleading evidence in my chosen sequential Bayes factor (SBF) design, I conducted a Bayes Factor Design Analysis (BFDA) with the BFDA package for the correlation hypothesis. However, I ran into difficulties adapting this approach to my first hypothesis. While I am aware that I could, in principle, modify and extend the package for ANOVAs, I haven't yet found an appropriate method for my specific 2x2x4 design. I've seen that previous studies have used BFDA for a 2x3 ANOVA (https://osf.io/zcygh/) under the guidance of Angelika Stefan, although I'm unsure whether the same approach applies to a 3-way rmANOVA. I've also seen suggestions on the forum that a power analysis with an alpha level of 0.005 might be a simpler solution. Could you give me some guidance on which approach is more appropriate in this context?
- As I'm collecting my data in two batches, I'm considering whether to use the posteriors from the first batch as priors for the second batch. I've come across a couple of papers on replication approaches that suggest two such strategies (Verhagen & Wagenmakers, 2014; Ly et al., 2019). Given that it's the same study, should I consider using one of these strategies? Or is it inappropriate to apply them in the context of a sequential analysis?
Your insights in clarifying these issues would be greatly appreciated.
Kind regards,
Maria
Comments
I've contacted our expert!
E.J.
Hi Maria,
To answer your questions:
(1) In principle, the mechanism of BFDA stays the same no matter which test you use it for, so yes, you can also use BFDA for an RM ANOVA. To run a BFDA, you need a way to generate data (i.e., a simulation function that produces data of the kind you would analyze with your Bayesian hypothesis test) and a Bayesian hypothesis testing procedure to apply to these generated data (e.g., your Bayesian RM ANOVA). The BFDA package only covers a few hypothesis tests, and unfortunately RM ANOVA is not one of them (as you already discovered); it also doesn't let you simply extend the existing analyses, e.g., by adding covariates. Basically, what you would need to do is write the simulation code yourself, which sounds more daunting than it actually is! There are essentially only two steps involved: first, generate data with a simulation function (e.g., using the Superpower package for RM ANOVA); second, apply your Bayesian hypothesis test (e.g., from the BayesFactor package) to each generated dataset (sequentially, if you like) and record the results. A rough sketch of such a loop is given below.
As to which design analysis is more appropriate, I would always say the one that looks at the statistic you are actually interested in: it sounds like you want to run a Bayesian analysis, so in my view the information you get from a frequentist power analysis (how often do you get significant p-values under certain conditions?) is not really what you are after. Of course, nobody is stopping you from running one, though.
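Just to make this concrete, here is a minimal sketch of what such a homemade sequential BFDA loop could look like in R. It is only an illustration, not the BFDA package's API: a simplified 2x2 within-subjects design stands in for your 2x2x4 design, the data are generated directly with MASS::mvrnorm instead of Superpower to keep the example self-contained, and all means, correlations, sample sizes, and evidence thresholds are placeholder assumptions you would replace with your own.

```r
# Minimal sketch of a homemade sequential BFDA for a repeated-measures design.
# Illustrative assumptions: a 2x2 within-subjects design stands in for the full
# 2x2x4 design, data are generated with MASS::mvrnorm (rather than Superpower),
# and all means, correlations, and thresholds are placeholders.
library(MASS)
library(BayesFactor)

simulate_dataset <- function(n) {
  mu    <- c(0, 0, 0, 0.5)        # assumed cell means (effect in one cell only)
  sigma <- diag(4) * 0.5 + 0.5    # compound symmetry: sd = 1, r = .5 across cells
  y <- mvrnorm(n, mu = mu, Sigma = sigma)
  data.frame(
    subj = rep(seq_len(n), 4),
    id   = factor(rep(seq_len(n), 4)),
    A    = factor(rep(c("a1", "a1", "a2", "a2"), each = n)),
    B    = factor(rep(c("b1", "b2", "b1", "b2"), each = n)),
    y    = as.vector(y)
  )
}

run_one_sequential_study <- function(n_min = 20, n_max = 100, step = 10,
                                     bf_bound = 10) {
  full_data <- simulate_dataset(n_max)   # simulate the maximum sample once
  for (n in seq(n_min, n_max, by = step)) {
    dat <- droplevels(full_data[full_data$subj <= n, ])   # first n participants
    bf  <- anovaBF(y ~ A * B + id, data = dat,
                   whichRandom = "id", progress = FALSE)
    # Monitored BF here: full model vs. the null (subject-only) model; with the
    # default settings the full model is the last numerator model, but print bf
    # once to confirm the ordering for your design.
    bf_full <- extractBF(bf)$bf[length(bf)]
    if (bf_full > bf_bound || bf_full < 1 / bf_bound) break  # evidence boundary hit
  }
  c(n = n, bf = bf_full)
}

# Repeat many times to see where studies tend to stop and how often the evidence
# is misleading under the simulated effect (here: how often the H0 bound is hit).
sims <- replicate(200, run_one_sequential_study())
summary(sims["n", ])
mean(sims["bf", ] < 1 / 10)
```

The same structure carries over to the 2x2x4 case; only the simulation function and the model formula change (and the runtime grows, so you may want to parallelize the replications).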
(2) This is an interesting question, and I think it might be worthwhile to look at a few Bayes factors here. One is, as you said, the final Bayes factor you would obtain if you view your design as a sequential design with two stages; this Bayes factor tells you what the overall evidence for an effect is. In addition, a replication Bayes factor can tell you whether the effect size in your second batch is similar to the effect size in your first batch, which is a different, but still interesting, research question. You can also look at the Bayes factor from your second batch alone, using default priors, to get an impression of the evidence for the effect in the second batch of data by itself (Ly et al. did something similar in their replication BF paper). In general, I wouldn't shy away from reporting multiple Bayes factors, because I think each of them contributes some interesting insights, so it shouldn't be an either-or decision.
One final remark on the sequential BF design: you didn't really ask this (so sorry if I'm explaining things you already thought about), but in my experience something people often don't consider when running a sequential BFDA for factorial designs is that there are multiple Bayes factors you could use as a stopping criterion. For example, there is the BF comparing the null model to the full model, the BF comparing the best model to the next-best model, and the BFs for each of the factor effects (see JASP's result tables, e.g., here: https://jasp-stats.org/2022/07/29/bayesian-repeated-measures-anova-an-updated-methodology-implemented-in-jasp/). These BFs do not necessarily develop at the same speed or in the same direction as data accumulate. This means that depending on which BF you pick and what effects there are in the population, you will end data collection earlier or later, and end up with whatever amount of evidence has accumulated for the other Bayes factors that you were not using as a stopping criterion. If you set up a BFDA for an RM ANOVA, it is therefore important to decide in advance which BF you would like to monitor, and to run your BFDA and sequential design accordingly.
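To illustrate, here is how those different candidate Bayes factors could be pulled out of a single BayesFactor::anovaBF fit, continuing the hypothetical 2x2 example from the sketch above. The row indices refer to the default model ordering, so print the fitted object first to check which row is which.

```r
library(BayesFactor)

# 'bf' is the anovaBF object from the sketch above (hypothetical example).
tab <- extractBF(bf)   # one row per numerator model, each tested against the null

# (a) Full model vs. the null (subject-only) model
bf_full_vs_null <- tab$bf[nrow(tab)]

# (b) Best model vs. next-best model; because all models share the same
#     denominator, the ratio of two entries is the BF between those two models.
#     (If the null happens to be the best model, use 1 / max(tab$bf) instead.)
bf_sorted       <- sort(tab$bf, decreasing = TRUE)
bf_best_vs_next <- bf_sorted[1] / bf_sorted[2]

# (c) A single effect via matched models, e.g. the A:B interaction:
#     (A + B + A:B + id) vs. (A + B + id) -- rows 4 and 3 in the default output.
bf_interaction  <- tab$bf[4] / tab$bf[3]
```

Because these numbers can point in different directions at any given sample size, a simulation like this makes it easy to see why the choice of monitored BF matters for when you would stop.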
Cheers,
Angelika
Hi Angelika,
Thank you so much for providing such detailed answers. Your explanations have cleared up a lot of my uncertainties about the procedure. As for my first question, I'm feeling much more confident about it now, so I plan to go ahead, carry out the simulation, and see how it turns out. I will also keep in mind which Bayes factor I am monitoring in the BFDA; in my particular case, I believe the factor effects will play the most important role.
Regarding the second matter, I agree that reporting multiple Bayes factors won't have any negative impact. It's always better to provide a comprehensive view of the analysis.
Before I conclude, I have one last question. Concerning the RM ANOVA post-hoc tests, I'm a bit uncertain about how to address an interaction effect. Based on what I've read in the forum, it seems that I might need to conduct these tests manually, by creating supplementary columns and performing pairwise t-tests. What is your take or suggestion on this?
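To make the question concrete, this is roughly what I understand that manual approach to look like (the wide-format column names a1b1, a1b2, a2b1, a2b2 and the 2x2 sub-design are just made up for illustration):

```r
library(BayesFactor)

# 'wide' is assumed to hold one row per participant, with the four cell means
# in columns a1b1, a1b2, a2b1, a2b2 (hypothetical names).

# 2x2 interaction as a one-sample Bayesian t-test on the difference of differences
interaction_contrast <- (wide$a1b1 - wide$a1b2) - (wide$a2b1 - wide$a2b2)
ttestBF(interaction_contrast, mu = 0)

# A simple effect (e.g., B within a1) as a paired Bayesian t-test
ttestBF(x = wide$a1b1, y = wide$a1b2, paired = TRUE)
```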
Thank you once again for your invaluable assistance!
Best regards,
María