# Prior in Bayesian repeated measures ANOVA

I have a two-by-two-by-two factorial design in which I tested whether there was an interaction. In the 'standard' rmANOVA I observed no evidence for a reliable interaction, and this was confirmed by a Bayesian analysis showing that the two-main-effects model was the best model. Adding the interaction resulted in a BF10 of 0.41, so I concluded that there was only weak evidence against a reliable interaction (2.44). This inspired a replication, where I again observed no interaction and a BF10 of 0.281. This experiment thus provided stronger evidence against an interaction, but still not overly convincing (3.56). I was wondering whether, and if so how, it is possible to include Exp 1 as a prior in the analysis of Exp 2?

Thanks for the help.

Cheers,

Dirk van Moorselaar

• Hi Dirk,

There are two ways to do this. First, there is the Verhagen & Wagenmakers method, where you "simply" use the posterior from the first experiment as a prior for the second experiment. Unfortunately, the updating and specification processes are non-trivial for ANOVA models. Second, you can use the Ly method and obtain a BF by adding the data together. This will yield the same result as updating one batch at a time. The Ly method does assume that the data are exchangeable, so the replication is as exact as can be. If you search for "replication Bayes factor" on my website you should find the relevant papers.

Cheers,
E.J.

• Hi Eric-Jan,

Thank you for the very fast reply. This means I might be able to resubmit before Christmas. I read your paper, and this is where I got a bit confused. These are the Bayes factors that I obtained. Note that the second number always denotes how much evidence there is against the interaction model relative to the two-main-effects model:

Exp 1: BF10 = 0.41 -> 2.44
Exp 2: BF10 = 0.281 -> 3.56
Exp 1+2: BF10 = 0.194 -> 5.15

If I understand your paper correctly, I can obtain the Bayes factor of Experiment 2 with Experiment 1 included as a prior as follows: 5.15 / 2.44 = 2.11.
So if I did everything correctly, this means that the Bayes factor becomes smaller when Exp 1 is included as a prior than it is without one. Maybe I am misunderstanding something, but this feels a bit counterintuitive to me, because Exp 1, although not very reliable, already demonstrated some evidence against the interaction.
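For reference, the arithmetic above can be sketched in a few lines (a minimal Python sketch using the BF10 values quoted in this thread; the variable names are illustrative):

```python
# Replication Bayes factor via transitivity:
# BF(Exp 2 given Exp 1) = BF(Exp 1 + Exp 2 combined) / BF(Exp 1).
bf10_exp1 = 0.41       # Exp 1 alone (interaction vs. two-main-effects model)
bf10_combined = 0.194  # Exp 1 + Exp 2 pooled

# Evidence *against* the interaction is the reciprocal: BF01 = 1 / BF10.
bf01_exp1 = 1 / bf10_exp1          # ~2.44
bf01_combined = 1 / bf10_combined  # ~5.15

# Evidence contributed by Exp 2 after updating on Exp 1:
bf01_rep = bf01_combined / bf01_exp1
print(round(bf01_rep, 2))  # ~2.11
```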

Thanks again,

Dirk

• Hi Dirk,

Exp 1 provided some evidence against the interaction; consequently, in the model that includes the interaction, the corresponding posterior distribution will have more mass near zero than the prior did. In other words, the interaction --if it exists-- is now known to be relatively small. Effectively, after seeing the data from Exp 1, the interaction model now makes predictions that are relatively similar to the two-main-effects model. When models start to make similar predictions, the evidence decreases.

Cheers,
E.J.

• Hi Erik,

Thanks, this is very clear. Enjoy the Christmas break!

Cheers,

Dirk

• Hello everyone,

We'd like to use the results of the repeated measures ANOVA of one block in the prior of the repeated measures ANOVA of another block. The second block is the same as the first one, but it is based on a different set of stimuli: same participants, same dependent variables, same protocol.

JASP is already really helpful for the analysis of our data, especially because it provides Bayesian versions of commonly used frequentist tools (like the ANOVA) and is therefore easier to introduce to our 'p-value fan' readers in Human Interface Interaction. In our quantitative study, we would love to exploit the Bayesian framework at its best by injecting a bit of knowledge from block 1 into the analysis of block 2. After some intensive searching, we found no practical way to parameterize the second ANOVA to take this a priori knowledge into account.

It seems that it requires too much Bayesian skill, and these tools are not yet implemented in JASP. So is it really a dead end for an informative prior?

Thank you,

Olivier

• Hi Olivier,

There actually exists a workaround, based on transitivity. See https://psyarxiv.com/u8m2s/

Cheers,

E.J.

It's a bit off-topic here, but regarding the challenge of publishing Bayesian stats in a rather frequentist community, I read about a smooth way to report a Bayesian analysis as an "expansion" of traditional stats reports. The p-value is reported after an ANOVA, say, and right under it lies the Bayesian approach. The ensuing discussion is then based on the latter analysis...

• After reading the paper and its examples (which are very clear and informative), I think my original question was a bit misleading.

Our purpose is less "to quantify the evidence from the data of a direct replication attempt given data already acquired from an original study" than to accumulate the results of two similar experimental blocks to produce a more refined/robust conclusion about the measured effect. In my non-specialist view, these two goals appear to be different.

Another way to see my question is that I'm looking for an alternative (if any) to a mixed model for repeated measures, or to an ANOVA on the values of each condition averaged over the two blocks.

Anyway, I do thank you and your colleagues for the pedagogy you spread about Bayesian stats; it is really helpful to me. My question here seems too specific (and my statistics background too messy) for a definitive answer on a forum.

• edited October 2021

Hi. I am a cognitive neuroscientist and we mostly study ERPs. Effect sizes are usually medium to low in our line of research, so the default scales for the Cauchy prior distribution (0.707 in t-tests, 0.5 in RM ANOVAs) seem high for our data. In t-tests, the Cauchy scale is directly interpretable in terms of Cohen's d effect sizes (i.e., a 0.707 scale means a 50% prior probability for effect sizes between -0.707 and 0.707).

However, I can hardly find information on how to interpret the scale of the (fixed effects) prior in RM ANOVAs. I understand this is a multivariate prior that encompasses all the factors included in the model, so this question does not have an easy answer. In any case, as very basic, initial information: is the 0.5 (fixed effects) scale in the RM ANOVA equivalent (or approximately so) to the 0.707 scale in t-tests (i.e., relatively high probabilities for relatively high effect sizes)? Which scale value(s) would be reasonable if one expects medium or low effect sizes? I am running some tests using a 0.15 scale (fixed effects) and the results seem reasonable and in accordance with traditional, frequentist ANOVAs; but is this too low?
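As an aside, the "50% within the scale" interpretation holds for any scale value, because for a Cauchy(0, r) distribution exactly half of the mass lies in (-r, r). A minimal sketch (the function name is illustrative, and this concerns the prior on a single standardized effect, not the full multivariate ANOVA prior):

```python
from math import atan, pi

def cauchy_central_mass(scale, half_width):
    """P(-half_width < X < half_width) for X ~ Cauchy(0, scale),
    computed from the Cauchy CDF, F(x) = 1/2 + atan(x / scale) / pi."""
    return 2 * atan(half_width / scale) / pi

# For any scale r, exactly 50% of the prior mass falls in (-r, r):
print(cauchy_central_mass(0.707, 0.707))  # 0.5
print(cauchy_central_mass(0.15, 0.15))    # 0.5

# The heavy Cauchy tails still reserve noticeable mass for larger effects;
# e.g., prior mass on |effect| < 0.5 under a 0.15 scale:
print(cauchy_central_mass(0.15, 0.5))
```

So a 0.15 scale amounts to saying "50% prior probability that the standardized effect is smaller than 0.15 in absolute value", which is a strong commitment to small effects.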

Thank you very much in advance.

• Dear LuCA,

Sorry for the tardy response. Indeed, 0.5 in the ANOVA maps onto 0.707 in the t-test, if I recall correctly. You can try this out by taking t-test data and analyzing them with an ANOVA. The problem with prior distributions that are overly wide is sometimes exaggerated, imo (see https://www.bayesianspectacles.org/concerns-about-the-default-cauchy-are-often-exaggerated-a-demonstration-with-jasp-0-12/). Of course you can lower the scale parameter, but this makes the predictions from H0 and H1 very similar (and hence the outcome not very diagnostic). I personally think that if you want to make the scale as low as 0.15, you probably also have sufficient knowledge to have the prior under H1 not be centered on zero (see for instance https://www.tandfonline.com/doi/full/10.1080/00031305.2018.1562983). But the specification of informed prior distributions becomes difficult when the model is more complex. What should still be relatively straightforward is to impose order restrictions: keep the scale at its default value, but include the knowledge that the effect is in a particular direction. We have not worked this out and included it in JASP, but it is on our radar.
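To see why a very narrow prior makes the outcome undiagnostic, here is a toy conjugate-normal sketch (an assumption for illustration only; this is not JASP's actual Cauchy ANOVA prior): as the prior scale under H1 shrinks, H1's predictions converge to H0's and the Bayes factor is pulled toward 1.

```python
from math import exp, sqrt, pi

def normal_pdf(x, var):
    """Density of N(0, var) evaluated at x."""
    return exp(-x * x / (2 * var)) / sqrt(2 * pi * var)

def bf01_toy(ybar, n, sigma=1.0, prior_sd=0.707):
    """Toy Bayes factor for H0: theta = 0 vs H1: theta ~ N(0, prior_sd^2),
    with a sample mean ybar ~ N(theta, sigma^2 / n).
    Under H1 the marginal of ybar is N(0, prior_sd^2 + sigma^2 / n)."""
    var0 = sigma ** 2 / n
    return normal_pdf(ybar, var0) / normal_pdf(ybar, prior_sd ** 2 + var0)

# Same data, shrinking prior scale: the evidence for H0 washes out,
# because H0 and H1 start predicting nearly the same data.
for s in (0.707, 0.15, 0.01):
    print(s, round(bf01_toy(ybar=0.05, n=50, prior_sd=s), 3))
```

With the wide prior the data favor H0 clearly; with a very narrow prior the two hypotheses are nearly indistinguishable and the Bayes factor hovers around 1.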

Cheers,

E.J.