
# How to set up a Bayesian RM ANOVA?

edited May 2016

Hi,

This is a seemingly simple question, but I don't quite understand JASP's GUI. I want to run a simple repeated-measures ANOVA. I am working on a pre-registration and want to make sure that I know exactly how to do the analysis. I have simulated some data that I can use in JASP (script below).

Essentially, I have one outcome measure (`parameter`), one two-level within-subject factor (`reward`) and two factors that I'd like to add as "nuisance": `subj` and `item`. My question is whether there's a difference in `parameter` values between high and low `reward` (taking variation between `subj`ects and `item`s into account). I'd like to have the inclusion Bayes factor for `reward`.

The JASP Bayesian RM ANOVA GUI gives me three fields:

• "repeated measures cells": I feel like this is where `reward` should go but it won't let me add it.
• "between subject factors": I don't have any, right?
• "Covariates": I guess I put `subj` and `item` here and then mark them as "nuisance" in the "Model" dropdown menu below?

I don't get how to set my DV (`parameter`) and my within-subject factor (`reward`) properly.

Any help is appreciated!

• Florian

Script to generate data:

```
n <- 30  # number of subjects

data <- NULL

for (s in 1:n) {
  n.items <- round(runif(1, 20, 50))  # each subject sees 20-50 items

  hr <- rbinom(n.items, 1, .5)
  hr <- ifelse(hr == 1, "high", "low")
  par <- abs(rnorm(n.items, .3, .1))

  # data.frame() keeps parameter numeric (cbind() would coerce every
  # column to character) and sets the column names directly
  tmp <- data.frame(subj = s, item = 1:n.items, reward = hr, parameter = par)
  data <- rbind(data, tmp)
}

write.csv(data, "~/Desktop/simulated data.csv", row.names = FALSE)
```


JASP expects wide format instead of long format. Then simply fill out the Factors table (with the factor name and its levels) and fill the cells with the corresponding columns.


Hi,

thanks for your response! I played around with converting the data to wide format but I am not quite sure how to do exactly what I intended to do.

In the simulated data (see above), I have one parameter value for each item for each participant. Each item can be associated with either a high or a low reward. I want to know whether there is an effect of reward-level on the parameter value. If I generate `data` with the script above, I can then do:

```
library(tidyr)

foo <- aggregate(parameter ~ subj + reward, data, median)  # median across items
wide <- spread(foo, reward, parameter)
```

When I read that data into JASP, everything works nicely: I can set the high/low reward as the "repeated measures cells".

However, that way I lose all the information about the items, because I have now aggregated across them. If I don't aggregate, though, JASP throws an error because the number of rows is not the same (not every `subj` saw the same number of `item`s). So if I use `spread()` on `data` directly (`wide <- spread(data, reward, parameter)`) and read that into JASP, I can't work with it.

I see that JASP automatically includes `subj` in the model. Is there a way to also include `item` as a nuisance factor? (I know that not all items are equally difficult, and it would be nice to take that into account.)

Thanks!

• Florian
• edited June 2016

Hi Florian,

Edit: I just realized that you're talking about a Bayesian RM ANOVA, and not a traditional one. I suspect that everything below holds in both cases, but I may be wrong. @EJ?

If I understand your question correctly, what you want to do is not possible with a repeated measures ANOVA, neither in JASP nor in any other package. A RM ANOVA always has a single random effect, which can be either item or subject, and the levels of that random effect form the rows. (That is, your cells contain either averages across items or averages across subjects.) If you have two relevant random effects (as is often the case in psycholinguistics, for example, when working with items and subjects), the tradition is to run two RM ANOVAs, one with subject as the random effect (F1) and the other with item as the random effect (F2), and report both.

If you want to include more than one random effect, I think the best solution is to use a linear mixed-effects model. This is not possible in JASP, but it's fairly easy in R with the `lme4` package.
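A minimal sketch of such a model, assuming the column names from Florian's simulated data above (`subj`, `item`, `reward`, `parameter`):

```r
library(lme4)

d <- read.csv("~/Desktop/simulated data.csv")

# Fixed effect of reward; crossed random intercepts for subjects and items
m1 <- lmer(parameter ~ reward + (1 | subj) + (1 | item), data = d)
m0 <- lmer(parameter ~ 1      + (1 | subj) + (1 | item), data = d)

# Likelihood-ratio test for the reward effect (anova() refits with ML)
anova(m0, m1)
```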

I hope this helps!

Cheers!
Sebastiaan


The Bayesian "repeated measures ANOVA" in JASP is not really a "repeated measures ANOVA"; it is only called that so that people won't get confused. In the background it is using the BayesFactor package, which implements Bayesian linear mixed effects models, in which it is possible to have crossed random effects.

So, contrary to what Sebastiaan suggested, crossed random effects are perfectly possible. As for the details of how to do this in JASP, I'm not sure; someone more familiar with the interface would have to say.


@richarddmorey Thanks, that's very useful to know! I actually wanted to say hi in Granada after the methods session. But the discussion went on for so long, and I had to leave!

But I don't think it changes the story for JASP. JASP uses more-or-less the same interface for the Bayesian RM as for the traditional RM, and this requires the data to be in so-called wide format, in which each participant (or item) is on one row and different conditions are in different columns. So I don't see how you can do crossed random effects in JASP, even though the underlying R package is apparently able to do it.

@lvanderlinden made a video showing how to set up the data for a RM in JASP. It's in French, and covers a regular (non-Bayesian) RM, but it's very clear and may still be useful.

And for those who are not familiar with all this statistical jargon, here is my best understanding of a few terms (and how they are used in this context, because different people use them differently):

• A random effect is something that you vary, but you assume that each variation is a random sample of some underlying population. Participant and item are the best examples, because you assume that it shouldn't matter which participants and items you use.
• A fixed effect is something that you expect to have an effect. For example, experimental conditions.
• If all combinations of random effects occur, you call them crossed random effects. For example, if all participants see all items, then these are crossed random effects. Random effects are usually crossed, in other words.

Hi Richard,

thanks for the response. I ended up doing it in R using the BayesFactor package and then just made my own plots etc.
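In case it helps others, roughly what I did (a sketch only, using the column names from my simulated data above):

```r
library(BayesFactor)

d <- read.csv("~/Desktop/simulated data.csv")
d$subj   <- factor(d$subj)
d$item   <- factor(d$item)
d$reward <- factor(d$reward)

# Model with reward vs. null, with subj and item as crossed random effects
bf1 <- lmBF(parameter ~ reward + subj + item, data = d,
            whichRandom = c("subj", "item"))
bf0 <- lmBF(parameter ~ subj + item, data = d,
            whichRandom = c("subj", "item"))

bf1 / bf0  # Bayes factor for including reward
```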

• Florian

Hi everyone,

I have experimental data that I want to analyse using a Bayesian ANOVA. My design is repeated measures with two factors. Could someone please talk me through how to do this? I also noticed that there are new Bayesian extension commands for SPSS, but these appear to be restricted to between-group designs. Is there alternative user-friendly software that I can use to run the Bayesian RM analysis?

Tom


Hi Tom,

I'm currently working on a paper that explains how to do this, and we will post some videos too. These are not available right at the moment, so in the meantime I'd encourage you to check out this paper: http://www.ejwagenmakers.com/inpress/RouderEtAlinpressANOVAPM.pdf

The way the interface works is similar to the classical analysis in SPSS.
Cheers,
E.J.


Hi EJ,

Many thanks for your reply, and apologies for my late response. I have now started using JASP and it is excellent. I also find the paper very useful, thanks. I think JASP is much preferable to SPSS. I have run the Bayesian RM analysis in JASP, but I'm still stuck on how to interpret the output. Is there documentation available somewhere that explains this?

Tom


Below is my output:

Bayesian Repeated Measures ANOVA

Model Comparison - dependent

| Models | P(M) | P(M\|data) | BF M | BF 10 | % error |
|---|---|---|---|---|---|
| Null model (incl. subject) | 0.200 | 4.773e-9 | 1.909e-8 | 1.000 | |
| Condition | 0.200 | 4.341e-8 | 1.736e-7 | 9.095 | 1.178 |
| Setsize | 0.200 | 0.002 | 0.008 | 433982.994 | 1.130 |
| Condition + Setsize | 0.200 | 0.206 | 1.038 | 4.316e+7 | 1.528 |
| Condition + Setsize + Condition ✻ Setsize | 0.200 | 0.792 | 15.226 | 1.659e+8 | 0.860 |

Note. All models include subject.

Analysis of Effects - dependent

| Effects | P(incl) | P(incl\|data) | BF Inclusion |
|---|---|---|---|
| Condition | 0.600 | 0.998 | 321.19 |
| Setsize | 0.600 | 1.000 | 1.384e+7 |
| Condition ✻ Setsize | 0.200 | 0.792 | 15.23 |

Could you kindly advise what this output means? Please note that each of the two factors (condition and set size) has three levels. Thanks in advance.

Tom


Hi Tom,

Well, if you look at BF10, you see that every model has strong evidence in its favor compared to the model without any factors. The full model, which includes the interaction, is supported (versus the null) more than the model with only the two main effects. If you want the BF for the inclusion of the interaction, you can either divide the two BF10s, or add the main factors "as nuisance" to the null model. Dividing yields 1.659e8 / 4.316e7 = 3.84. This is some evidence, but it isn't overwhelming (!).

The effects option computes the factor inclusion probabilities by averaging across the models. You can see that you have good evidence for including the interaction (BF = 15). The difference between the 3.84 and the 15 arises because you are asking a different question. In the case of the 3.84, you are comparing the full model against a pretty good alternative (i.e., the two-main-effects model); in the case of the BF = 15, all models are considered, including the ones that happen not to do very well. I would simply report all of these results in a transparent manner.
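For concreteness, both numbers can be reproduced from the table values above (copied from Tom's JASP output; small rounding differences are expected):

```r
# (1) Interaction BF by comparing the full model against the
#     two-main-effects model: the ratio of their BF10 values.
bf_full  <- 1.659e8
bf_mains <- 4.316e7
bf_full / bf_mains  # about 3.84

# (2) Inclusion BF for the interaction from the Effects table:
#     posterior inclusion odds divided by prior inclusion odds.
p_prior <- 0.200
p_post  <- 0.792
(p_post / (1 - p_post)) / (p_prior / (1 - p_prior))  # about 15.2
```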

Cheers,
E.J.