# Unequal cell design mixed ANOVA in JASP vs. R

Hello,

I was wondering if someone could clarify for me what assumptions JASP makes for unequal cell designs in mixed-design ANOVAs. I have two repeated-measures factors (Congruency, DemandCue) and two between-subjects factors (Feedback, Experiment). In one experiment, I had N = 60; the next experiment doubled the sample size (N = 120), so Experiment is the factor with the unequal cell design.

I have been trying to recreate the results that JASP outputs with many different R packages. JASP, for instance, writes that the F statistic for DemandCue is F(1,176) = 26.286, p < 0.001. The F statistic for Congruency is F(1,176) = 147.102, p < 0.001.

After reading the data with `rawRTData <- read.csv('SC_ANOVA_RT.csv')` (CSV file linked at the bottom) and converting subject, DemandCue, Experiment, Congruency, and Feedback to factors, I have tried the following in R:

```r
SC_RT_runANOVA <- aov(RT ~ Feedback * Experiment * DemandCue * Congruency +
                        Error(subject / (Congruency * DemandCue)),
                      data = rawRTData)
summary(SC_RT_runANOVA)
```

AND, using the `lme4` package:

```r
library(lme4)
anova(lmer(RT ~ Feedback * Experiment * DemandCue * Congruency +
             (1 | subject) + (1 | DemandCue:subject) + (1 | Congruency:subject),
           data = rawRTData))
```

These two produce the same result: F(1,176) = 172.1329, p < 0.001 for Congruency and F(1,176) = 35.7272, p < 0.001 for DemandCue. I saw on the internet that maybe I had to specify contr.sum for the contrasts and type 3 sums of squares, but that did not change the output. I also know that the "Experiment" factor is at issue here: when I removed it from both the R code and JASP, I was able to reproduce the JASP output in R. I also converted the dummy coding of Experiment from 0/1 to E1/E2, and that changed the F stats to 171.9760 for Congruency and 35.7607 for DemandCue. So it seemed to me that there is some R behavior here I don't understand, but that this peculiarity may not fully explain the JASP/R difference.
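For illustration, here is the kind of order dependence I mean: R's default ANOVA tables use sequential (Type I) sums of squares, so in an unbalanced design the F statistic for a factor depends on where it appears in the formula. This is a minimal base-R sketch with simulated data and hypothetical factors A and B (not my actual RT data):

```r
set.seed(1)
# Simulated unbalanced two-factor design: cell counts are
# a1b1 = 15, a1b2 = 5, a2b1 = 10, a2b2 = 30, so A and B are non-orthogonal
d <- data.frame(
  A = factor(rep(c("a1", "a2"), times = c(20, 40))),
  B = factor(rep(c("b1", "b2", "b1", "b2"), times = c(15, 5, 10, 30)))
)
d$y <- rnorm(60) + 0.5 * (d$A == "a2") + 0.3 * (d$B == "b2")

# anova(lm()) reports sequential (Type I) tests:
# the F for B changes depending on whether B enters before or after A
f_B_first  <- anova(lm(y ~ B + A, data = d))["B", "F value"]
f_B_second <- anova(lm(y ~ A + B, data = d))["B", "F value"]
c(f_B_first, f_B_second)  # not equal in an unbalanced design
```

In a balanced design the two orderings would agree, which is why the discrepancy only showed up once Experiment made the cells unequal.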

I tried looking at the JASP R code, but it had so many dependencies and references to other parts of the codebase that I found it hard to follow. I also trust JASP more than this simple code, because I think you all have spent more time thinking about how best to apportion the variance than I have, but I would like to know how to reproduce the JASP results. What underlying assumption am I missing here? Are there actually different assumptions, or is this merely an R peculiarity I hadn't discovered (e.g., the dummy coding)?

(If you want to use the same long-form data that I mention here, here is a link: https://www.dropbox.com/s/gjc1czqzeir4s2c/SC_ANOVA_RT.csv?dl=0. The wide-form JASP data is here (the first four columns indicate RM1.1, RM1.2, RM2.1, RM2.2): https://www.dropbox.com/s/ag9mmqykp8wudpn/RT_wideform_both.csv?dl=0)

Thank you!

## Comments

Hi Chris,

JASP uses the `afex` package for (rm)AN(C)OVA. Among other things, `afex` (1) makes sure factors have effect (sum-to-zero) coding, and (2) uses type III sums of squares (whereas `summary.aov` reports sequential type I tests), both of which affect the estimates and significance tests in unbalanced designs. So `afex` should give the same results as JASP.

Cheers,

Mattan
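For completeness, here is a minimal `afex` sketch along those lines. It uses simulated stand-in data (since the real CSV is external) and keeps only Experiment as the between-subjects factor for brevity; the real call would add Feedback to `between` and use the actual data frame:

```r
library(afex)

set.seed(1)
# 12 simulated subjects: 4 in E1 and 8 in E2 (unbalanced between-subjects factor),
# each measured in a full 2 (Congruency) x 2 (DemandCue) within design
d <- data.frame(
  subject    = factor(rep(1:12, each = 4)),
  Congruency = factor(rep(c("cong", "incong"), times = 24)),
  DemandCue  = factor(rep(c("low", "high"), each = 2, times = 12)),
  Experiment = factor(rep(c("E1", "E2"), times = c(4 * 4, 8 * 4))),
  RT         = rnorm(48, mean = 500, sd = 50)
)

# aov_ez sets contr.sum on the factors and reports type III tests,
# which is what JASP does under the hood
fit <- aov_ez(
  id      = "subject",
  dv      = "RT",
  data    = d,
  within  = c("Congruency", "DemandCue"),
  between = "Experiment"
)
fit
```

With the real data, swapping `d` for `rawRTData` and `between = c("Feedback", "Experiment")` should reproduce the JASP F(1,176) values.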