
RM ANOVA: different results in JASP and R

Dear JASP team,

I am running a three-way repeated-measures ANOVA in R using:

anova_test(data, dv = A, wid = subject, within = c(B, C, D))
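
(For reference, a self-contained version of that call might look like the sketch below. This assumes the rstatix package, since anova_test(), get_summary_stats(), and pairwise_t_test() come from rstatix, and a long-format data frame with one row per subject-by-condition cell; the column names simply mirror the call above.)

library(rstatix)          # assumed: anova_test() is from rstatix

res <- anova_test(
  data = data,            # long-format data: one row per subject x condition
  dv = A,                 # dependent variable column
  wid = subject,          # subject identifier
  within = c(B, C, D)     # repeated-measures factors
)
get_anova_table(res)      # ANOVA table, with sphericity correction where applicable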

This call gives slightly different results for some factors than the same RM ANOVA run in JASP, e.g.

Factor A: F = -1.02e-12 (R) vs. F = 2.878e-05 (JASP)

but for other factors it gives the same result, e.g.

Factor C: F = 1.7636e+01 (R) vs. F = 17.636 (JASP)

Why is that? Some of the interaction F values are exactly the same, whereas others differ.

Then I wanted to follow up on a significant A x C interaction by running the following in R:

data %>%
  group_by(A, C, subject) %>%
  get_summary_stats(values, type = "mean_ci") %>%
  group_by(A) %>%
  pairwise_t_test(mean ~ C, paired = TRUE)


This gives the same output as averaging across the other factors and running a paired-samples t-test in JASP. However, it does not match what JASP's default post-hoc option gives as a follow-up.


Why these differences?


Thanks!

Comments

  • Factor A: F = -1.02e-12 (R) vs. F = 2.878e-05 (JASP)

    Hmm, a negative F value isn't even possible, since F is a ratio of mean squares and mean squares cannot be negative...

  • Hi there,

    I would like to investigate this issue further, would you mind sharing the data here?

    Best,

    Jonas (JASP-team)

  • Hi Jonas and patc3, thanks for your comments; I had not even noticed the negative F value.

    As I was prepping the files to share, I solved the issue.

    It seems like there is something wrong with this particular file when it is written from MATLAB with:

    writetable(file,'name.csv')

    If I just open the file in Excel and re-save it as a .csv, the R code and JASP now give the same results.

    Any ideas as to why this may have happened, or suggestions on how to avoid it? It's very odd, and it does not happen with any of the other files I am saving in exactly the same way. (A quick check of the file is sketched at the end of this comment.)


    Thanks very much
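
    As a quick check (only a sketch, assuming the issue was that a column in the MATLAB-written CSV was not read as numeric, e.g. because of a byte-order mark or stray formatting; "name.csv" is a placeholder):

    dat <- read.csv("name.csv", fileEncoding = "UTF-8-BOM")  # strips a BOM if one is present
    str(dat)            # the dependent variable should be numeric, not character
    sapply(dat, class)  # column classes at a glance
    summary(dat)        # ranges and NA counts help spot malformed values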

  • Sorry, the above fixes the ANOVA results.

    However, for the post-hoc t-tests: how does JASP calculate the post-hoc tests for an interaction, compared with running a paired-samples t-test on the conditions of interest (averaging across the other factors)?

    Both approaches still give slightly different t-values (I am not sure whether this is due to rounding or to slightly different computations).

    I am still happy to share the data if it is needed.

    Thanks very much

  • Hi @albapy ,

    Sorry for the late reply. You can read about this behavior in this recent discussion: https://forum.cogsci.nl/discussion/comment/27509 and the linked blog post: https://jasp-stats.org/2020/04/14/the-wonderful-world-of-marginal-means/

    Basically, the marginal means (which are estimated from the specified model) can differ from the observed group means. Since the follow-up analyses (contrasts/post-hoc tests) are based on these marginal means, the results can differ; a short sketch of the distinction is added at the end of this comment.

    Cheers

    Johnny
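
    To make that distinction concrete, here is a minimal sketch. It assumes the afex and emmeans packages, and the column names are placeholders (values as the outcome, subject as the id, and A, B, C as within-subject factors); it is not meant as the exact JASP implementation.

    library(afex)     # aov_ez() fits the full repeated-measures model
    library(emmeans)  # emmeans() gives model-based (estimated) marginal means

    # Full RM model, as specified in the ANOVA
    fit <- aov_ez(id = "subject", dv = "values", data = data,
                  within = c("A", "B", "C"))

    # Estimated marginal means for the A x C cells (averaging over B via the model);
    # the post-hoc contrasts described above are built on these
    emm <- emmeans(fit, ~ C | A)
    pairs(emm, adjust = "holm")

    # Averaging the raw data over B first and then running paired t-tests instead
    # compares observed cell means, which need not equal the estimated marginal
    # means above, hence the slightly different t-values.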

  • Hi Jonas.

    I found a similar problem between R and JASP. I simulated a balanced dataset, and the JASP results match the Type II sums of squares from the anova and car::Anova functions, even though JASP explicitly reports Type III sums of squares for the ANOVA.

    Example code:

    set.seed(4)

    df <- data.frame(Sex = rep(c("F", "M"), each = 50),
                     Group = rep(c("Exp", "Ctrl", "Exp", "Ctrl"), each = 25),
                     Anxiety = c(rnorm(25, 20, 10), rnorm(75, 10, 10)))

    df$Sex <- as.factor(df$Sex)
    df$Group <- as.factor(df$Group)

    mod1 <- lm(Anxiety ~ Group * Sex, df)
    anova(mod1)                  # Type I (sequential) SS

    library(car)
    Anova(mod1)                  # Type II SS; these results match anova()
    Anova(mod1, type = 3)        # Type III SS

    write.csv(df, "Example.csv") # to run the ANOVA in JASP


    Thank you

    Alessio

  • Hi @AlessioF ,

    When you have a balanced design, as in your simulated data, the different types of sums of squares will yield identical results.

    When using Anova from the car package, you also need to make sure that the same contrasts are being used:

    contrasts(df$Group) <- contr.sum(levels(df$Group))
    contrasts(df$Sex) <- contr.sum(levels(df$Sex))

    This makes sure that both types of sums of squares use the same contrast coding when fitting the linear model, and it yields identical results if you then use lm and Anova. If you use afex::aov_ez, this is done automatically, and there you also obtain the same results for type 2 and type 3 (just like in JASP); a short afex sketch is added at the end of this comment.

    Kind regards

    Johnny
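
    A minimal sketch of that afex route for the simulated data above (afex is assumed; the id column is added here only because aov_ez() requires a participant identifier, which the simulated data frame does not have):

    library(afex)

    df$id <- seq_len(nrow(df))                     # one row per simulated participant
    aov_ez(id = "id", dv = "Anxiety", data = df,
           between = c("Group", "Sex"), type = 3)  # sum-to-zero contrasts are set automatically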
