
RM ANOVA - Post hoc Bonferroni comparison?

Hi, I have a question about frequentist RM ANOVA:

When you run post hoc Bonferroni comparisons (which give a t statistic), what kind of test is being run? I ask because I ran Bonferroni-corrected paired-samples t-tests in SPSS and it gave me very different results: many more p-values reached significance there. I'm very confused.

Thank you

Comments

  • Hi JLeborgne,


    In essence, the analysis is indeed running paired samples t-tests. We use the emmeans package for this in R. Are you assuming equal variances or not? Maybe that is causing the different results. JASP offers both options (equal or unequal variances).

    If you want, you can send me your .jasp file at johnnydoorn at gmail dot com, and I can offer some more explanation. Are you also running post hoc tests in SPSS, or did you run separate t-tests and correct them by hand with the Bonferroni method?


    Cheers

    Johnny
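To illustrate what "Bonferroni-corrected paired samples t-tests" amount to, here is a minimal sketch in Python with made-up data (an editor's illustration, not JASP's or emmeans' actual code): each pair of conditions gets an ordinary paired t-test, and each p-value is multiplied by the number of comparisons (capped at 1).

```python
# Sketch: Bonferroni-corrected paired-samples t-tests (hypothetical data).
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical within-subjects data: 3 conditions, 10 subjects.
data = {
    "c1": rng.normal(0.0, 1.0, 10),
    "c2": rng.normal(0.5, 1.0, 10),
    "c3": rng.normal(1.0, 1.0, 10),
}

pairs = list(combinations(data, 2))
m = len(pairs)  # number of comparisons for the Bonferroni correction
for a, b in pairs:
    t, p = stats.ttest_rel(data[a], data[b])  # paired t-test for this pair
    p_bonf = min(1.0, p * m)  # Bonferroni: multiply by m, cap at 1
    print(f"{a} vs {b}: t = {t:.3f}, p = {p:.4f}, p_bonf = {p_bonf:.4f}")
```

With this correction a Bonferroni p-value can never be smaller than its uncorrected counterpart, which is why the reversed pattern reported below points to a different error term, not the correction itself.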

  • I'm having the same issue. JASP's post hoc Bonferroni is giving me a smaller corrected p-value than the uncorrected one. Also, I don't see an option for selecting either equal or unequal variances; I assume it is equal by default since it is an RM ANOVA?

  • Hi @Yasra ,

    I would expect the Bonferroni p-values to be larger than the uncorrected ones. Do you have a data set or .jasp file where the reverse is the case that you could share with me? That way I can get to the bottom of this.

    As for the equal/unequal error terms: the default behavior is to use equal error terms (which follows from the RM ANOVA model). We added the option to use the non-pooled error term because it was an often-requested feature, and it matches the behavior of SPSS in some situations.

    Kind regards

    Johnny

  • Hi @Yasra ,

    Responding here for visibility.

    The results you report are based on post hoc tests with pooled error terms. This affects all post hoc tests, and can lead to higher or lower p-values compared to using the unpooled error terms. Using unpooled error terms is essentially running paired samples t-tests on the data. If you run the post hoc tests with unpooled error terms, you will see that all Bonferroni p-values are higher than the p-values of the individual t-tests.

    In short, your Bonferroni p-values are based on a different model than your t-test p-values, so it is possible that some Bonferroni p-values are actually lower than the t-test p-values. Does this clarify your issue?

    Kind regards

    Johnny
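The pooled/unpooled distinction above can be sketched numerically. In this editor's illustration with made-up data (not JASP internals), the pooled comparison uses the RM ANOVA error mean square with df = (n-1)(k-1) for every pair, while the unpooled comparison is an ordinary paired t-test with df = n-1 and a pair-specific error term, so the two p-values generally differ.

```python
# Sketch: pooled (RM ANOVA error term) vs. unpooled (paired t-test)
# pairwise comparison, on hypothetical data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, k = 12, 3                       # hypothetical: 12 subjects, 3 conditions
subj = rng.normal(0, 1, (n, 1))    # subject baseline differences
Y = subj + rng.normal(0, 1, (n, k)) + np.array([0.0, 0.4, 0.8])

# RM ANOVA error MS: residual after removing subject and condition effects.
grand = Y.mean()
subj_means = Y.mean(axis=1, keepdims=True)
cond_means = Y.mean(axis=0, keepdims=True)
resid = Y - subj_means - cond_means + grand
ms_error = (resid ** 2).sum() / ((n - 1) * (k - 1))

a, b = 0, 1  # compare condition 0 with condition 1
# Pooled: one shared error term, df = (n-1)(k-1).
t_pooled = (Y[:, a].mean() - Y[:, b].mean()) / np.sqrt(2 * ms_error / n)
p_pooled = 2 * stats.t.sf(abs(t_pooled), (n - 1) * (k - 1))

# Unpooled: ordinary paired t-test, df = n-1, pair-specific error term.
t_paired, p_paired = stats.ttest_rel(Y[:, a], Y[:, b])

print(f"pooled:   t = {t_pooled:.3f}, p = {p_pooled:.4f}")
print(f"unpooled: t = {t_paired:.3f}, p = {p_paired:.4f}")
```

Because the denominators (and degrees of freedom) differ, a Bonferroni-corrected pooled p-value can end up either above or below the uncorrected paired t-test p-value, matching the explanation above.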

  • Yes, got it.

    Thank you
