I’m writing because there are some discrepancies between JASP’s output for certain contrasts and both my hand calculations and SuperANOVA (my hand calculations and SuperANOVA agree with each other).
First, here is a case where all three are in agreement: a set of Helmert contrasts. The weights for the first contrast are (1, -.5, -.5) and the weights for the second contrast are (0, 1, -1).
SuperANOVA (and my hand calculations):

|                     | df | MSE   | F-Value | P-Value |
|---------------------|----|-------|---------|---------|
| Cent vs (Cov & Sac) | 1  | 41.56 | 19.99   | .0042   |
| Cov vs. Sac         | 1  | 24.04 | 11.56   | .0145   |
JASP:

|                 | estimate | SE    | t-value | P-Value |             |
|-----------------|----------|-------|---------|---------|-------------|
| Cent - Cov, Sac | -2.632   | 0.589 | -4.471  | .004    | (F = 19.99) |
| Cov - Sac       | -1.733   | 0.510 | -3.400  | .014    | (F = 11.56) |
So, the p-values match, and the squared t-values match the F-values.
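To make the comparison concrete, here is a small sanity check (my own hypothetical helper, not part of either package): for a single-df contrast, F equals the squared t statistic, where t = estimate / SE. Small mismatches against the printed values come from rounding in the output.

```python
# For a single-df contrast, F = t^2 with t = estimate / SE.
# The estimate/SE values below are the reported Helmert contrast results.

def t_and_f(estimate, se):
    """Return the t statistic and its square (the equivalent F)."""
    t = estimate / se
    return t, t * t

# Cent vs (Cov & Sac): estimate -2.632, SE 0.589
t1, f1 = t_and_f(-2.632, 0.589)   # t ≈ -4.47, F ≈ 19.97 (reported 19.99)

# Cov vs Sac: estimate -1.733, SE 0.510
t2, f2 = t_and_f(-1.733, 0.510)   # t ≈ -3.40, F ≈ 11.55 (reported 11.56)
```

The tiny differences (19.97 vs. 19.99) are just rounding of the printed estimate and SE; the p-values agree.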
However, when I select “Repeated Contrast” in JASP, the answers don't match my hand calculations or SuperANOVA. Not only that, but notice that JASP's output for “Cov - Sac” comparison below does not match the JASP “Cov - Sac” when performing the Helmert Contrast above. Even stranger, the contrast of "Cent - Cov, Sac" above has the same values as "Cent - Cov" below!
Contrast weights: (1, -1, 0) and (0, 1, -1).
SuperANOVA (and my hand calculations):

|             | df | MSE   | F-Value | P-Value |
|-------------|----|-------|---------|---------|
| Cent vs Cov | 1  | 9.81  | 4.72    | .0729   |
| Cov vs. Sac | 1  | 24.04 | 11.56   | .0145   |
JASP:

|            | estimate | SE    | t-value | P-Value |             |
|------------|----------|-------|---------|---------|-------------|
| Cent - Cov | -2.632   | 0.589 | -4.471  | .004    | (F = 19.99) |
| Cov - Sac  | -3.049   | 0.589 | -5.180  | .002    | (F = 26.7)  |
What is JASP doing with the repeated contrast? Is it making some correction under the hood because the set of “Repeated Contrasts” is not orthogonal?
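To illustrate why the mismatch is surprising: a contrast estimate is just the weighted sum of the cell means, so the “Cov - Sac” contrast (weights 0, 1, -1) should give the same estimate whether it appears in the Helmert set or the repeated set. A minimal sketch, using hypothetical group means of my own choosing:

```python
# A contrast estimate is a weighted sum of cell means, so the
# (0, 1, -1) contrast must give the same answer regardless of which
# other contrasts accompany it in the set.
# The group means below are hypothetical, for illustration only.

means = {"Cent": 10.0, "Cov": 12.2, "Sac": 14.0}  # hypothetical cell means

def contrast_estimate(weights, means):
    """Weighted sum of cell means for one contrast."""
    return sum(w * m for w, m in zip(weights, means.values()))

# Repeated contrasts: (1, -1, 0) and (0, 1, -1)
cent_vs_cov = contrast_estimate((1, -1, 0), means)   # Cent - Cov
cov_vs_sac  = contrast_estimate((0, 1, -1), means)   # Cov - Sac

# The same (0, 1, -1) weights inside the Helmert set give an
# identical estimate -- the companion contrast does not change it.
assert cov_vs_sac == contrast_estimate((0, 1, -1), means)
```

Since the estimate depends only on its own weight vector (and, for the SE, on the MSE and cell sizes), “Cov - Sac” should be identical across the two contrast sets, which is what SuperANOVA and my hand calculations show.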
I'll look into this. We basically use R under the hood, but I'll ask the person who coded this feature to look into the issue specifically. Thanks for pointing it out.
I checked the code that creates the contrast weights, and there was indeed an error in the way they were constructed. I have now fixed it and committed the change (see https://github.com/jasp-stats/jasp-desktop/pull/1385/commits/700cd1dfbef5592752963da58ecbd7701659ae18); the fix will be included in the next release, which we will make available in a week or so.
Thanks a lot for checking this and reporting the error! It is really important to us that the community checks the output of the analyses for bugs; even though we do test the code and check for bugs, it is always possible that little mistakes sneak in.