JohnnyB
About
- Username: JohnnyB
- Joined
- Visits: 101
- Last Active
- Roles: Member
Comments
- Hi @AlejandroG, Thank you for taking the time to explain your issue. I just looked into the t-test plotting code a bit more, and, just as in the RM ANOVA, we apply the procedure for confidence intervals of group means described in Morey, 2008 (who …
- Hi @amelie1711, Unfortunately this is an issue in 16.2; it is fixed now, but the fix will only be in the next release, so for now it is best to use 16.1 for contrast analysis. Apologies for the inconvenience. Kind regards, Johnny
- Hi @AlejandroG, The paired samples t-test uses the standard deviation of the differences to draw the CIs, so the descriptives per condition do not tell the full picture here. Similarly, the RM ANOVA also looks at the sd of the condition differ…
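A minimal sketch of that point, with made-up paired data: the CI follows from the difference scores, not from the per-condition spreads.

```python
import numpy as np
from scipy import stats

# Hypothetical paired data: two conditions measured on the same subjects.
cond1 = np.array([5.1, 4.8, 6.0, 5.5, 4.9, 5.7, 6.1, 5.3])
cond2 = np.array([4.6, 4.9, 5.2, 5.0, 4.4, 5.1, 5.8, 4.7])

# The paired-samples CI is built from the sd of the difference scores,
# so the per-condition descriptives do not determine its width.
diff = cond1 - cond2
n = len(diff)
se = diff.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(0.975, df=n - 1)
print(diff.mean() - t_crit * se, diff.mean() + t_crit * se)
```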
- Hi @vicente_inefo, I think this is a problem that stems from binary reasoning about statistical results: if you treat the 0.05 threshold as sacred, and a p-value below it as plain evidence for a difference, then yes, this Cohen's d CI is a bit c…
- Hi @BillL2, Thanks for reporting this. I just asked our programmer if something changed in that particular widget, since it is definitely supposed to support negative values! Kind regards, Johnny
- Hi @vicente_inefo, My guess here is that the p-values are for the t/mean difference value (you can see that the CI for the mean difference does not include 0), which use a slightly different standard error for their standardization and CI computati…
- Hi @bananenkuchen, I cannot say for certain based on the information you provide. It seems that the sums of squares are the same, but the F and p-values are indeed different. Did you use the same sum-of-squares type for both analyses? Did you make sure…
- Hi @erindancey, The line chart as given for the RM ANOVA is specific to that analysis, since it uses the full RM ANOVA model for creating the error bars (you can read more about that in the RM ANOVA help file). I'm afraid you cannot make the s…
- Hi @erindancey, I think this is due to different handling of the missing data. If you look at the sample size in each table, you will see that they differ (for week 1 it's 10 and 13 vs 5 and 12). The RM ANOVA excludes cases listwise, which means tha…
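To illustrate the difference, a small sketch with hypothetical wide-format data: listwise exclusion (as in the RM ANOVA) drops a subject entirely if any repeated measure is missing, whereas per-column descriptives keep every complete value in each column.

```python
import numpy as np
import pandas as pd

# Hypothetical wide-format data with some missing cells.
df = pd.DataFrame({
    "week1": [3.0, 2.5, np.nan, 4.0, 3.5],
    "week2": [3.2, np.nan, 2.8, 4.1, 3.6],
})

# Per-column sample sizes: each column keeps its own complete cases.
print(df.count())        # week1: 4, week2: 4

# Listwise exclusion, as in the RM ANOVA: any row with a missing
# value is dropped entirely.
print(len(df.dropna()))  # 3 complete cases remain
```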
- Hi @roundcircle, This is probably due to the post hoc tests using the pooled error term, whereas the paired t-tests use only the error term of the two groups under consideration. If you untick this option in the RM ANOVA post hoc tests, the values …
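For intuition, a rough sketch of the two approaches on simulated within-subject data; the pooled version plugs the omnibus RM ANOVA error term into the pairwise comparison, while the unpooled version is just the ordinary paired t-test on the two conditions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, k = 12, 3
# Hypothetical within-subject data: n subjects x k conditions,
# with the third condition deliberately more variable.
y = rng.normal(loc=[0.0, 0.4, 1.5], scale=[0.5, 0.5, 2.0], size=(n, k))

# Unpooled: an ordinary paired t-test on conditions 1 and 2 only.
t_unpooled, p_unpooled = stats.ttest_rel(y[:, 0], y[:, 1])

# Pooled: use the full RM ANOVA error term (subject-by-condition
# residual MSE over all k conditions) in the denominator.
resid = y - y.mean(axis=1, keepdims=True) - y.mean(axis=0) + y.mean()
mse = (resid ** 2).sum() / ((n - 1) * (k - 1))
se_pooled = np.sqrt(2 * mse / n)
t_pooled = (y[:, 0].mean() - y[:, 1].mean()) / se_pooled
p_pooled = 2 * stats.t.sf(abs(t_pooled), df=(n - 1) * (k - 1))

# The two p-values can differ noticeably when the conditions
# have unequal variances or correlations.
print(p_unpooled, p_pooled)
```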
- Hi @tong, We apply a correction to these confidence intervals, based on Loftus & Masson (1994) and Morey (2008). This is what we write in the help file: "By selecting this option, error bars will be displayed in the plot. The error bars can eith…"
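For readers wondering what this correction does in practice, here is a minimal sketch of the Cousineau normalization with the Morey (2008) factor, on simulated data; this is the general recipe, not necessarily JASP's exact code path.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, k = 20, 4
# Hypothetical within-subject data with large between-subject variability.
subject_offset = rng.normal(0, 3, size=(n, 1))
y = subject_offset + rng.normal([0.0, 0.3, 0.6, 0.9], 0.5, size=(n, k))

# Cousineau normalization: remove each subject's mean, add back the
# grand mean, so between-subject variability drops out of the error bars.
y_norm = y - y.mean(axis=1, keepdims=True) + y.mean()

# Morey (2008) correction factor for k repeated measures.
correction = k / (k - 1)
se = np.sqrt(correction * y_norm.var(axis=0, ddof=1) / n)
half_width = stats.t.ppf(0.975, df=n - 1) * se

# Within-subject error bars per condition; the raw between-subject
# SEs would be far wider for these data.
print(half_width)
```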
- Hi Izymyl, Thanks for getting back to us; showing infinity seems like a good solution to avoid confusion. Cheers, Johnny
- Hi @christof, We apply a correction to these confidence intervals, based on Loftus & Masson (1994) and Morey (2008). This is what we write in the help file: "By selecting this option, error bars will be displayed in the plot. The error bars can …"
- Hi Tomas, Yes, I would use the pooled error term by default, since that is in line with the ANOVA model. I would switch to unpooled only if you have substantial grounds for doing otherwise. Kind regards, Johnny
- Hi @rsippel, Thanks for your suggestion. I have included the z-statistic, and hopefully it is still in time to be included in the next JASP release! Cheers, Johnny
- Hi @anagrammarian, Thanks for pointing this out. I think it will be good to at least provide a footnote to these tables stating that the Dunnett test (and the other non-standard post hoc tests) are based on the uncorrected means. In the future we wi…
- Hi @chaelaritchie, The option is under Descriptives, where you can test the assumption for each column (i.e., each level of the RM design) separately. I am not sure whether this used to be in the RM ANOVA though; are you sure about this? Cheers, Johnny
- Hi @num3, The effect sizes (including their confidence intervals and multiplicity correction) will be included for interactions in the next JASP release! As for the difference in p-values: you could try ticking the box "use multivariate model for …
- Hi @PerPalmgren, It's an effect size measure described in this paper and implemented in the afex R package. This effect size takes into account the other terms in the specified model (including interactions), and seems to be the recommended effe…
- Hi Rik, The paper is online at https://psyarxiv.com/y65h8/. We are now setting up a special issue with responses to this paper, and will then also publish a collaborative guidelines paper in which we come back to the questions posed in the paper. If you …
- Hi Declan, Ah yes, thanks for clarifying! In the case of the t-test, you can look at the 95% credible intervals for the group means (in the case of two groups) or the difference from the test value (one-sample t-test) by ticking the box "Descripti…
- Hi Declan, In your calculations you're assuming that Cohen's d has the same standard error as the mean difference, which is not the case, since they operate on different scales. Even though it might seem intuitive to just divide the lowe…
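A rough numerical illustration of this point, using simulated two-group data and the Hedges & Olkin large-sample approximation for the SE of d (JASP's own interval may be computed differently, e.g. via the noncentral t distribution):

```python
import numpy as np

rng = np.random.default_rng(3)
n1 = n2 = 25
g1 = rng.normal(0.0, 1.0, n1)  # hypothetical group data
g2 = rng.normal(0.8, 1.0, n2)

mean_diff = g1.mean() - g2.mean()
sp = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1))
             / (n1 + n2 - 2))
d = mean_diff / sp

# The SE of the raw mean difference ...
se_diff = sp * np.sqrt(1 / n1 + 1 / n2)

# ... is NOT the SE of d: the pooled sd in the denominator is itself
# estimated, which adds the d**2 term below (Hedges & Olkin approximation).
se_d = np.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))

# What you implicitly assume by just rescaling the mean-difference CI:
naive = se_diff / sp
print(naive, se_d)  # the two standard errors differ
```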
- Hi @DrPRW, Centering your variables can help remedy multicollinearity in cases where you have multiple terms per variable, such as squared or interaction terms. Here, subtracting the means influences the interaction estimate, which will in turn …
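A quick demonstration with simulated, independent predictors: centering removes most of the (nonessential) correlation between a variable and its interaction term.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(10, 2, 500)  # hypothetical predictors with nonzero means
z = rng.normal(5, 1, 500)

# The raw interaction term is strongly correlated with its components ...
print(np.corrcoef(x, x * z)[0, 1])

# ... but centering first removes most of that collinearity.
xc, zc = x - x.mean(), z - z.mean()
print(np.corrcoef(xc, xc * zc)[0, 1])
```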
- It is the proper follow-up test for Friedman's test because it is pairwise and rank-based, while still taking into account all observations (i.e., it uses the aggregated ranking from all observations).
- Hi @gutenbar, The Conover test is a pairwise test that compares the average ranks between two groups (these are given by JASP as Wi and Wj; see also the help file, under Output -> Nonparametrics). The relevant reference here is Conover, W. J. (1…
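A minimal sketch of where those rank sums come from (made-up data; the actual Conover statistic additionally involves a t-type standardization of the rank differences):

```python
import numpy as np
from scipy import stats

# Hypothetical Friedman-style data: rows are subjects (blocks),
# columns are the three conditions being compared.
y = np.array([[1.2, 2.0, 3.1],
              [0.9, 1.8, 2.5],
              [1.5, 1.4, 2.9],
              [1.1, 2.2, 2.7],
              [1.3, 1.9, 3.3]])

# Rank within each subject, then sum per condition: the pairwise
# Conover comparisons are computed from these rank sums (cf. Wi and Wj).
ranks = stats.rankdata(y, axis=1)
rank_sums = ranks.sum(axis=0)
print(rank_sums)  # [ 6.  9. 15.] -- condition 3 ranks highest throughout
```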
- Hi @Yasra, Responding here for visibility. The results you report are based on post hoc tests with pooled error terms. This has repercussions for all post hoc tests, and can lead to higher or lower p-values compared to using the unpooled error term…
- Hi @Yasra, I would expect the Bonferroni p-values to be larger than the uncorrected ones. Do you have a data set or JASP file where the reverse is the case that you could share with me? That way I can get to the bottom of this. As for the equal/u…
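For reference, a tiny sketch of why Bonferroni-corrected p-values can only go up (hypothetical values):

```python
# Bonferroni multiplies each raw p-value by the number of comparisons m
# (capped at 1), so an adjusted p can never be smaller than its raw value.
raw_p = [0.010, 0.042, 0.300]  # hypothetical post hoc p-values
m = len(raw_p)
bonf_p = [min(1.0, p * m) for p in raw_p]
print(bonf_p)                  # [0.03, 0.126, 0.9]
```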
- Hi @claudia, I took a look at the results, and I think this is a case where the Bayesian analysis is more conservative than the p-value. Whether to take this as a critique of the Bayes factor (for being too conservative), or a critique of the p-val…
- Hi @carl559, Since you initially used mixed models, this implies that you have nested observations within some categorical variable (e.g., within one participant, or experimental group - if I understand correctly this is "study label" for…
- Hi Claudia, That seems like a strange result indeed. What version of JASP are you using? If you are using the latest version (0.15), would you be able to share your data set or JASP file with me, so I can take a closer look? You can send it to j.b.va…