EJ
About
- Username: EJ
- Joined
- Visits: 2,557
- Last Active
- Roles: Member, Administrator, Moderator
Comments
-
Hi Peter, I'll ask Johnny for more information on these tests. They come from R packages, and I think we've checked them against results from other programs, but Johnny knows more about that. Nice project, by the way! Cheers, E.J.
-
Hi Irepet, What appears to be the case is that every row (subject) shows an effect of about the same size, say like this:

1  200  300
2  700  800
3  400  500

If you drop the repeated measures aspect the results will have sufficient noise/variabil…
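To make the point concrete, here is a toy Python sketch (not JASP code) using the hypothetical numbers above: within each subject the effect is exactly 100, so the paired differences have no variability at all, while the between-subject spread is far larger than the effect itself.

```python
import statistics

# Toy repeated-measures data from the comment above: one row per subject.
cond_a = [200, 700, 400]   # condition A scores for subjects 1-3
cond_b = [300, 800, 500]   # condition B scores for the same subjects

# Within-subject differences: every subject shows the same +100 effect.
diffs = [b - a for a, b in zip(cond_a, cond_b)]
print(diffs)                     # [100, 100, 100]
print(statistics.stdev(diffs))   # 0.0: no within-subject noise at all

# Ignoring the pairing, the between-subject spread dwarfs the 100-point effect.
print(statistics.stdev(cond_a))  # roughly 252
print(statistics.stdev(cond_b))  # roughly 252
```

Dropping the pairing throws the 100-point effect into that ~252-point between-subject spread, which is why the unpaired analysis looks so much noisier.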
-
Hi Stats, Jamovi is a project that was set up by some of the people who were hired to help out with the initial implementation of JASP. The reasons for why Jamovi has started at all are a little mysterious -- I certainly never understood it. You ma…
-
Baby steps! Still very much on our radar, though. E.J.
-
Hi Boo, I gather that you used the pilot data for the BF t-test for Experiment 1. If you use the updating method, then you ought to use the knowledge after Experiment 1 for the analysis of Experiment 2. This knowledge includes the pilot data. So it…
-
Yes.
-
Hi fcorchs, JASP uses R, so the discrepancy will probably be between R and Python. If you post this issue on our GitHub page then the person responsible can look at the specific code (for details see https://jasp-stats.org/2018/03/29/request-featur…
-
Hi Martin, Ah, this is an analysis I am not expert on. But I do know it is a frequentist analysis. So is the H-measure a Bayesian concept? I will ask Erik-Jan (no, this is not strange). Cheers, E.J.
-
Hi Butler, I would usually report the number (so not BF > x). How you report that large number -- I don't have a preference. Surely the APA offers sage advice on reporting large numbers? I would generally go with what people find easiest to unde…
-
Hi Ayelet, Yes, the subscripts refer to the hypothesis; BF_10 = 3 means the data are three times more likely under H1 than under H0; BF_01 = 2 means the data are twice as likely under H0 as under H1. Adjusting the scaling of the plots: we are wor…
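The subscript convention can be sketched in a couple of lines (this is an illustration, not JASP code): BF_10 is the ratio p(data | H1) / p(data | H0), and BF_01 is simply its reciprocal.

```python
# Bayes factor subscript convention: BF_10 = p(data | H1) / p(data | H0);
# BF_01 is its reciprocal, quantifying evidence for H0 over H1.

def bf_01(bf_10: float) -> float:
    """Evidence for H0 over H1, given BF_10."""
    return 1.0 / bf_10

print(bf_01(3.0))  # BF_10 = 3 -> BF_01 = 1/3: data favour H1 three to one
print(bf_01(0.5))  # BF_10 = 0.5 -> BF_01 = 2: data favour H0 two to one
```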
-
I'll pass this on to Johnny. We use a particular R package I think. Cheers, E.J.
-
Hi Siran, Thanks! We don't have improper uniform priors for ANOVA or t-test. You could set the scale of the Cauchy to its maximum (2, I believe); this is so spread out that your results should not differ too much from those of a uniform prior (in …
-
Hi Martin, Can you post a screenshot, so I know exactly what you are referring to? Cheers, E.J.
-
Hi Haver, Sorry for the tardy response. This should not happen, obviously! If you post this issue on our GitHub page (for details see https://jasp-stats.org/2018/03/29/request-feature-report-bug-jasp/) then we can address this issue effectively, C…
-
Hi Mathieu, I don't think we have this yet, but it would be an excellent suggestion for our GitHub page! Cheers, E.J.
-
I'll alert Richard Morey to your question. E.J.
-
Hi Dario, "calculate how much the old version of the test is correlated to the new one" Since the old and new test share a lot of items, the correlation has to be high. To compute the correlation, ideally you have the same people answer a…
-
Hi Dario, Well, first off, data from a Likert scale can never be normal, because the scale is discrete. But sum scores across several items (or average scores across participants, or averaged sum scores across participants) can be approximately nor…
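The point about sum scores can be demonstrated with a short simulation (a sketch, not JASP code; the item counts are made up): a single 5-point Likert item takes only five discrete values, but by the central limit theorem the sum across many items is approximately normal.

```python
import random
import statistics

random.seed(1)

def likert_item() -> int:
    # A single 5-point Likert response: discrete, so it can never be normal.
    return random.randint(1, 5)

def sum_score(n_items: int = 20) -> int:
    # Sum across items; by the central limit theorem this is roughly normal.
    return sum(likert_item() for _ in range(n_items))

single = [likert_item() for _ in range(10_000)]
summed = [sum_score() for _ in range(10_000)]

print(sorted(set(single)))   # only [1, 2, 3, 4, 5]: clearly non-normal
print(len(set(summed)))      # many distinct values; histogram is bell-shaped
print(statistics.mean(summed))  # close to the expected 20 * 3 = 60
```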
-
Hi Shun, Yes you can; it is under the plotting options in the ANOVA menu. Cheers, E.J.
-
Hmmm. There are at least two solutions to this problem. The first is simple: just eyeball the posterior distributions of the betas. Although informative, this is of course not a formal test. The second solution would be to compare the models with t…
-
We've made a lot of progress since then, so I think we are in a good position to address this issue now.
-
Ah, so we need to be able to save the factor scores as a separate column. I think that this should in principle be doable; if you GitHub this then I can ask those responsible for the factor analysis code to do this (maybe it is already done, at leas…
-
I recall discussing this before, at least when the combination rule is of a simple form (e.g., a weighted average over the columns). Assuming that this is what you mean, I'll ask some team members for advice.
-
Hi Dirk, Exp 1 provided some evidence against the interaction; consequently, in the model that includes the interaction, the corresponding posterior distribution will have more mass near zero than the prior did. In other words, the interaction --if…
-
Hi Mark, When you say "extract a single factor", do you refer to factor analysis or just to averaging? Cheers, E.J.
-
That's right, but you can prompt us to do so by adding a feature request on our GitHub page (for details see https://jasp-stats.org/2018/03/29/request-feature-report-bug-jasp/)! Cheers, E.J.
-
Hi Sunny, This is because in ANOVA, the BF is obtained by numerical methods. So the result is an approximation (the % error tells you how good that approximation is). If you want to decrease the % error, you can go to advanced options and increase …
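As a toy illustration (not JASP's actual sampler) of why increasing the number of samples shrinks the % error of a numerical approximation, here is a Monte Carlo estimate of a quantity whose true value is known:

```python
import random
import statistics

random.seed(0)

# Toy stand-in for a numerically computed BF: Monte Carlo estimate of
# E[X^2] for X ~ Uniform(0, 1), whose true value is 1/3.
def mc_estimate(n: int) -> float:
    return statistics.mean(random.random() ** 2 for _ in range(n))

true_value = 1 / 3
for n in (1_000, 100_000):
    pct_error = 100 * abs(mc_estimate(n) - true_value) / true_value
    print(n, f"{pct_error:.3f}%")  # error typically shrinks like 1/sqrt(n)
```

The same logic applies in JASP: the reported % error measures how good the approximation is, and drawing more samples tightens it.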
-
Dear rohanp16, This does look like a bug. I'll report it for you on our GitHub page (https://jasp-stats.org/2018/03/29/request-feature-report-bug-jasp/) Cheers, E.J.
-
Hi Dirk, There are two ways to do this. First, there is the Verhagen & Wagenmakers method, where you "simply" use the posterior from the first experiment as a prior for the second experiment. Unfortunately, the updating and specificat…
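A minimal conjugate sketch of the updating idea (Beta-binomial for simplicity, not the t-test machinery discussed here; the counts are hypothetical): feeding Experiment 1's posterior in as Experiment 2's prior gives exactly the same answer as pooling both data sets.

```python
# Beta(a, b) prior plus binomial data -> Beta(a + successes, b + failures).
def update(a: int, b: int, successes: int, failures: int) -> tuple[int, int]:
    return a + successes, b + failures

# Hypothetical counts: Exp 1 has 12/8 successes/failures, Exp 2 has 30/20.
post1 = update(1, 1, 12, 8)          # posterior after Experiment 1
post_seq = update(*post1, 30, 20)    # Exp 1 posterior reused as Exp 2 prior
post_pooled = update(1, 1, 42, 28)   # both experiments analysed at once

print(post_seq == post_pooled)  # True: sequential updating matches pooling
```

In conjugate families this equivalence is exact; the practical difficulty mentioned above is that for the BF t-test the updated prior no longer has a convenient closed form.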
-
Hmm I noticed I basically re-entered my earlier answer. Well, goes to show I didn't change my mind about this. E.J.