EJ

About

Username: EJ
Joined:
Visits: 297
Last Active:
Roles: Member, Administrator, Moderator
Thanked: 40

Comments

  • Hi fcorchs, JASP uses R, so the discrepancy will probably be between R and Python. If you post this issue on our GitHub page then the person responsible can look at the specific code (for details see https://jasp-stats.org/2018/03/29/request-feat…
  • Hi Martin, Ah, this is an analysis I am not expert on. But I do know it is a frequentist analysis. So is the H-measure a Bayesian concept? I will ask Erik-Jan (no, this is not strange). Cheers, E.J.
  • Hi Butler, I would usually report the number (so not BF > x). How you report that large number -- I don't have a preference. Surely the APA offers sage advice on reporting large numbers? I would generally go with what people find easiest to un…
  • Hi Ayelet, Yes, the subscripts refer to the hypothesis; BF_10 = 3 means the data are three times more likely under H1 than under H0; BF_01 = 2 means the data are twice as likely under H0 as under H1. Adjusting the scaling of the plots: we are…
  • I'll pass this on to Johnny. We use a particular R package I think. Cheers, E.J.
  • Hi Siran, Thanks! We don't have improper uniform priors for ANOVA or t-test. You could set the scale of the Cauchy to its maximum (2, I believe); this is so spread out that your results should not differ too much from those of a uniform prior (…
  • Hi Martin, Can you post a screenshot, so I know exactly what you are referring to? Cheers, E.J.
  • Hi Haver, Sorry for the tardy response. This should not happen, obviously! If you post this issue on our GitHub page (for details see https://jasp-stats.org/2018/03/29/request-feature-report-bug-jasp/) then we can address this issue effectively, …
    (in "a bug", comment by EJ, January 11)
  • Hi Mathieu I don't think we have this yet, but it would be an excellent suggestion for our GitHub page! Cheers, E.J.
  • I'll alert Richard Morey to your question. E.J.
  • Hi Dario, "calculate how much the old version of the test is correlated to the new one" Since the old and new test share a lot of items, the correlation has to be high. To compute the correlation, ideally you have the same people answer all the …
  • Hi Dario, Well, first off, data from a Likert scale can never be normal, because the scale is discrete. But sum scores across several items (or average scores across participants, or averaged sum scores across participants) can be approximately n…
  • Hi Shun, Yes you can; it is under the plotting options in the ANOVA menu. Cheers, E.J.
  • Hmmm. There are at least two solutions to this problem. The first is simple: just eyeball the posterior distributions of the betas. Although informative, this is of course not a formal test. The second solution would be to compare the models with t…
  • We've made a lot of progress since then, so I think we are in a good position to address this issue now.
  • Ah, so we need to be able to save the factor scores as a separate column. I think that this should in principle be doable; if you post this on our GitHub page then I can ask those responsible for the factor analysis code to do this (maybe it is already done, at leas…
  • I recall discussing this before, at least when the combination rule is of a simple form (e.g., a weighted average over the columns). Assuming that this is what you mean, I'll ask some team members for advice.
  • Hi Dirk, Exp 1 provided some evidence against the interaction; consequently, in the model that includes the interaction, the corresponding posterior distribution will have more mass near zero than the prior did. In other words, the interaction --…
  • Hi Mark, When you say "extract a single factor", do you refer to factor analysis or just to averaging? Cheers, E.J.
  • That's right, but you can prompt us to do so by adding a feature request on our GitHub page (for details see https://jasp-stats.org/2018/03/29/request-feature-report-bug-jasp/)! Cheers, E.J.
  • Hi Sunny, This is because in ANOVA, the BF is obtained by numerical methods. So the result is an approximation (the % error tells you how good that approximation is). If you want to decrease the % error, you can go to advanced options and increas…
  • Dear rohanp16, This does look like a bug. I'll report it for you on our GitHub page (https://jasp-stats.org/2018/03/29/request-feature-report-bug-jasp/) Cheers, E.J.
  • Hi Dirk, There are two ways to do this. First, there is the Verhagen & Wagenmakers method, where you "simply" use the posterior from the first experiment as a prior for the second experiment. Unfortunately, the updating and specification proc…
  • Hmm I noticed I basically re-entered my earlier answer. Well, goes to show I didn't change my mind about this. E.J.
  • Hi NvH, Yes, this has been argued, mostly by subjective Bayesians such as Lindley. The Bayesian "correction" for multiplicity is in the prior model probability. If you are testing 10 effects, do you really believe that every single test is plausi…
  • Hi Franziska, That is the same thing, isn't it? The shape of the boxplot is a little different, but the information appears to be identical (?) Cheers, E.J.
  • Hi Ondrej, Great question. Ideally, you'd integrate the ANOVA structure with the model underneath. When that is difficult some two-step method could be used. Here's a paper that may be relevant: https://www.collabra.org/article/10.1525/collabra.7…
  • Good question! The .707 from the default test is really 1/2 * sqrt(2). What you can do to check is use the informed test with "0.7071068" instead -- the result should be closer. Also, we have a better routine for the informed t-test now (much faster…
  • Yes, this should be resolved. You can set the preference for the number of decimal points in Preferences. See attached screenshot. Cheers, E.J.
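A few of the exchanges above lend themselves to small worked examples. The comment on Bayes factor subscripts (BF_10 versus BF_01) boils down to a reciprocal relation; here is a minimal Python sketch (the function name `bf01_from_bf10` is mine, purely for illustration):

```python
# The subscripts name the hypotheses being compared:
# BF_10 = p(data | H1) / p(data | H0), and BF_01 is simply its reciprocal.

def bf01_from_bf10(bf10: float) -> float:
    """Convert evidence for H1 over H0 into evidence for H0 over H1."""
    return 1.0 / bf10

# BF_10 = 3: the data are three times more likely under H1 than under H0.
bf10 = 3.0
# Equivalently, BF_01 = 1/3: the data are a third as likely under H0.
bf01 = bf01_from_bf10(bf10)
```

So BF_01 = 2 and BF_10 = 0.5 describe exactly the same evidence, just read from the other hypothesis's side.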
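The point in the Likert-scale comment, that individual responses are discrete (hence never normal) while sum scores across items can be approximately normal, is just the central limit theorem at work. A quick simulation, with item count and response distribution chosen only for illustration:

```python
import random
import statistics

# A single Likert response (1-5) is discrete, so it cannot be normal;
# but a sum over many items smooths out toward a bell shape.

def sum_score(rng: random.Random, n_items: int = 20) -> int:
    """Sum of n_items uniform Likert responses for one participant."""
    return sum(rng.choice([1, 2, 3, 4, 5]) for _ in range(n_items))

rng = random.Random(7)
scores = [sum_score(rng) for _ in range(5000)]

# With item mean 3 and variance 2, the 20-item sum should center near 60
# with standard deviation near sqrt(40), roughly 6.3.
mean_score = statistics.fmean(scores)
sd_score = statistics.stdev(scores)
```

A histogram of `scores` would look approximately Gaussian even though each underlying response takes only five values.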
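The comment on the ANOVA Bayes factor being obtained by numerical methods, with a % error that shrinks as you increase the number of samples, can be mimicked with a toy Monte Carlo integral. The integrand below is arbitrary (chosen so the true value is known); it is not JASP's actual routine:

```python
import random
import statistics

def mc_estimate(n: int, seed: int = 1) -> float:
    """Monte Carlo estimate of E[X^2] for X ~ Uniform(0, 1); true value is 1/3."""
    rng = random.Random(seed)
    return statistics.fmean(rng.random() ** 2 for _ in range(n))

true_value = 1.0 / 3.0
err_small = abs(mc_estimate(100) - true_value)
err_large = abs(mc_estimate(100_000) - true_value)
# More samples -> smaller approximation error, analogous to raising the
# number of samples in JASP's advanced ANOVA options to shrink the % error.
```

The error typically scales as 1/sqrt(n), which is why squeezing out an extra decimal of accuracy costs a hundredfold more samples.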
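The Verhagen & Wagenmakers idea mentioned above, "simply" using the posterior from the first experiment as the prior for the second, can be illustrated with a conjugate beta-binomial model. This toy setup is my illustration of sequential updating, not the specific method from that paper:

```python
# Conjugate beta-binomial updating: with a Beta(a, b) prior and s successes
# in n trials, the posterior is Beta(a + s, b + n - s).

def update(a: float, b: float, successes: int, n: int):
    """Return the posterior Beta parameters after observing the data."""
    return a + successes, b + (n - successes)

# Experiment 1: flat Beta(1, 1) prior, observe 14 successes in 20 trials.
a1, b1 = update(1.0, 1.0, 14, 20)   # posterior Beta(15, 7)

# Experiment 2: reuse that posterior as the prior, observe 9 in 15.
a2, b2 = update(a1, b1, 9, 15)      # posterior Beta(24, 13)

# Sequential updating matches pooling all the data in one step.
a_pool, b_pool = update(1.0, 1.0, 14 + 9, 20 + 15)
```

The equality of the sequential and pooled posteriors is what makes yesterday's posterior a legitimate prior for today's experiment.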
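The multiplicity comment notes that the Bayesian "correction" lives in the prior model probability: testing 10 effects, a skeptic assigns each H1 a lower prior probability, which tempers the same Bayes factor. A sketch with purely illustrative numbers:

```python
# Posterior odds = Bayes factor * prior odds. Lowering the prior
# probability of H1 for each of many tests dampens identical evidence.

def posterior_prob_h1(bf10: float, prior_h1: float) -> float:
    """Posterior probability of H1 given BF_10 and a prior probability."""
    prior_odds = prior_h1 / (1.0 - prior_h1)
    post_odds = bf10 * prior_odds
    return post_odds / (1.0 + post_odds)

bf10 = 3.0
credulous = posterior_prob_h1(bf10, prior_h1=0.5)  # even prior odds
skeptical = posterior_prob_h1(bf10, prior_h1=0.1)  # 1-in-10 prior odds
```

The same BF_10 = 3 leaves the skeptic well below 50% posterior probability for H1, which is the multiplicity adjustment doing its work through the prior rather than through the test statistic.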
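Finally, the remark that the default .707 prior scale is really 1/2 * sqrt(2) is easy to verify, which also shows why entering "0.7071068" in the informed test should give a near-identical result:

```python
import math

# The default Cauchy scale in the t-test is sqrt(2)/2, displayed as .707.
default_scale = math.sqrt(2) / 2
print(f"{default_scale:.7f}")  # 0.7071068
```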