```
         Mean       SD  Naive SE  Time-series SE
mu    -1.3101   0.5116  0.005116        0.005683
sig2   3.2151   1.6501  0.016501        0.018547
delta -0.7912   0.3434  0.003434        0.004059
g      4.2972  33.7041  0.337041        0.345778
```

Quantiles for each variable:

```
         2.5%      25%      50%      75%    97.5%
mu    -2.3197  -1.6468  -1.3042  -0.9805  -0.2914
sig2   1.3409   2.1375   2.8195   3.8277   7.4126
delta -1.4782  -1.0202  -0.7813  -0.5559  -0.1496
g      0.1202   0.3878   0.8359   2.0673  23.0160
```

My question concerns the "Informed prior" tab in JASP. I don't know how to add an informed prior (not centered on zero) to a paired t-test in R. Is there a relatively easy way to do this? If not, is it reasonable to take the standardized effect size in JASP and multiply it by the SD of the difference scores to produce a posterior distribution in raw units?
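In case it helps to show what I mean, a Dienes-style calculator for a prior not centered on zero is only a few lines. Here is a sketch in Python; the function name and all numbers are illustrative, and it assumes the data can be summarised as an observed effect with a standard error (normal likelihood):

```python
from scipy import stats, integrate

def bf10_normal_prior(obs, se, prior_mean, prior_sd):
    """BF10 for H1: effect ~ Normal(prior_mean, prior_sd) vs H0: effect = 0,
    with the data summarised as an observed effect obs and standard error se."""
    lik = lambda d: stats.norm.pdf(obs, loc=d, scale=se)
    # marginal likelihood under H1: average the likelihood over the prior
    marg_h1, _ = integrate.quad(
        lambda d: lik(d) * stats.norm.pdf(d, prior_mean, prior_sd),
        prior_mean - 10 * prior_sd, prior_mean + 10 * prior_sd)
    return marg_h1 / lik(0.0)

# e.g., an observed raw difference of -1.2 (SE 0.4) against a prior centred on -1.0
print(bf10_normal_prior(obs=-1.2, se=0.4, prior_mean=-1.0, prior_sd=0.5))
```

Working directly on the raw scale this way would sidestep the standardized-to-raw conversion question entirely.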

Thanks in advance.

I am trying to reproduce David Howell's SPSS analysis of the airport dataset (attached) in JASP. Dave's analysis is available at https://www.uvm.edu/~dhowell/StatPages/More_Stuff/RepMeasMultComp/RepMeasMultComp.html. When performing the polynomial contrast analysis of the effect of time on stress level (condition near airport only), SPSS returns F(1, 99) = 1.721, p = .193 for the cubic trend, while JASP returns t = 1.261, p = .208:

Obviously the t statistic returned by JASP is not equal to the square root of the F returned by SPSS. Could you please clarify why?

Best,

Mat

Is it possible to perform a mini meta-analysis (3-4 studies) in JASP for an interaction effect between a dichotomous and a continuous variable (tested via hierarchical regression)? The ES in this case is partial R-squared.

Thanks in advance,

Orly

I've been using JASP for a while and I think it's a really great tool.

I ran a two-way RM-ANOVA, both classical and Bayesian. Using the **classical** analysis I got a main effect of one variable ("cong"), but no main effect of the second variable ("go_nogo") and no interaction.

However, when I ran the **Bayesian** analysis I got evidence for the existence of a main effect of the second variable ("go_nogo").

As far as I understand Bayesian statistics, the pattern should be the same. I understand that an effect may disappear when using a Bayesian analysis, but I'm not really sure how it is possible to find different effects.

To be sure that I didn't miss anything, I ran two more tests:

- a one-way RM-ANOVA (both classical and Bayesian) on the 'cong' variable.
- a paired samples t-test (both classical and Bayesian) on the 'go_nogo' task.

The results for these two tests were the same (at least in pattern):

There was a difference between the levels of the 'cong' variable (F = 22.16, p < .001, BF10 = 21,142.844) and no difference in the 'go_nogo' variable (t = 1.385, p = .184, BF01 = .539).

However, the Bayesian results for the two-way ANOVA (as described above) differ from the classical ones:

In addition, the behaviour of the Bayesian analysis also differs from the results of analyzing each variable separately (the one-way ANOVA and the paired samples t-test).

I can't find any mistake in my steps, and I would really appreciate your comments.

In any case, the jasp file (that includes the latest analysis) is attached as a zip file: https://github.com/jasp-stats/jasp-issues/files/2872991/jasp_inc_ttest_anova.zip

Thanks a lot in advance,

Ronen.

Where can I access R code for functions included in JASP?

So I have run a Bayesian repeated measures ANOVA followed up with post-hoc tests (actually, the comparisons are planned, but nevertheless). JASP gives me uncorrected Bayes factors, but I'd much prefer to report corrected Bayes factors (given the number of tests that are performed). I assume there is a way I can calculate this using the prior odds and the corrected posterior odds; I am just not sure how (by now it is evident that I'm merely a pragmatic Bayesian). I've added the output, so if someone here could just show me how to do this for one of the post-hoc tests, I'd be happy!

I want to compare two correlations (r12 and r13) to see whether or not they are significantly different from each other. I would like to do this in both a frequentist and a Bayesian way.

Is this (or one of the two) possible in JASP? I can't seem to find it.

Thanks!

I really love JASP, and the new options for filtering and computing new columns are just great! Thanks for that!

I am trying to remove outliers by condition and by subject number, and I get the following error once I add the subject numbers:

Is there any way to remove outliers by condition and subject?

Thanks,

Tali

I know there might be some problems (as variables might be calculated in a certain order), but it would be great if one could reorder the different analyses. Maybe even with a simple drag and drop?

Stats

Any idea why?

Mark

I just want to make sure I'm reporting it correctly in my write-up.

Cheers,

I am having an odd issue in JASP. When a dataset has 3 or more conditions and I use the filtering option to select just two of them to run an independent samples t-test, I often get "NaN" for the effect size and the descriptives, even though the t-test itself runs normally. Have others experienced this? The screenshot demonstrates the problem. Furthermore, as you'll see, the two conditions listed in the descriptives section are different from the ones that were selected and on which the t-test is based.

Anything new, E.J.?

Best,

Peter

I've recently used BayesFactor with the default prior scale r. I have been advised to adjust the Cauchy width based on some pilot data rather than relying on the default value.

Can anyone advise how I'd go about that? I already have the null hypothesis t-tests and Cohen's d calculated if that is useful.

Thanks for your time and help,

Boo.

Hey all,

I am finding different results between the two when conducting a Bayesian repeated measures ANOVA, and I'm not sure if I'm mis-specifying it (in JASP or BayesFactor) or interpreting the output incorrectly.

Example experiment: Subjects are assigned conditions (treatment vs. control) and take two tests (science and math).

```
library(reshape2)

set.seed(1)  # for reproducible sampling
df <- data.frame(
  subj = 1:10,
  condition = sample(x = c("treatment", "control"), size = 10, replace = TRUE),
  science.score = sample(x = 50:100, size = 10, replace = TRUE),
  math.score = sample(x = 25:75, size = 10, replace = TRUE))
df.long <- melt(df,
  variable.name = "test",
  value.name = "score",
  id.vars = c("subj", "condition"))
```

When I perform a frequentist repeated measures ANOVA in JASP and with BayesFactor, the results converge:

However, when I do a Bayesian Repeated Measures ANOVA, the results diverge (or, at least I think they do):

I specified both ANOVAs the same way in JASP, so I'm not sure what I'm doing wrong!

Any help would be much appreciated

I saw in an earlier discussion that you had suggested a Bayesian MANOVA was in the works. Do you know when/if this will be implemented? If it won't be for some time, do you know of any R code, packages, or papers that would be a useful starting point for doing this sort of analysis without JASP?

Thank you

I'm interested in the two-way interaction effect. Using the BayesFactor package, I compared the main-effects-plus-interaction model against the main-effects-only model (dividing one BF by the other), using default priors, and got substantial evidence for the main-effects-only model (BF10 = 0.13).

However, a reviewer asked me to use informed priors, as we potentially have an estimate of the expected interaction. I estimated the raw interaction size from the previous study using the 'emmeans' package. Then I calculated the size of the interaction (and its SE) in my current study. Then I applied Z. Dienes's BF calculator here: http://www.lifesci.sussex.ac.uk/home/Zoltan_Dienes/inference/Bayes.htm (a half-normal with SD = the raw previous interaction as the prior for H1; the current interaction size and its SE as the empirical data). The calculator gave me BF10 = 0.46; that is, I don't have enough data in favour of the null.

I understand that in both cases I have some evidence against the interaction, and these numbers just appeared to be on the different sides of the arbitrary boundary. However, it seems that these are two very different approaches. Which one seems more reasonable to you? What should I use in this situation?
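For concreteness, here is my understanding of what the calculator computes, sketched in Python: a normal likelihood for the observed effect, averaged over a half-normal prior. The function name and the numbers are my own, illustrative placeholders:

```python
from scipy import stats, integrate

def bf10_halfnormal(obs, se, prior_sd):
    """Dienes-style BF10: H1 puts a half-normal (positive-only) prior with
    SD = prior_sd on the effect; data summarised as obs with standard error se."""
    lik = lambda d: stats.norm.pdf(obs, loc=d, scale=se)
    marg_h1, _ = integrate.quad(
        lambda d: lik(d) * 2 * stats.norm.pdf(d, 0, prior_sd), 0, 10 * prior_sd)
    return marg_h1 / lik(0.0)

# illustrative: current interaction 0.2 (SE 0.1), prior SD = previous raw interaction 0.3
print(bf10_halfnormal(obs=0.2, se=0.1, prior_sd=0.3))
```

Seeing the two approaches side by side this way makes clear that they differ mainly in the prior (default Cauchy on a standardized scale vs. half-normal on the raw scale), not in the underlying logic.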

Thanks in advance!

Assume we want to test the directional hypothesis that d < 0. It seems to me that a frequentist analogue (i.e., a one-sided paired samples t-test) would test the hypothesis d < 0 against the null hypothesis d >= 0. However, the standard Bayes factor test pits the hypothesis d < 0 against the point null d = 0. Why is this the case? In what cases would we be interested in such point-null comparisons?

I am aware that the Bayes factor for d < 0 versus d > 0 can be obtained by dividing the Bayes factor for the defined null interval (-Inf < d < 0) by the Bayes factor for the complement of that interval, !(-Inf < d < 0), so I would be glad to hear your thoughts on a conceptual level.

Thanks in advance!

After discovering JASP yesterday, I am just wondering which came first, the hen or the egg; or, in better words:

Is there an original and a copy or how do these two fit together?

JASP ( https://jasp-stats.org ) or JAMOVI ( https://www.jamovi.org )

Is there a way to tell which is better (the first or the second ...)? Or does one actually need to install both GUIs?

If you look at the features, they are at least similar, and the GUIs are very similar...

Thanks already,

Stats

I'm wondering how I could reproduce the two-panel figure comparing a model with an interaction and a model without an interaction on the BayesFactor page:

https://richarddmorey.github.io/BayesFactor/#glm

Does anyone know the R code for plotting this?

Thank you in advance!

I wanted to check my reasoning regarding the choice of prior(s) for a one-sided Bayesian paired t-test.

A previous experiment observed an effect of d = 0.74. We performed a conceptually related experiment, though not a replication. Since this other experiment provided the most relevant guide for an effect, we decided to use this prior effect size as the mean of a normally distributed prior when examining our data. However, given that the previously observed effect could be much bigger than what we expected to observe in our own experiment, we also centered a second normal prior on half this effect size (0.37). The SD of both priors was set to 0.19, such that the two priors were separated by ~2 SD.

My thought here is that this approach allows us to test against both a previously observed effect (that could be inflated, or larger than we might reasonably expect in our study), as well as a smaller effect which also has little overlap with zero and the larger effect size.

Is this approach reasonable? I have more concern over the SDs than the positioning. BF10 was <0.333 in both of the analyses.
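As a sanity check on the placement, I verified how much prior mass each normal puts on effects at or below zero (Python sketch, with the means and the SD = 0.19 as above):

```python
from scipy import stats

# prior mass at or below zero for each candidate prior (SD = 0.19 for both)
for mean in (0.74, 0.37):
    print(mean, stats.norm.cdf(0, loc=mean, scale=0.19))
```

Both priors place only a small fraction of their mass at or below zero, consistent with the "little overlap with zero" intention.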

The experiment and results are reported in a preprint here (e.g., line 426): https://osf.io/uckhf/

Thanks,

Arran

It's strange: I have two computers running Ubuntu 18.04. On one of them, installing JASP 0.9.0.1 posed no problem and it appears among the applications; on the other, I have to launch it from the console (flatpak run org.jasp.JASP), but it works poorly: the console keeps running and JASP freezes after a while.

Could you explain this and help me install JASP so that it appears among the applications?

Thank you

Best regards

I'm running a multiple linear regression analysis in Python's statsmodels and in JASP, using the same dataset and, as far as I can tell, the same model, yet the results differ. I'm pretty sure I'm coding it right in Python (see below). I assume this might be related to some underlying difference between JASP and statsmodels, but I'm not sure. Am I right?

Any comment highly appreciated.

Thanks!

The code I'm using in Python:

```
import statsmodels.formula.api as smf

model = smf.ols('outcome ~ predictor1 + C(predictor2) + predictor1*C(predictor2)',
                data = dataset).fit()
print(model.summary())
```

(Note: predictor1 is continuous; predictor2 is categorical.)
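One thing I've since wondered about is contrast coding: statsmodels' `C()` defaults to treatment (dummy) coding, whereas ANOVA-style Type III tests are usually based on sum-to-zero contrasts, and with an interaction in the model the lower-order coefficients (and their tests) depend on that choice. A self-contained numpy sketch (synthetic data, all names hypothetical) showing that the coding changes the coefficients but not the fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=40)                    # continuous predictor1 (hypothetical)
g = np.repeat([0, 1], 20)                  # two-level predictor2 (hypothetical)
y = 1 + 0.5 * x + 0.8 * g + 0.3 * x * g + rng.normal(size=40)

# treatment (dummy) coding, as in statsmodels' default C(...)
d = g.astype(float)
X_treat = np.column_stack([np.ones(40), x, d, x * d])

# sum-to-zero coding (-1/+1), the convention behind Type III ANOVA tables
s = np.where(g == 0, -1.0, 1.0)
X_sum = np.column_stack([np.ones(40), x, s, x * s])

b_treat, *_ = np.linalg.lstsq(X_treat, y, rcond=None)
b_sum, *_ = np.linalg.lstsq(X_sum, y, rcond=None)

# identical fitted values, different coefficients (hence different lower-order tests)
print(np.allclose(X_treat @ b_treat, X_sum @ b_sum))  # True
print(b_treat)
print(b_sum)
```

If JASP uses a different coding convention than statsmodels' default, that alone could explain diverging coefficient tables even though the two programs fit the same model.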

JASP 0.9.2.0 now comes with different "corrections" and "types" in the ANOVA's post-hoc tests. I would like to recalculate/verify JASP's output for all these tests (and I will ask the same of my students): could you make the algorithms and procedures available?

Cheers!

Peter K.

I am trying to run a repeated-measures ANOVA on a set of data with two independent factors.

When I try to run the analysis JASP returns the following message "data are essentially constant".

I understand that this is because there is not much variation in the data. The problem is that I still need to run the analysis in order to get an output, so that I can show that the data really are essentially constant. I have read that there is a way around this problem in other statistics software; I was wondering if this can be done in JASP as well.

By the way, if I run a regular ANOVA with two independent factors I get no error message. Why is that?

Thank you all for your help

Thank you,

martin

Using a simple effect revealed in Experiment 1, I want to know the strength of evidence for this effect being replicated in Experiment 2 (two separate experiments, where Exp 2 is a direct replication of Exp 1). To do so, would it be justified to use the Bayesian Paired Samples t-test with an informed prior?

More specifically: the Cohen's d of the effect of interest from Exp 1 is 0.4 with SD = 0.07. Can I use these values to inform the prior (selecting Informed prior, Normal) for the Bayesian Paired Samples t-test in Exp 2?

When I try this, I obtain the following output for Experiment 2:

Figure: Bayesian Paired Samples t-test. Input in JASP: Condition A (S_LR_R) vs. Condition B (S_RL_R); hypothesis: Measure 1 > Measure 2 (my expected direction of the effect); Bayes factor: BF10; informed prior (normal, effect size 0.4, SD 0.07, which is the effect obtained in Exp 1).

My conclusion here would be that I have anecdotal-to-moderate evidence for H+ (i.e., anecdotal-to-moderate evidence for a replication of the effect revealed in Exp 1).

Many thanks for your help. I am very new to Bayesian analysis and, reading through other posts about the prior, I am still confused. But I would like to understand and use it appropriately, so your feedback is much appreciated.

I was wondering how we should report very large Bayes factors in a paper. For example, should I report a Bayes factor of 12233455 as:

- 12233455
- 1.22 × 10^7
- 1.22e7
- 12.23 × 10^6
- BF10 > 100
- None of the above?
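For what it's worth, the conversion between these formats is mechanical; e.g., in Python:

```python
import math

bf = 12233455
print(f"BF10 = {bf:.2e}")  # BF10 = 1.22e+07

# the "a x 10^b" style often seen in papers
exp = math.floor(math.log10(bf))
print(f"BF10 = {bf / 10**exp:.2f} x 10^{exp}")  # BF10 = 1.22 x 10^7
```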

Thanks

Butler