I put the three main effects, all the two-way interactions, and the three-way interaction into the model. According to the results JASP produced (in the attached file), the BF10 of the full model, which contains the three-way interaction, is 0.240, while the BF10 of the model that only lacks the three-way interaction is 0.054. Dividing them, I could conclude that including the three-way interaction is around 4 times better supported than excluding it. However, comparing 0.240 to the BF10 of the null model, which is 1, it seems that the alternative model containing the three-way interaction is not good at all. I'm confused about how to explain my result properly. Could you help me out?
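For concreteness, the model comparison I computed is:

```
BF(three-way vs. no three-way) = BF10(full model) / BF10(model without three-way)
                               = 0.240 / 0.054
                               ≈ 4.44
```

which is where the "around 4 times" comes from.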

Thanks!

- Exclude cases analysis by analysis
- Exclude cases listwise

Can someone give me a detailed explanation about what these two options do?

Thanks and regards.

Can I somehow save settings / analyses to rerun on a new dataset?

Relatedly: sometimes I want to add a variable for one additional analysis. Can I update a dataset to include a new variable?

I tried simply closing JASP, updating the dataset, and opening JASP again, but that didn't work (everything was gone, basically).

I used JASP for two of my thesis chapters (doctoral thesis, not MSc). The approach I took combines Bayesian and frequentist ANOVA in JASP with Friedman's test in R. I described it here: http://forum.cogsci.nl/index.php?p=/discussion/3229/its-good-to-worry-about-mistakes-when-doing-stats#latest

My PhD supervisor is a believing but not a practicing Bayesian, as he puts it. So he won't be able to give me much feedback on the write up.

Which aspects of JASP should I include in my write up?

1) For ANOVA, I was thinking P(M), P(M|data), BFM, BF10, BF01, % error

2) For t-tests, I was thinking P(M), P(M|data), BFM, BF10, BF01, % error, plus the CI

Is that all I need? Should I NOT include any of these? Is there anywhere I could recruit someone to look at maybe one of the chapters so that I can then fix that chapter and the second chapter based on the feedback on the first? Or could I copy-paste some analyses & interpretations in here to get some feedback? I'm really thrilled about JASP but I don't want to interpret it wrong or write it up wrong...

I have some questions regarding JASP's Bayesian post-hoc test for ANOVA.

**How are the priors / posteriors / BFs computed?**

Using the tooth-growth sample data, I conducted the same t-tests in R using BayesFactor and found the BFs to be the same.

```
# requires the BayesFactor and magrittr packages (for ttestBF, %$%, and %>%)
> list("500 vs 1000" = df %$% ttestBF(len[dose=="500"], len[dose=="1000"]) %>% extractBF(F,T),
+ "500 vs 2000" = df %$% ttestBF(len[dose=="500"], len[dose=="2000"]) %>% extractBF(F,T),
+ "1000 vs 2000" = df %$% ttestBF(len[dose=="1000"], len[dose=="2000"]) %>% extractBF(F,T))
$'500 vs 1000'
[1] 81800.12
$'500 vs 2000'
[1] 142002125644
$'1000 vs 2000'
[1] 953.5515
```

Does this mean that posterior odds are calculated as BF*(prior odds)? In that case, the correction for multiple comparisons is not on the BF itself, but on the posterior odds - wouldn't we want the correction on the BFs themselves?
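To spell out the identity I have in mind (my own understanding, not official JASP documentation):

```
posterior odds = BF10 × prior odds
P(H1|D) / P(H0|D) = BF10 × [ P(H1) / P(H0) ]
```

If the multiplicity correction is applied to the prior odds, then the BF10 for each comparison is left untouched, which would explain why my BayesFactor t-tests reproduce JASP's BFs exactly.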

Also, I've tried looking up Westfall, Johnson, and Utts's paper, but I still don't understand how the prior odds are calculated.

**Constrained / Restricted models vs. post-hoc tests**

Richard has previously detailed in his (old) blog how to calculate BFs for specific hypotheses regarding restricted / constrained models.

When should one conduct these tests as opposed to the methods used in JASP for post-hoc tests?

I conducted a factor analysis in JASP. The method seems to have worked; however, I am missing important information about how the analysis proceeds. What is the method of extraction (maximum likelihood, principal axis factoring, something else)? What parameters were chosen for this method? How many iterations were done? How were missing values treated?

Unfortunately, no helpfile for factor analysis exists (yet). Can I extract this information somehow?

Thanks!

I have an experiment with two within-subjects variables (two conditions each) and a between-subjects factor. I am interested in confirming (as requested by reviewers) the lack of interaction between the within-subjects variables and the between-subjects factor. I conducted a Bayesian repeated measures ANOVA and, after reading about the baws factor, I assumed that I had to look at the analysis of effects across matched models. I know that BFinclusion refers to the change from prior to posterior inclusion odds, but I don't know how to interpret the numbers I get nor how to report them. I have seen somewhere the criteria set by studies such as Wetzels et al. (2011), but I assume these are based on the BF10 comparison between the model and the null model (which is much easier for me to understand, by the way), aren't they? Could you please help me? I have got BFinclusion values as different as 19.75 and 0.30... Thank you very much for your help, and for JASP, which I recently discovered and seems terrific to me.
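For reference, my understanding of that definition (the change from prior to posterior inclusion odds) is:

```
BFinclusion = [ P(include effect | data) / P(exclude effect | data) ]
            / [ P(include effect) / P(exclude effect) ]
```

so a BFinclusion of 19.75 would mean the data shifted the odds in favour of including the effect by a factor of about 20, while 0.30 would mean the data shifted the odds against inclusion.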

If I understood this correctly, the Bayes factor BF01 = p(D|H0) / p(D|H1) quantifies the relative evidence in favor of the null hypothesis over the alternative hypothesis. The BF can be directly computed in JASP.

However, I was wondering whether JASP also directly computes the probability of H0 given the obtained data, p(H0|D)? If not, is there a way to compute this probability from the BF?
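In case it helps, my understanding of the relation between the BF and this posterior probability is:

```
P(H0|D) = BF01 × prior odds / (1 + BF01 × prior odds),   where prior odds = P(H0) / P(H1)
```

With equal prior probabilities for the two hypotheses, this reduces to:

```
P(H0|D) = BF01 / (1 + BF01)
```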

Thank you

Jonas

Firstly, thank you for your work in developing JASP, it is a brilliant piece of software that I am sure will go a long way!

However, I am having trouble performing post hoc analysis during an ANOVA. Very much like the example for vitamin vs orange juice supplementation at different dosages, I want to look at changes in muscle mass (dependent variable) between males and females at different ages (fixed factors). However, when I select these for my ANOVA and post hoc, I get a warning underneath the output saying "Singular fit encountered; one or more variables are a linear combination of other predictor variables."

I am not sure if this error arises from how I have set up the csv file for import into JASP (see attached), but the imported format appears to replicate the example, so I am not sure why I am getting this error message.

Any help would be greatly appreciated!

I am comparing results from impedance cardiography (HR) between participants with low vs. high levels of antisocial behaviour. I finally finished writing up a paper with these results, and I was wondering whether the following interpretation is correct? Any help/feedback will be much appreciated.

Ex.:

A different (unexpected) pattern was observed for Secondary Psychopathy (Figure 19). Here, group 1 demonstrated a lower HR (N = 98; M = 78.27, SD = 11.40, SE = 2.68) than did group 2 (N = 78; M = 87.07, SD = 12.93, SE = 3.04). Moderate to strong evidence in favor of the null hypothesis was detected (BF₀₋ = 8.377).

Thanks,

G

I have to do a regression analysis for my research dissertation, but in the last table there is "NA" for one of my predictor variables.

What can I do about that?

Sarah

I'm measuring reaction times in a repeated measures design. Say I have 10 subjects and 2 conditions with 50 measurements for each condition (1,000 observations overall). Can I use Bayesian ANOVA on the raw observations (not averaged per subject per condition) and enter subject as a random factor? It seems reasonable to do so (I think it's the same logic as mixed models), but I could not find any reference for such a method with a Bayesian analysis...

Any help would be appreciated, as well as any references to any article you know that used a similar method.

If I'm correct, the analysis should look like this:

Thanks!

Alon

Within a multi-factorial design, I have a factor A that gives a significant main effect (e.g., p = .009) in the traditional repeated measures ANOVA, but a BF10 in favour of the null (e.g., .31) in the Bayesian version using JASP. When I label one of the other (strongly supported) factors in the design (factor B) as 'nuisance', factor A is then supported (e.g., BF10 = 6.15). The factor I'm labelling as nuisance here is of less interest than factor A, but it isn't of zero interest, so I'm not sure if and when it's appropriate to do so.

It would also be useful to understand more clearly why factor B influences the comparison of the factor A model vs. the null (when the two don't interact). This makes me less confident about interpreting any factor's model in the context of a multi-factorial design if it's potentially going to be obscured by other factors in the design.

I've seen this with other data sets too, so any help appreciated!

Using a simple effect revealed in Experiment 1, I want to know the strength of evidence for this effect being replicated in Experiment 2 (two separate experiments, where Exp 2 is a direct replication of Exp 1). To do so, would it be justified to use the Bayesian Paired Samples t-test with an informed prior?

More specifically: The Cohen's d of the effect of interest from Exp 1 is 0.4 with std=0.07. Can I use these values to inform the prior (selecting Informed prior, normal) for the Bayesian Paired Samples t-test in Exp2?

When I try this, I obtain the following output for Experiment 2:

Figure: Bayesian Paired Samples T-Test. Input in JASP: ConditionA(S_LR_R) vs. ConditionB(S_RL_R); Hypothesis (Measure 1>Measure 2; my expected direction of the effect), Bayes Factor: BF10, Informed Prior (effect size 0.4, std 0.07, which is the effect obtained in Exp1).

My conclusion here would be that I have anecdotal/moderate evidence for H+ over H0 (i.e., anecdotal/moderate evidence for a replication of my previous effect revealed in Exp 1).

Many thanks for your help. I am very new to Bayesian analysis and, reading through other posts about the prior, I am still confused. But I would like to understand and use it appropriately, so your feedback is much appreciated.

I love JASP and use it exclusively now. The layout is far superior to SPSS and the output is so much cleaner and more organized. I recommend it to all my students now!

I was wondering about filtering capabilities in JASP. Almost all data needs to be analyzed conditionally, often split up by a specific group or factor (e.g., Group A vs. Group Z). It does not appear that JASP has any filtering capabilities. Am I missing how to do this? Will this be released in a future version?

Thanks!

From what I understand, one can set the Cauchy prior width according to a speculated effect size when performing a Bayesian t-test, such that if I think my effect should be around 0.6, I would set the Cauchy prior width to 0.6 (correct me if I'm wrong).

What I still don't understand is the Beta* prior width for correlations - can it be used in the same manner?

If I expect r ≈ 0.6, would I set the Beta* prior width to 0.6?

Or in other words, how do I translate an expected effect size/correlation to Beta* prior width?

Thanks,

Mattan

I conducted an experiment with one between-subjects factor (Group) and two within-subjects factors (MemoryCue, Source). When analysing my results, a mixed ANOVA gives me two main effects, for MemoryCue and Source, and a significant interaction between the two, F(1, 128) = 11.92, p < .001.

When I conduct a Bayesian ANOVA using the BayesFactor package I get the following:

Input:

```
# Bayesian ANOVA (requires the BayesFactor package)
library(BayesFactor)
FR_bf = anovaBF(SourceRecall ~ Group*MemoryCue*Source + Subject, data = data_final, whichRandom = "Subject")
FR_bf = sort(FR_bf, decreasing = TRUE)
FR_bf
```

The output looks as follows:

```
# Output
Bayes factor analysis
--------------
[1] MemoryCue + Source + MemoryCue:Source + Subject : 2.071683e+111 ±3.53%
[2] MemoryCue + Source + Subject : 1.827539e+111 ±2.5%
[3] Group + MemoryCue + Source + MemoryCue:Source + Subject : 1.630972e+110 ±12.65%
[4] Group + MemoryCue + Source + Subject : 1.317137e+110 ±2.54%
[5] Group + MemoryCue + Group:MemoryCue + Source + MemoryCue:Source + Subject : 7.646137e+107 ±4.55%
[6] Group + MemoryCue + Group:MemoryCue + Source + Subject : 7.20429e+107 ±8.43%
[7] Group + MemoryCue + Source + Group:Source + MemoryCue:Source + Subject : 1.263392e+107 ±2.86%
[8] Group + MemoryCue + Source + Group:Source + Subject : 1.154352e+107 ±2.84%
[9] MemoryCue + Subject : 3.594377e+105 ±2.23%
[10] Group + MemoryCue + Group:MemoryCue + Source + Group:Source + MemoryCue:Source + Subject : 6.333031e+104 ±2.96%
[11] Group + MemoryCue + Group:MemoryCue + Source + Group:Source + Subject : 5.811523e+104 ±3.46%
[12] Group + MemoryCue + Subject : 2.818147e+104 ±9.25%
[13] Group + MemoryCue + Group:MemoryCue + Source + Group:Source + MemoryCue:Source + Group:MemoryCue:Source + Subject : 1.253457e+102 ±2.71%
[14] Group + MemoryCue + Group:MemoryCue + Subject : 1.249367e+102 ±2.94%
[15] Source + Subject : 145095.3 ±2.37%
[16] Group + Source + Subject : 9694.612 ±9.93%
[17] Group + Source + Group:Source + Subject : 7.090814 ±1.43%
[18] Group + Subject : 0.06024572 ±2.97%
Against denominator:
SourceRecall ~ Subject
---
Bayes factor type: BFlinearModel, JZS
```

When comparing model 1 (including the interaction) with model 2 (only the two main effects), I get the following:

```
> FR_bf[1]/FR_bf[2]
Bayes factor analysis
--------------
[1] MemoryCue + Source + MemoryCue:Source + Subject : 1.133592 ±4.32%
Against denominator:
SourceRecall ~ MemoryCue + Source + Subject
---
Bayes factor type: BFlinearModel, JZS
```

Thus, no evidence for an interaction.

So, why is there such a huge discrepancy between the frequentist ANOVA and the Bayesian ANOVA results regarding the interaction?

Thanks for your help.

Cheers,

Ivan

But I realized that some of the values I get with JASP are not quite consistent with what I get using lavaan directly.

It seems that JASP doesn't use the scaled fit indices and chi-square estimates that are computed by lavaan when using WLSMV on CFAs with ordinal items.

Is that the case? If so, could anyone enlighten me as to why that choice was made, and where I can find literature to support this decision?

Any help will be welcome!

Thanks in advance!

My question is should I tick X and Y as nuisance in the model? And Why?

Thanks in advance for your help!

- Is it right to say that, based on the BFinclusion = 46.659, the data are 46.659 times more likely if we consider that disgust has an effect on the DV than if we consider that it has no effect at all?

- Is the formula to compute BFinclusion similar to the one used to compute BF10?

- If I have to present results in a paper, do you recommend presenting the BMA rather than the full model comparison? Or is it better to present both analyses?

Thank you very much in advance

JASP is great, thanks for developing it! An issue that has persisted over past and current versions occurs when the number of cells in a repeated-measures ANOVA is relatively high.

When running a 3x9 rm-ANOVA, I get the following in JASP 0.8.2:

Error message, including stack trace:

This analysis terminated unexpectedly. Error in complete.cases(x, y): not all arguments have the same length

Stack trace:

- analysis(dataset = NULL, options = options, perform = perform, callback = the.callback, state = state)
- .resultsPostHoc(referenceGrid, options, dataset, fullModel)
- t.test(unlist(postHocData[listVarNamesToLevel[[k]]]), unlist(postHocData[listVarNamesToLevel[[i]]]), paired = T, var.equal = F)
- t.test.default(unlist(postHocData[listVarNamesToLevel[[k]]]), unlist(postHocData[listVarNamesToLevel[[i]]]), paired = T, var.equal = F)
- complete.cases(x, y)

To receive assistance with this problem, please report the message above at: https://jasp-stats.org/bug-reports

The message in 0.7.1.12 used to be `Invalid comparison with complex values`, and it has been reported on GitHub before (raised in September 2015; closed in March 2017, pending confirmation of whether it was still a bug in the latest version).

Please note that the issue does not seem to be with the data: Running 2x9 rm-ANOVAs on the same data does work, and I've had the same issue with a completely different dataset. The only common factor seems to be the number of cells.

**UPDATE (2017-09-14, 12:34 UTC+0)**: The same data does work with 2x9 rm-ANOVAs on JASP 0.7.1.12, but not on 0.8.2. On 0.8.2 it does work in a 3x6 design.

Any idea what might be going on?

Cheers,

Edwin

Two questions about the effect size used in JASP:

- What is an effect size (delta)? Is it Glass's delta?
- How do you convert Cohen's d into delta? (this is straightforward if delta is Glass's delta)

Cheers,

Hannah

PS. Thank you Team JASP - the program is fantastic!!

I'm wondering if it is possible to generate indirect effects for mediation analysis in JASP? If not, would it be a possible feature in the program in the near future?

Regards,

NT

How sensitive is a model comparison using BFs to (multi)collinearity? Is there any mechanism within Bayesian model comparison that is sensitive to collinearity? If so, is it OK to choose models with high BFs and high collinearity?

Best wishes,

Ulrich Dettweiler & Christoph Becker

When I run and rerun Bayesian repeated measures ANOVAs in JASP using an identical dataset (an Excel csv file), I find that the results are occasionally different. This isn't dramatic (e.g., 0.046 vs. 0.041), but they are inconsistent.

I'm wondering if anyone knows why this might be and how I can fix it?

Thanks!

Sarah