It is such a basic thing to do in the experimental sciences that I feel I must be missing an obvious big button somewhere.

Any help?

Martin

Not sure whether this is the proper place to report this, but here goes. Maybe one of you has a solution to fix this.

Best,

Kevin

For one of my studies, we were planning to run a simple mediation model, for which we would typically use the PROCESS macro in SPSS.

But I was wondering whether there is a Bayesian alternative to that, and if so, what it would be. I do not think there is a way to do this in JASP yet. Any suggestions? Thank you in advance.

I am trying to determine whether my dataset is appropriate to run regressions on. However, the guidelines on the number of observations needed typically vary (for a review, see e.g. Austin & Steyerberg, 2015). I can't seem to find any references on this topic with regard to the Bayesian regression implemented in JASP: does it depend on the prior distributions used, or on the sampling method?

I will fit models at the individual level. I have 81 data points with 4 variables. Depending on how I cross these variables I could end up with as many as 15 model terms, but right now I am looking at using 6 model terms.

Thanks for the help,

/Philip

I recently submitted a paper including BFs calculated with JASP.

In the methods section I referred to the "default" prior settings. However, a reviewer asked for a concrete description of the priors.

I wonder whether the following is a correct way of putting it:

"All data analyses were conducted with the statistics software JASP (version 0.8.6.0; JASP Team, 2018) and its default settings for Bayesian analyses, i.e., Bayesian repeated measures ANOVAs were computed with a multivariate Cauchy prior with a fixed-effects scale factor of r = 0.5 and a random-effects scale factor of r = 1. Bayesian paired t-tests were computed with a Cauchy prior with a width of r = 0.707. Priors were centered on zero."
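For readers less familiar with these priors, it may help to also state what the width r means concretely: for a zero-centered Cauchy prior, half of the prior mass on the effect size falls within ±r. A quick stdlib-Python check (the function name is mine, just for illustration):

```python
import math

def cauchy_cdf(x, scale):
    """CDF of a Cauchy distribution centered on zero."""
    return 0.5 + math.atan(x / scale) / math.pi

# With the default t-test width r = 0.707, 50% of the prior mass
# on the effect size delta lies in the interval [-r, r].
r = 0.707
mass_within_r = cauchy_cdf(r, r) - cauchy_cdf(-r, r)
print(round(mass_within_r, 3))  # 0.5
```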

Thanks a lot for the help.

Stefan

I select Binomial Test, drag gender into the right-hand box and get the error message:

! Error in if(any(df[[name]]) == '.')) next:

missing value where TRUE/FALSE needed

What have I done wrong? Thanks for any help.

I have moved it to the trash and done the installation once again; now JASP opens the file but closes when I try to take any action.

XQuartz was preinstalled before the JASP installation.

We got a (friendly) reviewer question - could someone help me phrase a correct answer or perhaps point me to a good reference?

"What I find difficult to understand is how you can have a medium effect size and a very strong Bayes Factor such as seen on page .., d = .60 & BF = 1,720,000."
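One way to phrase the answer: a Bayes factor measures strength of evidence, which grows with sample size at a fixed effect size, so a medium d and an enormous BF are perfectly compatible when N is large. The sketch below uses the BIC approximation to the Bayes factor (Wagenmakers, 2007) rather than JASP's default Cauchy-prior BF, and the function name and sample sizes are mine, but it shows the qualitative point:

```python
import math

def bic_bf10_one_sample(d, n):
    """Rough BIC approximation to the Bayes factor for a one-sample
    (or paired) t-test with observed effect size d and sample size n
    (Wagenmakers, 2007). Not JASP's default Cauchy-prior BF, but the
    qualitative behaviour is the same."""
    t_sq = d * d * n
    log_bf01 = 0.5 * (math.log(n) - n * math.log(1.0 + t_sq / (n - 1)))
    return math.exp(-log_bf01)

# A fixed, medium effect of d = 0.60 yields an ever-larger BF10
# as the sample grows: the BF tracks evidence, not effect size.
for n in (20, 50, 100, 200):
    print(n, round(bic_bf10_one_sample(0.6, n), 1))
```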

I have a dataset that shows a quite puzzling discrepancy between a frequentist repeated-measures ANOVA and its Bayesian version. In particular, one 2 x 2 interaction effect has F(1,17) = 21.5, p < .001, partial eta^2 = .56, generalized eta^2 = 0.0015, while the BF in favour of the alternative is BF = 0.31, calculated as the "Baws" factor, a.k.a. the BF for effects "across matched models" in JASP (see here). The complete design is 2 x 2 x 2 x 2.

My suspicion is that the interaction effect is very small (the generalized eta^2 is indeed tiny) but consistent, so the frequentist ANOVA suggests there is an effect, while the BF says there is none, because the effect is so small that the default priors do not allow its detection with this sample size (18).

What would you conclude in such a case? I guess there is no clear answer, but any expert opinion is greatly appreciated.

Thanks,

Chris

You can get the data as tab-delimited text file in wide format here.

Can I open .sav files that are on the OSF server in JASP? It seems to me that this used to be possible; however, now when I link to OSF I do not see any .sav files.

Can you help?

I've recently used BayesFactor with the default prior scale r. I have been advised to adjust the Cauchy width based on some pilot data rather than relying on the default value.

Can anyone advise how I'd go about that? I already have the null hypothesis t-tests and Cohen's d calculated if that is useful.
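If the advice means setting the Cauchy scale near the effect size you expect — for example the pilot Cohen's d, which for a one-sample/paired design is d = t/√n — then in the BayesFactor R package you would simply pass that value as `rscale` to `ttestBF`. As an illustration of what the scale changes, here is a stdlib-Python sketch of the one-sample JZS Bayes factor (Rouder et al., 2009) with an adjustable width; the function names and the pilot numbers are mine, not from this thread:

```python
import math

def jzs_bf10(t, n, r=0.707, steps=20000):
    """One-sample / paired JZS Bayes factor (Rouder et al., 2009):
    Cauchy(0, r) prior on the effect size, integrated numerically by
    the midpoint rule after substituting g = u / (1 - u)."""
    nu = n - 1
    # Marginal likelihood under H0 (up to a constant shared with H1).
    m0 = (1.0 + t * t / nu) ** (-(nu + 1) / 2.0)
    # Marginal likelihood under H1: integrate over the mixing
    # variance g, with g ~ InverseGamma(1/2, r^2 / 2).
    m1 = 0.0
    for i in range(steps):
        u = (i + 0.5) / steps
        g = u / (1.0 - u)
        jac = 1.0 / ((1.0 - u) ** 2)  # dg/du
        prior = (r / math.sqrt(2.0 * math.pi)) * g ** -1.5 \
            * math.exp(-r * r / (2.0 * g))
        like = (1.0 + n * g) ** -0.5 \
            * (1.0 + t * t / ((1.0 + n * g) * nu)) ** (-(nu + 1) / 2.0)
        m1 += like * prior * jac / steps
    return m1 / m0

# Hypothetical pilot result: t(24) = 2.8, n = 25 -> d = t / sqrt(n).
d_pilot = 2.8 / math.sqrt(25)
# Default width vs. a width informed by the pilot effect size:
print(round(jzs_bf10(2.8, 25, r=0.707), 2))
print(round(jzs_bf10(2.8, 25, r=d_pilot), 2))
```

A narrower prior concentrates mass on smaller effects, which typically strengthens the evidence when the observed effect is modest and weakens it when the observed effect is large.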

Thanks for your time and help,

Boo.

I am quite new to Bayesian analysis and want to make sure that I am interpreting and reporting my results correctly. Let me briefly explain my study. I have two different tests with normative scores.

In my analysis, I compared the normative scores of each test with my observers' scores. For that I conducted two one-sample t-tests. The results revealed a BF01 of 0.00006244185 for the first test and a BF01 of 8.98 for the second test.

My interpretation: for test A, a difference between my observers' scores and the normative scores is favoured over the null model by a factor of 16014.9 (1/0.00006244185). For test B, the absence of a difference between my observers' scores and the normative scores is favoured over its presence by a factor of 8.98.

A couple of questions about my interpretation. First, is it right? Second, I find it confusing to report BF01 for test A. Would it be possible to report BF10 for test A and BF01 for test B, or is it better to keep things consistent? How do you report a very big (or small) BF value? I don't find it very elegant to report a large number such as BF10 = 16014.9 (or BF01 = 0.00006244185). Could this be reported as BF01 < .001, or in a similar way?
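On the reporting question: BF10 and BF01 are reciprocals, so you can report whichever direction reads most naturally for each test without losing information, and for extreme values many authors report the log instead. A small illustration with these numbers (the rounding choices are mine):

```python
import math

bf01 = 0.00006244185  # BF01 for test A, as reported by JASP

# BF10 and BF01 carry the same information: one is the reciprocal
# of the other, so either direction can be reported.
bf10 = 1.0 / bf01
print(round(bf10, 1))  # 16014.9

# For extreme values, reporting the log keeps the numbers readable,
# e.g. "log10(BF10) = 4.2" instead of "BF10 = 16014.9".
print(round(math.log10(bf10), 1))  # 4.2
```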

Thanks in advance for your help!

We have two studies, one with approximately 200 participants and one with 400 participants. When running a Bayesian contingency table on the data, we find that the BF for the first study is larger than for the second study (BF ~ 4.2 reduces to BF ~ 3.2). As the two designs are very similar, we combined the data set, and the Bayes Factor reduces even further (BF ~ 1.6).

Is this simply because, as we increase the sample size, we're getting a better estimate of the true distribution, and our BF is changing to reflect that? Or is there something else that might be causing this?

Thanks for your help.

-J

the column names as the variable names?

I made a mistake in the original syntax, so some cases have the wrong value.

I corrected this mistake in SPSS, and saved the .csv file in the original location.

Now I want JASP to keep the exact same analyses as I did before, but now do them on this new datafile.

If I just open the analyses, the "old" data is still there.

If I double-click the data, normally an Excel file opens, but now I get an Application Error (the instruction at blablabla referenced memory at blablabla; the memory could not be read; click on OK to terminate the program).

In another file with another set of analyses, I get the message:

JASP was started without an associated data file (csv, sav or ods file), blablabla.

If I then click "Find Data File", I can open the new one, but nothing happens.

In yet another set of analyses, suddenly all of my tables are empty (the headers and notes remain, but all of the actual analyses disappear).

I ran into the same problem earlier when I had approx. 500 new participants (I have been working on this paper for a while, it is crowdsourced data so new participants keep signing up). Then I had to redo all of the analyses.

How can I redo analyses on an updated data file?

Can anyone help me out here? I am a novice.

Can anyone help?

I am using JASP to estimate Bayes factors. I applied the Bayesian repeated measures ANOVA; however, I have trouble getting a stable Bayes factor.

The results vary when I change the input names for the repeated measures factors or the names of the factor levels (all other settings stay the same). This issue doesn't happen when I run the frequentist repeated measures ANOVA, which suggests the data are valid.

By the way, in the Advanced Options for the Bayesian repeated measures ANOVA, do I need to specify the number of samples for my experiment, or just choose the Auto option?

Thanks in advance.

The file does save, but only 150 kB of it, and the plots are not displayed.

The same happens when I export to OSF (I don't want to include the datafile because it is restricted, so I wanted to export to HTML and include that in OSF, but even the normal OSF save doesn't work).

What can I do (except splitting it up into multiple files, obviously)?

I'm very new to JASP and I'm quite inexperienced in Bayesian thinking. Please, could you help me with this problem I have?

I have a within-subjects repeated-measures dataset with 32 participants who took part in a behavioural experiment. The model contains two factors: emotion (fear, happiness, and neutral), and exposure times (7 exposures in total). The dependent variable is accuracy (%).

I found no main effect of emotion in an RM ANOVA, but as we all know, no main effect in a frequentist analysis doesn't really provide evidence for a null effect. How can I test this in JASP with Bayes factors? Should I use Bayesian t-tests and compare the 3 levels of the factor of interest for every exposure time? I'm not interested in the exposure times, only in the emotion factor.

Thanks in advance!!

I don't seem to have any missing data or odd types of data in those columns, so am not sure why it is just that one figure that is messed up when all the other variables produce it fine.

Does anyone know why this would happen and how to fix it? Thanks.

I tried different syntaxes for the command (in Preferences), but to no avail, although it should work with "libreoffice".

Also, it is the default editor. It always states: "Unable to start the editor: libreoffice. Please check your editor settings in the preference menu."

On Windows 10 it works fine as it should with LibreOffice.

Any ideas?

The dependent variable is score, which can range from 0 to 12. The question is how many items out of 12 they will remember at each time delay. The distribution of my data looks like this:

When I ran the Bayesian repeated measures ANOVA everything was OK. Then I ran the frequentist version and got a note under the within-subjects effects saying: "Mauchly's test of sphericity indicates the assumption of sphericity is violated (p < .05)". If this assumption is violated for the frequentist version, does that mean I shouldn't use the Bayesian one either?

I could also do a logistic regression and, instead of using the score as continuous, use it as a binomial variable. However, I think I cannot do that in JASP or in BayesFactor. I also tried it in BayesFactor with anovaBF, but Levene's test was significant, and I think that one is for continuous variables. I still don't fully understand whether my dependent variable is continuous or count data or something else.

I've read some posts in which you recommend the brms package, but you also say that it won't give us the test and I don't understand what that means. Thank you.

In one of my papers, I calculated Bayes Factors using JASP (great tool!!). Now a reviewer has commented on my results with the following remark:

"*I am no expert on Bayesian Statistics, but as far as I’m concerned you usually use the BF01 to report evidence speaking in favour of the null hypothesis, whereas BF10 reports the probability of the data for an alternative hypothesis. In other words, by using BF10 you are trying to find evidence for H1, which you do not find - in parallel to the frequentist analysis. According to me, it would be better to look at BF01 and check whether you can find evidence for your null finding.*"

I would argue that it doesn't matter whether I report BF01 or BF10, as ultimately both provide the same information, since BF01 = 1/BF10. However, perhaps I'm missing some point he/she is trying to make about the direction of the interpretation. Perhaps my phrasing in the original text was a bit unclear.

Here's an excerpt of my own manuscript:

"*A JZS Bayes factor ANOVA(ref55) with default prior scales was performed to estimate the likelihood of the interaction effect. This resulted in a BF10 of 0.005, thereby providing very strong evidence for the absence of an effect(ref56). The effect of time and group alone resulted in a BF10 of 0.209 and 0.158 respectively*"
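One way to phrase a reply is to note that BF01 = 1/BF10, so BF10 = 0.005 is exactly BF01 = 200 — the same evidence, whichever direction is reported. A small sketch that also attaches the common verbal labels (the classification scheme is that of Lee & Wagenmakers, 2013; the function name is mine):

```python
def evidence_label(bf):
    """Verbal evidence labels for a Bayes factor, following the
    classification table of Lee & Wagenmakers (2013)."""
    bf = max(bf, 1.0 / bf)  # strength of evidence, either direction
    for cutoff, label in [(100, "extreme"), (30, "very strong"),
                          (10, "strong"), (3, "moderate")]:
        if bf > cutoff:
            return label
    return "anecdotal"

# The manuscript's BF10 = 0.005 equals BF01 = 1 / 0.005 = 200:
print(evidence_label(0.005))  # extreme
# And BF10 = 0.209 equals a BF01 of about 4.8:
print(evidence_label(0.209))  # moderate
```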

I've recently run two closely related experiments. In the first, I analyzed my data with Bayesian Independent Samples T-Tests and Bayesian ANCOVA with uninformative priors (to my knowledge, this was the first experiment in my topic). In analyzing the data from the second experiment, I'd like to use the same analyses but with informative priors. Specifically, I'd like to use the posteriors from the first study as the priors in the second. How can I extract the parameters describing the first posteriors in JASP so that I can input them as priors in the second study (i.e., Cauchy location and scale for the T-tests, and r scale fixed effects, r scale random effects, and r scale covariates for the ANCOVA)?
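I'm not aware of JASP exporting posterior parameters directly, but if you can obtain posterior draws of the effect size (e.g., via the `posterior()` function of the BayesFactor R package), one simple summary is the median for the location and half the interquartile range for the scale (for a Cauchy distribution, the half-IQR equals its scale parameter). A stdlib-Python sketch — the function name and the draws below are mine, for illustration only:

```python
import statistics

def cauchy_params_from_samples(samples):
    """Summarize posterior draws (e.g., exported delta samples) into
    a location and scale usable as a shifted-Cauchy prior in the next
    study: the median, and half the interquartile range (which for a
    Cauchy equals its scale parameter)."""
    q1, q2, q3 = statistics.quantiles(sorted(samples), n=4)
    return q2, (q3 - q1) / 2.0

# Hypothetical posterior draws for the effect size delta:
draws = [0.31, 0.42, 0.55, 0.48, 0.62, 0.39, 0.51, 0.45, 0.58, 0.36]
location, scale = cauchy_params_from_samples(draws)
```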

Many thanks for any help you're able to provide!

I was wondering whether it is possible to force the intercept equal to 0 in lmBF. (Or, at least, I think that's what I need to do. Forgive my statistical shortcomings.)

What I have is a dataset in which I want to find the best fit from [linear, quadratic, cubic] fits, with the constraint that the intercept should be 0. I'd like to be able to say something like, "Assuming a polynomial of degree <= 3, and that the y-intercept is 0, the best fit for this data is f(x)."
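I'm not certain lmBF honors a no-intercept formula such as `y ~ x + I(x^2) - 1`, so as a rough stand-in here is a sketch comparing the three no-intercept polynomial fits by BIC, which maps onto an approximate Bayes factor via exp(-ΔBIC/2) (Wagenmakers, 2007). Everything below — the function names and the toy data — is mine, not from the thread:

```python
import math

def solve(a, b):
    """Solve the small linear system a x = b by Gaussian elimination
    with partial pivoting (enough for these 1x1 to 3x3 systems)."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (m[i][n] - sum(m[i][j] * x[j]
                              for j in range(i + 1, n))) / m[i][i]
    return x

def bic_poly_through_origin(xs, ys, degree):
    """Fit y = b1*x + ... + bk*x^k (no intercept) by least squares
    and return its BIC, assuming Gaussian errors."""
    n, k = len(xs), degree
    X = [[x ** (j + 1) for j in range(k)] for x in xs]
    xtx = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)]
           for p in range(k)]
    xty = [sum(X[i][p] * ys[i] for i in range(n)) for p in range(k)]
    beta = solve(xtx, xty)
    sse = sum((ys[i] - sum(beta[j] * X[i][j] for j in range(k))) ** 2
              for i in range(n))
    return n * math.log(sse / n) + k * math.log(n)

# Toy data lying near y = 2x + 0.5 x^2 (no intercept):
xs = [1, 2, 3, 4, 5, 6]
ys = [2.6, 5.9, 10.4, 16.1, 22.4, 30.2]
bics = {d: bic_poly_through_origin(xs, ys, d) for d in (1, 2, 3)}
best = min(bics, key=bics.get)  # lowest BIC wins
```

The lowest-BIC degree is then the best-fitting no-intercept polynomial; the difference between two BIC values gives the approximate log Bayes factor between those fits.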

Any help would be very much appreciated.

Best regards,

Rick

I have the following question: is there a rule of thumb for whether to report the Model Comparison table or the Analysis of Effects table for a Bayesian ANOVA? In my recent study I have ANOVAs with one or two, but also with four factors, and I am a bit confused about what to report in my main results section. I assume that if there are more variables it is better to report the Analysis of Effects (across all models), but for the simpler analyses maybe model comparisons are enough? I know it is best to include both tables (possibly as an appendix), but in the main body text I cannot do that. Any suggestions?

Thank you in advance.

Mila
