I am working on a vignette study and am getting stuck writing the pre-registration. I would greatly appreciate any help.

Here is a made-up example that has the same properties as my study:

The established theory is that red hammers hurt the most when whacked into your thumb: red hammers supposedly hurt more than yellow, blue, and green hammers. I contend that hammers made of metal hurt a lot while hammers made of Nerf hardly hurt at all, and that this is entirely independent of color.

Hypothesis: Across different colored hammers there is no difference in the strength of the effect of a metal hammer on individuals’ experience of pain. In other words, there is no interaction of metal hammers (vs. nerf) with hammer color in predicting individuals’ experience of pain.

To test this I will have participants read a vignette saying they were hit by either a metal or a Nerf hammer, with the hammer's color given as red/green/blue/yellow. I will also include a condition where the participant is not told the hammer's color, only whether it is metal or Nerf. So I have a 2 (Material: Metal; Nerf) by 5 (Color: Red; Green; Blue; Yellow; No Color Listed) design, with the DV being how much pain the participant imagines experiencing.

I thought I could run a Bayesian ANOVA with Material as a two-level factor and Color as a five-level factor. I generated some test data and tried this, but it only tells me whether there is a difference somewhere. I suppose I could run follow-up t-tests with some kind of correction for multiple comparisons, but a friend suggested I should use custom contrasts instead.

When I created the test data I made the yellow hammer slightly more painful, to make sure my analysis could detect this difference if it is there. I can see it in the descriptive plot, but I don't quite know how to test for it.
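If it helps, custom contrast weights for this kind of question can be built by hand from the cell means. Below is a minimal numpy sketch (the cell means are invented, and the no-color condition is set aside for this particular contrast) showing that an interaction contrast, i.e. the weights testing whether the metal-vs-Nerf difference is larger for yellow than for the other colors, is just the outer product of a material contrast and a color contrast:

```python
import numpy as np

# Hypothetical 2 (Material) x 4 (Color) cell means for imagined pain.
# Rows: metal, nerf; columns: red, green, blue, yellow.
means = np.array([
    [7.0, 7.1, 6.9, 7.6],   # metal
    [2.0, 2.1, 1.9, 2.2],   # nerf
])

# Main-effect contrast for material: metal minus nerf.
material = np.array([1.0, -1.0])

# Color contrast: yellow versus the average of the other three colors.
yellow_vs_rest = np.array([-1/3, -1/3, -1/3, 1.0])

# The interaction contrast is the outer product of the two:
# it tests whether the metal-vs-nerf difference is larger for yellow.
weights = np.outer(material, yellow_vs_rest)

# Interaction contrast weights sum to zero within every row and column.
assert np.allclose(weights.sum(axis=0), 0)
assert np.allclose(weights.sum(axis=1), 0)

# Contrast estimate: (metal - nerf) for yellow minus (metal - nerf)
# averaged over the other colors.
estimate = (weights * means).sum()
print(round(estimate, 3))
```

With these made-up means, the metal-minus-Nerf difference is 5.0 for red/green/blue and 5.4 for yellow, so the contrast estimate is 0.4.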

Any suggestions would be GREATLY appreciated! And sorry for the silly example, clearly it has been a long week.

Not sure if this is the right place to ask for statistics help, but I love the JASP software and I guess there is no harm in trying :)

Anyway, I was wondering if anyone could give me some pointers about how to analyze my thesis experiment.

We had 12 participants; each performed a "random dot motion" cognitive test with caffeinated coffee and decaffeinated coffee (randomly counterbalanced) on two separate days.

The test yields response times and correct/incorrect responses (each block comprised about 200 choice trials).

There was also a speed-accuracy trade-off manipulation: in one condition participants had to answer within 1 second (forced deadline, speed emphasis), and in the other accuracy was emphasized and there was no deadline. (I am also struggling with how our experiment could actually show that caffeine has an effect on the speed-accuracy trade-off, but I think I have some ideas about that...)

Now I am wondering which analysis to perform.

In similar studies, some have done paired t-tests, while other studies seem to run repeated measures ANOVAs and look at interactions.

The main problem is that there was a lot of learning, so the differences from one day to the other are mainly driven by learning.

From what I understand, I have two options: one is a repeated measures ANOVA, testing for a treatment*time interaction; the other is to pool all caffeine trials and all non-caffeine trials together and run a paired t-test. Since the counterbalancing was done randomly, is this possible?

Also, I am not sure which assumptions I need to check for the tests to be valid; for the ANOVA I guess it is normality of the residuals and sphericity.
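For what it's worth, the pooled paired comparison can be sketched as follows (all numbers are invented; `rt_caffeine` and `rt_decaf` stand for hypothetical per-participant mean response times). Because day order was counterbalanced, the learning effect is balanced across conditions on average, and the paired t-test's normality assumption applies to the difference scores, not the raw conditions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-participant mean response times (seconds), 12 participants:
# each value is that participant's mean over all trials in a condition.
rt_caffeine = rng.normal(0.55, 0.05, 12)
rt_decaf = rt_caffeine + rng.normal(0.03, 0.02, 12)  # decaf a bit slower

# Pooling all caffeine vs. all decaf trials per participant reduces the
# design to one paired comparison; random counterbalancing of day order
# means the learning effect is (on average) balanced across conditions.
t, p = stats.ttest_rel(rt_caffeine, rt_decaf)

# The paired t-test assumes the *differences* are normally distributed,
# so check normality on the difference scores, not on each condition.
diffs = rt_caffeine - rt_decaf
shapiro_stat, shapiro_p = stats.shapiro(diffs)

print(f"paired t = {t:.2f}, p = {p:.4f}; Shapiro p on diffs = {shapiro_p:.3f}")
```

The repeated measures ANOVA route would instead keep day as a factor and test the treatment-by-day interaction; the sketch above is only the pooled option.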

Sorry if this is a stupid question; any help would be greatly appreciated.

If any more information is needed please ask and I will try to elaborate further.

This is my first time analyzing data with JASP. Thank you very much for this intuitive tool! I have a question concerning Bayesian linear regression. In an exploratory fashion, I computed 3 hierarchical linear regressions: as predictor variables I included all four subscales of my construct of interest (S1-S4), and I have 3 different DVs (DV1, DV2, DV3).

For DV1, I suppose that the models S2+S4, S3+S4, or S2+S3+S4 fit best, as they show the highest BF10 (still, there is no big difference between the three). But concerning DV2 I am a little lost. First, preliminary analyses showed no supported correlations between DV2 and the four subscales. Still, I computed the regression, which produced the following output. Now I am wondering how to interpret it. Can somebody help me out?

Thank you very much!

Alexa

I’m looking for a bit of advice on reporting ES delta and its 95% credible interval in the context of a one-sided Bayesian t-test computed in JASP.

The figure below shows the posterior distribution of ES delta under a default one-sided prior distribution. A one-sided prior was used because the relevant hypothesis was theoretically derived and directional. The BF is non-diagnostic. This is fine (as they say, data don't care what your hypotheses are) and not the issue I'm concerned about. The median of the posterior distribution is .17, and its 95% credible interval grazes zero (but can't extend below it).

Should I report the above ES and the corresponding credible interval?

Or should I re-run the analyses with a two-sided prior (see below), ignore the BF, but report ES delta = .10, 95% Credible Interval [-.27, .47]? If so, how should I explain this in the manuscript?

The latter (report and interpret the one-sided BF, followed by the two-sided ES and credible interval) appears to be the approach that Wagenmakers et al. (2016) took here, in a context quite similar to mine: https://frontiersin.org/articles/10.3389/fpsyg.2015.00494/full. However, this seems inconsistent to me for reasons I can't articulate. Can anyone explain why this is (or isn't) the sensible thing to do?
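One way to see the relationship between the two options: when the one-sided prior is simply the two-sided prior restricted to delta > 0, the one-sided posterior is the two-sided posterior truncated at zero and renormalized. A small numpy illustration (the "posterior" here is an invented normal distribution chosen only to roughly echo the summary numbers in the post, not any actual posterior):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented draws standing in for a *two-sided* posterior on effect size
# delta (in practice these would come from your analysis, not a normal).
two_sided = rng.normal(0.10, 0.19, 200_000)

# The one-sided (delta > 0) posterior is the same distribution truncated
# at zero and renormalized -- equivalently, keep only the positive draws.
one_sided = two_sided[two_sided > 0]

ci_two = np.percentile(two_sided, [2.5, 97.5])
ci_one = np.percentile(one_sided, [2.5, 97.5])

# The one-sided interval cannot extend below zero by construction, and
# its median is pulled upward relative to the two-sided median.
print("two-sided 95% CI:", np.round(ci_two, 2))
print("one-sided 95% CI:", np.round(ci_one, 2))
print("medians:", round(np.median(two_sided), 2), round(np.median(one_sided), 2))
```

This is why the one-sided interval "grazes" zero: it is the same posterior mass, just with the negative half cut off and the rest rescaled, so reporting either interval is a presentational choice rather than a different analysis.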

Thanks in advance for any and all advice received. I’m quite new to this Bayesian business, and still very much a novice. Peter Allen

I've recently used BayesFactor with the default prior scale r. I have been advised to adjust the Cauchy width based on some pilot data rather than relying on the default value.

Can anyone advise how I'd go about that? I already have the null hypothesis t-tests and Cohen's d calculated if that is useful.
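If you want to recompute the Bayes factor under different Cauchy widths yourself (for instance, setting r near the effect size your pilot Cohen's d suggests), the JZS Bayes factor for a one-sample or paired t-test can be evaluated directly from t and n by numerical integration, following Rouder et al. (2009). A sketch with the scale r as a free parameter (the example t and n are invented):

```python
import numpy as np
from scipy import integrate

def jzs_bf10(t, n, r=np.sqrt(2) / 2):
    """JZS Bayes factor (BF10) for a one-sample t-test with Cauchy scale r.

    Follows Rouder et al. (2009): a Cauchy(0, r) prior on effect size,
    computed by integrating over g with an inverse-gamma(1/2, r^2/2) prior.
    """
    nu = n - 1

    # Marginal likelihood under H1, integrated over g.
    def integrand(g):
        return ((1 + n * g) ** -0.5
                * (1 + t**2 / ((1 + n * g) * nu)) ** (-(nu + 1) / 2)
                * r / np.sqrt(2 * np.pi) * g ** -1.5
                * np.exp(-r**2 / (2 * g)))

    m1, _ = integrate.quad(integrand, 0, np.inf)
    m0 = (1 + t**2 / nu) ** (-(nu + 1) / 2)  # likelihood under H0
    return m1 / m0

# A wider prior (larger r) expects bigger effects, so the same modest t
# yields less evidence for H1 under a wide prior than under a narrow one.
print(jzs_bf10(2.5, 30, r=0.5))
print(jzs_bf10(2.5, 30, r=0.707))
print(jzs_bf10(2.5, 30, r=1.0))
```

`ttestBF` in the BayesFactor R package accepts the width directly via its `rscale` argument, so once you have picked r from the pilot data you can pass it there rather than computing the integral yourself.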

Thanks for your time and help,

Boo.

I want to compare 2 correlations (r12 and r13) and test whether they are significantly different from each other. I would like to do this in both a frequentist and a Bayesian way.

Is this (or one of the two) possible in JASP? I can't seem to find it.
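On the frequentist side, if r12 and r13 are dependent correlations sharing variable 1 (which also requires knowing r23), Williams' t, as recommended by Steiger (1980) and implemented in R packages such as cocor, is easy to compute by hand; I don't believe JASP offers it directly. A sketch with invented values:

```python
import numpy as np
from scipy import stats

def williams_t(r12, r13, r23, n):
    """Williams' t for comparing two dependent correlations r12 vs. r13
    that share variable 1, given the correlation r23 between variables
    2 and 3, and sample size n. Returns (t, two-sided p); df = n - 3.
    """
    detR = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23
    rbar = (r12 + r13) / 2
    t = (r12 - r13) * np.sqrt(
        (n - 1) * (1 + r23)
        / (2 * ((n - 1) / (n - 3)) * detR + rbar**2 * (1 - r23) ** 3)
    )
    p = 2 * stats.t.sf(abs(t), n - 3)
    return t, p

# Invented example: does variable 1 correlate more strongly with
# variable 2 (r12 = .50) than with variable 3 (r13 = .30), given
# r23 = .20 and n = 100?
t, p = williams_t(0.50, 0.30, 0.20, 100)
print(f"t({100 - 3}) = {t:.2f}, p = {p:.3f}")
```

If the two correlations instead come from independent samples, the standard Fisher z comparison applies instead; and for the Bayesian version you would need a dedicated tool, as I am not aware of one in JASP.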

Thanks!

I am running a serial position experiment where participants remember varying numbers (1-4) of items, and we probe their memory for items in different positions of the study set. This means the design isn't fully balanced: for example, you can probe positions 1-4 when 4 items were presented, but only positions 1-2 when two items were presented. So our design looks like this:

Set size   Position probed
2          1
2          2
3          1
3          2
3          3
4          1
4          2
4          3
4          4

When I create two factors in the repeated measures Bayesian ANOVA window, it tries to set up a fully crossed design like this:

Set size   Position probed
2          1
2          2
2          3
2          4
3          1
3          2
3          3
3          4
4          1
4          2
4          3
4          4

This leaves me with empty cells in the design which I can't possibly fill. Is it possible to get around this?

My sample size is quite big by the standards of my field (N=75), and I cannot collect extra data. Bayesian t-tests for my contrasts of interest show BFs between 1 and 3, suggesting inconclusive evidence. Some of these go together with p-values around 0.05 in classical t-tests (in the unexpected direction).

Can I report/interpret such results in a meaningful way? E.g.,

I am asking for practical reasons: if, given these outcomes, I would absolutely have to collect more data to draw any meaningful conclusions, then I cannot report the experiment, because I cannot collect more data. That would be a pity, because the conclusion that there is not enough evidence to show an effect would be in line with two other experiments that showed support for the null hypothesis and are to go into the same publication.

We got a (friendly) reviewer question - could someone help me phrase a correct answer or perhaps point me to a good reference?

"What I find difficult to understand is how you can have a medium effect size and a very strong Bayes Factor such as seen on page .., d = .60 & BF = 1,720,000."

So I have run a Bayesian repeated measures ANOVA followed up with post-hoc tests (actually, the comparisons are planned, but nevertheless). JASP gives me uncorrected Bayes factors, but I'd much prefer to report corrected values, given the number of tests performed. I assume there is a way to calculate this using the prior odds and the corrected posterior odds; I am just not sure how (by now it is evident that I'm merely a pragmatic Bayesian). I've added the output, so if someone could show me how to do this for one of the post-hoc tests, I'd be happy!
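To my understanding, JASP's post-hoc correction (following Westfall et al., 1997) adjusts the prior odds rather than the Bayes factor itself: the per-comparison prior probability of H1 is shrunk so that the prior probability that all m nulls hold stays at 0.5, and the corrected posterior odds are the uncorrected BF10 times those adjusted prior odds. A sketch; treat the exact convention as an assumption and check it against JASP's own "posterior odds" column:

```python
def corrected_posterior_odds(bf10, m):
    """Multiply an uncorrected BF10 by multiplicity-adjusted prior odds.

    Westfall-style correction (as I understand JASP's post-hoc tests to
    use): the per-comparison prior probability of H1 is 1 - 0.5**(1/m),
    so the prior probability that *all* m nulls are true remains 0.5.
    """
    p_h1 = 1 - 0.5 ** (1 / m)
    prior_odds = p_h1 / (1 - p_h1)
    return bf10 * prior_odds

# Example: an uncorrected BF10 of 10 across m = 6 post-hoc comparisons.
print(corrected_posterior_odds(10, 6))
```

With m = 1 the prior odds are 1 and nothing changes; as m grows, the per-comparison prior odds shrink, so the corrected posterior odds fall below the raw BF10.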

It is such a basic thing to do in experimental sciences that I feel this must be me missing an obvious big button somewhere...

Any help?

Martin

For one of my studies we were planning to run a simple mediation model, for which we would typically use the PROCESS macro in SPSS.

But I was wondering whether there is a Bayesian alternative to that, and what it would be? I do not think there is a way to do that in JASP yet. Any suggestions? Thank you in advance.

Not sure whether this is the proper place to report this, but here goes. Maybe someone of you has a solution to fix this.

Best,

Kevin

I am trying to determine whether my dataset is appropriate for regression. However, the guidelines on the number of observations needed typically vary (for a review see, e.g., Austin & Steyerberg, 2015). I can't seem to find any references on this topic with regard to the Bayesian regression implemented in JASP: does it depend on the prior distributions used or on the sampling method?

I will fit models at the individual level. I have 81 data points with 4 variables. Depending on how I cross these variables I could end up with as many as 15 model terms, but right now I am looking at using 6 model terms.

Thanks for the help,

/Philip

I recently submitted a paper including BFs calculated with JASP.

In the methods section I referred to the "default" prior settings. However, a reviewer asked for a concrete description of the priors.

I wonder whether the following is a correct way of putting it:

"All data analyses were conducted with the statistics software JASP (version 0.8.6.0; JASP Team, 2018) and its default settings for Bayesian analyses: i.e., Bayesian repeated measures ANOVAs were computed with a multivariate Cauchy prior with a fixed effects scale factor of r = 0.5 and a random effects scale factor of r = 1. Bayesian paired t-tests were computed with a Cauchy prior with a width of r = 0.707. Priors were centered on zero."

Thanks a lot for the help.

Stefan

I select Binomial Test, drag gender into the right-hand box and get the error message:

! Error in if(any(df[[name]]) == '.')) next:

missing value where TRUE/FALSE needed

What have I done wrong? Thanks for any help.

I have moved it to the trash and done the installation once again; now JASP opens the file but closes when I try to take any action.

XQuartz was preinstalled before the JASP installation.

I have a dataset that shows a quite puzzling discrepancy between a frequentist rm ANOVA and its Bayesian version. In particular, one 2 x 2 interaction effect has F(1,17) = 21.5, p < .001, partial eta^2 = .56, generalized eta^2 = 0.0015, while the BF in favour of the alternative is BF = 0.31, calculated as the "Baws" factor, a.k.a. the BF for effects "across matched models" in JASP (see here). The complete design is 2 x 2 x 2 x 2.

My suspicion is that the interaction effect is very small (the generalized eta^2 is indeed tiny) but consistent, so the frequentist ANOVA suggests there is an effect while the BF says there is none, because the effect is so small that the default priors don't allow for its detection with this sample size (18).

What would you conclude in such a case? I guess there is no clear answer, but any expert opinion is greatly appreciated.

Thanks,

Chris

You can get the data as a tab-delimited text file in wide format here.

Can I open .sav files that are on the OSF server in JASP? It seems to me that this used to be possible; however, now when I link to OSF I do not see .sav files.

Can you help?

I am quite new to Bayesian analysis and want to make sure that I am interpreting and reporting my results correctly. Let me briefly explain my study. I have two different tests with normative scores.

In my analysis, I compared the normative scores of each test with my observers' scores. For that I conducted two one-sample t-tests. The results revealed a BF01 of 0.00006244185 for the first test and a BF01 of 8.98 for the second test.

My interpretation: for test A, the data are 16014.9 times (1/0.00006244185) more likely under the alternative (a difference between my observers' scores and the normative scores) than under the null model. For test B, the absence of a difference between my observers' scores and the normative scores is 8.98 times more favoured than its presence.

A couple of questions about my interpretation. First, is it right? Second, I find it confusing to report BF01 for test A. Would it be possible to report BF10 for test A and BF01 for test B, or is it better to keep consistency? And how do you report a very big (or small) BF? I don't find it very elegant to report a large number such as BF10 = 16014.9 (or BF01 = 0.00006244185). Could this be reported as BF01 < .001, or in a similar way?
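On the conversion itself, BF01 and BF10 are simply reciprocals, so switching direction per test is purely a matter of readability:

```python
# BF01 and BF10 are reciprocals: whichever direction you report,
# readers can recover the other by taking 1/BF.
bf01_test_a = 0.00006244185
bf10_test_a = 1 / bf01_test_a
print(round(bf10_test_a, 1))  # -> 16014.9
```

A common convention is to report whichever direction is easier to read for each test (stating clearly which is which), and to summarize extreme values as, e.g., BF10 > 10,000 rather than printing every digit.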

Thanks in advance for your help!

We have two studies, one with approximately 200 participants and one with 400 participants. When running a Bayesian contingency table analysis on the data, we find that the BF for the first study is larger than for the second (BF ~ 4.2 reduces to BF ~ 3.2). As the two designs are very similar, we combined the data sets, and the Bayes factor reduces even further (BF ~ 1.6).

Is this simply because, as we increase the sample size, we are getting a better estimate of the true distribution, and our BF is changing to reflect that? Or is there something else that might be causing this?

Thanks for your help.

-J

the column names as the variable names?

I made a mistake in the original syntax, so some cases have the wrong value.

I corrected this mistake in SPSS, and saved the .csv file in the original location.

Now I want JASP to keep the exact same analyses as I did before, but now do them on this new datafile.

If I just open the analyses, the "old" data is still there.

If I double-click the data, normally an Excel file opens, but now I get an Application Error (the instruction at blablabla referenced memory at blablabla; the memory could not be read; click OK to terminate the program).

In another file with another set of analyses, I get the message:

JASP was started without an associated data file (csv, sav, or ods file), blablabla.

If I then click "Find Data File", I can open the new one, but nothing happens.

In yet another set of analyses, suddenly all of my tables are empty (the headers and notes remain, but all of the actual analyses disappear).

I ran into the same problem earlier when I had approx. 500 new participants (I have been working on this paper for a while, it is crowdsourced data so new participants keep signing up). Then I had to redo all of the analyses.

How can I redo analyses on an updated data file?

Can anyone help me out here, as I am a novice.

Can anyone help?

I am using JASP to estimate Bayes factors. I applied the Bayesian repeated measures ANOVA; however, I have a problem getting a stable Bayes factor.

The results vary when I change the names of the repeated measures factors or of the factor levels (all other settings stay the same). This issue doesn't happen when I run the frequentist repeated measures ANOVA, indicating that the data are valid.

By the way, in the Advanced Options for the Bayesian repeated measures ANOVA, do I need to specify the number of samples myself, or should I just choose the Auto option?

Thanks in advance.
