However, every time I run it I get different results for the same data (minor differences, but still...). I wonder what the explanation for this could be, and whether anyone has suggestions for what I can do to get a stable BF.

Note that I am using the latest version, and that this doesn't happen in t-tests (where the BF is stable).

Thanks all, and a good night to all :)

cheers

I ran a few studies with functional near-infrared spectroscopy (fNIRS), and while I think it could be a useful method, it's still early days. The results in the literature are therefore messy; everyone reports results in different ways (reporting oxy signals, deoxy signals, both, or the difference between the two). There are also various preprocessing options, which are widely discussed with no consensus reached. Lastly, as the whole fNIRS analysis usually spits out beta values that are then just analysed in t-tests (for cognitive studies), there is the issue of correcting for multiple comparisons. Some people report uncorrected p values (as if corrected p values weren't bad enough...), but that seems to be dying down. The problem with correction in fNIRS seems to be that it is very conservative and wipes out all or most effects. It's all a bit of a mess!
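On the multiple-comparisons point: family-wise correction (Bonferroni style) across many fNIRS channels is indeed very conservative; a false discovery rate procedure such as Benjamini-Hochberg is a common middle ground. A rough sketch (the channel p-values here are made up, and the function is a hand-rolled illustration, not a JASP feature):

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR-adjusted p-values (less conservative than Bonferroni)."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    # step-up: p_(i) * m / i for the i-th smallest p-value
    ranked = p[order] * m / np.arange(1, m + 1)
    # enforce monotonicity from the largest p-value downwards
    adj_sorted = np.minimum.accumulate(ranked[::-1])[::-1]
    adjusted = np.empty(m)
    adjusted[order] = np.clip(adj_sorted, 0, 1)
    return adjusted

# hypothetical p-values from, say, 8 fNIRS channels
pvals = [0.001, 0.008, 0.012, 0.03, 0.04, 0.20, 0.50, 0.90]
adj = benjamini_hochberg(pvals)
```

With these invented numbers, BH keeps three channels below .05 where Bonferroni keeps only one, which is the sense in which it "eradicates" less.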

So I want to use BFs to evaluate all of these issues. I can run analyses comparing the different preprocessing approaches, analysing the various signals, and getting beta values for all of them. I do of course get p values too, simply because that's what we (sadly) are still expected to report, and in some regards they might be useful in guiding my BF analyses, keeping in mind they are uncorrected p values...?

I'm a JASP convert (and novice), and I think it would provide a useful tool here in evaluating the evidence from fNIRS data. I'm not sure my supervisor or reviewers will be okay with this, but I want to try to make the argument. Do you think that makes sense?

Thanks!

Is it possible to do a violin plot in JASP that has these features and layout (see link)? https://datavizcatalogue.com/methods/violin_plot.html

The only option that I can find in JASP is a violin plot plus a boxplot.

Thanks in advance

Franziska

I am using JAGS/WinBUGS to estimate two parameters of a model for each participant, let's call them

1) Is there a way that I can incorporate this information in my test?

2) Could I use the same information to perform a mixed ANOVA?

3) If JASP is not there yet, is there any R/Matlab package that might have already developed this?

Thank you!

Ondrej

I can't quite see how to manage font size, any input please?

I can't get my head around how "% error" in JASP relates to the Bayes Error Rate (BER). It seems that both measures relate to the irreducible error in the data. But as I understand it, "% error" is a proportional error, in the sense that, for example, a 5% error on a BF of 3 means that the BF lies (with some unstated certainty) somewhere between 2.85 and 3.15. Meanwhile, I understand the BER as the proportion of times that a model leads to wrong predictions, so that a 5% error would mean the model predicts wrongly in 5% of cases.
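For what it's worth, my understanding is that JASP's "% error" is the estimated proportional (Monte Carlo) error of the BF estimate itself, not an error rate of the model; that would also explain why ANOVA BFs jitter between runs while t-test BFs, computed analytically, do not. A toy sketch of how a simulation-based estimate carries a relative error that shrinks with more samples (the integrand here is arbitrary, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_estimate(n_samples):
    """Toy Monte Carlo estimate of E[e^Z], Z ~ N(0,1), with relative standard error."""
    draws = np.exp(rng.normal(0.0, 1.0, n_samples))
    est = draws.mean()
    rel_se = draws.std(ddof=1) / np.sqrt(n_samples) / est
    return est, rel_se

est_small, err_small = mc_estimate(1_000)
est_big, err_big = mc_estimate(1_000_000)
# A "5% error" on BF = 3 would mean the estimate is roughly 3 +/- 0.15;
# more samples shrink that band, which is why re-runs give slightly different BFs.
```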

Hope I made enough sense to start a discussion...

Best regards,

Philip

I was wondering how JASP calculates the trends (linear, quadratic, etc.) when asking for polynomial contrasts in an ANOVA (NHST). Is there any reason why it reports the results as a t test rather than an F test (like SPSS)? I guess ultimately the results should be identical, since t is the square root of the F value, but I do wonder whether there are some crucial differences. I assume they are still basically simple regression analyses? In this regard, it would also be handy to see not only t and p values but also the dfs for that analysis.
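On the t vs F question: for a single-df contrast the two are equivalent, since F(1, df) = t(df)^2 and the p-values coincide exactly. A quick check on simulated data (group sizes and means here are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
groups = [rng.normal(mu, 1.0, 20) for mu in (0.0, 0.4, 0.8)]  # 3 groups, n = 20 each
weights = np.array([-1.0, 0.0, 1.0])                          # linear trend contrast

means = np.array([g.mean() for g in groups])
sse = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_err = sum(len(g) for g in groups) - len(groups)            # N - k error df
mse = sse / df_err                                            # pooled error variance
estimate = weights @ means
se = np.sqrt(mse * (weights ** 2 / 20).sum())

t = estimate / se
F = t ** 2
p_t = 2 * stats.t.sf(abs(t), df_err)   # two-sided t test, df = N - k
p_F = stats.f.sf(F, 1, df_err)         # F test with 1 numerator df
```

The error df (N - k here) is indeed the missing piece one would want JASP to print alongside t and p.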

More generally, is there documentation somewhere on the tests that JASP uses and whether certain corrections have been applied (e.g., I am thinking of the Cousineau-Morey correction for within-subject confidence intervals)?

Thanks,

Michif

I am using JASP to analyze data from a mixed ANOVA with Bayes factors, but I have some problems with the interpretation.

It's a 3 (between) x 6 (within) ANOVA design. The main effect of the within-subjects variable (A) is significant, but the main effect of the between-subjects variable (B) and the interaction (AxB) are not. The Bayes factors (compared to the null model) are:

A: BF10 = 1.59e+23

B: BF10 = 0.423

A + B: BF10 = 7.56e+21

A + B + A*B: BF10 = 7.02e+21

Can I be sure there is strong evidence for the main-effects model and the interaction model?

The ANOVA p values for B and the interaction are non-significant, and I have no idea how to interpret the combined result...

Should BF10 be as large as possible when comparing to the null model, and as small as possible when comparing to the best model? I am quite confused about these details...
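If it helps, one way to read these numbers is that BF10 values against the same null model can be divided to compare any two models directly. Using the values above:

```python
bf_a = 1.59e23        # A vs null
bf_ab = 7.56e21       # A + B vs null
bf_full = 7.02e21     # A + B + A*B vs null

# Evidence for adding B to a model that already contains A:
bf_b_given_a = bf_ab / bf_a      # about 0.05 -> evidence *against* adding B
# Evidence for adding the interaction to A + B:
bf_int = bf_full / bf_ab         # about 0.93 -> essentially uninformative
```

So the huge BF10 values for A + B and A + B + A*B mostly reflect the strong A effect those models contain; the data actually speak against adding B and are indifferent about the interaction, which is consistent with the non-significant p values.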

Thank you very much.

It's wonderful that JASP creates APA-style output, but how can I use it in Word?

When inserting the output into Word, Word always changes the format to something weird.

Is there a way to copy the tables exactly the way they are displayed in JASP?

Thanks a lot

Patrick

So I have run a Bayesian repeated measures ANOVA followed up with post-hoc tests (actually, the comparisons are planned, but nevertheless). JASP gives me uncorrected Bayes factors, but I'd much prefer to report corrected Bayes factors (given the number of tests that are performed). I assume there is a way I can calculate this using the prior odds and the corrected posterior odds; I am just not sure how (by now it is evident that I'm merely a pragmatic Bayesian). I've added the output, so if someone here could show me how to do this for one of the post-hoc tests, I'd be happy!
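I believe (though do check the JASP documentation for your version) that the post-hoc table follows Westfall's approach: the prior odds are adjusted for the number of comparisons, and the multiplicity-corrected evidence is the posterior odds, i.e. the uncorrected BF multiplied by the adjusted prior odds. A sketch with made-up numbers:

```python
k = 3                    # hypothetical number of post-hoc comparisons
bf_uncorrected = 8.0     # hypothetical uncorrected BF10 from the post-hoc table

# Westfall-style adjustment: the prior probability that *all* nulls hold is 0.5,
# so each individual null gets prior probability 0.5 ** (1 / k).
p_h0 = 0.5 ** (1 / k)
prior_odds = (1 - p_h0) / p_h0                  # < 1, shrinks as k grows
posterior_odds = bf_uncorrected * prior_odds    # the corrected evidence to report
```

In JASP's output these should correspond to the "Prior Odds" and "Posterior Odds" columns sitting next to the uncorrected BF10,U.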

According to the getting started doc (https://jasp-stats.org/getting-started/), JASP distinguishes 4 variable types: nominal, nominal text, ordinal, continuous. Ordinal variables are "categorical variables with an inherent order". I was wondering whether ordinal variables could also refer to quantitative variables with a discrete scale (e.g., number of words recalled in a memory experiment). Thanks for your help!

I'd like to control the scale of the y-axis in a simple box plot.

For example, I'd like the scale to range from 1 to 10, instead of the 5 to 8 that JASP selects automatically for one of my data sets.

Can this be done in JASP 0.9.1?

Greetings,

DJ

I was wondering about a feature of the JASP graphs. I want to plot accuracy data (hence the maximum is 1), but when I add the 95% CI, the y-axis goes above 1 (see below). I have received comments that I need to correct these graphs, but I can't seem to find where this is coming from. Any suggestions?
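For what it's worth, the likely culprit is the symmetric normal-approximation interval, which can extend past [0, 1] for proportions near the boundary. A Wilson score interval (a standard alternative; I'm not claiming JASP offers it) stays inside the range. Sketch with invented numbers:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a proportion; never leaves [0, 1]."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

p_hat, n = 0.98, 50   # e.g. 98% accuracy from 50 trials
normal_hi = p_hat + 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)  # pokes above 1
lo, hi = wilson_ci(49, 50)                                     # stays below 1
```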

Thank you in advance.

Hope someone can give me a hand, because I am not sure what to do. I have done an eye-tracking study. Basically, I have each participant's score on a questionnaire (from 0 to 100) and want to see its effect on the number of fixations to three different areas of interest. I am trying to find the best way to analyse these data. Some people mention that I could do structural equation modelling, but I am totally naive about that kind of analysis. Any suggestions?

Thanks!

Not sure if this is the right place to ask for statistics help, but I love the JASP software and I guess there is no harm in trying :)

Anyway, I was wondering if anyone could give me some pointers about how to analyze my thesis experiment.

We had 12 participants; each performed a "random dot motion" cognitive test with caffeinated coffee and decaffeinated coffee (randomly counterbalanced) on two separate days.

This test gives response times and correct/incorrect data (each block consisted of about 200 choice trials).

There was also a speed-accuracy trade-off manipulation: one condition where they had to answer quickly, within 1 second (forced deadline, speed), and another where accuracy was the point and there was no deadline. (I am also struggling with how our experiment could actually show that caffeine has an effect on the speed-accuracy trade-off, but I think I have some ideas about that...)

Now I am wondering which analysis to perform.

In similar studies some have done paired t-tests, and other studies seem to be doing repeated measures ANOVA, and looking at interactions.

The main problem is that there was a lot of learning, so the difference from one day to the other is mainly learning.

From what I can understand, I have two options: one is the repeated measures ANOVA, looking for a treatment x time interaction; the other is to group all caffeine and all non-caffeine trials together and do a paired t-test. Since the counterbalancing was done randomly, is this possible?

I am also not sure which assumptions I need to check for the tests to be valid; for the ANOVA I guess it is normality of the residuals and sphericity.
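The collapsed-across-days paired comparison from the second option can be sketched like this (all numbers invented; one mean RT per participant per condition):

```python
from scipy import stats

# Hypothetical per-participant mean RTs (ms), 12 participants
decaf = [500, 520, 480, 510, 530, 495, 505, 515, 490, 525, 500, 510]
# Caffeine condition: invented 30 ms speed-up plus some per-participant noise
caffeine = [d - 30 + delta for d, delta in zip(decaf, [5, -5] * 6)]

res = stats.ttest_rel(caffeine, decaf)  # paired t-test across participants
# res.statistic, res.pvalue; the key assumption is normality of the *differences*
```

Because the counterbalancing was random, collapsing across days is legitimate on average, but it throws away the day (learning) factor; the repeated measures ANOVA with a treatment x day interaction keeps it, which seems preferable given how strong the learning effect was.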

Sorry if this is a stupid question, any help would be greatly appreciated.

If any more information is needed please ask and I will try to elaborate further.

I am working on a vignette study and trying to write the pre-registration, but I am getting stuck; I sure would appreciate any help you may have.

Here is a made-up example that has the same properties as my study:

The established theory is that red hammers hurt the most when whacked into your thumb: red hammers hurt more than yellow, blue, and green hammers. I contend that hammers made out of metal hurt a lot while hammers made out of nerf hardly hurt at all, and that this is entirely independent of color.

Hypothesis: Across different colored hammers there is no difference in the strength of the effect of a metal hammer on individuals’ experience of pain. In other words, there is no interaction of metal hammers (vs. nerf) with hammer color in predicting individuals’ experience of pain.

To test this I will have participants read a vignette that says they got hit by either a metal or nerf hammer and that the color of the hammer was red/green/blue/yellow. I will also have a condition where the participant doesn't get told what color the hammer is, just whether it is metal or nerf. So I have a 2 (Material: Metal; Nerf) by 5 (Color: Red; Green; Blue; Yellow; No Color Listed) design, with the DV being how much pain the participant would imagine experiencing.

I thought that I could do a Bayesian ANOVA where I set up material as a two-level factor and color as a five-level factor. I made some test data and tried that, but it only tells me whether there is a difference. I guess I could do follow-up t-tests with some adjustment for multiple tests, but a friend said maybe I should use custom contrasts.
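A custom contrast for exactly that question can be built by hand on the cell means. Here is a sketch with invented cell means (rows = material, columns = color), testing whether the metal-vs-nerf gap differs for yellow compared with the other colors:

```python
import numpy as np

# Hypothetical cell means: rows = (metal, nerf),
# columns = (red, green, blue, yellow, no color)
means = np.array([
    [7.0, 7.0, 7.0, 8.0, 7.0],   # metal
    [2.0, 2.0, 2.0, 2.5, 2.0],   # nerf
])

material_w = np.array([1.0, -1.0])                     # metal minus nerf
color_w = np.array([-0.25, -0.25, -0.25, 1.0, -0.25])  # yellow vs the rest

# Interaction contrast: nonzero only if the material gap depends on color
contrast = material_w @ means @ color_w
```

Your hypothesis of no interaction corresponds to contrasts like this being zero; in a Bayesian setting the natural move is to compare the model with the Material x Color interaction against the main-effects-only model, where a BF below 1 for adding the interaction supports your claim.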

When I created the test data I made the yellow hammer slightly more painful, to make sure that I could find this difference in my test if it is there. I can see it in the above descriptive plot, but I don't quite know how to get at it.

Any suggestions would be GREATLY appreciated! And sorry for the silly example, clearly it has been a long week.

This is my first time analyzing data with JASP. Thank you very much for this intuitive tool! I have a question concerning Bayesian linear regression. In an exploratory fashion, I computed 3 hierarchical linear regressions: as predictor variables, I included all four subscales of my construct of interest (S1-S4), and I have 3 different DVs (DV1, DV2, DV3).

For DV1, I suppose that the models S2+S4, S3+S4, or S2+S3+S4 work best, as they show the highest BF10 (still, there is no big difference between the three models). But concerning DV2, I am a little bit lost. First, there were no supported correlations between DV2 and the four subscales in the preliminary analyses. Still, I computed the regression, which revealed the following output. Now I am wondering how to interpret it. Can somebody help me out?

Thank you very much!

Alexa

I’m looking for a bit of advice on reporting ES delta and its 95% credible interval in the context of a one-sided Bayesian t-test computed in JASP.

The figure below is the posterior distribution of ES delta under a default one-sided prior distribution. A one-sided prior was used because the relevant hypothesis was theoretically derived and directional. The BF is non-diagnostic. This is fine (as they say, data don't care what your hypotheses are), and not the issue I'm concerned about. The median of the posterior distribution is .17, and its 95% credible interval grazes zero (but can't extend below it).

Should I report the above ES and the corresponding credible interval?

Or should I re-run the analyses with a two-sided prior (see below), ignore the BF, but report ES delta = .10, 95% credible interval [-.27, .47]? If so, how should I explain this in the manuscript?

The latter (report and interpret the one-sided BF, followed by the two-sided ES and credible interval) appears to be the approach that Wagenmakers et al. (2016) have taken here, in a context pretty similar to the one I am in currently: https://frontiersin.org/articles/10.3389/fpsyg.2015.00494/full. However, this seems inconsistent to me, for reasons I can't articulate. Is anyone able to explain to me why this is the sensible thing to do (or not)?

Thanks in advance for any and all advice received. I’m quite new to this Bayesian business, and still very much a novice. Peter Allen

I've recently used BayesFactor with the default prior scale r. I have been advised to adjust the Cauchy width based on some pilot data rather than relying on the default value.

Can anyone advise how I'd go about that? I already have the null hypothesis t-tests and Cohen's d calculated if that is useful.
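Since you already have t and n from the pilot, one transparent route is to compute the JZS Bayes factor yourself with whatever Cauchy scale r you settle on (a common heuristic is to set the scale near the effect size you expect, e.g. the pilot Cohen's d). A sketch of the Rouder et al. (2009) one-sample/paired BF, assuming I have the integral right:

```python
import numpy as np
from scipy import integrate

def jzs_bf10(t, n, r=1 / np.sqrt(2)):
    """JZS Bayes factor for a one-sample/paired t-test with Cauchy prior scale r."""
    nu = n - 1

    def integrand(g):
        # likelihood term marginalised over g ~ InverseGamma(1/2, r^2/2)
        return ((1 + n * g) ** -0.5
                * (1 + t ** 2 / ((1 + n * g) * nu)) ** (-(nu + 1) / 2)
                * r / np.sqrt(2 * np.pi)
                * g ** -1.5 * np.exp(-r ** 2 / (2 * g)))

    marg_h1, _ = integrate.quad(integrand, 0, np.inf)
    marg_h0 = (1 + t ** 2 / nu) ** (-(nu + 1) / 2)
    return marg_h1 / marg_h0

bf_default = jzs_bf10(2.0, 30)        # default scale, r = 0.707
bf_wide = jzs_bf10(2.0, 30, r=1.4)    # wider prior -> lower BF10 for a modest t
```

In BayesFactor itself this is just the `rscale` argument to `ttestBF`. One caution: if the pilot observations end up in the final sample, using them to set the prior is double-dipping.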

Thanks for your time and help,

Boo.

I want to compare 2 correlations (r12 and r13) to see whether or not they differ significantly from each other. I would like to do this in both a frequentist and a Bayesian way.

Is this (or one of the two) possible in JASP? I can't seem to find it.
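For the frequentist half: when the two correlations share a variable (r12 and r13 both involve variable 1), the standard test is Williams' t (see Steiger, 1980), which also needs r23. I don't think JASP exposes it, but it is only a few lines to compute (the cocor R package implements the same family of tests). Sketch with made-up correlations:

```python
import math
from scipy import stats

def williams_t(r12, r13, r23, n):
    """Williams' t for two dependent correlations sharing variable 1 (df = n - 3)."""
    det = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23  # |R| of the 3x3 matrix
    rbar = (r12 + r13) / 2
    t = (r12 - r13) * math.sqrt(
        (n - 1) * (1 + r23)
        / (2 * ((n - 1) / (n - 3)) * det + rbar**2 * (1 - r23) ** 3)
    )
    p = 2 * stats.t.sf(abs(t), n - 3)
    return t, p

t, p = williams_t(0.50, 0.30, 0.40, 100)   # hypothetical values
```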

Thanks!

I am running a serial position experiment in which participants remember varying numbers (1-4) of items, and we probe their memory for items in different positions of the study set. This means the design isn't fully balanced: for example, you can probe positions 1-4 when 4 items were presented, but only positions 1-2 when two items were presented. So our design looks like this:

Set size  Position probed
2         1
2         2
3         1
3         2
3         3
4         1
4         2
4         3
4         4

When I create two factors in the repeated measures Bayesian ANOVA window, it tries to set up a fully crossed design like this:

Set size  Position probed
2         1
2         2
2         3
2         4
3         1
3         2
3         3
3         4
4         1
4         2
4         3
4         4

This leaves me with empty cells in the design which I can't possibly fill. Is it possible to get around this?

My sample size is quite big given the standards in my field (N=75), and I cannot collect extra data. Bayesian t-tests for my contrasts of interest show BFs between 1 and 3, suggesting inconclusive evidence. Some of these go together with p values around 0.05 in classical t-tests (in the unexpected direction).

Can I report/interpret such results in a meaningful way? E.g.,

I am asking for practical reasons. If, given these outcomes, I would absolutely have to collect more data to draw any meaningful conclusions, then I cannot report the experiment, because I cannot collect more data. That would be a pity, because the conclusion that there is not enough evidence to show an effect would be in line with two other experiments that showed support for the null hypothesis and are to go into the same publication.

We got a (friendly) reviewer question. Could someone help me phrase a correct answer, or perhaps point me to a good reference?

"What I find difficult to understand is how you can have a medium effect size and a very strong Bayes Factor such as seen on page .., d = .60 & BF = 1,720,000."
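A possible answer: the BF depends on both the effect size and the sample size, so at a fixed d the evidence grows without bound as N grows. The BIC approximation to the Bayes factor (Wagenmakers, 2007) makes this concrete for a one-sample design (the sample sizes here are illustrative, not your study's):

```python
import math

def bic_bf10(d, n):
    """BIC approximation to BF10 for a one-sample t-test with effect size d."""
    t_sq = (d * math.sqrt(n)) ** 2   # t^2 implied by d at sample size n
    nu = n - 1
    # BF10 ~ exp((BIC0 - BIC1) / 2) = (1 + t^2/nu)^(n/2) / sqrt(n)
    return (1 + t_sq / nu) ** (n / 2) / math.sqrt(n)

bfs = [bic_bf10(0.60, n) for n in (30, 100, 300)]
# same d = .60, yet the BF grows by orders of magnitude with n
```

So a medium d together with a BF in the millions simply implies a reasonably large N; the two numbers answer different questions (how big the effect is vs how strongly the data discriminate H1 from H0).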

It is such a basic thing to do in experimental sciences that I feel this must be me missing an obvious big button somewhere...

Any help?

Martin
