Since the Bayesian ANOVA assumes normally distributed residuals, I would assume it should only be performed on interval-scale data, i.e., that it shares the assumptions of its frequentist counterpart (this is often stated just that way; I just never found the requirement written down that the data should also be interval scale).

- Can I also rely on the Bayesian ANOVA results if my data is ordinal?
- I have my DV labelled as ordinal in JASP - am I right that this is not considered when I perform a Bayesian ANOVA? Or does this change the procedure to a rank-based analysis?

Thank you guys for always answering all these stupid questions and for bringing JASP to the world.

Cheers,

C

---

I have a question about the analyses that I am currently trying to run in JASP. I will limit myself to one example, which should then help me cope with the rest.

So, I have a mixed-factors design with one repeated measures factor (scenario, three levels) and one between-subjects factor (Studiennr, 6 levels). I am mostly interested in an effect of Studiennr, i.e., I expect data to show differences across the different studies, and I expect interaction effects of Studiennr and scenario. The results in JASP (default priors) give me the following:

Sorry for blurring the post-hoc tests.

I have been trying to interpret the results and find the correct way to report them, following https://www.cairn.info/revue-l-annee-psychologique-2020-1-page-73.htm. (I understood that I can use BFincl in the Analysis of Effects to determine whether there are effects of the factors scenario, Studiennr, and their interaction: if BFincl > 3, I would assume an effect. Please correct me if I'm wrong.)

If I understand things correctly, the Model Comparison table indicates that the model including only scenario predicts best. Also, considering the Analysis of Effects table, it seems that while scenario most definitely has an influence (BFincl = infinity), the posterior probability of the data decreases both for including the main effect of Studiennr and for including the interaction effect.

However, I am not sure whether I am interpreting the data correctly, for the following reasons:

a) a frequentist analysis reveals significant main effects of scenario and Studiennr and a significant interaction

b) the descriptives look very much like there is an effect of Studiennr

c) the post hoc tests indicate differences across some of the levels of Studiennr (e.g., 7th line, 14th line).

I would greatly appreciate it if someone could help me understand the issues at hand. While there is lots of material on t-tests, I still find it hard to cope with the Bayesian ANOVA...

Best regards,

C

---

Hello.

I would like to specify contrasts to perform post hoc tests on mixed models. Can anyone help?

---

Is it feasible to include a grouping variable (e.g., gender) in a network if I don't have the power to estimate two separate networks? I know we can estimate it using mgm, but that method doesn't appear to produce anything too informative, and I wondered whether it is statistically feasible to include it as a variable using EBICglasso, which seems to produce a (visually) more useful network.

Thank you!

---

I ran multiple regression analyses in JASP. I wonder which type of bootstrap is implemented (is it the "empirical bootstrap"?).

Then, is it possible to bootstrap part correlations?
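For reference, the case-resampling ("empirical") bootstrap I have in mind can be sketched like this; the helper names, the simple slope statistic, and the data are my own illustration, not necessarily what JASP implements:

```python
import random

# Hypothetical sketch of a case-resampling ("empirical") bootstrap:
# resample whole (x, y) cases with replacement and recompute the statistic.
def slope(xs, ys):
    # ordinary least-squares slope of y on x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def bootstrap_ci(xs, ys, stat=slope, reps=2000, alpha=0.05, seed=1):
    random.seed(seed)
    n = len(xs)
    draws = []
    for _ in range(reps):
        idx = [random.randrange(n) for _ in range(n)]
        draws.append(stat([xs[i] for i in idx], [ys[i] for i in idx]))
    draws.sort()
    # percentile interval
    return draws[int(reps * alpha / 2)], draws[int(reps * (1 - alpha / 2)) - 1]

# made-up data: y is roughly 2x plus noise
random.seed(0)
xs = [i / 10 for i in range(40)]
ys = [2 * x + random.gauss(0, 0.5) for x in xs]
lo, hi = bootstrap_ci(xs, ys)
print(lo, hi)  # an empirical 95% CI for the slope
```

The same resampling loop would work for a part correlation: only the `stat` function changes.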

Best regards,

---

I would like to investigate the relationship between two continuous variables while controlling for two continuous covariates and one categorical covariate. Because one of my covariates is categorical, I think (based on previous discussions on this forum) that I need to run a Bayesian ANCOVA, entering the continuous predictor and the two continuous covariates in the Covariates box and the categorical covariate in the Fixed Factors box. I then include the two continuous covariates and the categorical covariate in the null model. My first question is whether I have understood this correctly.

Secondly, I am wondering about the specification of the Prior on Coefficients (under Additional Options) since the default for the r scale values is different for fixed effects and covariates. In my case this would mean that the scale parameter is different for the categorical and the continuous covariates, but it is the same for the continuous covariate and the continuous predictor of interest. Should this be set to the same value in this case and if so, to which value?

I also noticed that it is possible to specify the prior for each term individually, but this does not allow me to specify a prior for a term that is included in the null model. Does that mean that the r scale parameter does not matter for these terms?

Many thanks in advance!

Best wishes,

Julia

---

I am trying to write up the results for a Bayesian ANCOVA. As there are no specific instructions on conducting/reporting this in the manual, I am having some trouble making sure that what I am writing makes sense.

Here is my output:

In short, cond = my IV and IPQ_Avg = my covariate.

Am I right to say this: "A Bayesian ANCOVA was conducted, in which a model containing test condition and sense of presence was compared against a null model containing only sense of presence. The default JASP priors (r scale prior width = 0.5 for fixed effects, r scale prior width = 0.354 for covariates) were used. Given the transitivity of Bayes factors, the model with test condition and sense of presence (BF10 = 625,184.35) was compared to the sense-of-presence-only model (BF10 = 0.44) by division (625,184.35 / 0.44 ≈ 1,420,873). Hence, after accounting for the error variance attributable to sense of presence, there was still extreme evidence for an effect of test condition on object location memory, albeit weaker."

My primary concern with this write-up is whether my interpretation is correct: comparing 1,420,873 (after controlling for presence) to 2,961,000 (without controlling for presence) and concluding that there is still extreme evidence for the effect of test condition. Am I applying the transitivity correctly?
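Spelled out as arithmetic (just restating my output; the variable names are mine):

```python
# Transitivity of Bayes factors: both models in my output are compared
# against the same null model, so dividing the two BF10 values gives the
# Bayes factor for the full model against the covariate-only model.
bf_full_vs_null = 625_184.35   # test condition + sense of presence vs. null
bf_presence_vs_null = 0.44     # sense of presence only vs. null
bf_full_vs_presence = bf_full_vs_null / bf_presence_vs_null
print(bf_full_vs_presence)     # roughly 1.42 million
```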

Thanks a ton.

---

Hello,

I would like to compare these interactions, but it's not clear to me how to run the analysis. Can anyone help me?

---

I am trying to calculate the Bayes factor for a one-proportion test. I have a categorical variable for which participants selected one of three options. Where p is the proportion of people who selected the second option, my hypothesis is: H0: p = 0.5 and H1: p > 1/3.

I get very different results when running the test in JASP (with default priors) and when using the BayesFactor package in R (proportionBF function). For instance, BF in JASP is 0.038 but 0.30 with the BayesFactor package. Would be grateful for any insight on what might be causing this.
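To make the comparison concrete, here is a minimal sketch (with made-up counts, since mine are not shown here) of one way such a Bayes factor can be computed, assuming a uniform prior truncated to the H1 region. As far as I understand, JASP and proportionBF each use their own default prior, and differences in the prior specification like this are one obvious place where the numbers can diverge:

```python
import math

# Made-up counts for illustration (not my real data):
k, n = 40, 90          # 40 of 90 participants chose the second option

def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def marginal_h1(k, n, lower=1/3, steps=20_000):
    # Marginal likelihood under H1: p > 1/3, with a uniform prior on
    # (1/3, 1), computed with a simple midpoint-rule integral.
    width = (1 - lower) / steps
    total = 0.0
    for i in range(steps):
        p = lower + (i + 0.5) * width
        total += binom_pmf(k, n, p) * (1 / (1 - lower)) * width
    return total

# BF10 = marginal likelihood under H1 / likelihood under the point null
bf10 = marginal_h1(k, n) / binom_pmf(k, n, 0.5)
print(bf10)
```

Swapping in a different prior density inside `marginal_h1` changes the result, which is why I suspect the prior is where the two implementations differ.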

Many thanks!

Lewend

---

In some of my analyses, I only had to select the "Additional option" "Confidence interval", and then the Upper 95% CI and the Lower 95% CI of Spearman's rho were shown.

In other analyses, when I selected the "Additional option" "Confidence interval", the Upper and Lower 95% CI of Spearman's rho were not shown. I had to additionally select "Confidence Intervals from … bootstraps", and only then were the Upper and Lower 95% CI of Spearman's rho shown.

These are my questions:

With Pearson's r, the Upper and Lower 95% CI are always shown immediately. Why not with Spearman's rho?

What are the preconditions that cause JASP to handle analyses so differently?

Thank you very much in advance!

BTW:

When I de-select "Confidence Intervals from … bootstraps", "slightly different" values for the Upper and Lower 95% CI of Spearman's rho are shown (why?).

When I then de-select "Spearman's Rho", all values related to Spearman's rho disappear (which is fine).

And when I then select "Spearman's Rho" again, all values related to Spearman's rho, including the Upper and Lower 95% CI (the "slightly different" values from before), are immediately shown.

I closed the JASP file without saving and made a second attempt to calculate the correlation. The program behaved the same way: the "slightly different" values were identical to those from the first attempt.

It seems there is a bug?

I used JASP version 0.16.4

---

I used the descriptive plots in JASP to create several plots, but I am having trouble putting them on the same scale. It seems I can only drag this point to control the scale, instead of entering the aspect ratio of the plot (in "edit image" I found no such option).

Is it possible to set these parameters manually so I can keep the scale of every plot the same?

Thanks!

---

I am trying to analyze my data using a 2 × 2 × 2 ANOVA, but it failed Levene's test of homogeneity. I proceeded to use the Kruskal-Wallis test to check the statistical significance of each factor (and found them to be significant), then ran both the Dunn and Games-Howell post hoc comparisons. These report the significance within each level but show no interaction between factors. How can I evaluate this in JASP? Is this even possible? And did I follow the correct steps?

Thanks in advance for your help!

---

I am planning an analysis for data that is structured as follows:

Dependent variable: stimulation strength in mV

Repeated measures factor:

- Level 1: Sham stimulation
- Level 2: Verum stimulation type 1
- Level 3: Verum stimulation type 2
- Level 4: Verum stimulation type 3

Participants came to the lab 4 times, visits were counterbalanced across participants.

For this particular analysis I am not interested in the differences between the different verum stimulation levels; in fact, I am not expecting any systematic differences there. If levels 2 to 4 were to influence the dependent variable differently, I'd have to assume that there is some weird confound.

So, what I could do is average across the verum levels and set up the repeated measures model like this:

Dependent variable: stimulation strength in mV

Repeated measures factor:

- Level 1: mV in sham stimulation
- Level 2: average mV across verum stimulation types

As far as I can see, this should not violate any assumptions of the Bayesian RM ANOVA model, but I am not 100% sure. I guess I may lose information in level 2 here, right? But then again, if I use all 4 levels, I am not sure the results will be straightforward to interpret. I am having difficulty making an informed decision on whether this is a good idea or whether it brings problems that I don't foresee at the moment.

Could you help me out with this? Is it a good idea to specify the ANOVA with these 2 levels?

Or would it be better to specify all 4 levels as stated above?

Thank you all for helping me with this!

---

How do I set up panel data in JASP? How can I let JASP know that my data is a panel, with the cross-section and time dimensions in two different columns?

Stata has a command to declare panel data: "xtset".

Please let me know how to set up a panel dataset in JASP.

---

I am having trouble with a power analysis. I have a 2 × 3 within-factors design and want to use a repeated-measures ANOVA to analyze the data. How can I calculate the sample size?

Usually I use G*Power, but I am not sure I am doing it the right way. If I want to find the minimal number of subjects needed to detect the interaction, with a power of 0.9 and a medium effect size, I would set the parameters as below:

I am most unsure about the number of measurements, I put 5 because (2*3-1).

And is choosing repeated measures, within factors, right? Or do you have other software or a website that can do the power analysis? I would be very thankful if you could answer my question or point me to something related to read.

Thanks!

---

Does the same apply to Bayesian statistics?

On a separate issue: in some of my analyses, I have indeterminate evidence (indeterminate inclusion Bayes Factor) for an effect, but the post-hoc t-test analysis shows strong evidence for the alternative hypothesis. Am I justified in making interpretations based on the post-hoc t-test, or should I constrain my interpretations based on the indeterminate inclusion Bayes Factor?

Thank you in advance for your help!

---

Faulting application name: JASP.exe, version: 0.0.0.0, time stamp: 0x63358ef1

Faulting module name: ucrtbase.dll, version: 10.0.19041.789, time stamp: 0x2bd748bf

Exception code: 0xc0000409

Fault offset: 0x000000000007286e

Faulting process id: 0x790

Faulting application start time: 0x01d8e99db782a773

Faulting application path: C:\Program Files (x86)\JASP\JASP.exe

Faulting module path: C:\Windows\System32\ucrtbase.dll

Report Id: 77d2faeb-20fc-4372-a7e1-c73a50ab6db9

Faulting package full name:

Faulting package-relative application ID:

Please help me fix the problem!

---

I need help. I did a CFA in JASP and need to edit the model plot. I have many items, and I would like to rotate some pieces (items) of this model.

How can I do that?

I cannot edit it now and don't know how to do this.

---

I read this post: http://xeniaschmalz.blogspot.com/2019/09/justifying-bayesian-prior-parameters-in.html

And the author first seems to suggest describing the default prior like this: "The prior is described by a Cauchy distribution centred around zero and with a width parameter of 0.707. This corresponds to a probability of 80% that the effect size lies between -2 and 2. [Some literature to support that this is a reasonable expectation of the effect size.]"

And then amends this to 50% certainty (based on recommendation by you).

My question is: if my prior belief is that the effect size is between -1.1 and 1.1, would I specify the Cauchy width at 0.3 (see the table in the link above), and does that correspond to 50% or 80% certainty? (And if I want to specify more than 80% certainty, say 95%, how do I specify that in JASP? I can obtain the values using the R code, but I am not sure what to input into the model in JASP.)
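For reference, the mapping between width and certainty is just the Cauchy CDF, so it can be checked directly; the helpers below are my own illustration, not anything JASP-specific:

```python
import math

def cauchy_mass_within(r, b):
    # P(|delta| < b) under a Cauchy(0, r) prior: (2 / pi) * arctan(b / r)
    return 2 / math.pi * math.atan(b / r)

def width_for(b, q):
    # invert the formula: the width r that puts probability q on (-b, b)
    return b / math.tan(math.pi * q / 2)

print(cauchy_mass_within(0.707, 2.0))  # about 0.78, the "80%" between -2 and 2
print(width_for(1.1, 0.50))            # r = 1.1 puts 50% of the mass within +/-1.1
print(width_for(1.1, 0.80))            # the width for 80% within +/-1.1
print(width_for(1.1, 0.95))            # a much narrower width for 95% within +/-1.1
```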

I have also read the van Doorn (2021) guidelines paper that you contributed to, but I didn't find it as helpful regarding informed priors.

I also wondered about specifying model priors vs. parameter priors in JASP, and whether we can do both? (I ask because I read Kruschke's paper https://www.nature.com/articles/s41562-021-01177-7, which was really damning of papers that don't specify both and explain in precise wording how they did this and what it means!)

Thank you, and sorry if this is really obvious!

---

Is there a way in JASP to get a measure of effect size when running a non-parametric ANOVA, i.e., a Kruskal-Wallis test (without having to calculate it manually)?

---

Does anyone know what the syntax for that would look like?

Thank you in advance!

Lucy

---

I thought that the chi-square test in a PCA could be interpreted as a reliability indicator for the PCA results: a lower p-value would mean that the number of extracted components is not enough. But if I manually add more components, the p-value decreases!

Does a p-value lower than 0.05 mean that the PCA results are not reliable?

Dimitri

---

I was running a paired-samples t-test and wanted to graph the mean difference with its corresponding confidence interval. Even though I can make raincloud plots for the differences, that is not what I need to report. JASP (v0.16.4) includes the calculation of the mean difference and its confidence interval, but not a graph with this information. I was wondering whether you would consider adding this feature in future updates.

Best,

Alejandro.

---

John

---

Hello guys,

I was wondering whether, after performing an ANOVA in JASP, I can actually rely on the "Flag significant comparisons" option in the Post Hoc section, or whether I need to manually adjust to my new adjusted p-value.

For example (image + data attached): I performed a behavioural test that helped me divide participants into 3 groups (controls, abnormal, and normal). Then I wanted to check their brain atrophy level based on this division.

The ANOVA is significant (p = 0.046), so I performed post hoc tests (I chose Bonferroni). There are 3 comparisons to make, as I have 3 sub-groups of participants, so the adjusted alpha for Bonferroni is 0.05/3 ≈ 0.0167 (calculated manually, as I don't see it in JASP).

However, when I chose the option "Flag significant comparisons", there is a Bonferroni result of p = 0.048 which JASP flags as significant. Yet if I use the manually calculated adjusted alpha of 0.0167, this result of p = 0.048 should not be flagged as statistically significant.

Still, there is this beautiful note: "P-value adjusted for comparing a family of 3".

So, is JASP taking the Bonferroni adjustment into account by rescaling the p-value so that it can be compared against the usual alpha of 0.05, OR does "Flag significant comparisons" mean something else, so that we should not trust it and must instead manually calculate the adjusted alpha and make case-by-case decisions?
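To make my question concrete, the two conventions I am trying to reconcile can be written out with my numbers (whether JASP follows the second convention is exactly what I am asking):

```python
# Bonferroni with m = 3 pairwise comparisons and alpha = 0.05
alpha, m = 0.05, 3
raw_p = 0.016          # a hypothetical unadjusted p-value (0.048 / 3)

# Convention 1: keep the p-value, shrink the threshold
print(raw_p < alpha / m)           # 0.016 < 0.0167

# Convention 2: inflate the p-value, keep the threshold
adjusted_p = min(raw_p * m, 1.0)   # 0.048
print(adjusted_p < alpha)          # the same decision either way
```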

Thank you and best wishes!

]]>Thanks,

marios

---

How do I resolve this issue? I have JASP version 0.16.03

---

I am a newbie here, looking to answer my research question using multilevel modelling: how does human comfort level with the robot change given 4 different ways of expressing emotions?

- Independent variable: 4 different ways of expressing emotions
- Dependent variable: human comfort level, rated from 0 to 100 on a Likert scale
- The experiment is a between-groups design, i.e., 4 different groups of participants each interacted with one of the 4 ways of expressing emotions.
- Each participant interacted with the robot for 15 minutes, giving a comfort rating after each minute.
- The robot always displays negative emotions at the start of the 5th, 8th, 12th, and 13th minutes.

With a multilevel model of the 15 responses (random intercepts for participants) and a lag factor in the model, I wish to see how much time it takes for human comfort to build up again after negative emotions, and whether that differs across the 4 ways of expressing emotions.

The mean responses of human comfort look like this:

I am not sure how I can do this in JASP(?)

Mark
