I'm trying to figure out how to make an informed prior from pre-existing knowledge in the literature. My main analysis is a Bayesian independent samples t-test, which I've currently run using a default uninformative prior (Cauchy with scale = 0.707). I've also searched the literature for research on the same question and found two papers. Both are frequentist analyses with sparsely reported results, but I was able to calculate t-statistics for the results I need. I then used JASP to do a Bayesian reanalysis of these two papers using their t-statistics and sample sizes. The results are uploaded to OSF here: https://osf.io/qgcw7.

The first paper's reanalysis is called: **S.S. Bayesian Paired Samples T-Test: J et al.**

The second paper's reanalysis is called: **S.S. Bayesian Independent Samples T-Test: S et al.**

I wanted to combine the information in both these papers to get an informed prior for my analysis. I've been reading "Replication Bayes factors from evidence updating" here: https://link.springer.com/article/10.3758/s13428-018-1092-x, but I can't seem to figure it out. I don't think I have enough information from the original publications to "compute the overall t value for the combined data", as was done in Appendix A of that paper, so I'm not sure I can use this Bayes factor approach.

I've turned to the “today’s posterior is tomorrow’s prior” approach. The paper says "the posterior for δ in a t test has no known distributional form", but that you can "approximate the posterior on effect size obtained from the t test with a normal distribution; this normal distribution is then used as a prior for the analysis of the replication experiment". Assuming the 95% credible interval can stand in for a 95% confidence interval, I used the posterior 95% interval and the original sample size to calculate the standard deviation of the normal approximation to the posterior, and I used the posterior median as its mean. I then used a normal distribution with this mean and standard deviation as an informed prior.
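Concretely, my approximation step looks like this (a minimal sketch with made-up numbers, not my actual JASP output; here I recover the standard deviation directly from the interval width, assuming the interval behaves like a normal-theory one):

```python
def normal_approx(post_median, ci_lower, ci_upper):
    """Approximate a posterior by a normal distribution:
    mean = posterior median, sd recovered from the width of the
    95% credible interval (interval width = 2 * 1.96 * sd)."""
    z = 1.959964  # 97.5th percentile of the standard normal
    sd = (ci_upper - ci_lower) / (2 * z)
    return post_median, sd

# Hypothetical posterior summary: median 0.45, 95% interval [0.10, 0.80]
mean, sd = normal_approx(0.45, 0.10, 0.80)
```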

The results are uploaded to OSF here: https://osf.io/qgcw7. The analysis called "S informed by J" is the same analysis as "S.S. Bayesian Independent Samples T-Test: S et al.", but instead of the default Cauchy prior I used the informed prior based on the posterior from "S.S. Bayesian Paired Samples T-Test: J et al.", and vice versa for the analysis called "J informed by S".

I expected the posteriors for "S informed by J" and "J informed by S" to be the same, as (I thought) the order shouldn't matter, but they have different posteriors. Ultimately, I just want to take the posterior of the combined result forward to my main analysis to use as an informed prior. But given that "S informed by J" and "J informed by S" don't match, I feel I've gone very wrong somewhere.

Could you please advise on how to combine the results from two Bayesian reanalyses to get an overall posterior? And how to take forward the combined posterior result to be an informed prior?

Many thanks,

Luke.

I am learning how to use JASP with some examples from William's book *Statistics in Kinesiology*. In the 4th edition, he has an example of a factorial within-within ANOVA with the following data:

He says that this table should be used to run the analysis in computer software. I have two questions:

- Can this be considered a two-way repeated measures ANOVA? I have seen people run that type of analysis with different example data.
- Are the ANOVA table results from the book wrong? Here is the table:

This is what I get from JASP output if I run a repeated measures ANOVA with two factors:

By the way, the study is supposed to have two groups (treadmill and stairs) and measures heart rate at different stages (stages 1-4). The goal is to assess whether the treadmill and stairs modes have different heart rate values across the stages, whether the stages differ significantly across both modes, and whether there are interaction effects.

I am not entirely clear about the explanation of Dunn's test after a Kruskal-Wallis test. Is the p-value presented for Dunn's post hoc tests uncorrected, with the two additional ones (Bonferroni and Holm) being corrected p-values? If so, the wording in the information box should be clarified.

When do you report the uncorrected p-value, and when the corrected ones?
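For what it's worth, my understanding of how the two corrections work on a set of uncorrected p-values (a sketch with made-up numbers, not JASP's actual code):

```python
def bonferroni(pvals):
    """Multiply each p-value by the number of tests, capped at 1."""
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

def holm(pvals):
    """Step-down Holm correction: sort ascending, multiply the i-th
    smallest p by (m - i), enforce monotonicity, cap at 1."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):
        running_max = max(running_max, min(1.0, pvals[idx] * (m - rank)))
        adjusted[idx] = running_max
    return adjusted

raw = [0.010, 0.040, 0.030]  # hypothetical uncorrected Dunn p-values
print([round(p, 3) for p in bonferroni(raw)])  # -> [0.03, 0.12, 0.09]
print([round(p, 3) for p in holm(raw)])        # -> [0.03, 0.06, 0.06]
```

Holm is uniformly less conservative than Bonferroni, which is why the two corrected columns can differ.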

Kind regards

Per

I want to do a linear regression analysis along the lines of: age, literacy, and years of education predict cognitive functions. Naturally, literacy and years of education correlate highly with one another. Does this mean I shouldn't use them as combined predictors in one regression?

If I do use them together as predictors, the Bayesian inclusion probability plot suggests keeping only years of education for my first outcome variable. Does this mean that years of education and literacy are independent enough in their prediction for their effects to be separated from one another?

Should I now add literacy to the null model, or just remove it since it shouldn't be included? And what should I do about the multicollinearity?
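To quantify how much the two predictors overlap, I've been looking at the variance inflation factor; a sketch of my understanding, with toy data (orthogonal predictors should give VIFs of 1):

```python
import numpy as np

def vif(X):
    """Variance inflation factor per column: regress each predictor on the
    others (with intercept) and return 1 / (1 - R^2) = SS_tot / SS_res."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        ss_res = float(((y - A @ beta) ** 2).sum())
        ss_tot = float(((y - y.mean()) ** 2).sum())
        out.append(ss_tot / ss_res)
    return out

# Two orthogonal toy predictors -> VIFs of (approximately) 1
print(vif([[1, 1], [2, -1], [3, -1], [4, 1]]))
```

With literacy and years of education I would expect the VIFs for those two columns to be well above 1, which is the multicollinearity I'm worried about.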

I am trying to interpret the results from a one-sided Mann-Whitney test. I have conducted a Bayesian and a frequentist analysis. The median and CI of the posterior distribution from the Bayesian Mann-Whitney test differ from the effect size given by the rank-biserial correlation in the frequentist analysis. How can I explain these differences? In my case:

Comparing group A with B:

rrb = 0.358, CI (0.199, ∞) corresponds with Bayesian posterior median 0.557, CI (0.21, 0.926)

Comparing group B with C:

rrb = 0.040, CI (-0.227, ∞) corresponds with Bayesian posterior median 0.189, CI (0.009, 0.595)
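For context, the rank-biserial correlation on the frequentist side is (as I understand it) just a dominance measure over cross-group pairs, while the Bayesian posterior summarises a model parameter, so I wouldn't expect the two numbers to match exactly. A sketch of the former, with made-up data:

```python
def rank_biserial(a, b):
    """Rank-biserial correlation: (favourable pairs - unfavourable pairs)
    divided by the total number of cross-group pairs; ties count as neither."""
    greater = sum(1 for x in a for y in b if x > y)
    less = sum(1 for x in a for y in b if x < y)
    return (greater - less) / (len(a) * len(b))

# Hypothetical samples
print(rank_biserial([3, 4, 5], [1, 2, 3]))  # 8/9 ≈ 0.889
```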

Any ideas how I can interpret these differences? Thanks for your suggestions!

Mirjam

When calculating an RM ANOVA, I encountered a problem. Under "Descriptive Plots" you can choose to display confidence intervals. However, these confidence intervals do not match the ones output by SPSS; the intervals in JASP are much smaller. I have attached screenshots of the JASP and SPSS graphs. Both analyses are based on exactly the same data set, and CIs were set to 95% in both cases.

What could be the reason for this?

I have very little statistical knowledge. I have collected some data from a study and am in the process of analysing it. I'm wondering if it is possible to exclude some data from post hoc analysis in JASP.

In the example here I have 66 comparisons. Really, though, I only want to compare minute 15 to the other time points; I am not interested in the rest of the comparisons, so I only want to make 11 comparisons. Is there some way to do this?

Again here:

I have no interest in comparing PRE, lvl to POST, dh or POST, lvl to PRE, dh, so really I only want to make 4 comparisons rather than 6. Can I do this?
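If JASP can't restrict the comparisons directly, I assume I could at least do the correction by hand for just the planned comparisons, assuming a Bonferroni-style correction (hypothetical p-values; `keep` holds the indices of the comparisons I care about):

```python
def bonferroni_subset(pvals, keep):
    """Correct only the planned comparisons, for a family of len(keep)
    tests instead of all pairwise tests."""
    m = len(keep)
    return {i: min(1.0, pvals[i] * m) for i in keep}

alpha = 0.05
print(alpha / 66)  # significance threshold if all 66 comparisons are corrected for
print(alpha / 11)  # threshold for only the 11 planned comparisons

pvals = [0.004, 0.20, 0.012, 0.03]  # hypothetical uncorrected p-values
print(bonferroni_subset(pvals, keep=[0, 2]))  # corrects for 2 tests only
```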

I hope this makes sense, thanks in advance.

I'd appreciate it if you could please help me with this question: running a network analysis with bootstrapping returns figures with 95% CIs. Is there a direct or indirect way to extract these values (the 95% CIs of the bootstrapped differences between each pair of nodes, and the 95% CIs of the edge stability)?

Thank you!

MB

I have a predictor X that predicts three mediator variables M1, M2, and M3 negatively: when X increases, M1, M2, and M3 decrease. M1, M2, and M3 in turn predict a binary outcome Y, but M1 and M3 do so positively, while M2 does so negatively.

I am having some difficulty interpreting the result: the relationships between X and the mediators are parallel, but the effects of the mediators on Y run in opposite directions. Could you help me?
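The way I've been trying to reason about it: the sign of each indirect effect is the product of the two path signs (a toy sketch, assuming M2 is the mediator with the negative path to Y):

```python
# Sign of each path: X -> M (all negative); M -> Y (assumed: M1, M3 positive, M2 negative)
a_sign = {"M1": -1, "M2": -1, "M3": -1}
b_sign = {"M1": +1, "M2": -1, "M3": +1}

# The indirect effect through each mediator has the sign of the product a * b
indirect_sign = {m: a_sign[m] * b_sign[m] for m in a_sign}
print(indirect_sign)  # M1 and M3 negative overall, M2 positive overall
```

Is it correct that the indirect effects can therefore partially cancel each other in the total effect?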

I have a 3 x 3 factorial design and wish to analyse it with a Bayesian approach. I was originally planning to use a Bayesian mixed repeated measures ANOVA, but the residuals shown in the Q-Q plot are not normally distributed (skewed at either end), so the assumptions are not met. Transforming the data did not fix the distribution of the residuals, so I have since re-coded the data into categories (clinically relevant decrease, decrease, no change, increase, clinically relevant increase) and run a generalised linear mixed effects model. However, the results do not include Bayes factors, so the explanations in the JASP Bayesian handbook don't apply to the output. I also can't find any guidance on Bayesian mixed models on your website, just forum posts from several years ago asking for it to be included in the software.

Please help; I don't know whether the model fits or not. Is there another way to deal with non-normally distributed data (skewed at either end) in a Bayesian approach? I'm trying to do my dissertation, but everyone I ask for help says they can't help because it relates to Bayesian statistics.

Thank you.

We are working on a study where we plan to use a Bayesian procedure to determine our sample size. Specifically, we intend to stop data collection when the Bayes factor (BF) exceeds 5.

We have two independent variables, each expected to produce a main effect, with no interaction. We wonder whether it is more appropriate to look at BF10 or BFinclusion in this case.

If we understood correctly:

- **BF10** reflects the Bayes Factor comparing a specific model to the Null model.

- **BFinclusion** reflects the evidence for including a specific factor (or main effect) across all models compared to the models excluding that factor.

Given the focus on our two main effects, would it be more effective to base our sample size determination on BF10 or BFinclusion?
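If it helps, our understanding of the "across all models" definition of BFinclusion in code form (toy prior/posterior model probabilities, not real output; we're aware JASP also offers a matched-models variant, which this sketch ignores):

```python
def bf_inclusion(models, factor):
    """models: list of (set_of_factors, prior_prob, posterior_prob).
    BF_incl = posterior odds of the models containing the factor
    divided by the prior odds of those same models."""
    post_in = sum(post for f, prior, post in models if factor in f)
    post_out = sum(post for f, prior, post in models if factor not in f)
    prior_in = sum(prior for f, prior, post in models if factor in f)
    prior_out = sum(prior for f, prior, post in models if factor not in f)
    return (post_in / post_out) / (prior_in / prior_out)

# Toy example: four models with equal prior probability
models = [
    (set(),      0.25, 0.10),  # null model
    ({"A"},      0.25, 0.40),
    ({"B"},      0.25, 0.10),
    ({"A", "B"}, 0.25, 0.40),
]
print(bf_inclusion(models, "A"))  # (0.8/0.2) / (0.5/0.5) = 4.0
```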

All help will be greatly appreciated!

Imbar

I am trying to build a linear mixed model for my data, which involves the complex nesting depicted in the following R code. I know that JASP cannot handle more than one random factor (correct?), so I switched to lmer() in R:

lmer(drThrOn ~ environment + (1 |run/lap/Turn), data = data_DVA, REML = FALSE)

The downside is that I would prefer a Bayesian LMM and would like to use the Analysis of Effects table provided by JASP, but I don't know how it is computed. I only know that it involves some type of model comparison.

Is there a way to include multiple random factors in JASP, or does anyone know how the Analysis of Effects table is computed?

Thanks so much for your help!

I am a doctoral student majoring in management, and I am learning to use JASP for a meta-analysis. The main effects analysis went very smoothly, but the subgroup analysis is a bit confusing for me. For example, my moderating variable "Culture" is coded 1, 2, and 3 (Figure 1). When I put "Culture" into "Factors", the result that appears is shown in Figure 2. Is the data that should be interpreted the "coefficients"? However, these values are inconsistent with the results of previous researchers (which I obtained from the meta-analysis literature). Also, what does "intercept" mean?

I have already spent several days trying to figure this out, searching Google and YouTube, but still couldn't find the answer. It would be highly appreciated if you could give me an answer. Thanks for your help.

- Could you recommend a good starting point for learning JASP, such as a quick tutorial or a concise guide?
- I noticed that there is both JASP and jamovi. Are these two programs similar, or is there a significant reason to choose one over the other? I plan to perform basic statistical analyses (e.g., descriptive statistics, t-tests, ANOVA, regression, some non-parametric tests, survival analysis) but nothing highly sophisticated.
- If I perform analyses in JASP and later modify or filter cases in the data editor, will the analyses automatically update to reflect the new data? If not, how can I ensure that the analyses are recalculated based on the updated data?
- Could you recommend a good resource or quick-start guide for conducting power analyses? I see that JASP offers this functionality, but I need more information to properly fill in all the required parameters.
- I created a "Residual vs. Dependent" Residual Plot in the Linear Regression module and saved it as a PNG file. However, I noticed that the "Standardized Residuals" label is slightly cut off on the right side. Is there a way to add margins to the image to prevent this from happening?

Thanks a lot for your help!

According to this information, the MANOVA approach constructs derived variables from the repeated measures (of a single outcome variable) and treats these as multiple dependent variables, applying the MANOVA procedure. This strictly precludes unbalanced datasets, with missing data requiring respondent deletion. Statistical power is also reduced (compared with the univariate RM ANOVA). However, the within-subjects correlation structure allowed in this approach is more general than in the univariate version.

The univariate RM ANOVA is restricted to a compound symmetry covariance structure, but can be designed to accommodate some missing data. It is more prone to issues with sphericity, but (when sphericity holds) has greater statistical power than the MANOVA approach.
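To make the covariance assumption concrete, the compound symmetry structure assumed by the univariate approach can be sketched as (my own illustration, not JASP code):

```python
import numpy as np

def compound_symmetry(k, var, rho):
    """k x k covariance matrix with one common variance on the diagonal
    and one common covariance (var * rho) everywhere off the diagonal --
    the structure the univariate RM ANOVA assumes."""
    return var * ((1 - rho) * np.eye(k) + rho * np.ones((k, k)))

# The MANOVA approach instead allows an arbitrary (unstructured)
# covariance matrix across the repeated measures.
print(compound_symmetry(3, 2.0, 0.5))
```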

Which of these approaches forms the implementation adopted in JASP (>=0.18)?

Hi there. Am I correct that the eigenvalues in JASP's EFA scree plot are the FA eigenvalues (real-data factor eigenvalues), and not the PCA initial eigenvalues? Both types of eigenvalues are, however, available in Output Options > Tables > Parallel analysis. Would it be possible to include both types of eigenvalues in the scree plot in the future?
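To illustrate the distinction I mean, on a hypothetical correlation matrix (a sketch; using squared multiple correlations as communality estimates is an assumption on my part, not necessarily what JASP uses):

```python
import numpy as np

R = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]])  # hypothetical correlation matrix

# PCA "initial" eigenvalues: eigenvalues of the full correlation matrix
pca_eigs = np.sort(np.linalg.eigvalsh(R))[::-1]

# FA eigenvalues: first replace the diagonal with communality estimates
# (here squared multiple correlations, SMC = 1 - 1/diag(R^-1))
R_reduced = R.copy()
np.fill_diagonal(R_reduced, 1 - 1 / np.diag(np.linalg.inv(R)))
fa_eigs = np.sort(np.linalg.eigvalsh(R_reduced))[::-1]

print(pca_eigs)  # sum to 3 (the number of variables)
print(fa_eigs)   # smaller, since the reduced diagonal is below 1
```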

Kind regards,

F

1) Distribution plots

I cannot seem to find any way of adding colour to my distribution plots. It seems to be an option for customizable plots, but not for basic plots?

2) Flexplots (from the Visual Modeling module)

Is there any way of changing from red to a different colour in flexplots? I have tried all the different themes and palettes, but they're all red. (I know I can convert to greyscale, but I prefer using colour, just not red.)

Would be really grateful if someone out there knows how to do this. :)

I didn't experience this with JASP 0.18.3, before deleting it and installing this new version.
