I got the chance to revise the first paper in which I used Bayesian inference. I have been working with Bayesian inference for quite a while now. Having gained a little distance from my analyses, I am somewhat confused about the prior distribution and wonder whether I made a mistake in my analyses.

First, I only computed simple correlation analyses to present the correlations between the constructs (say, construct A and construct B). I used the default prior distribution. Is that correct?

Second, I computed a regression analysis. Based on theory, I hypothesized that construct A would predict construct B. Again, as there were no previous data on the relationship between A and B, I used the default prior model probabilities (1/2 = 0.5). Now I am unsure whether I should have changed the prior width before conducting the analysis.

And one last question: I used both frequentist and Bayesian methods of inference. In a further exploratory regression analysis, I tested whether construct A, with four predictors (A1/A2/A3/A4), predicted construct B. The Bayesian regression results showed that the model with predictors A2 and A3 outperformed all other models. However, the classical regression results did not reveal any significant predictors. One reviewer asked me to explain this result, and I am wondering how to do that.

I would be very happy if someone could help me out!

Thank you very much!

Alexa

I would like to ask you a couple of questions:

1) I need to test the differences between two independent groups (young and adult) for X-ray abnormalities of the thoracolumbar tract (14 spaces each). The data are ordinal, so I planned to run the Kruskal-Wallis test to check whether the two groups differ. I didn't choose a two-way independent ANOVA because that test looks for the effect of the parameters on the variable. Is that correct?

2) I tried to run the Kruskal-Wallis test, but once I added the factors the bar displayed an error: residual df = 0 (screenshot attached). How can I resolve this error? Do you have any suggestions on how to organise the raw data (a screenshot of my dataset is attached)?

Thank you very much for any help, hope to hear from you soon.

MCP

I was wondering if anyone has any experience of conducting Bayesian logistic regressions, in JASP or R. In JASP there's no obvious way to do it (although you could run a Bayesian linear regression and set the categorical variable to scale). In R there's no widely used/accessible package that I can see either. Does anyone have any advice on how to conduct Bayesian logistic regressions?

Thank you

I have a question related to performing mediation analyses in JASP. I have a very simple mediation model (1 predictor, 1 mediator, 1 outcome (all of these are mean accuracies) and 1 "confounding" variable (age in months)). Previously I used PROCESS in SPSS (I have very limited experience with mediation in general), but I find the one in JASP much more pleasing :) . However, I have some difficulties understanding what is reported. For example, the z-value in the table: is this the result of the Sobel test? When I plot the path analysis I clearly see the values for the direct effects, but there are certain values over the arrows next to the variables themselves. I cannot seem to understand what these values refer to. I see the arrows are dashed and solid, and I assume this is related to direct and indirect effects, but could someone maybe clear up these questions for me, or just point me to documentation where I can read about it?

Many thanks,

Mila

I use Linux Mint 19 and LibreOffice 6.3. I want to change the default spreadsheet editor in JASP to Calc,

but I can't change it. I tried many times, but the spreadsheet editor cannot be activated in JASP.

Does JASP meet these standards? If so, it will be easier for American institutions, and individual instructors at those institutions, to adopt and/or continue using JASP.

The output of JASP is the following:

I read that, through transitivity, you can make the statement that there is moderate evidence against an interaction effect by dividing the BF10 of Tijd + Conditie + Tijd * Conditie (1.475) by the BF10 of Tijd + Conditie (5.058), giving 0.29. Is that a correct interpretation?

Next to that, I would like to say something about the likelihood of the null model vs. the interaction effect. Is that possible, and how should I calculate it?
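Spelled out, the two calculations above look like this (Python used purely as a calculator; the BF10 values are the ones quoted from the output):

```python
# Transitivity of Bayes factors: both BF10 values share the same
# denominator (the null model), so their ratio compares the two models.
bf10_full = 1.475   # Tijd + Conditie + Tijd * Conditie vs. null
bf10_main = 5.058   # Tijd + Conditie vs. null

# Evidence for adding the interaction, given the main effects:
bf_interaction = bf10_full / bf10_main
print(round(bf_interaction, 3))  # ~0.292, i.e. about 1/0.292 = 3.4 against the interaction

# Null model vs. the full (interaction) model: simply invert BF10.
bf01_full = 1 / bf10_full
print(round(bf01_full, 3))       # ~0.678
```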

Best regards,

Ellen

Is this feature going into JASP?

Christer

Two questions I've been pondering lately.

1.) Is there any way to do post-hoc tests for interactions in a between-subjects two-way Bayesian ANOVA in JASP? From what I can tell, I can only run post-hoc tests on the main effects. I could of course look at whether the credible intervals for each group overlap with the mean of another, but that would not correct for multiple testing and might thus be problematic. Or is it feasible?

2.) How do you approach effect size within the Bayesian ANOVA framework? To my understanding, JASP does not provide any direct effect size (such as partial eta squared for frequentist ANOVA) that describes, e.g., the amount of variance explained by the main effect(s) and interaction effects. Generally I guess that the Bayes factor is correlated with the size of the effect, but to my understanding the BF only provides evidence for an effect being present, not for its size. How would you describe the effect size, and is there any neat function in JASP that I haven't found?

Best,

August

I just want to make sure I'm reporting it correctly in my write-up.

Cheers,

For example, I have a variable with three categories: 1, 2 and 3. I want to recode the variable so that 1 becomes 0 and 2 and 3 become 1. I'm not talking about labelling.
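One way to do this as a computed column in JASP is with the `ifElse` function; a sketch, assuming the variable is named `myVar` (the name is hypothetical, so substitute your own):

```
ifElse(myVar == 1, 0, 1)
```

This returns 0 where the value is 1, and 1 otherwise (i.e., for values 2 and 3).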

A clue on how to search this forum without getting every post with the word ifElse in it would be helpful as well.

Thanks much.

I performed a stepwise regression and I cannot find the beta values.

Where can I find them? Is it maybe under a different name?

(I'm using the latest version of JASP on windows 10)

Thanks,

Tali

I compared Bayesian linear regression and frequentist linear regression on the effect of fertilizer on yield. The results show very different intercepts and slopes. Can somebody help me with the following questions?

1. Is the interpretation for the intercepts the same for both Bayesian (163.5) and frequentist (51.933)?

2. Is the interpretation for the slopes the same for both Bayesian (0.782) and frequentist (0.811)?

3. If not, how are the Bayesian intercept and slope interpreted?

Thanks in advance for your help.

Cheers,

narcilili

Do I need to use R code directly?

I wonder which formula JASP uses to calculate the t-statistic in post-hoc tests of (frequentist) ANOVAs. To be a bit more specific, I am looking at pairwise comparisons for an interaction term in a mixed-design ANOVA. The documentation seems not to be implemented yet.

Thank you!

For my study, I need to compute partial correlations between 6 variables, controlling for another 3. I would like to report the Bayesian alternative, but I know JASP does not yet offer partial correlations. I know about the BayesMed package in R, which does offer BFs for partial correlations (there are a couple more options floating around), but only between 2 variables, controlling for 1. So my questions are: 1) is it possible (and how) to adjust the BayesMed partial correlation so that it can handle more variables? (it might be a bit of a stupid question, but I have only basic knowledge of R and am not sure how this should be done), and 2) if the first is not possible, is reporting the BFs from multiple linear regressions a valid workaround? (I have seen contradicting opinions about this)

Thank you in advance,

Mila

I couldn't find it, and was thus wondering whether JASP allows me to correct for dependency when conducting a meta-analysis.

Best,

Max

I have a 2 x 2 x 2 repeated measures design and would like to add Bayes factors to the main and interaction effects of my regular ANOVA results. We are, however, puzzled by the BF10 & BFinclusion results for one of our effects (high vs. low salience items). If we understood correctly:

**BF10 (BF01)** - a comparison of our model including the effect (i.e., salience) to the null model.
**BFinclusion** - a comparison of all models that include the effect (i.e., salience) to all models that do not include the effect.

In the regular ANOVA we get an F-value of 13.8 (p < .0001). However, there is a large discrepancy between BF10 (~1.6) and BFinclusion (~460). How is this possible? Also, when computing BF10 for a simple t-test (by converting F to t) and putting the t in the online Bayes factor calculator, the BF is ~40.
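For reference, the F-to-t conversion mentioned above is t = sqrt(F) for an effect with one numerator degree of freedom (Python used only as a calculator):

```python
import math

# For an effect with 1 numerator df, F = t^2, so t = sqrt(F).
F = 13.8
t = math.sqrt(F)
print(round(t, 2))  # ~3.71, the t-value to enter into the online calculator
```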

Further, we were wondering which is more correct to add to regular ANOVA results: BF10 or BFinclusion?

Thank you in advance!

Nathalie

Thanks for the help.

Cheers,

Dirk van Moorselaar

I want to use a Bayes factor analysis of a Pearson's correlation. When using "flag supported correlations", JASP uses stars for BFs above 10. I would like to know, and cite, that this is the convention, but I cannot find where this is discussed. From my understanding, for other tests such as the independent-samples t-test the convention has been to use a BF above 3 to claim support for either H1 or H0. Is there any reason for this? I cannot find any source that discusses it.

I am wondering, can I run these models in JASP?

Thanks

We are analyzing our data of EXPOSURE (n = 23) vs. SHAM (n = 15), pre-post, and we assume that there is NO interaction effect concerning our outcomes (here, as one example: somatization).

____

So we are interested in BF01 for the interaction (we don't really care about the main effects). So far, so good. Now, the results look as follows (see image). Is there any way to determine the absolute (not relative) evidence of the interaction?

____

I know from reading a few papers that you are supposed to divide BF01 (main effects + interaction) by BF01 (main effects). But what you get out of that is the RELATIVE evidence for including the interaction (whether it contributes any meaningful evidence for H0 relative to the main-effects-only model), right?

____

Just to further explain my problem: suppose BF01 (main effects) was, let's say, 10. This suggests that H0 is ten times more likely than H1 for the main-effects model. So we could imagine two groups that don't really change or differ at any point in time (hence the high BF01 for these main effects). Here, there is also no apparent interaction. Hence, BF01 (main + interaction) could be even higher, as the null applies even more to this model; let's say, 20.

So, if BF01 (main + interaction) = 20 and BF01 (main) = 10, then 20/10 = 2, meaning that the evidence for H0 is 2x stronger for the model including the interaction compared to the main-effects model, correct?

In our data, it thus appears that entering the interaction does not further support H0 (4.094/8.232 = 0.497) relative to the main-effects model. Hence, adding the interaction does not really change the fact that there is overall support for H0 (main effects), but relative to a main-effects model, entering the interaction does not provide any additional support for H0. I find the latter notion really confusing; does that mean we might be wrong in assuming that there is no interaction effect? Is there any way to test evidence for/against the interaction in absolute terms?
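Spelled out, the ratio logic above looks like this, for both the hypothetical 20/10 example and the observed BF01 values (Python used purely as a calculator):

```python
# Hypothetical example: BF01(main + interaction) / BF01(main)
print(20 / 10)  # 2.0: the interaction model supports H0 twice as strongly

# Observed values from the output:
bf01_full = 4.094   # main effects + interaction
bf01_main = 8.232   # main effects only
print(round(bf01_full / bf01_main, 3))  # ~0.497: adding the interaction halves the support for H0
```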

I hope you can help me in clarifying this.

Kind regards

Leonie

As I am now doing my thesis with Confirmatory Factor Analysis as the methodology,

I have seen some online tutorials showing that I can get the results of some fit measures,

such as SRMR, CFI, TLI and RMSEA, under "Additional Fit Measures".

But now, after I click it, there are no fit-measure choices

(as the following screenshot shows).

(The embedded screenshot failed to display.) Has there been a change between versions, or are there any steps I should follow to use those fit-measure functions in Confirmatory Factor Analysis?

When doing a stepwise logistic regression in JASP, I cannot see or specify anywhere how the stepping algorithm for model selection decides (is it based on significance? At which cutoffs? AIC?). When clicking on the analysis info, it is explained that the stepping method can be specified in "Options"; however, this settings menu is not available for logistic regression (only for linear). Have I overlooked anything? Any help is greatly appreciated!

Flix

I have started playing around with the BAIN module after writing a Bayesian ANOVA tutorial, but I keep getting an error (attached). Am I just using the wrong syntax?

Cheers

Mark

I have a few questions regarding Bayesian Repeated Measures ANCOVA. To start with some context, we are comparing three measurement conditions/methods:

- Participants performed three trials (within-participant, repeated measures)

- The same data was gathered in these trials with the same instrumentation (5 outcome variables)

- The trials were supposed to be mechanically identical, but they turned out to be not (confounder)

- We are mainly interested in H0

The first thing I looked into was using Intraclass Correlations to examine the similarity or lack thereof between the different trials. However, the analysis would be more sensible if we also included the effect of the confounding factor (mainly from a theoretical perspective).

Previous work in the literature typically used a regular RM-ANOVA and said "p > 0.05, so the results between conditions are identical", which doesn't seem like a conclusion you can draw with frequentist statistics to me. So here we are, and now I'm looking into Bayesian RM-ANCOVA with JASP (looking great, btw).

Ideally I would be able to compare the three trials in an RM-ANCOVA and do post-hoc comparisons. However, the confounding variable was different for every trial, and the data in JASP are in a wide format. So instead I ran three RM-ANCOVAs per variable (cond1 vs cond2, cond1 vs cond3, cond2 vs cond3). As the data are in a wide format, I figured I could compute a proxy variable for the confounder (e.g. for cond1 vs cond2): confounder1_2 = confounder1 - confounder2.

Now I have a single column as a confounder for each comparison which appears to work. Does this sound at all feasible to you? Am I violating some statistical rules that will lead to the end of humanity? Are there corrections I could/should make for multiple testing?
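For clarity, the proxy confounder described above is just a per-participant difference score; a minimal sketch with made-up numbers and hypothetical column names:

```python
# Wide-format data: one row per participant, one column per trial's confounder value.
confounder1 = [2.1, 1.8, 2.4]  # hypothetical values, trial 1
confounder2 = [1.9, 2.0, 2.2]  # hypothetical values, trial 2

# Proxy covariate for the cond1-vs-cond2 comparison: one column of differences.
confounder1_2 = [round(c1 - c2, 3) for c1, c2 in zip(confounder1, confounder2)]
print(confounder1_2)  # [0.2, -0.2, 0.2]
```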

Next, the BF10 between the condition model and the condition + confounder model does not appear to change very much. However, it does become lower, which indicates to me that after correcting for the confounder the null hypothesis appears more likely, which is what I would expect. Should I do a model comparison? And if so, should I compare the BF10s or the BF01s, seeing as I'm interested in a null result and the 'better' model would be the one with the lower BF10? This seemed a little odd to me.

Finally, if there's any reading material that you could recommend that would be great. I've been scouring the internet but the stuff I found is either 99% formulas with little explanation or about simple t-tests.

Best,

Rick

I have a 2x3 within-participant dataset that I analysed using a Bayesian RM-ANOVA. I really liked the possibility of plotting the model-averaged posterior distribution of each factor level and their interactions, but I do not understand how they are obtained. That is, if I take my dataset and re-implement the analysis in Python or Matlab, how can they be reproduced? To be clear, getting a model average is not the problem (that's actually pretty straightforward), but getting the posterior of each level's effect size individually confuses me. I am working mainly from the paper by Rouder et al. (2012).

Thank you in advance for your input.

Cheers

Mark

The way `BayesFactor` "deals" with random effects is:

- Set a wide prior (`r = 1`)
- No sum-to-zero constraint

But I see no mention of how the presence of a random effect affects the computation of the likelihood of related fixed effects (i.e., is there any difference when an effect is *between*-subjects vs. *within*-subjects?).

The use of random factors as random effects vs. as fixed effects with a wide prior in `BayesFactor` seems to have little effect:

```r
library(BayesFactor)
data(md_12.1, package = "afex")

# BayesFactor - specify "id" as a fixed effect.
m0_f_BF <- lmBF(rt ~ id, md_12.1, rscaleEffects = c(id = 1))
m1_f_BF <- lmBF(rt ~ angle + id, md_12.1, rscaleEffects = c(id = 1))
BF_f_BF <- unname(as.vector(m1_f_BF / m0_f_BF))

# BayesFactor - specify "id" as a random effect.
m0_r_BF <- lmBF(rt ~ id, md_12.1, whichRandom = "id")
m1_r_BF <- lmBF(rt ~ angle + id, md_12.1, whichRandom = "id")
BF_r_BF <- unname(as.vector(m1_r_BF / m0_r_BF))

c(as_fixed = BF_f_BF, as_random = BF_r_BF)
#>  as_fixed as_random
#>  909.1889  900.1979
```

However, the differences are much larger with other methods (below I use the BIC approximation for simplicity, but `stan`-based methods also produce differences that `BayesFactor` does not):

```r
library(lmerTest)

BIC_BF <- function(m0, m1) {
  d <- (BIC(m0) - BIC(m1)) / 2
  exp(d)
}

# BIC approx - specify "id" as a fixed effect.
m0_f_lm <- lm(rt ~ id, md_12.1)
m1_f_lm <- lm(rt ~ angle + id, md_12.1)
BF_f_lm <- BIC_BF(m0_f_lm, m1_f_lm)

# BIC approx - specify "id" as a random effect.
m0_r_lm <- lmer(rt ~ (1|id), md_12.1)
m1_r_lm <- lmer(rt ~ angle + (1|id), md_12.1)
BF_r_lm <- BIC_BF(m0_r_lm, m1_r_lm)

c(as_fixed = BF_f_lm, as_random = BF_r_lm)
#>    as_fixed   as_random
#>    5281.736 3920528.548
```

**Might this be the root of the somewhat common question here in the forum regarding differences between frequentist and Bayesian rmANOVAs in JASP?**