# Probability of 80% vs 50% in default priors?

Hi EJ - I'm just reading some papers on prior specifications in t-tests.

I read this post: http://xeniaschmalz.blogspot.com/2019/09/justifying-bayesian-prior-parameters-in.html

And the author first seems to suggest that the default prior means: "The prior is described by a Cauchy distribution centred around zero and with a width parameter of 0.707. This corresponds to a probability of 80% that the effect size lies between -2 and 2. [Some literature to support that this is a reasonable expectation of the effect size.]"

And then amends this to 50% certainty (based on recommendation by you).

My question is: if my prior belief is that the effect size is between -1.1 and 1.1, would I specify the Cauchy at 0.3 (see the table in the link above), and does that correspond to 50% or 80% certainty? (And if I want to specify more than 80% certainty, say 95%, how do I specify that in JASP? I can obtain the values using the R code, but I'm not sure what to input in the model for JASP.)
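(For anyone following along, here is how I checked the interval probabilities myself. This is my own sketch in Python with scipy rather than the blog's R code, and it assumes the interval I mean is -1.1 to 1.1:)

```python
# Quick check of Cauchy interval probabilities with scipy
# (my own sketch; the blog post linked above uses R).
from scipy.stats import cauchy

# Mass a Cauchy(0, 0.3) prior assigns to effect sizes in (-1.1, 1.1):
mass = cauchy.cdf(1.1, loc=0, scale=0.3) - cauchy.cdf(-1.1, loc=0, scale=0.3)
print(mass)  # about 0.83, i.e. roughly 80% certainty

# Scale that would instead put 95% of the mass in (-1.1, 1.1):
scale_95 = 1.1 / cauchy.ppf(0.975)  # ppf of the standard Cauchy
print(scale_95)  # about 0.087
```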

I have also read the van Doorn (2021) guidelines paper that you contributed to, but I didn't find it as helpful regarding informed priors.

I also wondered about specifying model priors vs. parameter priors in JASP, and whether we can do both? (I ask because I read Kruschke's paper https://www.nature.com/articles/s41562-021-01177-7, which was really damning of papers that don't specify both and explain in precise wording how they did this and what it means!)

Thank you, and sorry if this is really obvious!

## Comments

Hi BJ89,

If you want to use an informative prior you can check out the papers on my website on prior elicitation, and the paper by Gronau et al. (https://www.tandfonline.com/doi/full/10.1080/00031305.2018.1562983).

Should you specify an informed prior, I am not sure the Cauchy is the best choice. I like the "Vohs prior" that we used for a replication effort on ego depletion. I also like the "Oosterwijk prior", which is wider. But whatever distribution you choose, you can use the "Distribution" module in JASP to plot it and to see how much mass it assigns to a specific interval.
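As an illustration of that last point, here is roughly the computation the Distribution module does, sketched in Python with scipy. The location, scale, and df below are the Oosterwijk-prior values as I recall them from the Gronau et al. paper; do verify them there.

```python
# Prior mass an informed (shifted, scaled) t prior assigns to an interval.
from scipy.stats import t

loc, scale, df = 0.350, 0.102, 3  # Oosterwijk prior (check Gronau et al.)
mass = t.cdf(1.0, df, loc=loc, scale=scale) - t.cdf(0.0, df, loc=loc, scale=scale)
print(mass)  # mass on effect sizes between 0 and 1, roughly 0.975
```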

For the t-test, JASP focuses on the Bayes factor, which does not feature model priors. However, you can specify the prior odds yourself, multiply them by the Bayes factor, and obtain the posterior odds this way (which you can transform to a posterior probability).

Cheers,

E.J.

Thanks EJ! I hadn't heard of the Vohs prior, so I will look that up now. I had looked at the Oosterwijk prior, but it didn't seem a good fit to my a priori theory of the group differences. The Cauchy I thought was suitable because, for the group differences, we anticipated quite large effects that could be either positive or negative. I had chosen my own prior, but would you say using one of the existing informed priors from your paper is more plausible?

Can I also ask: would you suggest using consistent priors for independent samples t-tests (Bayesian Mann-Whitney) to compare two groups on a range of variables, and paired samples t-tests to compare change in variables within groups over two time points? Or, if our beliefs are that the change over time would be smaller than the group differences, would it be alright to use different priors?

Also, unrelated: have you heard news about the glitch in JASP where it won't open? It starts to, and then force-closes itself. I've literally been trying for 10 minutes to open the program! I think I saw another post somewhere about it.

Thank you!

Hi BJ89,

What is plausible or implausible wrt priors depends very much on the application at hand.

I experienced this glitch myself, but for me it was an OS overload: too many other processes were hogging resources, so the system could not add another JASP window and closed it. We will try to create an informative error for that. Maybe your problem is different; I would keep an eye on the relevant GitHub entry.

Cheers,

E.J.

Thank you EJ! Yes, it probably didn't help that I was running R and multiple Bayesian analyses at the same time. Probably overloaded.

I have now been looking at model priors for Bayesian regressions. I wondered if you could help me understand why, when I choose a uniform prior for the model (and the JZS default prior), the inclusion BFs are substantially smaller than for the beta-binomial (1,1) prior. I understand the basic shape of the priors, and that the uniform prior assigns equal probability to all models whereas the beta-binomial does not, but I wasn't expecting it to change my results so much. Can you help me here? (I have read the papers by Liang et al. on choosing priors, but they tend not to advise on model priors, so I'm a bit stuck.)
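(To check my own understanding of the two model priors, I sketched the implied probabilities per model size in Python; this is just my reading of the options, not JASP's internals:)

```python
# Total prior probability per model size for p predictors, under a
# uniform prior over models vs. a beta-binomial(1,1) prior over sizes.
from math import comb

p = 3
models = [(k, comb(p, k)) for k in range(p + 1)]  # (size, number of models)

# Uniform: every model gets 1/2^p, so each size k gets C(p,k)/2^p in total.
uniform = {k: n / 2**p for k, n in models}
# Beta-binomial(1,1): every *size* gets 1/(p+1), shared among its models.
beta_binomial = {k: 1 / (p + 1) for k, _ in models}

print(uniform)        # {0: 0.125, 1: 0.375, 2: 0.375, 3: 0.125}
print(beta_binomial)  # {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
```

So the uniform prior concentrates mass on mid-sized models (there are more of them), while the beta-binomial spreads it evenly over sizes; those different averaging weights presumably feed into the inclusion BFs.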

Thank you for your help.

Hi BJ89,

We discuss the impact of model priors on inclusion BFs here: https://journals.sagepub.com/doi/full/10.1177/25152459211031256#appendix

I agree it is counterintuitive. If you send me the data, I can have a look.

E.J.

Hi EJ - that's exactly the type of paper I was looking for! Of course you have already written it, very helpful. Thank you :)

I'd love to send you the data, but the JASP file is huge and won't attach to anything, and the raw data Excel spreadsheet probably shouldn't be attached in a public forum.

No worries! If you still want me to take a look, you could send the data to my personal email address (but don't feel pressured).

E.J.

Hi EJ! Thank you for the offer; I have figured it out now (but what is your email? Maybe I can send you the data anyway and see if you agree!). However, I have another issue which is quite fundamental to another part of my thesis, and maybe you can help.

I've used Bayesian regression to examine predictors of an outcome. In JASP, I did this with the default method of implementing it, which is Bayesian model averaging (BMA). I also used the BMA and brms packages in R to cross-check the results. I have been told by a supervisor that BMA is never an appropriate method to use because it is 'step-wise': "BMA, as the name suggests, takes the average of all the tested models. In the "Posterior Summaries of Coefficients" tables the estimates of the summary statistics will be biased if the submodels are also biased; which they are, due to multicollinearity. Also, the criterion of <0.6 is not adequate; if the correlations between predictors are statistically significant, then you have multicollinearity by definition. The size of the correlations are just a rough estimate of the magnitude of the effects of multicollinearity, but they are overly optimistic (https://link.springer.com/article/10.1007/s00265-010-1045-6; https://summit.sfu.ca/item/17968; https://journalofbigdata.springeropen.com/articles/10.1186/s40537-018-0143-6; and https://www.jstor.org/stable/23566582)."

My variables were not correlated at greater than r = 0.6, and some were barely correlated, but I have been told that even a weak correlation still counts as 'multicollinearity'. This is unrealistic to avoid in psychology, because individual-level variables are often moderately correlated. I really like BMA and find it quite intuitive, and I'm wondering if you can help me defend my choice to use BMA, or whether it is fundamentally flawed and I should use a Bayesian GLM in R instead? I thought, given that it's the only way of implementing it in JASP, and that you tend to have robust justification for your choice of methods in JASP, you might be able to help me defend this?

Thank you!

Hi BJ89,

You say: "I have been told by a supervisor that BMA is never an appropriate method to use because it is 'step-wise'". I am not sure what you mean here, because BMA is a single-step method in which all models are fit to the data simultaneously. It is in fact all the other procedures that are multi-step.
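A toy illustration of that single averaging step (all numbers invented):

```python
# A model-averaged coefficient is a weighted average of the per-model
# estimates, weighted by posterior model probability.
post_model_probs = [0.5, 0.3, 0.2]    # P(model | data) for three models
beta_per_model = [0.40, 0.35, 0.0]    # coefficient estimate in each model
beta_bma = sum(p * b for p, b in zip(post_model_probs, beta_per_model))
print(round(beta_bma, 3))  # 0.305
```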

About multicollinearity: yes, it can be a problem, but only if the multicollinearity is very, very high. For usual levels this should not matter at all. One way to convince your advisor might be to take the original data, remove all of the correlations, and execute the same analyses on the synthetic data for which you know that the correlations are zero. If virtually the same results obtain, then clearly it did not matter.
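For instance, one way to construct such synthetic data is to whiten the predictor matrix so that its sample correlations are exactly zero. A numpy sketch, just one possible approach (the simulated correlated predictors stand in for your real ones):

```python
# Whiten a predictor matrix so its columns are exactly uncorrelated.
import numpy as np

rng = np.random.default_rng(1)
# Two predictors correlated at about r = 0.6, standing in for real data:
X = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=500)

Xc = X - X.mean(axis=0)                 # center the columns
L = np.linalg.cholesky(np.cov(Xc, rowvar=False))
X_white = Xc @ np.linalg.inv(L).T       # sample covariance is now identity

print(np.corrcoef(X_white, rowvar=False).round(6))  # identity matrix
```

You can then rerun the same regression on `X_white` and compare the results.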

E.J.

Thanks EJ, that's great advice. Also, what is your email? I'll send you the data regarding the priors for a t-test, if you'd still like to provide some guidance!

Thank you!

You can't find it online?! EJ.Wagenmakers@gmail.com