Frequentist vs Bayesian for common tests
Hi — hoping this isn't too theoretical a question, but I would really appreciate help (or a pointer in the right direction).
I'm trying to learn a bit about how Bayesian testing works, and I feel like I have made some progress, but I am a bit lost about the likelihood function in practice. As I understand it, we specify a prior, compute a likelihood from the observations, and then compute the posterior. I've done some exercises that use a binomial likelihood to demonstrate the concept, but I fail to see how this applies to a practical scenario involving a t-test, correlation, ANOVA, etc., and there doesn't seem to be much information about how this is done in practice.
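For concreteness, here is the kind of binomial exercise described above as a minimal sketch (the prior parameters and data are made-up illustration values, not anything from a specific textbook): a Beta prior combined with a binomial likelihood gives a Beta posterior in closed form.

```python
from scipy import stats

# Beta(1, 1) prior on the success probability theta (i.e., uniform).
a_prior, b_prior = 1.0, 1.0

# Observed data: 7 successes out of 10 trials (made-up numbers).
successes, trials = 7, 10

# The binomial likelihood is conjugate to the Beta prior, so the
# posterior is again a Beta: Beta(a + successes, b + failures).
a_post = a_prior + successes
b_post = b_prior + (trials - successes)

posterior = stats.beta(a_post, b_post)
print(posterior.mean())          # posterior mean of theta
print(posterior.interval(0.95))  # central 95% credible interval
```

The question is essentially how this prior-times-likelihood recipe carries over to tests where no such tidy conjugate form is on display.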
What is the nature of the likelihood function when we have, e.g., two arrays whose means we want to compare, as with a t-test? Does it come from a t distribution on the difference, so that we then combine the t distribution (with the observed mean?) with our Cauchy prior to get the posterior? I have the same question for Pearson correlation, ANOVA, regression, etc. All I see in JASP is the Cauchy prior and the posterior, but no likelihood.
Apologies if this isn't clear as I don't have a very strong background in probability theory/statistics. Hopefully someone can point me in the right direction.
The more complicated the test, the more involved all of this becomes. For instance, for the Pearson correlation we really have 5 parameters (the correlation plus two means and two variances). What also complicates matters is that we often conduct the test on the effect size, i.e., on delta = mu/sigma. At any rate, some background is in this paper: http://www.ejwagenmakers.com/2016/LyEtAl2016JMP.pdf and in this one: https://www.tandfonline.com/doi/full/10.1080/00031305.2018.1562983
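To make the prior-times-likelihood step visible for a t-test-like setting, here is a deliberately simplified numerical sketch. It is one-sample and fixes sigma = 1 so that the problem stays one-dimensional (a real Bayesian t-test integrates sigma out, as in the papers linked above); the data are simulated, and the Cauchy scale 0.707 mirrors JASP's default. The likelihood is nothing exotic: it is just the normal density of the observed data, viewed as a function of the effect size delta.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated one-sample data with true mean 0.5; sigma is fixed at 1
# purely for illustration, so delta = mu / sigma reduces to mu.
y = rng.normal(loc=0.5, scale=1.0, size=30)

# Grid over the effect size delta.
delta = np.linspace(-3.0, 3.0, 2001)
step = delta[1] - delta[0]

# Cauchy(0, 0.707) prior on delta.
prior = stats.cauchy.pdf(delta, loc=0.0, scale=0.707)

# Log-likelihood of the whole sample at each grid point.
loglik = np.array(
    [stats.norm.logpdf(y, loc=d, scale=1.0).sum() for d in delta]
)
lik = np.exp(loglik - loglik.max())  # rescale for numerical stability

# Posterior is proportional to prior * likelihood; normalize on the grid.
unnorm = prior * lik
posterior = unnorm / (unnorm.sum() * step)

post_mean = (delta * posterior).sum() * step
print(post_mean)  # sits close to the sample mean of y
```

This is what JASP is doing under the hood in spirit: the "missing" likelihood is the sampling model for the data evaluated across values of the parameter of interest, and the displayed posterior is the (normalized) product of that curve with the Cauchy prior.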