Bayes Factor versus P (M1|data) / P (M2|data) ?
Please help me interpret. I am new to JASP and Bayesian stats, so please point out my mistakes and enlighten me
Richard Morey writes on the BayesFactor blog: "The Bayes factor is the relative predictive success between two hypotheses: it is the ratio of the probabilities of the observed data under each of the hypotheses. If the probability of the observed data is higher under one hypothesis than another, then that hypothesis is preferred."
My question is about this claim: "If the probability of the observed data is higher under one hypothesis than another, then that hypothesis is preferred."
Why? The Bayes factor is the evidence in the data. But if my prior distribution were very informative, I might still prefer my original model, even after seeing the data. It is only after repeated new data that my prior will be "swamped", right?
In other words, what is more useful:
a) the Bayes Factor (the relative probability of the data under two competing models); or
b) the relative probability of the models, given the data? (which is the Bayes Factor * the relative probability of the models, prior to the data)
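A tiny numeric sketch of (a) versus (b), with entirely made-up numbers, to show how the two can point in different directions when the prior odds are informative:

```python
# Hypothetical marginal likelihoods for two models M1 and M2.
# These values are invented purely for illustration.
p_data_given_m1 = 0.08  # P(data | M1)
p_data_given_m2 = 0.02  # P(data | M2)

# (a) The Bayes factor: relative predictive success of M1 over M2.
bf_12 = p_data_given_m1 / p_data_given_m2  # = 4.0, data favor M1

# Suppose an informed prior favors M2 ten to one.
prior_odds = 0.1  # P(M1) / P(M2)

# (b) Posterior odds = Bayes factor * prior odds.
posterior_odds = bf_12 * prior_odds  # = 0.4, so M2 is still preferred

print(bf_12, posterior_odds)
```

Here the data favor M1 by a factor of 4, yet the posterior odds still favor M2, which is exactly the situation described above: the Bayes factor reports what the data say, while the posterior odds combine that with the prior odds.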
In yet other words, what good are Bayes Factors?
I know I am making mistakes here, so please correct my thinking. Thanks!