Classification scheme for the interpretation of BF inclusion / posterior inclusion probability

edited May 2020 in JASP & BayesFactor

Hey there,

I read the paper by van den Bergh et al. (A tutorial on Bayesian multi-model linear regression with BAS and JASP), which explains how to interpret the posterior summary when conducting a multiple regression. However, is there a classification scheme for the BF inclusion, as there is for BF10/BF01 (Wagenmakers et al., 2017, Bayesian inference for psychology. Part II: Example applications with JASP)?

I am not sure how to interpret my data. I conducted a multiple regression with 4 predictors. BF10 is 14 for the model with predictor 3 and BF10 is 12 for the model with predictors 2 + 3. The posterior summary shows that the posterior inclusion probability for predictor 2 is 0.6 (BF inclusion = 1.5) and the posterior inclusion probability for predictor 3 is 0.7 (BF inclusion = 2.5). So the posterior inclusion probabilities are only somewhat higher than the prior inclusion probabilities. Can I conclude from my data that the model with both predictor 2 and predictor 3 is the most favorable? In other words: are predictors 2 and 3 relevant predictors?
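
For reference, the BF inclusion values can be recomputed from the inclusion probabilities. The sketch below shows only that arithmetic in Python; it assumes a uniform prior over the 2^4 = 16 candidate models (so each prior inclusion probability is 0.5), and the function name is made up for illustration -- it is not JASP output.

```python
def inclusion_bf(posterior_incl_prob, prior_incl_prob=0.5):
    """BF inclusion = posterior inclusion odds / prior inclusion odds."""
    posterior_odds = posterior_incl_prob / (1.0 - posterior_incl_prob)
    prior_odds = prior_incl_prob / (1.0 - prior_incl_prob)
    return posterior_odds / prior_odds

print(inclusion_bf(0.6))  # predictor 2: 1.5, as reported
print(inclusion_bf(0.7))  # predictor 3: ~2.33, close to the reported 2.5
```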

Thanks for helping me!

Best, Alexa

Comments

  • Hi Alexa,

    There are several perspectives here. First, comparing the model with predictor 3 (BF10 = 14) against the model with predictors 2 + 3 (BF10 = 12), adding predictor 2 actually makes the model slightly worse: the Bayes factor for that addition is the ratio 12/14 ≈ 0.86 (worked out in the short sketch after the comments). Second, the inclusion BFs support the predictors only weakly. Beyond that it is difficult to comment without a screenshot of the output table -- can you provide that?

    Cheers,

    E.J.

  • Hey E.J.,

    Thank you for your answer, that already helps!

    Best, Alexa!
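
To make the first point in E.J.'s reply explicit in numbers, here is a minimal Python sketch using only the BF10 values quoted in the question (the variable names are illustrative, and this is not JASP output):

```python
# BF10 values quoted in the question (each model vs. the null model).
bf10_pred3 = 14.0     # model with predictor 3 only
bf10_pred2_3 = 12.0   # model with predictors 2 + 3

# By transitivity of Bayes factors, the BF for adding predictor 2
# on top of predictor 3 is the ratio of the two BF10 values.
print(bf10_pred2_3 / bf10_pred3)  # ~0.86 < 1: mild evidence against adding predictor 2
```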
