Interpreting Bayesian linear regression in JASP

Hello everybody!

This is the first time I am analyzing data with JASP. Thank you very much for this intuitive tool! I have a question concerning Bayesian linear regression. In an exploratory fashion, I computed three hierarchical linear regressions: as predictor variables, I included all four subscales of my construct of interest (S1-S4), and I have three different DVs (DV1, DV2, DV3).

For DV1, I suppose that the models S2+S4, S3+S4, or S2+S3+S4 work best, as they show the highest BF10 (still, there is no big difference between the three models). But concerning DV2, I am a bit lost. First, the preliminary analyses showed no supported correlations between DV2 and the four subscales. Still, I computed the regression, which produced the output below, and now I am wondering how to interpret it. Can somebody help me out?

Thank you very much!
Alexa

Comments

  • Hi Alexa,

    For DV2, you see that all BF10s < 1. This means that you have evidence for H0 -- if you set the BF display option to "BF01" instead of the default "BF10", you see how much more likely the data are under H0 than under each of the alternative models.

    Cheers,
    E.J.
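
A worked version of the relationship described above, with a hypothetical number rather than one from Alexa's table: since

$$
\mathrm{BF}_{01} = \frac{1}{\mathrm{BF}_{10}},
$$

a model with $\mathrm{BF}_{10} = 0.5$ has $\mathrm{BF}_{01} = 1/0.5 = 2$, i.e. the data are twice as likely under H0 than under that model.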

  • Thank you very much for the quick answer!
    OK, yes, but BF01 is just the same as 1/BF10, isn't it?
    When reporting this result, can I write that the results indicate evidence for H0? Does it make sense to also report that the highest evidence for H0 was found for the model S1+S2+S3+S4 (BF01 = 20.83)?

  • Yes, but BF10 = .2 is more difficult to interpret than the mathematically equivalent statement BF01 = 5.
    You can report that there is evidence for H0, but the table indicates how much -- it matters whether BF01 = 1.5 or 8. If you want to summarize the table with one statement, you can say that the null model outpredicted the models that contain predictors.
    E.J.

  • Okay, thank you very much!
    Just to understand it correctly: for DV2, the data are about 20 times (1/0.048) more likely to occur under the null model than under the full model (all subscales included, highest BF01), and about 3 times (1/0.328) more likely to occur under the null model than under the S4 model (lowest BF01)? Is it correct/"allowed" to conclude that the data provide moderate to strong evidence for the null model?
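
A minimal sketch of that arithmetic in plain Python (not JASP output; the BF10 values 0.048 and 0.328 are the ones quoted above, and the verbal labels follow the commonly used Jeffreys-style categories, so treat the cut-offs as a convention rather than a rule):

```python
# Convert BF10 (evidence for a model over the null) into BF01 (evidence for
# the null over that model) and attach a rough verbal label.
# Cut-offs follow the common Jeffreys-style scheme:
# 1-3 anecdotal, 3-10 moderate, 10-30 strong, >30 very strong.

def bf01_with_label(bf10):
    bf01 = 1.0 / bf10
    if bf01 < 3:
        label = "anecdotal"
    elif bf01 < 10:
        label = "moderate"
    elif bf01 < 30:
        label = "strong"
    else:
        label = "very strong"
    return bf01, label

# BF10 values quoted in the thread: the full model and the S4-only model.
for name, bf10 in [("S1+S2+S3+S4", 0.048), ("S4", 0.328)]:
    bf01, label = bf01_with_label(bf10)
    print(f"{name}: BF01 = {bf01:.2f} ({label} evidence for H0)")
```

Running this gives BF01 of about 20.83 (strong) for the full model and about 3.05 (moderate) for the S4 model, which matches the "moderate to strong" reading above.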

  • Yes, that's correct.
    E.J.

  • Hi, I was wondering whether there is any documentation on the different priors and options available for the Bayesian linear regression? To me, it's hard to understand the difference between the g-prior, the hyper priors, and the default JZS prior.

    And is the correct way to interpret the Bayes factors to select 'compare to null' and then BF10, and to look at the BFM value (e.g., whether it is larger than 3, and so on)?

  • Yes, this is not trivial. The functionality we offer is taken from Merlise Clyde's BAS package. The documentation of that package refers to a paper by Liang et al. for the details. Let me look it up... here it is:
    @article{LiangEtAl2008,
      author  = {Liang, F. and Paulo, R. and Molina, G. and Clyde, M. A. and Berger, J. O.},
      title   = {Mixtures of $g$ Priors for {B}ayesian Variable Selection},
      journal = {Journal of the American Statistical Association},
      year    = {2008},
      volume  = {103},
      pages   = {410--423},
    }

    My personal preference is to select "compare to best model" and then tick BF01. Then you will know by how much the best model outpredicts the competition.

    Cheers,
    E.J.
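
A rough summary of the distinction between these priors, following Liang et al. (2008) rather than JASP's own documentation: all of these options start from Zellner's $g$-prior on the regression coefficients,

$$
\beta \mid g, \sigma^{2} \sim \mathcal{N}\!\left(0,\; g\,\sigma^{2}\,(X^{\top}X)^{-1}\right),
$$

and they differ only in how the scale $g$ is handled. The plain $g$-prior fixes $g$ at a chosen value (e.g. $g = n$ for a unit-information prior); the hyper-$g$ prior puts a prior on $g$ of the form $\pi(g) \propto (1+g)^{-a/2}$ (equivalently, a Beta prior on $g/(1+g)$); and the JZS (Zellner-Siow) prior mixes over $g$ with an Inverse-Gamma$(1/2, n/2)$ distribution, which amounts to a multivariate Cauchy prior on $\beta$.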
