How to interpret Bayesian Linear Mixed Models

Hello,

I tried to run a Bayesian linear mixed model in JASP, but I can't understand how to interpret the output. I ran a frequentist LMM with the lme4 package (the lmer function) in R and obtained no significant difference (very small F, high p) between my variables of interest. So I ran the Bayesian analysis with the aim of establishing whether the null hypothesis can be accepted. It takes hours to complete, but I finally have the output, shown below. My question is whether and how I can interpret these results as evidence for the null hypothesis. Thank you.
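For reference, a minimal sketch of that kind of lmer call (the response variable, data frame, and subject identifier here are hypothetical; only the factors R and dir come from the output below):

    # frequentist linear mixed model, as described above
    library(lme4)
    library(lmerTest)   # assumed here to obtain the F-tests / p-values mentioned above

    fit <- lmer(rt ~ R * dir + (1 | subject), data = dat)
    anova(fit)          # F-statistics and p-values for the fixed effects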

Intercept

Estimate    SE        95% CI Lower    95% CI Upper    R-hat    ESS (bulk)    ESS (tail)
1614.143    23.064    1566.395        1660.104        1.004    205.579       574.364

R (differences from intercept)

Level    Estimate    SE        95% CI Lower    95% CI Upper    R-hat    ESS (bulk)    ESS (tail)
A        -2.230      5.894     -14.058         9.610           1.003    2129.600      3016.883
B        2.230       5.894     -9.610          14.058          1.003    2129.600      3016.883

dir (differences from intercept)

Level    Estimate    SE        95% CI Lower    95% CI Upper    R-hat    ESS (bulk)    ESS (tail)
A        -17.812     10.163    -37.311         1.432           1.001    1037.968      1850.643
T        17.812      10.163    -1.432          37.311          1.001    1037.968      1850.643

Comments

  • Hi alg,

    I'll pass this on to our experts, but I'll note that the mixed models do not (yet) come equipped with Bayes factors, which means there is presently no way to quantify the degree to which the data support or undercut the null. Informally, however, you may argue that if the posterior distribution is tightly concentrated around the null value, then the data support the null.

    Cheers,

    E.J.

  • Hi alg,

    As EJ already mentioned, the Bayesian mixed models do not support Bayes factors yet, so you can't properly evaluate the support for the null hypothesis.

    The default output (which you copied here) corresponds to the estimated difference of each factor level from the intercept. Your hypothesis is probably more concerned with differences between specific conditions, or combinations of them. You can obtain those estimates via the Estimated Marginal Means section and the Contrasts option, where you can set the group difference of interest and see whether the estimate is tightly concentrated around the null value.
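    For R users, a rough analogue of that Contrasts option (not the JASP interface itself, and assuming the hypothetical lmer fit sketched in the opening post) would be something like:

        # estimated marginal means and pairwise contrasts via the emmeans package
        library(emmeans)

        emm <- emmeans(fit, ~ R * dir)         # marginal mean for each cell of the design
        contrast(emm, method = "pairwise")     # differences between specific conditions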

    Cheers,

    Frantisek

  • Thank you very much, EJ and Frantisek. Unfortunately, running Bayesian mixed models really takes too long, and after hours you never know whether it will ever finish. I am considering just running a Bayesian ANOVA instead, which allows me to specify random factors and should also be easier to interpret. Am I right?

    Best, alg

  • Hi Alg,

    The name "random factors" in the Bayesian ANOVA is a bit misleading, I'm afraid: what it means in this context is that the factor is automatically included in the null model and receives a wider prior distribution by default. If you want to fit mixed-effects models, you can resort to R and use generalTestBF or lmBF from the BayesFactor package.
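    A minimal sketch of what that could look like (the names rt, subject, and dat are hypothetical, carried over from the sketch in the opening post; only R and dir come from your output):

        # Bayes factor for the fixed effects in a mixed model, using the BayesFactor package
        library(BayesFactor)

        full <- lmBF(rt ~ R + dir + subject, whichRandom = "subject", data = dat)
        null <- lmBF(rt ~ subject,           whichRandom = "subject", data = dat)
        full / null    # Bayes factor: fixed-effects model vs. random-effect-only null

        # generalTestBF(rt ~ R * dir + subject, whichRandom = "subject", data = dat)
        # would instead compare all sub-models at once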

    We also wrote a tutorial-style preprint on this subject, which you might find helpful, since it includes code for mixed-model comparison using Bayes factors.

    Kind regards,

    Johnny

  • Thank you Johnny, very helpful.

    Cheers,

    Alg
