# Bayesian model averaging

nicolas_m
Hi everyone, I read the two-part paper on Bayesian inference in psychology (Wagenmakers et al., in press) and found it very interesting. In Part 2, the authors describe BMA (pp. 22-23) and conclude that "averaged across all candidate models, the data strongly support inclusion of both main factors Disgust and Fright," based on the two BFinclusion values (46.659 and 1.413e+8, respectively). I would like to be sure that my understanding of BFinclusion is correct.

- Is it right to say that, based on BFinclusion = 46.659, the data are 46.659 times more likely if we consider that Disgust has an effect on the DV than if we consider that it has no effect at all?

- Is the formula used to compute BFinclusion similar to the one used to compute BF10?

- If I have to present results in a paper, do you recommend presenting the BMA rather than the full model comparison? Or is it better to present both analyses?

Thank you very much in advance

## Comments

Hi Nicolas,

Generally I'd advocate reporting both analyses. The BF inclusion formula is similar to BF10 in that it quantifies the change from prior odds to posterior odds. In the BF inclusion case, however, those odds are computed over all models that contain the variable of interest versus all models that do not.
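(Editorial sketch.) The pooling E.J. describes can be illustrated numerically. The model space and probabilities below are invented for illustration; they are not the values from the paper or from JASP:

```python
# Sketch of BF_inclusion: change from prior to posterior inclusion odds,
# pooled over all models containing the effect vs. all models that do not.
# Model names and probabilities are made up for illustration only.
models = {
    # model name: (prior probability, posterior probability)
    "null":             (0.25, 0.02),
    "disgust":          (0.25, 0.10),
    "fright":           (0.25, 0.30),
    "disgust + fright": (0.25, 0.58),
}

def inclusion_bf(models, effect):
    # Sum prior and posterior probability over models that include the effect.
    prior_in = sum(pr for name, (pr, po) in models.items() if effect in name)
    post_in = sum(po for name, (pr, po) in models.items() if effect in name)
    # BF_inclusion = posterior inclusion odds / prior inclusion odds.
    prior_odds = prior_in / (1.0 - prior_in)
    post_odds = post_in / (1.0 - post_in)
    return post_odds / prior_odds

print(inclusion_bf(models, "disgust"))  # 2.125 with these made-up numbers
```

With these toy numbers the prior inclusion odds are 0.5/0.5 = 1 and the posterior inclusion odds are 0.68/0.32 = 2.125, so BFinclusion = 2.125.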

As far as the interpretation goes, yes, something like that -- I'd explicitly indicate that you contrasted the models with and without the variable of interest.

Cheers,

E.J.

Hi E.J.,

Thank you very much for your answer. I read your interesting discussion with Sebastiaan about BF inclusion (here: http://forum.cogsci.nl/index.php?p=/discussion/2996/odd-bayesian-repeated-measures-seems-biased-against-interactions-and-in-favor-of-main-effects) and his presentation of what he called "Baws Factors" (here: https://www.cogsci.nl/blog/interpreting-bayesian-repeated-measures-in-jasp). Is it right to say that Baws Factors are just BF inclusion values computed using a different Bayesian model averaging procedure than the one used in JASP?

Thank you very much

Best regards,

Nicolas

Hi Nicolas,

The "Baws Factors" should be in the next version of JASP; they are computed by neglecting certain models. So the procedure is the same, but what differs is the set of models that is averaged over.
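(Editorial sketch.) Following the procedure in Sebastiaan's blog post, the restricted average for a main effect can be sketched as follows. The models and posterior probabilities are invented, and dividing sums of posterior probabilities assumes equal prior probabilities across the matched models:

```python
# Matched-models ("Baws factor") inclusion BF for a main effect: models
# containing an interaction that involves the effect are excluded from the
# average. Posterior probabilities are invented for illustration; dividing
# their sums assumes equal priors over the matched models.
models = {
    # model (set of terms): posterior probability P(M | data)
    frozenset():                  0.05,
    frozenset({"A"}):             0.15,
    frozenset({"B"}):             0.10,
    frozenset({"A", "B"}):        0.40,
    frozenset({"A", "B", "A*B"}): 0.30,
}

def matched_inclusion_bf(models, effect):
    def involves(term):
        # True for an interaction term (e.g. "A*B") that contains the effect.
        return term != effect and effect in term.split("*")
    # Matched set: drop every model with a higher-order term involving the effect.
    with_effect = sum(p for m, p in models.items()
                      if effect in m and not any(involves(t) for t in m))
    without_effect = sum(p for m, p in models.items()
                         if effect not in m and not any(involves(t) for t in m))
    return with_effect / without_effect

print(matched_inclusion_bf(models, "A"))  # 0.55 / 0.15, about 3.67
```

Here the model {A, B, A*B} is neglected when assessing main effect A, exactly the "neglecting certain models" step: the procedure is unchanged, only the set averaged over shrinks.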

E.J.

Hi E.J.,

Thank you very much.

Have a nice day

Nicolas

Hi E.J., I was happy to see that the latest JASP update includes BMA across matched models. However, when I run my analysis in JASP I don't find the same BFincl as when I compute the Baws factors by hand. I have probably made a mistake in my hand calculation, and I would like to understand where. Could you provide the exact formula used to compute the Baws factors in JASP, please? Thanks a lot

Nicolas

Hi Nicolas,

Sure, happy to oblige. I'll ask Tim to send the relevant code, or post it here.

Cheers,

E.J.

Thanks E.J.

I talked about that with Sebastiaan. I attach a file with my JASP results and my by-hand results, which I obtained by applying the procedure Sebastiaan proposed for computing Baws Factors (https://www.cogsci.nl/blog/interpreting-bayesian-repeated-measures-in-jasp).

Thanks again for your help

It's a little tricky to give an excerpt of the code, as it uses calculated results from other functions.

Let's give it a try though!

I attached the code to this post in a .zip file.

To demonstrate what the calculated results from the other functions look like I also include a .RData file.

This .RData file was generated with a Bayesian RM ANOVA performed on the JASP Bugs dataset.

The column names look a little funky, because they contain (escaped) unicode characters.

Normally the columns are base-64 encoded, but for clarity purposes I decoded them.

If you mimic the structure of the objects `effects.matrix`, `model$interaction.matrix`, and `effectNames`, you should be able to run the code. Hopefully you'll find it helpful!

If you have any other questions do not hesitate to ask.

Could you maybe send me the .jasp file itself, Nicolas?

(t.dejong@jasp-stats.org if you want to keep it private)

Hi Tim, thanks for your help and for the script. I will send you the .jasp file.

Omitting the prior inclusion probabilities,

IV 1:

IV 2:

IV 1 * IV 2:

Obviously there is some discrepancy because of rounding, but it seems to match to me.

Thanks Tim. However, for IV 1 * IV 2, I don't understand why your sums of P(M|data) for the in-models and out-models are not equal to 0.868 and 0.003, respectively. JASP indicates Incl. BF = 250.453 for this effect, whereas my by-hand calculation gives 285.33. Intuitively, I thought the discrepancy would be smaller if it were merely due to rounding.

Hi Nicolas,

Not 100% sure, but these high BFs translate into very small probabilities, where rounding can make a big difference. To check, you could take an example where the BFs are modest.

E.J.

Thanks for your answer E.J., I think I finally got it. Your explanation is consistent with the very small discrepancies observed for the BFs associated with the main effects.

Thanks to both of you. Best regards

Nicolas

Glad it makes sense to you now.

I indeed made a little mistake with the copy-pasting earlier.

It should be

IV 2:

IV 1 * IV 2:

To demonstrate how strong the effect of rounding is with low probabilities (as E.J. mentioned), IV 1 * IV 2 is (roughly) bounded between:

Lower: 0.868 / 0.0034 = 255

Upper: 0.868 / 0.0025 = 347
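(Editorial sketch.) Tim's interval can be reproduced in a couple of lines, using his figures for the rounding range of the displayed exclusion probability:

```python
# With the inclusion probability displayed as 0.868 and the exclusion
# probability displayed only to three decimals, the true ratio is pinned
# down to a wide interval (figures taken from Tim's post above).
p_incl = 0.868
lower = p_incl / 0.0034
upper = p_incl / 0.0025
print(round(lower), round(upper))  # 255 347
```

Both the JASP value (250.453) and the by-hand value (285.33) fall comfortably inside this interval, so rounding alone accounts for the discrepancy.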

Hi Tim, thanks very much. It's crystal clear now, thanks to you guys.

Best Regards

Nicolas