Correctly calculating Likelihood Ratio vs. BIC vs. BF
Hi everyone,
I am trying to find the best way of using Bayesian methods to test for the absence of a difference in means. Specifically, I am interested in quantifying the evidence for H0 over H1, where H1 posits a difference in means and H0 posits no difference. In most cases these are what EJW calls 'pro-forma evaluations' in 'A practical solution to the pervasive problems of p-values': there is no clear hypothesis concerning H1, and no commitment to H1 either. Furthermore, I have no prior knowledge of how large a difference between means would create concern. However, it is important to know that conditions were well randomized/balanced on measures which would otherwise raise concerns about the design and the appropriateness of the rest of the analysis.
I have found three potential options, all closely related, and which I would like to calculate (and understand their relationship):
1) Likelihood ratio
2) Bayesian Information Criterion (BIC)
3) Bayes Factor (BF)
In the above-mentioned publication, EJW (page 798) refers to an example by Glover & Dixon (2004), in which an additive model is compared to an interaction model. They (G&D) suggest the use of the likelihood ratio, corrected for differences in the number of parameters (one fewer in the additive model) as well as for the low number of observations.
Q1a) Is it possible to extend this approach (including the corrections) to an even simpler comparison between a model with 1 mean (H0) versus 2 means (H1)?
Q1b) Would the following be a correct implementation of this?
n = length(data);
m0 = mean(data);
m1 = mean(data(group==1));
m2 = mean(data(group==2));
SSt = sum((data-m0).^2); % total sum of squares (residual SS under H0)
SSe = sum([(data(group==1)-m1).^2, (data(group==2)-m2).^2]); % residual sum of squares under H1 (unexplained variation)
SSg = SSt-SSe; % group sum of squares (explained variation)
k1 = 2; % two parameters for H0: mean and variance
k2 = 3; % three parameters for H1: mean, variance and group effect
Qc = exp((k1*n/(n-k1-1))-(k2*n/(n-k2-1))); % small-sample correction for model complexity, Eq. 6 in Glover and Dixon, 2004
lambda = ((SSt/SSe)^(n/2))*Qc;
% Model 1 unexplained variation / Model 2 unexplained variation, cf. Glover and Dixon, 2004 (page 802)
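For a self-contained sanity check of Q1b, here is the same computation as a plain-Python sketch on simulated data (the data, seed, and group sizes are made up for illustration; the formulas follow the snippet above):

```python
import math
import random

random.seed(1)

# Simulated two-group data (illustrative only): no true mean difference
g1 = [random.gauss(0.0, 1.0) for _ in range(20)]
g2 = [random.gauss(0.0, 1.0) for _ in range(20)]
data = g1 + g2
n = len(data)

m0 = sum(data) / n            # grand mean (H0: one mean)
m1 = sum(g1) / len(g1)        # group means (H1: two means)
m2 = sum(g2) / len(g2)

SSt = sum((x - m0) ** 2 for x in data)    # residual SS under H0
SSe = (sum((x - m1) ** 2 for x in g1)
       + sum((x - m2) ** 2 for x in g2))  # residual SS under H1

k1, k2 = 2, 3  # parameters: (mean, variance) vs. (mean, variance, group effect)
# Small-sample correction for the extra parameter (cf. Glover & Dixon, 2004, Eq. 6)
Qc = math.exp(k1 * n / (n - k1 - 1) - k2 * n / (n - k2 - 1))
lam = (SSt / SSe) ** (n / 2) * Qc  # adjusted likelihood ratio for H1 over H0

print(lam)
```

Since k2 > k1 the correction Qc is below 1, so it penalizes the more complex model, and SSe can never exceed SSt because the two-mean model nests the one-mean model.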
Q2a) The BIC also includes a correction for the complexity of the two models. How does this correction compare with the correction proposed by Glover and Dixon in the example above? Since both include a correction, does the BIC differ only in the specific correction it applies?
Q2b) Would the following be an implementation of the BIC (following the code above)?
dBIC10 = n*log(SSe/SSt)+(k2-k1)*log(n);
Q2c) Am I right to deduce that I have to interpret the dBIC10 value as evidence for H0 over H1 (rather than the other way around)?
Q3) Can I calculate the Bayes Factor from the previous with BF = exp(dBIC10/2); ?
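To make Q2b and Q3 concrete, here is a minimal Python sketch on toy data (the data and sample sizes are invented; the sign convention assumed is dBIC10 = BIC(H1) - BIC(H0), so positive values favour H0):

```python
import math
import random

random.seed(2)

# Toy two-group data with no true mean difference (illustrative only)
g1 = [random.gauss(0.0, 1.0) for _ in range(30)]
g2 = [random.gauss(0.0, 1.0) for _ in range(30)]
data = g1 + g2
n = len(data)

m0 = sum(data) / n
m1 = sum(g1) / len(g1)
m2 = sum(g2) / len(g2)
SSt = sum((x - m0) ** 2 for x in data)    # residual SS under H0
SSe = (sum((x - m1) ** 2 for x in g1)
       + sum((x - m2) ** 2 for x in g2))  # residual SS under H1

k1, k2 = 2, 3
# BIC difference: BIC(H1) - BIC(H0); positive values favour H0
dBIC10 = n * math.log(SSe / SSt) + (k2 - k1) * math.log(n)

# BIC approximation to the Bayes factor for H0 over H1
BF01 = math.exp(dBIC10 / 2)
print(dBIC10, BF01)
```

Under this convention BF01 > 1 indicates evidence for H0, which matches the reading asked about in Q2c; note the log(n) term is the sample-size correction mentioned in the replies below.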
Thanks so much for reading this far! It would be great to get some feedback on these as my understanding is still too limited to evaluate itself :-)
Comments
Hi Stephen,
This is a while ago. There is this paper: Nathoo, F.S. and Masson, M.E.J. (2015), Bayesian Alternatives to Null-Hypothesis Significance Testing for Repeated Measures Designs. Journal of Mathematical Psychology, http://dx.doi.org/10.1016/j.jmp.2015.03.003.
But in general I would just use the Bayes factors from JASP/BayesFactor without going through the BIC approximation.
In order to address your questions I'd have to free up some serious time, but I will note that the crucial ingredient of the BIC is the correction using sample size. Any method that does not have that is in trouble.
Cheers,
E.J.
Hi Eric-Jan,
Thanks a lot. I will start working with the R library and hopefully improve my understanding by working with some data.
Best,
Stephen