BF from multiple regression summary stats

Hi,

I would like to use the Summary Stats function for Bayesian linear regression to obtain Bayes factors for published studies. The required inputs are the number of covariates and the R-squared values for both the null and alternative models, which are usually obtainable when a study reports a hierarchical regression.

However, when a standard multiple regression was performed, which is more common, typically only the standardized betas, t-values, and p-values are reported for the individual predictors; when R-squared is reported, it is usually for the overall model. If I want to calculate the BF for the relationship between a predictor of interest and the dependent variable, over and above covariates of non-interest, would it be appropriate to use the Bayesian one-sample t-test instead? Or is it possible to somehow compute the R-squared values from the standardized betas, t-values, and p-values?
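
For context, the default Bayesian one-sample t-test depends on the data only through t and N, so it can in principle be computed from summary statistics. Below is a minimal Python sketch of the JZS Bayes factor (Rouder et al., 2009), with hypothetical t and N; it implements the textbook integral, not JASP's own code.

```python
import numpy as np
from scipy import integrate

def jzs_bf10(t, n, r=np.sqrt(2) / 2):
    """JZS Bayes factor (BF10) for a one-sample t-test, computed from the
    t-value and sample size alone (Rouder et al., 2009); r is the scale of
    the Cauchy prior on effect size (0.707 is the common default)."""
    nu = n - 1
    # marginal likelihood under H0 (up to a constant shared with H1)
    like_h0 = (1 + t**2 / nu) ** (-(nu + 1) / 2)

    def integrand(g):
        # likelihood averaged over the Cauchy(0, r) prior on effect size,
        # written in its normal-mixture form: delta | g ~ N(0, g) with
        # g ~ InverseGamma(1/2, r^2 / 2)
        return ((1 + n * g) ** -0.5
                * (1 + t**2 / ((1 + n * g) * nu)) ** (-(nu + 1) / 2)
                * r / np.sqrt(2 * np.pi) * g**-1.5 * np.exp(-r**2 / (2 * g)))

    like_h1, _ = integrate.quad(integrand, 0, np.inf)
    return like_h1 / like_h0

print(jzs_bf10(t=2.5, n=50))  # hypothetical summary statistics
```

Whether it is appropriate to plug in the t of a regression coefficient is exactly the question here.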

Thanks so much in advance!

Best,
Darren

Comments

  • EJ Posts: 368

    Hi Darren,

    I am not 100% sure, but I don't think you can compute R-squared from standardized betas, t-values, and p-values unless you are willing to make some strong additional assumptions (e.g., uncorrelated predictors). In other words, I don't think the relation is unique (that is, t and N are not sufficient); perhaps Google offers additional insight. For the same reason (dependence between the predictors), I don't think it is correct to use the Bayesian t-test. The extent to which the regular t-test is a good approximation of the full result (i.e., the result you would get if you had access to the R^2s) could be interesting to study. If the approximation is often good enough, you might feel encouraged to do as you propose. All of this depends on the extent to which you can vary one quantity (say, R^2) while keeping the others (the t and p for an individual predictor) constant.
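
    One way to see the non-uniqueness: for a single added predictor, the squared t equals the partial F for the R^2 change, t^2 = (R^2_full - R^2_null) * df_resid / (1 - R^2_full), so t (together with N and the number of predictors) fixes only that ratio, not the two R^2 values the Summary Stats module needs. A numpy sketch on simulated data that checks the identity (not JASP code, just the textbook algebra):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100
    x1 = rng.normal(size=n)                       # predictor of interest
    x2 = 0.6 * x1 + rng.normal(size=n)            # correlated covariate
    y = 0.4 * x1 + 0.3 * x2 + rng.normal(size=n)

    def fit(X, y):
        """Return R-squared and t-values for an OLS fit (X includes an intercept)."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sigma2 = resid @ resid / (len(y) - X.shape[1])
        se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
        r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        return r2, beta / se

    X_full = np.column_stack([np.ones(n), x1, x2])
    X_null = np.column_stack([np.ones(n), x2])    # model without the predictor of interest
    r2_full, t_vals = fit(X_full, y)
    r2_null, _ = fit(X_null, y)

    t1 = t_vals[1]                                # t for x1 in the full model
    df_resid = n - X_full.shape[1]
    print(t1**2)                                           # these two numbers match:
    print((r2_full - r2_null) * df_resid / (1 - r2_full)) # t fixes only the ratio
    ```

    Many (R^2_null, R^2_full) pairs yield the same ratio, which is why the reported t alone cannot recover them.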

    Cheers,
    E.J.

  • dy Posts: 2

    Hi E.J.,

    Thank you so much for the prompt reply! I did an extensive search before I resorted to asking here. Thanks for confirming my suspicions! I agree with the points you make about the dependency on the other predictors, and whether it is a good approximation of the full results.

    The only reason that possibility came to mind is that (1) using a t-table, one can find the p-value from just t and the degrees of freedom, independent of the specific test used (e.g., independent samples, paired samples, one sample), and (2) I read in one of your papers (Wetzels et al., 2011) how BFs, or at least their range, can be approximated from p-values. So I thought it might be possible to link t-values to BFs, albeit indirectly. However, I believe the least controversial way is to cite the 2011 paper and use the BF ranges/labels associated with p-values, e.g., p < .01 is likely to be 'substantial/moderate' evidence and p < .05 likely to be 'anecdotal/weak' evidence?
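
    On point (1), the inversion also works: given the residual degrees of freedom, a reported two-sided p-value can be converted back to |t| (the sign is lost). A scipy sketch with hypothetical numbers:

    ```python
    from scipy import stats

    p, df = 0.01, 97                # hypothetical reported p-value and residual df
    t = stats.t.ppf(1 - p / 2, df)  # invert the two-sided p-value; gives |t| only
    print(t)                        # ~2.63; could feed a t-based BF such as jzs_bf10 above
    ```

    The caveat above still applies, though: for a regression coefficient it is not obvious what effective N a one-sample BF formula should use.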

    Thanks!

    Best,
    Darren

  • EJ Posts: 368

    Hi Darren,

    p-values are approximations for a test of direction. See http://www.ejwagenmakers.com/2017/MarsmanWagenmakers2017ThreeInsights.pdf

    There are some good reasons to move from the standard "alpha=.05" level to a stricter "alpha = .005" level. See the 2013 PNAS paper by Valen Johnson and the cartoon here: https://jasp-stats.org/2017/06/12/mysterious-vs-mpr/
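
    The VS-MPR in that cartoon is easy to compute: it is an upper bound on the odds in favor of H1 that a given p-value can provide. A quick sketch of the Sellke-Bayarri-Berger bound:

    ```python
    import numpy as np

    def vs_mpr(p):
        """Vovk-Sellke maximum p-ratio: upper bound on the odds in favor of H1
        implied by a p-value (valid for p < 1/e)."""
        return 1 / (-np.e * p * np.log(p))

    print(vs_mpr(0.05), vs_mpr(0.005))  # ~2.46 vs ~13.9: why .005 is stricter
    ```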

    Cheers,
    E.J.
