Bayesian regression analysis using JASP

Hi there,

I am new at using JASP and I was wondering whether you could help me out with some issues I have running a Bayesian regression analysis.

I have been asked to use Bayes factors to strengthen my analyses, so that I can draw more reliable inferences from my non-significant results. I was told to do this using JASP, but I have a couple of questions about how to carry out the analysis correctly.

First of all, I have to carry out a regression analysis that includes binary as well as continuous predictors, but when I choose Bayesian regression in JASP it does not allow me to include nominal variables as covariates. Is there a way to get around this, i.e., can I treat the variables as scale with the levels 0 and 1? Or is it simply not possible to do Bayesian regression with binary predictors in JASP?

Secondly, if I want to set an objective prior do I select the Uniform option for Model prior in the Advanced options?

Thank you

Thalia

Comments

  • Dear Thalia,

    First, if you set the *model prior* to Uniform, this assigns equal prior plausibility to each model (i.e., each unique combination of predictors). This is standard practice, but the problem is that, implicitly, this setting leads to a preference for models that include about half of the predictors. Scott & Berger (2006, 2010) have argued for a more sophisticated alternative.
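    The implicit preference E.J. describes can be made concrete: a uniform prior over all 2^p models induces a Binomial(p, 1/2) prior on the *number* of included predictors, which peaks at p/2. A minimal Python sketch (p = 8 is chosen only for illustration):

```python
from math import comb

p = 8  # number of candidate predictors (illustrative)

# Under a uniform model prior, each of the 2^p models has probability 1/2^p.
# The implied prior on model *size* k is therefore C(p, k) / 2^p,
# i.e. a Binomial(p, 1/2) distribution.
size_prior = {k: comb(p, k) / 2**p for k in range(p + 1)}

for k, prob in size_prior.items():
    print(f"{k} predictors: prior mass {prob:.4f}")

# The mass concentrates around k = p/2 = 4 predictors:
assert max(size_prior, key=size_prior.get) == 4
```

    So even "uniform" is informative about model size, which is the motivation for alternatives such as the beta-binomial model prior mentioned below in this thread.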

    Within a given model (i.e., a set of predictors) you need to assign a prior distribution to the regression coefficients. This can be done in different ways, and JASP offers plenty of options. I personally prefer the default that we provide (which is the same as the default in the BayesFactor package).

    As far as the non-continuous predictors are concerned, yes, this is a problem. You could close your eyes and pretend everything is OK, but that is dangerous. Let me include Don van den Bergh and Alexander Ly in this conversation, maybe they have some words of wisdom...

    Cheers,

    E.J.

  • Dear Thalia, 

    To perform a Bayesian linear regression with nominal predictors we recommend using a Bayesian ANCOVA. Most likely you’re interested in the effects table, which provides the evidence for the inclusion of each predictor across models.

    In these linear models there are basically two types of priors: (1) the priors on the models, and (2) the priors on the parameters within a model.


    1. Priors on the models

    With p predictors there are in principle 2^p models. For instance, if p = 8, then there are 2^8 = 256 models. Each model can be represented by an indicator vector that tells you which of the predictors are active. For instance, the first model, which includes only the intercept and thus none of the predictors, can be represented by

    0, 0, 0, 0, 0, 0, 0, 0

    and the last model that, on top of the intercept, includes all 8 predictors can be represented by 

    1, 1, 1, 1, 1, 1, 1, 1

    In between we have models such as 

    0, 0, 0, 0, 0, 0, 1, 1

    which has two active predictors, namely, the last two. When you choose a uniform prior on the models, each of these 256 models gets a prior model probability of 1/256. After observing the data, these prior model probabilities are updated to posterior model probabilities. If all predictors are relevant, then the last model, represented by

    1, 1, 1, 1, 1, 1, 1, 1

    gets a relatively high posterior model probability compared to the first model, in which every predictor is inactive. Similarly, the model

    0, 0, 0, 0, 0, 0, 1, 1

    should then also get a higher posterior model probability than the first model. 

    When p=8, the effects table summarises the importance of each predictor across the 256 models by weighting with respect to the posterior model probabilities. Note that the prior and posterior model probabilities are discrete. In Bayesian linear regression in JASP you can change this prior on the models to, for instance, a beta-binomial. In Bayesian ANCOVA a uniform prior on the models is used. 
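    The enumeration above can be sketched in a few lines of Python (this is not JASP code, just an illustration; p = 8 follows the example, and the beta-binomial shape parameters a = b = 1 are an assumed choice):

```python
from itertools import product
from math import lgamma, exp

p = 8  # number of predictors, as in the example above

# Every model is an indicator tuple: entry i is 1 if predictor i is active.
models = list(product([0, 1], repeat=p))
assert len(models) == 2**p  # 256 models

# Uniform prior on the models: each model gets 1/256.
uniform_prior = {m: 1 / len(models) for m in models}

def log_beta(a, b):
    """Log of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

# Beta-binomial(a, b) prior on the models: a model with k active
# predictors gets probability B(k + a, p - k + b) / B(a, b).
# With a = b = 1 this spreads mass uniformly over model *sizes*
# (1/(p+1) per size), then uniformly over the models of each size.
a = b = 1.0
betabinom_prior = {
    m: exp(log_beta(sum(m) + a, p - sum(m) + b) - log_beta(a, b))
    for m in models
}

null_model = (0,) * p  # intercept only
full_model = (1,) * p  # all 8 predictors
print(uniform_prior[full_model])    # 1/256 ≈ 0.0039
print(betabinom_prior[null_model])  # 1/(p+1) = 1/9 ≈ 0.1111
```

    Note how the beta-binomial gives the intercept-only model far more prior mass than the uniform prior does, because there is only one model of size 0 but 70 models of size 4.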


    2. Priors on the parameters within a model

    For a Bayes factor we also require priors on the active parameters within each model. These priors are continuous, and as a default we recommend a multivariate Cauchy prior with scale parameter r. In Bayesian linear regression this set-up is referred to as JZS; the Bayesian ANCOVA interface doesn’t mention JZS by name, but it also allows users to tune the scale parameter r.
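    In one dimension, the Cauchy prior has density f(x) = 1 / (pi * r * (1 + (x/r)^2)); its heavy tails mean that large coefficients are not ruled out a priori, in contrast with a normal prior of comparable width. A small sketch (the value r = 0.354 below is purely illustrative, not necessarily JASP's default):

```python
from math import pi, sqrt, exp

def cauchy_pdf(x, r):
    """Density of a Cauchy distribution with location 0 and scale r."""
    return 1.0 / (pi * r * (1.0 + (x / r) ** 2))

def normal_pdf(x, sd):
    """Density of a zero-mean normal distribution with standard deviation sd."""
    return exp(-x**2 / (2 * sd**2)) / (sd * sqrt(2 * pi))

r = 0.354  # illustrative scale value only

# Near zero the two priors look similar, but in the tails the Cauchy
# places far more mass, so large coefficients stay plausible a priori.
for x in [0.0, 1.0, 3.0, 10.0]:
    print(f"x = {x:5.1f}  Cauchy: {cauchy_pdf(x, r):.6f}  "
          f"Normal: {normal_pdf(x, r):.6f}")
```

    Smaller r concentrates the prior near zero (expecting small effects); larger r spreads it out, which is what tuning the scale parameter in JASP amounts to.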

    For more on these two types of priors and the roles they play, see for instance

    https://psyarxiv.com/dhb7x

    I hope that this helps. 

    Cheers,

    Alexander
