Bayesian Logistic Regression questions, problems
I'm very happy to see Bayesian logistic regression included in JASP 0.17.2. I'm looking forward to seeing documentation on it!
In the meantime I'm trying to use it, and am using the Bayesian Linear Regression example (World Happiness dataset) as a guide. However, I'm getting different results in version 17.2 than I got in 17.1.
I had installed 17.1 in preparation for a workshop on May 11, where the presenter walked us through the Bayesian Linear Regression example. I discovered that 17.2 was released that very day, with Bayesian logistic regression, so I installed it as soon as I could. Now when I run the Bayesian Linear Regression example in 17.2, the output in the model and coefficient tables is not the same, e.g. the BFM values differ (I can tell by comparing against the recording of the workshop and the annotations in the example itself).
Is anyone else having a similar problem with that example? One reason why it worries me is that I'm trying to run a Bayesian logistic regression and the output is odd in the same way as the example linear regression output, i.e., the BFM and BF01 values don't corroborate the identification of the best model.
This all may be due to me being an extreme novice, but I'm concerned that there may be a bug, so I would appreciate anyone who can shed some light.
Thanks for reporting this -- I'll forward this to our expert. It would be very helpful if you could add a screenshot, by the way! What could be the case (you could check) is that the new version uses the beta-binomial prior model specification (I thought 17.1 did so as well, but maybe I'm wrong). With the beta-binomial specification, equal prior mass is assigned to models with 0 predictors, 1 predictor, 2 predictors, etc., and then, within each model class, it is evenly distributed among the model instances. This means that the model with the best predictive performance need not be the model with the highest posterior probability (because of differences in prior model probability). BUT: it would be great if you could add some screenshots showcasing the discrepancy with 17.1 (you can also do this through our GitHub page).
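To make the difference concrete, here is a minimal sketch (not JASP's actual code) of the per-model prior probabilities under the two specifications, assuming a Beta(1,1) beta-binomial prior, i.e. uniform over model size and then uniform within each size class:

```python
from math import comb

def beta_binomial_prior(p):
    """Prior probability of a single model with k predictors, for each k,
    under a Beta(1,1) beta-binomial model prior over p candidate predictors:
    each of the p+1 size classes gets mass 1/(p+1), shared equally among
    the C(p, k) models of size k."""
    return {k: 1.0 / ((p + 1) * comb(p, k)) for k in range(p + 1)}

def uniform_prior(p):
    """Under the uniform specification, every one of the 2^p models
    gets the same prior probability."""
    return 1.0 / 2 ** p

# Example with p = 3 predictors:
# beta-binomial: the null model and the full model each get 1/4,
# while each single- or two-predictor model gets only 1/12;
# uniform: every model gets 1/8.
print(beta_binomial_prior(3))
print(uniform_prior(3))
```

So under the beta-binomial prior the null and full models start with more prior mass than any intermediate model, which shifts posterior model probabilities (and hence BFM) relative to the uniform prior, even though the Bayes factors for predictive performance are unchanged.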
Could you share the jasp file or a screenshot? Also, which operating system are you using?
It looks like you're right, EJ. The default is beta-binomial priors, as shown in the first screenshot. Here, the BFM values don't match the description in the notes.
When I changed to uniform priors, the results match the notes, as shown here.
I hope you're able to see the details in the screenshots; if not, let me know. I'm using Windows 10 Enterprise. But it seems like the issue is simply that the default prior specification is different now.