Different JASP versions giving different results
I ran the same BSEM model in two different JASP versions (0.95.1.0 versus 0.19.1) and the BIC was slightly different (7.289 versus 7.446). Does anyone know why? I used the default priors in both cases; have they changed between these versions?
For a prior sensitivity analysis (in the case where the defaults were used because no informed priors were available): is doubling and halving the default values a fair strategy?
Comments
Yes, doubling and halving is a fair strategy.
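To illustrate what that check amounts to, here is a hedged sketch in plain Python: a toy conjugate normal-mean model with made-up data (not JASP's actual BSEM machinery), refit with the default prior scale halved and doubled to see how much the posterior moves.

```python
# Hypothetical illustration only: posterior mean of mu for normally
# distributed data with known residual SD, under a Normal(0, tau^2) prior.
# The "sensitivity analysis" refits with tau halved and doubled.
import statistics

data = [0.8, 1.1, 0.6, 1.4, 0.9, 1.2]   # made-up observations
sigma = 1.0                              # assumed known residual SD
n = len(data)
xbar = statistics.fmean(data)

def posterior_mean(tau):
    """Conjugate posterior mean of mu under a Normal(0, tau^2) prior."""
    precision_prior = 1.0 / tau**2
    precision_data = n / sigma**2
    return (precision_data * xbar) / (precision_prior + precision_data)

default_tau = 1.0  # stand-in for a software default prior scale
for tau in (default_tau / 2, default_tau, default_tau * 2):
    print(f"prior SD {tau:.2f}: posterior mean {posterior_mean(tau):.3f}")
```

If the three posterior summaries (and downstream conclusions) barely move, the analysis is robust to the default prior scale; large shifts would suggest the default is doing real work and deserves justification.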
Not sure why BSEM would differ, other than updates in the underlying R package (I will ask Julius to check).
EJ
@EJ Was there any word from Julius RE: the above?
Apologies, I was on holiday and this one slipped through.
How did you run a BSEM model in JASP? Using JAGS? The SEM module does not have that functionality. If you used JAGS, I don't know whether anything changed; someone else is responsible for that module.
Julius
The model has no latent variables.
It was constructed using the Bayesian Process module of JASP version 0.19.1.0, with interrelations (all specified as "Direct" within JASP).
All BSEM options in JASP were left at their defaults.
Ah cool. Unfortunately, that is also not my module. It could be Malte's. Let me ask him.
@julius Any news?
He is not responding, sorry.
Hi Julius. I have a similar problem with different versions of JASP displaying different results. In a basic t-test, an older version of JASP displays Shapiro-Wilk test results for each of the two DVs that have two groups. However, the new JASP does not show the two groups and just shows one group. Could you please advise?
Hi CrewedUp,
Can you open a separate issue for this problem? Ideally with a screenshot and/or an example data set. This is of course a key analysis, so if it is somehow broken we get into hotfix territory quickly.
EJ
Hi, apologies for the late response. Does your model contain moderators/interactions? From 19.2 onward, interactions are mean-centered by default, which was not the case in previous versions. We pinned the version of blavaan to 0.5.2, so it's unlikely that changes there are the reason. Could you otherwise provide more details about the model you are estimating?
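To make the mean-centering point concrete, here is a hedged sketch with simulated data (Python/NumPy, not the actual JASP or blavaan internals): centering the product term leaves the interaction coefficient unchanged, but it shifts the intercept and the lower-order coefficients, so priors placed on those coefficients no longer refer to the same quantities across parameterizations.

```python
# Sketch with invented data: the same linear model fit with a raw product
# term x*m versus a mean-centered product (x - xbar)*(m - mbar).
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(2.0, 1.0, n)   # predictor with a non-zero mean
m = rng.normal(3.0, 1.0, n)   # moderator with a non-zero mean
y = 1.0 + 0.5 * x + 0.3 * m + 0.2 * x * m + rng.normal(0.0, 0.1, n)

def fit(interaction):
    """OLS fit of y on [1, x, m, interaction]; returns the coefficients."""
    X = np.column_stack([np.ones(n), x, m, interaction])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_raw = fit(x * m)                             # raw product term
b_cent = fit((x - x.mean()) * (m - m.mean()))  # mean-centered product

print("raw product:     ", np.round(b_raw, 3))
print("centered product:", np.round(b_cent, 3))
# The interaction coefficient (index 3) is identical, but the intercept and
# the x/m coefficients differ between the two parameterizations.
```

In plain OLS this is just a reparameterization with identical fit, but in a Bayesian analysis the default priors sit on different effective quantities before versus after centering, which could plausibly produce small differences in reported quantities such as the BIC between versions.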
Cheers,
Malte
All the processes are "Direct". To the right of the box where "Direct" is set there is another box, which from memory has options such as moderator, etc.; this was left blank.
Could the mean-centering you speak of be the reason given this fact pattern? Or is it irrelevant since that box was left blank?
I don't think so, because all paths are direct and thus the Process Variable dropdown does not apply. Do you have missing data in your variables?
You could also check whether the ticked checkboxes under Residual Covariances are the same (under the path specification; you can also see that in the output).
Otherwise, it might be related to the under-the-hood estimation in Stan, but I need to dig deeper into this.