# SEM-mediation analysis: different results in JASP and R

Currently, I am trying to perform a mediation analysis using structural equation modeling in JASP. I wanted to check these results in R by copying the model from the Lavaan syntax option and testing this using the 'sem' function from Lavaan. However, I do not get the same results as in JASP when I conduct this mediation analysis in R this way. Because my p values differ in such a way that some are insignificant in R while they are significant in JASP, I am desperately trying to find out where this discrepancy comes from. Specifically, I used the bootstrap method with 1000 replications and the bias-corrected percentile type. Everything else was kept at the default.

What I entered in R was the following:

```r
set.seed(1)

sem <- lavaan::sem(fullmodel, data = dataset, se = "bootstrap", bootstrap = 1000)

parameterEstimates(sem,
                   se = TRUE, zstat = TRUE, pvalue = TRUE, ci = TRUE,
                   standardized = TRUE,
                   fmi = FALSE, level = 0.95, boot.ci.type = "bca.simple")
```

As mentioned before, "fullmodel" was simply copy-pasted from what JASP printed as the Lavaan syntax.

The direct, indirect, and total effects all give me different results.

Is there anything I am missing in these functions?

Thanks in advance for the help!

## Comments

Dear Charlotte,

Attached are files that show you how to reproduce results from JASP in R. I can help you get to the bottom of your specific example if you are willing to share your data and your JASP file.

Best,

Simon

Dear Simon,

Thanks for your help so far! I have opened the example, and with that it works for me as well: I get the same results in JASP and in R.

However, when performing the mediation on a simple sample dataset myself, I again get different results. Is there perhaps a difference between R and JASP in how missing values are handled? That might explain why my original mediation analysis also came out differently in JASP and R, as it includes NAs. Maybe you can try it yourself (I have attached the sample dataset, my R file, and the JASP output here).

Thanks in advance. Hope we can find the solution to this problem soon!

Best,

Charlotte

No worries! Yes, you are right - the difference in your example is caused by the missing values.

By default, JASP uses full information maximum likelihood (FIML) to deal with missing values. If you want to make the results in R match the JASP file, you can do so by setting the argument `missing = "fiml"` in the `sem` function.

Conversely, if you want to make the JASP results match your R code, you can change the method by opening the "Advanced" tab and switching "Missing value handling" to "Exclude cases listwise".
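To illustrate the two settings side by side, here is a minimal, self-contained sketch. The simulated variables `X`, `M`, `Y` and the mediation model are hypothetical stand-ins, not the data from this thread:

```r
library(lavaan)

# Simulate a small mediation dataset and introduce some missing values
set.seed(1)
n <- 200
X <- rnorm(n)
M <- 0.5 * X + rnorm(n)
Y <- 0.4 * M + 0.2 * X + rnorm(n)
dataset <- data.frame(X, M, Y)
dataset$M[sample(n, 20)] <- NA

model <- '
  Y ~ c*X + b*M
  M ~ a*X
  indirect := a*b
  total    := c + a*b
'

# FIML, matching JASP's default missing-value handling
fit_fiml <- sem(model, data = dataset, missing = "fiml")

# Listwise deletion (lavaan's default), matching JASP's
# "Exclude cases listwise" setting
fit_listwise <- sem(model, data = dataset)
```

With NAs present, the two fits use different information (all 200 rows under FIML versus only the complete cases under listwise deletion), which is why the estimates and p values diverge.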

I hope this clarifies the issue!

Hi!

I have tried both approaches but still do not get the same results in JASP and R with the sample dataset I sent you yesterday. Attached you can find my updated R file, into which I also pasted the JASP results. In the first section I added `missing = "fiml"` to the `sem` function. In the second section I deleted this argument and changed the missing value handling setting in JASP instead. Neither output matches anywhere.

Could you maybe see if this is the case for you too and see what else should be changed to come to the same results? Thanks again!

Best,

Charlotte

Update: I found the solution to come to the same p values in JASP and R.

Specifically, I wrote `se = "standard"` instead of `se = "bootstrap"` in the `sem` function. I thought of this when looking into your script on GitHub (https://github.com/jasp-stats/jaspSem/blob/3a84554160cad41b45313a4fa620c7def41cb7ce/R/mediationanalysis.R). Line 112 reads:

```r
se = ifelse(options$se == "bootstrap", "standard", options$se)
```

Could you maybe explain why "standard" is filled in when bootstrapping is requested? Could this be a bug? When I run it, it doesn't seem that any bootstrapping is done in R. Since the p values depend on this, I think it is essential to understand when one should fill in "standard" versus "bootstrap".

Another problem I have encountered is that I cannot find the standardized parameters in my R output. When I set `standardized = TRUE` in the `parameterEstimates` function, my estimates do not actually change; they are the same as with `standardized = FALSE`. Could you explain which argument requests the standardized solution, or where I can find it in the output?

Thanks for your help!

Best,

Charlotte

The bootstrapping is not done in the `sem` function because it is done a couple of lines later, by calling the `lavBootstrap` function (https://github.com/jasp-stats/jaspSem/blob/2e91f22bff99e849258b0bee51de3c764abf2e34/R/common.R#L26), so that JASP can update the progress bar with every bootstrap iteration.

But I do see that performing the bootstrap in JASP does not change the standard errors compared to the standard method, so yes, it is possible there is a bug. I will look into it in closer detail. Thank you!

Regarding standardized estimates: the method used in JASP is to set `std.ov = TRUE`.
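As a sketch of the difference (using a simulated toy mediation model, not the data from this thread): in plain lavaan, `standardized = TRUE` in `parameterEstimates()` does not change the `est` column but appends `std.lv`/`std.all` columns, whereas `std.ov = TRUE` refits the model on standardized observed variables so that `est` itself is on the standardized scale:

```r
library(lavaan)

set.seed(1)
n <- 200
X <- rnorm(n)
M <- 0.5 * X + rnorm(n)
Y <- 0.4 * M + 0.2 * X + rnorm(n)
dataset <- data.frame(X, M, Y)

model <- '
  Y ~ c*X + b*M
  M ~ a*X
'

fit <- sem(model, data = dataset)

# 'standardized = TRUE' keeps 'est' unchanged and adds std.lv / std.all columns
pe <- parameterEstimates(fit, standardized = TRUE)

# JASP's approach: standardize the observed variables before fitting,
# so the 'est' column itself is on the standardized scale
fit_std <- sem(model, data = dataset, std.ov = TRUE)

# Plain-lavaan alternative: request the standardized solution directly
std <- standardizedSolution(fit)  # standardized estimates in column 'est.std'
```

So the standardized estimates were in the output all along, just in the `std.all` column (or in `standardizedSolution()`'s `est.std` column); the `est` column stays unstandardized unless the data themselves are standardized, as with `std.ov = TRUE`.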

Thanks for all the help!

I have run the bootstrapping both in the `sem` function (using `se = "bootstrap"`) and via the `bootstrapLavaan` function (passing in the `sem` object fitted with `se = "standard"`). With the latter bootstrapping function I get the same results as in JASP, but the results differ with the former. I am not sure exactly why they differ, but I am glad to have found the method that JASP uses, so now I know what is happening in the background.
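For readers trying to reproduce this, the two-step approach that matches JASP can be sketched as follows (again with simulated toy data; `bootstrapLavaan` is lavaan's exported bootstrap function, which JASP's internal `lavBootstrap` wraps):

```r
library(lavaan)

set.seed(1)
n <- 200
X <- rnorm(n)
M <- 0.5 * X + rnorm(n)
Y <- 0.4 * M + 0.2 * X + rnorm(n)
dataset <- data.frame(X, M, Y)

model <- '
  Y ~ c*X + b*M
  M ~ a*X
  indirect := a*b
'

# Step 1: fit with ordinary standard errors, as JASP's mediationanalysis.R does
fit <- sem(model, data = dataset, se = "standard")

# Step 2: bootstrap the fitted model separately
# (one row of free-parameter estimates per successful bootstrap draw)
boot <- bootstrapLavaan(fit, R = 1000)

# Bootstrap standard errors of the free parameters
boot_se <- apply(boot, 2, sd)
```

In contrast, `sem(..., se = "bootstrap")` does the resampling inside the fitting call itself; with the same seed and settings the two routes can still differ in the details of how draws are generated and summarized, which would explain the mismatched p values.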

Hi everyone. I am using JASP version 0.18.3 and performing a mediation analysis via the SEM menu.

I think I found a bug concerning the computation of standardized estimates.

I am preparing slides for my students on the simple mediation model (manifest variables), so they first have to learn the Baron and Kenny (1986) four steps and run two linear regressions (the step-by-step OLS approach), and then the one-step SEM approach.

I did the same analysis via the mediation analysis (in the SEM menu): the unstandardized estimates (b) are the same, but the standardized estimates (beta) are not, which is really weird.

So I transformed the original values of my variables into z-scores and ran the analysis again via SEM; the unstandardized estimates are then in fact the standardized estimates, and I obtained the same values as in the OLS regressions.

My question is: what actually happens computationally when someone selects "standardized estimates"?

Of course the z-score transformation is a simple workaround, but the "standardized estimates" option should work properly.

Many thanks in advance.

@Pedro_Rosa that's interesting. It does look like the mediation analysis simply standardizes the variables in order to obtain the standardized estimates, whereas the SEM module presumably computes the actual standardized solution. I'm not sure why the mediation analysis doesn't report standardized estimates directly, instead of first standardizing the variables and reporting the 'unstandardized' estimates.

(Also not sure why they don't match, but I assume that's normal.)