Reproducing GLMM in R

I have a JASP/R question for you. I'm trying to reproduce results I got from JASP (0.14.1) in R, but I can't seem to get the same results. I'm analyzing categorical accuracy data (Acc: 0 vs 1) where each participant is tested twice (trial factor: let's call it before and after).

In JASP I'm using the generalized linear mixed models module with all the default settings (Binomial family, logit link, type III, likelihood ratio test). Because there are so few trials, only the subject intercepts are included as random effects.

In R, I'm using the following:

```
glmer(Acc ~ Trial + (1 | SUB), family = binomial(link = "logit"), data = ACData)
```

For some reason, JASP is giving me a chi-square of 11.76, p < .001, but R is giving me a chi-square of 9.77, p = .0017. I read that JASP is assuming sum contrasts (and not dummy coding), but changing the contrasts in R to (-1,1) doesn't seem to help.
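For what it's worth, here is a minimal base-R sketch of switching a two-level factor from the default treatment (dummy) coding to sum contrasts; the `Trial` factor and its levels below are stand-ins for your own data, not your actual dataset.

```r
# Sketch: sum (deviation) coding for a two-level factor in base R.
# "Trial" and its levels are placeholders mirroring the design described above.
Trial <- factor(c("before", "after", "before", "after"),
                levels = c("before", "after"))

# R's default is treatment (dummy) coding: 0/1
print(contrasts(Trial))

# Sum coding, which JASP reportedly assumes: +1/-1
contrasts(Trial) <- contr.sum(2)
print(contrasts(Trial))

# Alternatively, set it globally before fitting so glmer() picks it up:
# options(contrasts = c("contr.sum", "contr.poly"))
```

Note that for a single two-level factor the type III test is usually unaffected by this choice, which would be consistent with changing the contrasts not helping here.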

Any thoughts on what I'm doing wrong? Mind you that I am clueless about R. I can post the data of course if needed.

• edited October 2021

Edit: Sorry, I spoke too fast. I tried a factor predictor with more than two levels and I'm no longer reproducing the fixed effects, since (as you mention) I would need to change the contrast coding.

How did you get the chi-square in R? When I do the following (with my own dataset):

```
model <- glmer(Acc ~ Trial + (1 | SUB), family = binomial(link = "logit"), data = ACData)
anova(model)
```

I seem to be able to reproduce JASP's results, with the exception that R calls the test an F-test and JASP calls it a chi-square. Note that by default JASP seems to include a random slope for Trial as well, so you need to uncheck the box for Trial under Model -> Random effects.

• For the chi-square I used: car::Anova(model, type = 3, test.statistic = "Chisq")

I think JASP is calculating the overall model test differently: the individual fixed effects in JASP produce the same p-values as the fixed effects in R, but the overall model test doesn't match. The discrepancy grows as I add variables; with a second factor it becomes even more pronounced, so I suspect it comes down to JASP's assumptions or how the test statistics are calculated.

Here's an example of the output.

• Okay, my current guess is that the difference comes from JASP evaluating the model using likelihood ratio tests and R using Wald tests.
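To make that distinction concrete, here is a self-contained base-R sketch using a plain glm() on simulated data (no random effects, so it's not your actual model; all names and probabilities are made up). summary() reports Wald z-tests, while comparing nested model deviances gives the likelihood ratio test:

```r
# Wald vs. likelihood ratio test on the same simulated binary-accuracy data.
set.seed(1)
n <- 200
trial <- factor(rep(c("before", "after"), each = n / 2))
acc <- rbinom(n, 1, ifelse(trial == "before", 0.6, 0.8))

fit <- glm(acc ~ trial, family = binomial(link = "logit"))

# Wald test: coefficient divided by its standard error, as in summary()
# ("after" is the reference level because factor levels sort alphabetically)
wald_p <- coef(summary(fit))["trialbefore", "Pr(>|z|)"]

# Likelihood ratio test: compare deviances of nested models
fit0 <- glm(acc ~ 1, family = binomial(link = "logit"))
lr_p <- anova(fit0, fit, test = "Chisq")[2, "Pr(>Chi)"]

# Asymptotically equivalent, but they differ in finite samples --
# the same kind of gap described between JASP and car::Anova above.
c(wald = wald_p, lrt = lr_p)
```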

For my purposes this is a big problem, because I only turned to R to run a power simulation. BUT I can't base the power simulation on glmer, because it will give me Wald-test results, and I can't run it on glmmTMB (which can actually run likelihood ratio tests).

Well... I think it's a dead end.

• Update:

I think I solved the problem.

As far as I can see, the issue was likelihood ratio test vs. Wald test. glmer itself doesn't report likelihood ratio tests, but other packages do, like simr: doTest(model, fixed("IV", "lr")).

Then I did the power analysis using powerCurve:

```
powerCurve(model, fixed("IV", "lr"), along = "subjects", breaks = c(70, 80, 90))
```
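For readers without simr, the same idea can be hand-rolled in base R. The sketch below drops the random intercept (so it is a plain glm, not the mixed model above) and uses made-up effect sizes and sample sizes; it just illustrates the simulate-fit-LRT-count loop that powerCurve() automates:

```r
# Hand-rolled LRT-based power simulation in base R, analogous to what
# simr's powerCurve() does. Effect sizes, sample sizes, and nsim are
# placeholders; the random intercept is omitted for self-containedness.
power_at_n <- function(n_subj, p_before = 0.6, p_after = 0.8, nsim = 200) {
  hits <- 0
  for (i in seq_len(nsim)) {
    # simulate one "before" and one "after" observation per subject
    trial <- factor(rep(c("before", "after"), each = n_subj))
    acc <- rbinom(2 * n_subj, 1, ifelse(trial == "before", p_before, p_after))
    full <- glm(acc ~ trial, family = binomial)
    null <- glm(acc ~ 1, family = binomial)
    p <- anova(null, full, test = "Chisq")[2, "Pr(>Chi)"]  # likelihood ratio test
    if (!is.na(p) && p < 0.05) hits <- hits + 1
  }
  hits / nsim  # estimated power: proportion of significant LRTs
}

set.seed(42)
sapply(c(70, 80, 90), power_at_n)
```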