Misleading AIC in Linear Mixed Models in JASP?
Hello,
I am not sure this is the right section (I could not find one that fits better).
I recently applied Linear Mixed Models in JASP. As an output, JASP also reports AIC.
However, that AIC is computed from the REML deviance, not from the plain (ML) deviance (both deviances are given in the same output).
To my knowledge (verified with some colleagues), an AIC computed this way only allows comparisons between models that share exactly the same fixed-effects structure, i.e. models that differ only in their random-effects structure.
Clearly, this is not always the case; in fact, it is a relatively rare one. By far the most frequent case is that one wishes to compare models that differ in both structures. This is quite natural: if one assumes that effect A is present on average (i.e. in the fixed design), the default choice is to let A also vary across clusters (e.g., subjects), so it should be included in the random design as well.
Following this line, the default AIC should be computed by adding 2p (twice the number of estimated parameters) to the *plain deviance*. Alternatively, JASP might report both an AIC derived from the plain deviance and an AIC derived from the REML deviance, with a short note explaining that the latter should only be used to compare models that differ just in their random components.
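For concreteness, here is a minimal sketch (in Python, with made-up deviance values) of the arithmetic involved: since the deviance is -2 times the log-likelihood, the AIC is the deviance plus 2p.

```python
def aic(deviance, n_params):
    """AIC from a plain (ML) deviance.

    The deviance is -2 * log-likelihood, so
    AIC = deviance + 2 * (number of estimated parameters).
    """
    return deviance + 2 * n_params

# Hypothetical ML deviances, as JASP might report them:
aic_full = aic(1510.0, 6)     # effect A in both the fixed and random parts
aic_reduced = aic(1524.0, 4)  # effect A dropped from both parts
print(aic_full, aic_reduced)  # the model with the lower AIC is preferred
```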
Comments
Hi Alessio,
Thanks for pointing this out. We should be more informative about this. The best avenue for such a feature request is our GitHub page, so that we can add it to our to-do list and assign it to the relevant people. As I understand it right now, you could still compute the appropriate AIC by hand from the plain deviance, but of course it would be neat to include an option for selecting which deviance the AIC is based on (and to be more transparent about when each is appropriate).
Kind regards,
Johnny
Hi Alessio,
Yes and no. Yes, you are totally right that the likelihoods of models fitted with REML cannot be compared to likelihoods of models with different fixed effects. This extends to AIC.
But no, the restriction also applies to the unpenalised likelihood itself: once the model has been fitted with REML, nothing can be done after the fact. The only way to compare likelihoods and AICs for mixed models with different fixed effects is to use an ML (i.e., non-REML) fit.
This is also why ML fits are used for likelihood-ratio tests or the parametric bootstrap in "tests of model terms".
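In practice this means requesting an ML fit explicitly when the software defaults to REML. A sketch using Python's `statsmodels` (the data, formulas, and effect sizes below are all made up for illustration; in lme4, on which JASP's mixed-models analyses are built as far as I know, the analogous switch is `REML = FALSE`):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated clustered data (all values hypothetical).
rng = np.random.default_rng(1)
n_groups, n_per = 30, 10
g = np.repeat(np.arange(n_groups), n_per)
x = rng.normal(size=n_groups * n_per)
u = rng.normal(scale=0.5, size=n_groups)[g]          # random intercepts
y = 1.0 + 0.8 * x + u + rng.normal(size=n_groups * n_per)
df = pd.DataFrame({"y": y, "x": x, "g": g})

# reml=False requests a maximum-likelihood fit, so the resulting
# log-likelihoods (and AICs) are comparable across models with
# different fixed-effects structures.
m1 = smf.mixedlm("y ~ x", df, groups=df["g"]).fit(reml=False)
m0 = smf.mixedlm("y ~ 1", df, groups=df["g"]).fit(reml=False)

def aic(result):
    # Rough parameter count: len(result.params) covers the fixed effects
    # plus the random-effect (co)variance parameters; the residual
    # variance adds one more. (How to count p is itself a judgment call.)
    k = len(result.params) + 1
    return -2 * result.llf + 2 * k

print(aic(m1), aic(m0))  # the lower AIC is preferred
```

Had the two models been fitted with `reml=True` (the default), this comparison would be invalid, because the REML likelihoods are defined relative to different fixed-effects design matrices.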
Best,
Henrik