
"Enforce the Principle of Marginality" for random slopes - BRM ANOVA

Hi Team,

While working with the BRM ANOVA function in JASP, I noticed the option "Enforce the Principle of Marginality", with separate checkboxes for fixed effects and random slopes.

Here is what I have understood so far:

  • The principle of marginality states that if a model includes an interaction term (e.g., A × B), it must also include the corresponding main effects (A and B) to maintain a coherent hierarchical structure.
  • Applying this principle to fixed effects seems both logical and necessary for proper model interpretation (selected by default).
  • I suppose this option is unchecked by default for random slopes because enforcing it would make the models more complex (increasing the risk of convergence issues, longer computation times, and possible overparameterization).
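To make the first point concrete, here is a minimal Python sketch (purely illustrative, not part of JASP; the helper name and the "A:B" term notation are assumptions) that checks whether a candidate model respects the principle of marginality, i.e. that every interaction term is accompanied by all of its lower-order sub-terms:

```python
from itertools import combinations

def respects_marginality(model_terms):
    """Return True if every interaction term in model_terms is
    accompanied by all of its lower-order sub-terms."""
    terms = set(model_terms)
    for term in terms:
        factors = term.split(":")
        if len(factors) > 1:
            # every lower-order combination of the factors must also appear
            for k in range(1, len(factors)):
                for sub in combinations(factors, k):
                    if ":".join(sub) not in terms:
                        return False
    return True

print(respects_marginality(["A", "B", "A:B"]))  # True: main effects present
print(respects_marginality(["A", "A:B"]))       # False: B is missing
```

This assumes terms are written with factors in a fixed canonical order; a production implementation would normalize the order first.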

Is this understanding correct?


Additionally, I am working with EEG data, where inter-subject variability is often quite substantial. In this context, would it make particular sense to enforce marginality for random slopes to better account for individual differences?

(Note: I compare the slopes of linear regressions computed on a signal window; with around 150 trials per condition per subject, the dataset is fairly reasonable for estimating random slopes.)

Do you have any recommendations or best practices regarding the use of this option?

Thanks!

Johan

Johan A. ACHARD

PhD Student in Cognitive Sciences

Université Franche-Comté

Comments

  • Dear Johan,

    Correct. The principle of marginality has recently been debated in a series of papers, including these:

    van Doorn, J., Aust, F., Haaf, J. M., Stefan, A., & Wagenmakers, E.-J. (2023). Bayes factors for mixed models. Computational Brain & Behavior, 6, 1-13.

    van Doorn, J., Aust, F., Haaf, J. M., Stefan, A., & Wagenmakers, E.-J. (2023). Bayes factors for mixed models: Perspective on responses. Computational Brain & Behavior, 6, 127-139.

    van Doorn, J., Haaf, J. M., Stefan, A. M., Wagenmakers, E.-J., Cox, G. E., Davis-Stober, C., Heathcote, A., Heck, D. W., Kalish, M., Kellen, D., Matzke, D., Morey, R. D., Nicenboim, B., van Ravenzwaaij, D., Rouder, J., Schad, D., Shiffrin, R., Singmann, H., Vasishth, S., Veríssimo, J., Bockting, F., Chandramouli, S., Dunn, J. C., Gronau, Q. F., Linde, M., McMullin, S. D., Navarro, D., Schnuerch, M., Yadav, H., & Aust, F. (2023). Bayes factors for mixed models: A discussion. Computational Brain & Behavior, 6, 140-158.

    The upshot is that there is considerable disagreement on the matter. I think that including all models and then model averaging could be the best way forward (this was also suggested by one group of discussants).
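    As a rough illustration of what model averaging buys you here, the following Python sketch (with made-up Bayes factors; the numbers and model labels are hypothetical) converts Bayes factors against the null into posterior model probabilities under equal prior odds, then sums the mass of every model containing the interaction to get a model-averaged inclusion probability:

```python
# Hypothetical Bayes factors of each candidate model against the null
bf_vs_null = {
    "null":     1.0,
    "A":        8.0,
    "B":        3.0,
    "A+B":     20.0,
    "A+B+A:B": 12.0,
}

# Under equal prior model probabilities, posterior model probabilities
# are proportional to the Bayes factors against the null
total = sum(bf_vs_null.values())
post = {m: bf / total for m, bf in bf_vs_null.items()}

# Model-averaged inclusion probability of the interaction term:
# the posterior mass summed over all models that contain A:B
p_incl_interaction = sum(p for m, p in post.items() if "A:B" in m)
print(round(p_incl_interaction, 3))  # → 0.273
```

    This way no single model structure has to be "the" model; the evidence for the interaction is averaged over all candidates, which sidesteps part of the marginality debate.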

    EJ

  • Dear EJ,

    Thank you for your clarification. I will examine the papers you cited in detail.

    I agree with your suggestion to average the models!

    Johan

