
When to use a Bayesian mixed model

alonzivony Posts: 8
edited November 23 in JASP & BayesFactor

I have a repeated-measures experiment with three conditions, multiple repetitions per condition, and 15 subjects. The standard in my field is to average the results per condition (disregarding the number of trials) and compare the three conditions (45 data points overall). Using this method with the repeated-measures Bayesian analysis in JASP, I get a somewhat unsatisfactory BF (BF01 = 2.79 in support of H0). I thought this might be because the results don't converge well with so few subjects. When I use a Bayesian mixed model (using BayesFactor in R) on the same data, with the subject intercept as the random factor, I get much stronger support for H0 (BF01 = 20+). Intuitively, this seems like a reasonable approach, because the BF should converge better if I take each trial as a data point instead of each average (2250 data points). However, while I know there are various justifications for using mixed models, I couldn't find any justification that relates to this notion. Is it justified? Is there an article that supports my approach?
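
For concreteness, here is a minimal sketch of the two analyses in R with the BayesFactor package (the column names are those of my data; IV and Subject are coded as factors):

    library(BayesFactor)

    # analysis 1: average per subject and condition (45 data points),
    # i.e. the input to the repeated-measures Bayesian ANOVA in JASP
    aggData <- aggregate(RT ~ IV + Subject, data = RTData, FUN = mean)
    bfMeans <- anovaBF(RT ~ IV + Subject, data = aggData, whichRandom = "Subject")

    # analysis 2: mixed model on the full trial-level data (2250 data points)
    bfTrials <- anovaBF(RT ~ IV + Subject, data = RTData, whichRandom = "Subject")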
Thanks,
Alon

Comments

  • EJ Posts: 409

    Hi Alon,

    1. Convergence is most likely not a problem for these models.
    2. The Bayesian ANOVA in JASP is simply the Bayesian mixed model, so you should be able to get the same result out of JASP as you get out of BayesFactor (see the sketch below this list).
    3. I find this difference unsettling; if you have enough trials (and you do) and sufficient participants, then taking the average or using the mixed model should result in the same outcome, imo. I will look into this. (Last time I did, this turned out to be the case, also for the classical/frequentist methodology.)
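
    For instance, here is a minimal check of point 2, reusing the RTData sketch from your post: average first, run the same model, and invert the Bayes factor (anovaBF reports BF10 for the IV, whereas JASP gave you BF01):

        library(BayesFactor)
        # one mean per subject per condition, then the same model;
        # this should match JASP's repeated-measures Bayesian ANOVA
        aggData <- aggregate(RT ~ IV + Subject, data = RTData, FUN = mean)
        bf <- anovaBF(RT ~ IV + Subject, data = aggData, whichRandom = "Subject")
        1 / extractBF(bf)$bf  # invert BF10 to get BF01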

    Cheers,
    E.J.

  • Thanks E.J.
    When I conduct a frequentist analysis, a repeated-measures ANOVA on the means and a mixed model with the subject intercept as a random factor usually give very similar results. But I have found this large discrepancy between the Bayesian analysis on the means and the Bayesian mixed model on the full data set for several data sets. To me it seemed obvious that the difference is the number of data points... but maybe I'm missing something? Or conducting the wrong analysis? For example, this is what I'm using in R:

        library(BayesFactor)
        # RTData is the full trial-level data set: RT is the dependent variable,
        # IV the independent variable, and Subject the random subject factor
        anovaBF(RT ~ IV + Subject, data = RTData, whichRandom = "Subject")
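
    To be explicit about the frequentist pair I mean, here is a sketch (assuming lme4 for the mixed model; any mixed-model routine would do):

        library(lme4)

        # repeated-measures ANOVA on the per-subject condition means
        aggData <- aggregate(RT ~ IV + Subject, data = RTData, FUN = mean)
        summary(aov(RT ~ IV + Error(Subject), data = aggData))

        # mixed model on the full trial-level data, random subject intercept
        summary(lmer(RT ~ IV + (1 | Subject), data = RTData))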

    Best,
    Alon

  • EJ Posts: 409

    I think that Richard Morey will have more insightful comments. I'll alert him to your post.
    E.J.

  • Thanks, that would be greatly appreciated. I can also post a sample of the data if it would help.

  • Hi EJ. Sorry to bother you. Any insight from Richard Morey on this?

  • EJ Posts: 409

    Not yet. Perhaps he will respond if you send him a personal email?
    Cheers,
    E.J.

  • Oh, I don't want to be a bother. I understand now (at least partially) that it's not the number of subjects or observations that made the difference.

    I think I now understand the difference between my results, and it simply comes down to error variance. Because the mixed model takes within-subject variability into account, cases with large within-subject variability will push the BF in favor of the null hypothesis. Specifically, in my case, it seems that the subjects with the larger effects also have larger variance in their responses. Does that make sense?

    I think this is the case because the reverse happened to me when I looked at a significant effect with a moderate BF10 (~3) when using averaged results and a large BF10 (~100) when using a mixed model. In that case, it turned out that the subjects with the largest effects actually had the least variability in their observations, which increases the BF in favor of the alternative hypothesis.
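
    A toy simulation along these lines (all numbers hypothetical) makes the pattern easy to check: give the subjects with larger effects noisier trials, then run both analyses on the same simulated data:

        library(BayesFactor)
        set.seed(1)

        # hypothetical design: 15 subjects, 3 conditions, 50 trials per condition
        nSubj <- 15; nTrials <- 50
        d <- expand.grid(Subject = factor(1:nSubj),
                         IV      = factor(c("A", "B", "C")),
                         trial   = 1:nTrials)
        effect <- seq(0, 30, length.out = nSubj)  # per-subject effect of condition B
        noise  <- 20 + 2 * effect                 # trial noise grows with the effect
        d$RT <- 500 + effect[d$Subject] * (d$IV == "B") +
                rnorm(nrow(d), sd = noise[d$Subject])

        # mixed model on all trials vs. the same model on the condition means
        bfTrials <- anovaBF(RT ~ IV + Subject, data = d, whichRandom = "Subject")
        agg <- aggregate(RT ~ IV + Subject, data = d, FUN = mean)
        bfMeans <- anovaBF(RT ~ IV + Subject, data = agg, whichRandom = "Subject")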
