
Post hocs / planned contrasts in Bayesian

edited April 2016 in JASP & BayesFactor

Hi all,

We're currently stuck with a four-way mixed ANOVA to analyse some EEG data, but found out that JASP does not seem to be able to do post hoc tests once factors have more than two levels. Unfortunately, this is often the case in EEG, since at the most basic level you at least want to include multiple frequency bands. The analyses generally look like this:

Pathology (diagnosed, undiagnosed) * frequency band (delta, theta, alpha, beta, gamma) * location (frontal, central, posterior) * (median split on some behavioural measure that is hypothesized to influence the relationship).

We thought it might be possible to just do many, many separate analyses, because there is no inflation of alpha errors in the Bayesian framework and so no need to correct for multiple comparisons (as most post hoc tests do). However, this would be an awful lot of work and we're not sure whether it truly solves the problem in a proper way. Would we still be able to interpret the results as if we had performed a four-way mixed ANOVA with post hoc tests?

Following on from all this, what would be the best (Bayesian, please) option for doing a multi-way mixed/RM ANOVA when at least one factor has more than two levels?

Thanks!

Kristel de Groot
Sebastiaan Remmers

Comments

  • EJ
    edited 1:39AM

    Hi Kristel, Sebastiaan,

    As I was about to answer your post, I realized a similar issue came up earlier in the forum. Here's my previous answer, slightly edited:

    "As far as I am concerned, t-test as usually what researchers want to know if they think carefully about their hypotheses beforehand. In the Bayesian framework there is no need for a correction for the number of tests you entertain; however, there is a correction for conducting tests that you only half-anticipated to carry out, and this is through the often-ignored prior model odds. The extent to which you are able to assess prior model odds when the data have led you to consider a particular t-test is interesting and not really resolved. I personally would be happy to report the t-test but specifically note that it is post-hoc and the data led you to consider this test. The seriousness of the problem also depends on the design. Suppose you have a one-way ANOVA with 100 levels; you see that level 5 differs from level 11, and you t-test this difference. Clearly this is misleading. It would be interesting to develop a Bayesian test to provide a default prior model odds "correction" for post-hoc tests, but I recall an interesting conversation with Richard who argued it is impossible in principle. Maybe I misremember. Richard?"

    I would like to add that there is something conceptually weird about the post-hoc test. A real test can never be post-hoc. If you approach your data with the attitude "let us see what we can find", then you are really in the hypothesis-generating stage, not in the hypothesis-testing stage. Only in the latter stage do your inferential statistics make sense. I think it is a mistake to view statistics as an "effect-discovery device"; it is really more a "testing pre-planned hypotheses device". So my advice is to think carefully about what you want to test, and report the associated t-tests and ANOVAs under the heading "pre-planned analyses" (it does not hurt to preregister these quickly on aspredicted.org, for instance). The more exploratory analyses can be discussed under the heading "post-hoc analyses" with the disclaimer that the data led you to those hypotheses. But maybe Richard has additional suggestions!

    Cheers,
    E.J.

  • edited 1:39AM

    What is it that JASP is not letting you do? BayesFactor should allow you to do it, I think, if you don't mind using R.
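
    As a rough sketch of how that could look in R (the data frame and column names below are assumptions about your data, not something from this thread): anovaBF() handles the mixed design by declaring subject as a random factor, and any two models it returns can be compared by dividing their Bayes factors.

        library(BayesFactor)

        # Assumed long format: one row per subject x band x location, with a numeric
        # outcome (here called "power"), between-subject factors pathology and split,
        # within-subject factors band and location, and a subject identifier.
        d <- within(d, {
          subject   <- factor(subject)
          pathology <- factor(pathology)
          band      <- factor(band)
          location  <- factor(location)
          split     <- factor(split)
        })

        # Bayes factors for the models implied by the design, subject as random factor
        bf <- anovaBF(power ~ pathology * band * location * split + subject,
                      data = d, whichRandom = "subject")

        # Four factors give a large model space; whichModels = "top" only compares the
        # full model against models dropping one term at a time, which is much faster
        bf_top <- anovaBF(power ~ pathology * band * location * split + subject,
                          data = d, whichRandom = "subject", whichModels = "top")

        # Specific models are compared by dividing Bayes factors, e.g. bf[6] / bf[3]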

  • edited 1:39AM

    Thank you all for shedding some light on this!

    Richard, are you suggesting it is possible to do post hocs (or planned contrasts, which are in fact less conceptually 'weird', following EJ's reasoning) with BayesFactor in R? Because no, we don't mind using it ;)

  • edited April 2016

    Hi Kristel,

    I found these blog posts by Richard really instructive:

    If I understand them correctly, the approach is the one detailed in this paper: http://www.ejwagenmakers.com/2014/MoreyWagenmakers2014.pdf
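
    If that is the right paper, the recipe comes down to sampling from the posterior of the unrestricted model and dividing the posterior probability of the order restriction by its prior probability. A minimal sketch for a single comparison pulled out of the design (the vector names below are placeholders, not from this thread):

        library(BayesFactor)

        # Unrestricted test of two cells, e.g. frontal alpha power, diagnosed vs undiagnosed
        bf_full <- ttestBF(x = alpha_diagnosed, y = alpha_undiagnosed)

        # Posterior samples for the standardized effect size delta under that model
        post <- posterior(bf_full, iterations = 10000)

        # Posterior and prior probability of the directional restriction delta > 0
        post_p  <- mean(post[, "delta"] > 0)
        prior_p <- 0.5   # the default Cauchy prior on delta is symmetric around zero

        # Bayes factor for the restricted hypothesis against the unrestricted one,
        # and, chaining the two, against the null
        bf_restricted_vs_full <- post_p / prior_p
        bf_restricted_vs_null <- bf_restricted_vs_full * extractBF(bf_full)$bf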

  • MSB
    edited 1:39AM

    Hi All,

    Are there any plans to implement into JASP the methods Richard has described in the BayesFactor blog?
    Or should I add this to the list of reasons to finally learn R?

    M

  • EJ
    edited 1:39AM

    Yes there are plans :-)
    But it can't hurt to learn R!
    E.J.

  • MSB
    edited 1:39AM

    If I had known that being a psych graduate would involve so much coding (MATLAB, E-Prime, OpenSesame, R...) I might have reconsidered 8-)

    Thanks!

  • edited 1:39AM

    The blog posts by Richard are indeed really instructive, thanks a lot! I think I will make that my weekend activity for this week :)
