A little stuck writing up my Bayesian ANOVA followed by Bayesian t-tests
Hi All,
I've been using JASP to analyse some experimental results and I'm keen to get on and submit the paper. However, I've reached a mental/capability/knowledge block when it comes to actually explaining the results in a paper. I've looked here (the earlier response to Guillon's post was helpful), and I've also read through the writing-up section of Kruschke's book, but that seems to suggest adding more information than JASP makes available (or am I wrong?).
Here's a plot of what the data look like: http://tinyurl.com/j9r8vk3 (dropbox link). (Error bars are bootstrapped CIs.)
Here is the output I have from JASP 7.1.12 (dropbox): http://tinyurl.com/ho4evut. The data (reaction times) are already relative to controls (i.e. the control RTs have been subtracted). I'm mainly interested in whether my manipulations (width, contrast, hasShadows) change performance relative to controls.
Following the convention for frequentist analyses, as a final step I'm using one-sample Bayesian t-tests to explore which conditions are likely to differ from zero. From the plot it's easy to see that the rightmost bars are very different from zero, whereas the evidence that the others (left subplot) differ is weaker.
As the Bayesian t-tests are telling me which bars are different, do I even need the ANOVA?
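(In case it helps anyone checking these numbers outside JASP, here is a minimal Python sketch of the default JZS Bayes factor that underlies a one-sample Bayesian t-test (Rouder et al., 2009), using JASP's default Cauchy prior width r = 0.707. The rt_diff values below are made-up placeholders, not my data.)

```python
import numpy as np
from scipy import integrate

def jzs_bf10_one_sample(x, r=0.707):
    """One-sample JZS Bayes factor BF10 (Rouder et al., 2009):
    H1: effect size delta ~ Cauchy(0, r) vs. H0: delta = 0."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    nu = n - 1
    t = x.mean() / (x.std(ddof=1) / np.sqrt(n))  # one-sample t statistic

    # Marginal likelihood under H0 (up to a constant shared with H1).
    m0 = (1 + t**2 / nu) ** (-(nu + 1) / 2)

    # Under H1, delta | g ~ N(0, g) with g ~ InverseGamma(1/2, r^2/2),
    # which marginally gives the Cauchy(0, r) prior on delta.
    def integrand(g):
        prior_g = r / np.sqrt(2 * np.pi) * g**-1.5 * np.exp(-r**2 / (2 * g))
        marg_lik = (1 + n * g) ** -0.5 * \
                   (1 + t**2 / ((1 + n * g) * nu)) ** (-(nu + 1) / 2)
        return marg_lik * prior_g

    m1, _ = integrate.quad(integrand, 0, np.inf)
    return m1 / m0  # BF10 > 1 favours a nonzero effect

# Placeholder values standing in for control-subtracted RTs in one condition:
rt_diff = [-35.2, -18.9, -42.1, -7.4, -29.8, -51.0, -22.3, -16.7]
print("BF10 =", jzs_bf10_one_sample(rt_diff))
```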
Best wishes,
George
Comments
Hi George,
Thanks for your questions. First, the Bayesian ANOVA in JASP does need to display more, such as effect sizes -- this is work in progress. Also, in JASP the classical test provides a plot of the data, but its Bayesian counterpart does not; we will fix that soon (I just added it to the feature request page).
As far as I am concerned, t-tests are usually what researchers want if they think carefully about their hypotheses beforehand. In the Bayesian framework there is no need for a correction for the number of tests you entertain; however, there is a correction for conducting tests that you only half-anticipated carrying out, and this enters through the often-ignored prior model odds (see the sketch at the end of this reply). The extent to which you are able to assess prior model odds when the data have led you to consider a particular t-test is interesting and not really resolved. I personally would be happy to report the t-test but explicitly note that it is post hoc and that the data led you to consider it.

The seriousness of the problem also depends on the design. Suppose you have a one-way ANOVA with 100 levels; you see that level 5 differs from level 11, and you t-test this difference. Clearly this is misleading. It would be interesting to develop a Bayesian test that provides a default prior model odds "correction" for post-hoc tests, but I recall an interesting conversation with Richard who argued it is impossible in principle. Maybe I misremember. Richard?
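To make the arithmetic explicit: the Bayes factor only updates whatever prior model odds you start with,

```latex
\underbrace{\frac{p(\mathcal{H}_1 \mid \text{data})}{p(\mathcal{H}_0 \mid \text{data})}}_{\text{posterior odds}}
= \underbrace{\frac{p(\mathcal{H}_1)}{p(\mathcal{H}_0)}}_{\text{prior odds}}
\times \underbrace{\frac{p(\text{data} \mid \mathcal{H}_1)}{p(\text{data} \mid \mathcal{H}_0)}}_{\text{BF}_{10}}
```

so any post-hoc correction would enter through the prior odds. In the 100-level example there are 100*99/2 = 4950 possible pairwise comparisons, so one option (purely illustrative, not an established default) would be to give a comparison singled out by the data prior odds on the order of 1/4950, which shrinks even a sizeable Bayes factor to modest posterior odds.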
Cheers,
E.J.
Thanks, EJ, that was very helpful.
The original design of the study did anticipate comparing RTs (target detection times) against the controls, hence my subtraction of the control-stimulus RTs from the manipulated-condition RTs.
Thanks again,
George