
# Difference between a classical one-sample t-test and a Bayesian t-test

edited December 2015

Dear all,

As a new learner of JASP and Bayesian analyses, I re-analyzed some of my old data, in which the overall performance of my participants was slightly but significantly above chance (>50%; one-sided one-sample t-test; t(23) = 2.14, p = .02).
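For concreteness, a result like this can be recomputed from summary statistics alone; the numbers below are hypothetical values chosen to reproduce a t of about 2.14, not the original data:

```python
import math

# hypothetical summary statistics (not the original data):
# mean accuracy, standard deviation, and sample size
m, sd, n = 0.55, 0.1145, 24
chance = 0.5

# one-sample t statistic against chance performance
t = (m - chance) / (sd / math.sqrt(n))
print(f"t({n - 1}) = {t:.2f}")
```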

For the one-sided Bayesian t-test, I obtain a Bayes factor of 2.8 when using the default Cauchy prior width (and the Bayes factor increases, exceeding 3, when I decrease the prior width).
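For readers who want to reproduce numbers like these outside JASP, here is a minimal pure-Python sketch of the two-sided JZS Bayes factor (Rouder et al., 2009) on which JASP's default Bayesian t-test is based; the function name and the simple quadrature are mine, not JASP's:

```python
import math

def jzs_bf10(t, n, r=math.sqrt(2) / 2):
    """Two-sided JZS Bayes factor for a one-sample t-test
    (Rouder et al., 2009), with a Cauchy(0, r) prior on effect size."""
    v = n - 1  # degrees of freedom

    def integrand(g):
        # marginal likelihood under H1, mixing over the scale parameter g
        return ((1 + n * g) ** -0.5
                * (1 + t * t / ((1 + n * g) * v)) ** (-(v + 1) / 2)
                * r / math.sqrt(2 * math.pi)
                * g ** -1.5
                * math.exp(-r * r / (2 * g)))

    # integrate g from 0 to infinity via the substitution g = u / (1 - u)
    steps = 100_000
    numerator = 0.0
    for i in range(1, steps):
        u = i / steps
        g = u / (1 - u)
        numerator += integrand(g) / (1 - u) ** 2
    numerator /= steps

    denominator = (1 + t * t / v) ** (-(v + 1) / 2)  # likelihood under H0
    return numerator / denominator
```

For t = 2.14 and N = 24 this gives a two-sided BF10 of roughly 1.5; a one-sided BF is at most about twice the two-sided one when the observed effect is in the predicted direction, which lines up with the 2.8 reported above. Looping over values of r reproduces the robustness check.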

I am a bit unsure about the conclusions I should draw from my data:

1/ Is the Bayes factor low enough that I should re-interpret my data and not conclude in favor of H1?

2/ Regarding the manipulation of the prior: what are the consequences of increasing or decreasing its width?

3/ I am not sure about the information in the Sequential Analysis plot; could someone give me some hints on how to read it?

Thanks a lot,

JulianeH


Hi Juliane,

(1) The Bayes factor is close to 3 for the default prior, and the robustness check reveals that it never gets much higher, no matter how you set the prior width. As the pizza plot shows, with BF = 3 and equal prior odds the posterior probability of H1 is 3/4. This is not compelling: yes, it is some evidence for H1, but in my opinion it is not strong enough to justify a firm claim.
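The pizza-plot arithmetic is just the odds form of Bayes' rule; a two-line sketch (the function name is mine):

```python
def posterior_prob_h1(bf10, prior_odds=1.0):
    """Posterior probability of H1, given a Bayes factor and prior odds."""
    posterior_odds = bf10 * prior_odds
    return posterior_odds / (1.0 + posterior_odds)

print(posterior_prob_h1(3.0))  # BF = 3, equal prior odds -> 0.75
```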

(2) The effects of changing the prior are visible in the robustness plot you included. When you increase the width r, H1 starts to predict large effect sizes; when r = 0, H1 reduces to H0, so the BF is 1 by definition. Your data set is a really nice example of how the evidence is never compelling in favor of H1, no matter how you set the prior width.
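You can see the r = 0 limit directly from the prior itself: as the Cauchy width shrinks, virtually all of H1's prior mass piles up near delta = 0, so H1 makes the same predictions as H0. A small sketch (the function name is mine):

```python
import math

def cauchy_tail_mass(c, r):
    """Prior probability that |delta| > c under a Cauchy(0, r) prior."""
    return 1.0 - (2.0 / math.pi) * math.atan(c / r)

# as r shrinks, hardly any prior mass predicts a non-negligible effect
for r in (1.0, 0.707, 0.3, 0.1, 0.01):
    print(f"r = {r:5.3f}: P(|delta| > 0.2) = {cauchy_tail_mass(0.2, r):.3f}")
```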

(3) The sequential analysis plot shows the evolution of the BF as the data accumulate. Note that JASP assumes the order of the rows is the order in which the data came in. Of course the order does not matter for the end result, but it may be helpful if you want to monitor the evidence for stopping, or if you want to detect outliers. In your case, the evidence oscillates a little but the result that really matters is the last number.
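The sequential plot is essentially the analysis rerun after every added participant. A sketch of the idea with made-up accuracy scores (not Juliane's data); JASP would convert each running t into a Bayes factor:

```python
import math
from statistics import mean, stdev

# hypothetical accuracy scores, in order of testing (not the original data)
scores = [0.58, 0.46, 0.61, 0.55, 0.49, 0.63,
          0.52, 0.57, 0.44, 0.60, 0.53, 0.56]
chance = 0.5

# recompute the one-sample t statistic after each new participant
for k in range(3, len(scores) + 1):
    sub = scores[:k]
    t = (mean(sub) - chance) / (stdev(sub) / math.sqrt(k))
    print(f"n = {k:2d}, t = {t:+.2f}")
```

The running statistic oscillates early on and settles down as n grows, which is exactly the pattern the sequential plot displays.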

Cheers,
E.J.


Do you think it would be worth increasing the sample size, given the oscillation in the sequential analysis?

Best regards,

Juliane


Yes, I think it's definitely worth it! The current result is suggestive, and N is not that high.
E.J.

• Hello, EJ and Juliane.
I benefited immensely from this post. However, what is the best way to interpret the first graph that Juliane presented? What is the relation between density and effect size? In her case, the density is 1 for the prior and .05 for the posterior. Also, what is under the curve of the posterior? I understood the remaining graphs, but the first one is difficult for me. Huge thanks, G.

• Hi wendt,

For a detailed explanation see for instance, on my website, the paper Wagenmakers, E.-J., Love, J., Marsman, M., Jamil, T., Ly, A., Verhagen, A. J., Selker, R., Gronau, Q. F., Dropmann, D., Boutin, B., Meerhoff, F., Knight, P., Raj, A., van Kesteren, E.-J., van Doorn, J., Smira, M., Epskamp, S., Etz, A., Matzke, D., de Jong, T., van den Bergh, D., Sarafoglou, A., Steingroever, H., Derks, K., Rouder, J. N., & Morey, R. D. (in press). Bayesian inference for psychology. Part II: Example applications with JASP. Psychonomic Bulletin & Review. URL: https://osf.io/m6bi8/

The dotted line is the prior distribution for effect size, assuming it is non-zero (i.e., under H1); the solid line is the posterior distribution (again, under H1). It just so happens that when you take the ratio of the heights of these two distributions at the value specified by H0 (i.e., delta = 0), you obtain the Bayes factor: the posterior height divided by the prior height is the degree to which the data support H0 over H1.
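If it helps, here is a toy numerical illustration of that height-ratio rule (the Savage-Dickey density ratio). It uses a normal prior instead of JASP's Cauchy, because the normal case can be cross-checked analytically; all numbers are made up:

```python
import math

def normal_pdf(x, mu, var):
    """Density of a normal distribution with mean mu and variance var."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# assumed toy numbers: observed standardized effect and its standard error
x_bar, se = 0.44, 0.20
prior_var = 1.0  # N(0, 1) prior on delta under H1 (normal, not JASP's Cauchy)

# posterior of delta under H1 (conjugate normal update)
post_var = 1.0 / (1.0 / se**2 + 1.0 / prior_var)
post_mean = post_var * x_bar / se**2

# Savage-Dickey: BF01 = posterior height at delta = 0 / prior height at delta = 0
bf01_sd = normal_pdf(0.0, post_mean, post_var) / normal_pdf(0.0, 0.0, prior_var)

# cross-check against the direct ratio of marginal likelihoods
bf01_direct = (normal_pdf(x_bar, 0.0, se**2)
               / normal_pdf(x_bar, 0.0, se**2 + prior_var))
print(bf01_sd, bf01_direct)  # the two routes agree
```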

Cheers,
E.J.