sample size determination

Hi all,

I conducted some Bayesian t-tests and was asked to justify my sample size. However, I am not sure whether that makes sense, since I used Bayesian analysis...

Any suggestion or useful paper to read?



  • Hi Elena,

    Here is a paper that discusses Bayesian power analysis:

    The authors also developed a web-based app that allows you to compute sample sizes for t-tests.

    Additionally there is a thread on this website titled 'Bayesian 'Power analysis' for calculating sample size with fixed N' where I ask about this myself.

    All the best,


  • Hi Elena,

    1. Gabriel's link is the one I'd also point to for design planning;
    2. Given that the data are already in, all that matters is the evidence. So I agree with you that, from an inference perspective (not from a planning perspective), power analyses do not provide useful information. See also


  • Hi guys and thank you so much for the precious suggestions!

    I know I ask very naive questions (I'm new to Bayesian analyses and, in general, not a statistics genius...), but I have another one...

    Some reviewers asked me to also apply corrections to my t-tests, but I don't see why Bayesian t-tests should be corrected (somehow for the same reason that power analysis does not provide useful information). Any suggestions for this one? Thanks again!

  • What are the t-tests supposed to be corrected for?


  • I have performed several t-tests to compare different conditions in a within-subjects design. If I used "normal" t-tests, I would have corrected using Bonferroni (or whatever) for the number of comparisons made. Do I have to follow the same "rule" for Bayesian t-tests too?

  • Well, that depends. One could argue that for a subjective Bayesian it does not matter what other hypotheses you did or did not test; the evidence is simply the evidence. However, the fact that you were testing several hypotheses might indicate that each hypothesis was relatively implausible a priori. There do exist Bayesian "corrections" for multiple testing, and in JASP these have been implemented as post-hoc tests for ANOVA designs. A useful overview is here:

    Bottom-line: the Bayesian correction is in the prior model probability. Opinions differ as to whether or not the correction should be applied, and I think it depends on how committed you are to the hypotheses you tested -- if it was more of a fishing expedition, a correction is in order; if each comparison is theoretically motivated, the tests stand on their own and I don't see a need for correction.
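The mechanics of "the correction is in the prior model probability" can be sketched in a few lines: the Bayes factor itself is untouched, but a lower prior probability for each tested hypothesis (e.g. when it is one of many exploratory comparisons) yields a lower posterior probability. The prior values below are purely illustrative choices, not a prescribed rule:

```python
def posterior_prob_h1(bf10, prior_h1=0.5):
    """Posterior probability of H1 from a Bayes factor (BF10) and
    a prior probability for H1: posterior odds = BF10 * prior odds."""
    prior_odds = prior_h1 / (1 - prior_h1)
    post_odds = bf10 * prior_odds
    return post_odds / (1 + post_odds)

# Same evidence (BF10 = 6), different prior commitments:
single_planned = posterior_prob_h1(6, prior_h1=0.5)    # one motivated test, ≈ 0.86
one_of_many = posterior_prob_h1(6, prior_h1=0.05)      # 1 of ~10 exploratory tests, ≈ 0.24
```

The same BF10 = 6 is moderately convincing for a single theoretically motivated comparison but much less so when it is one hit out of a fishing expedition, which is exactly the trade-off described above.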


  • You're the best! Thank you very much!
