
"It's good to worry about mistakes when doing stats..."

Hi again!

Following those tweets on the JASP Twitter account, I was wondering whether I could get an additional opinion on my statistical decision-making.

I studied a team that spent a whole year at an isolated and confined research station. The team was made up of 11 people, 10 of whom participated in the study. One team member had to leave the station early due to psychological complications and so did not complete the study; their data will be treated as a case study rather than included in the group analysis.

I administered cognitive assessments to the team at three time points (autumn, polar night, midnight sun) and well-being questionnaires at two additional points (after arrival, spring).

My current statistical approach has been: if the data are non-normally distributed, I use Friedman's test with Wilcoxon signed-rank tests for follow-up, as suggested by Field (2009, pp. 579–580), in R. For normally distributed data in this within-subjects design, I chose a parametric repeated-measures ANOVA with a Huynh-Feldt correction in JASP. The Huynh-Feldt correction reduces the ANOVA's chance of erroneously finding an effect that is not present (a Type I error) despite my small sample, and allows me to use an ordinal DV, such as my questionnaire data (Stiger et al., 1998). For ANOVA effect size, I will report omega squared (ω²) because it is reliable with small sample sizes (Levine & Hullett, 2002). I've also been reporting the Vovk-Sellke maximum p-ratio.
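For what it's worth, the non-parametric branch of that decision rule can be sketched outside of R as well. Below is a minimal Python/SciPy version using simulated scores (all numbers hypothetical, not my data) for n = 10 at the three time points, together with the Vovk-Sellke maximum p-ratio formula:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical scores for n = 10 crew members at three time points
autumn = rng.normal(30, 4, 10)
polar_night = autumn - rng.normal(2, 1, 10)   # simulated dip during polar night
midnight_sun = autumn + rng.normal(0, 1, 10)  # simulated return to baseline

# Omnibus test: Friedman's rank test for three repeated measures
chi2, p_friedman = stats.friedmanchisquare(autumn, polar_night, midnight_sun)

# Follow-up: pairwise Wilcoxon signed-rank tests
# (Bonferroni-corrected alpha = .05 / 3 comparisons)
pairs = [(autumn, polar_night), (autumn, midnight_sun), (polar_night, midnight_sun)]
p_pairwise = [stats.wilcoxon(a, b).pvalue for a, b in pairs]

def vs_mpr(p):
    """Vovk-Sellke maximum p-ratio: an upper bound on the odds against H0
    implied by a p-value alone; equals 1/(-e * p * ln p) for p < 1/e."""
    return 1.0 / (-np.e * p * np.log(p)) if p < 1 / np.e else 1.0

print(f"Friedman chi2 = {chi2:.2f}, p = {p_friedman:.4f}, "
      f"VS-MPR = {vs_mpr(p_friedman):.2f}")
for p in p_pairwise:
    print(f"Wilcoxon p = {p:.4f} (compare against corrected alpha .0167)")
```

As a sanity check on the formula, a p-value of .05 corresponds to a VS-MPR of about 2.46, i.e. the data are at most ~2.5 times more likely under the best-supported alternative than under H0.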

I remember enquiring on this forum and EJ saying that small sample size was not an issue, so I supplemented the frequentist statistics above with Bayesian analyses in JASP. I've usually run a Bayesian within-subjects ANOVA with paired-samples t-tests for follow-up. There are no previous studies on teams like mine from which I could derive information to form a subjective prior.

For the JASP Bayesian ANOVA I've been reporting BF10, BF01, BFM, P(M), P(M|data), and % error. For the t-tests I have additionally been reporting the credible interval. I've been illustrating my PhD chapter with pizza plots for the Bayesian results and bar or line graphs for the frequentist statistics.
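In case it is useful as a rough cross-check on the JASP t-test output: a paired-samples Bayes factor can be approximated from the BIC (Wagenmakers, 2007). This is not the JZS/Cauchy-prior Bayes factor JASP computes by default, so the numbers will differ, but the direction (evidence for vs. against H0) should usually agree. A minimal sketch, with hypothetical data:

```python
import numpy as np

def bf10_bic(x, y):
    """Approximate BF10 for a paired-samples t-test via the BIC
    approximation (Wagenmakers, 2007). H0 fixes the mean difference
    at zero; H1 estimates it freely. Not the JZS Bayes factor that
    JASP reports by default - only a rough sanity check."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    n = d.size
    sse1 = np.sum((d - d.mean()) ** 2)  # residual SS under H1 (free mean)
    sse0 = np.sum(d ** 2)               # residual SS under H0 (mean = 0)
    # BIC0 - BIC1 = n * ln(SSE0/SSE1) - ln(n); BF10 = exp(dBIC / 2)
    delta_bic = n * np.log(sse0 / sse1) - np.log(n)
    return np.exp(delta_bic / 2)

# Hypothetical example: a clear, consistent within-pair difference
# yields BF10 well above 1 (evidence for an effect)
print(bf10_bic([2, 3, 2.5, 3.5, 2, 3, 2.5, 3, 2, 3.5], [0] * 10))
```

With difference scores centred on zero the same function returns a BF10 below 1, i.e. modest evidence for the null, which is the behaviour you'd want when the frequentist and Bayesian results are compared side by side.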

Does this sound okay? Should I do anything else?


  • Yes, this sounds OK. In general, I am in favor of reporting a series of different analyses; hopefully they point in the same direction. At the very least, they allow you to answer slightly different questions. I was surprised and intrigued by the suggestion that a Huynh-Feldt correction allows you to apply ANOVA to an ordinal DV. In my opinion, ordinal data require rank-order methods. Sure, many people gloss over this issue, but since you brought it up: do you have the complete reference for the Stiger paper?

  • Hi E.J.
    Sorry for my delayed response – I'm on holiday and haven't had much access.

    The reference is here:

    I hope this helps and thank you so much for the feedback!

    Oh yeah and:

    Usually my Bayes factors and my frequentist results have pointed in the same direction. Sometimes the p-values point in a direction not supported by the BF; in those cases I've phrased my interpretation along the lines of "In the case of DV 1, a traditional ANOVA allowed the rejection of H0, suggesting that there is an effect. The nature and strength of the effect were not supported/confirmed by the Bayesian ANOVA."

  • Sounds good, thanks for the paper!
