Bayesian meta-analysis: how to calculate the standard error of an effect size?


To run a Bayesian meta-analysis in JASP, I need two inputs: the effect size and the SE of the effect size.

I calculated both; however, I want to make sure I calculated the SE of the effect size properly. Does anyone have a reference I can consult that shows how to do this?


Thanks!

Comments

  • How you calculate the SE depends on your effect-size measure (Cohen's d, log odds ratio, etc.). For any particular measure of interest, I believe the Wikipedia entry generally shows how to compute the SE.

    E.J.

  • I'm using Cohen's d, but I couldn't find it in the Wikipedia entry 🤔

  • I do see it mentioned in https://en.wikipedia.org/wiki/Effect_size#Cohen's_d

    You take the (pooled) SD and divide by sqrt(n) (or use a pooled version in the unpaired, unequal-N case).
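
    For concreteness, a minimal R sketch of that pooled SD for the unpaired, unequal-N case (the inputs are made up purely for illustration):

        # Hypothetical group summaries (not from any real study)
        n1 <- 25; n2 <- 40      # sample sizes
        s1 <- 1.2; s2 <- 1.5    # group standard deviations

        # Pooled SD: each group's variance weighted by its degrees of freedom
        sd_pooled <- sqrt(((n1 - 1) * s1^2 + (n2 - 1) * s2^2) / (n1 + n2 - 2))
        sd_pooled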

    E.J.

  • Ah I see it, thanks.

    But this raises the question: which formula is correct for calculating the standard error of the effect size, given that there are many of them?

    (1) SD(pooled) / sqrt(n)

    or one of these:

    https://stats.stackexchange.com/questions/495015/what-is-the-formula-for-the-standard-error-of-cohens-d
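
    For reference, here is one commonly cited large-sample formula (often attributed to Hedges & Olkin) as a minimal R sketch; the inputs are made up purely for illustration:

        # Hypothetical inputs (not real data)
        n1 <- 25; n2 <- 40   # group sample sizes
        d  <- 0.5            # Cohen's d

        # Large-sample SE of d: a term for the mean difference plus
        # a term for the uncertainty in the standardizer
        se_d <- sqrt((n1 + n2) / (n1 * n2) + d^2 / (2 * (n1 + n2)))
        se_d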

  • Hi Kindred,

    From the code for the two-sample t-tests in JASP, I see the following:

        # ns: sample sizes, ms: means, sds: standard deviations (each a length-2 vector)
        num <- (ns[1] - 1) * sds[1]^2 + (ns[2] - 1) * sds[2]^2
        sdPooled <- sqrt(num / (ns[1] + ns[2] - 2))
        if (test == "Welch")  # Use a different pooled SD for the Welch t-test!
          sdPooled <- sqrt(((sds[1]^2) + (sds[2]^2)) / 2)

        if (optionsList$wantsEffect) {
          if (options$effectSizesType == "cohensD")
            d <- as.numeric((ms[1] - ms[2]) / sdPooled)
          else if (options$effectSizesType == "glassD")
            d <- as.numeric((ms[1] - ms[2]) / sds[2])  # standardize by one group's SD
          else if (options$effectSizesType == "hedgesG") {
            a <- sum(ns) - 2
            logCorrection <- lgamma(a / 2) - (log(sqrt(a / 2)) + lgamma((a - 1) / 2))
            d <- as.numeric((ms[1] - ms[2]) / sdPooled) * exp(logCorrection)  # less biased / corrected version
          }
        }


    Here you can see how the pooled SD is calculated by default (first two lines), how it changes if a different type of t-test is used (e.g., Welch), and how d is standardized differently if other effect sizes are requested.

    ns refers to the sample sizes of the two groups (so it is a vector of length 2), and sds refers to the standard deviations of the two groups (also a vector of length 2).

    So, the default pooled SD that we use for Cohen's d, for the two-sample t-test with equal variances assumed, is given by the first two lines above. I hope this helps; please let me know if anything is still unclear!
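
    To make the snippet concrete, here is a toy run with made-up summary statistics (the values are arbitrary and only illustrate the inputs):

        # Made-up summary statistics for two groups (not real data)
        ns  <- c(30, 45)      # sample sizes
        ms  <- c(10.2, 9.1)   # group means
        sds <- c(2.1, 2.6)    # group standard deviations

        num <- (ns[1] - 1) * sds[1]^2 + (ns[2] - 1) * sds[2]^2
        sdPooled <- sqrt(num / (ns[1] + ns[2] - 2))
        d <- as.numeric((ms[1] - ms[2]) / sdPooled)
        d  # Cohen's d for these toy numbers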

    Kind regards

    Johnny

  • Hello

    Wow, thank you so much! From what I can see, the method I used to calculate the effect sizes and the pooled SD matches your code.

    The only thing I'm still unsure about is the calculation of the standard error of the effect size. In RoBMA (an amazing module, by the way; you all did an incredible job), the inputs are the ES and the SE of the ES. Some of the studies I'm using have an equal N between the control group and the mild traumatic brain injury (mTBI) group, while others don't.

    I ended up using the formula from Hedges & Olkin (2014).

    When using the Wikipedia calculation given by EJ, I'm getting huge variations in the standard errors. I'm not sure which equation to go with. Or should I be using the Wikipedia version if groups have unequal sample sizes and the Hedges & Olkin formula if they are equal?


    Cheers

  • Hi Kindred,

    happy to hear that you like the RoBMA module.

    Regarding the standard errors for Cohen's d: there are, unfortunately, many inconsistencies between different packages, and I think there is no consensus on the best approach. The most important thing is to report which formula you used. I would not recommend mixing the Wikipedia and Hedges & Olkin formulas within a single analysis, as that would make the analysis inconsistent. Personally, I would use the Hedges & Olkin version, but you could also try the Wikipedia version as a sensitivity analysis (see the sketch below).
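
    As one way to set up such a sensitivity check, here is a minimal R sketch that computes the SE under two variants seen in the literature and compares them study by study (the numbers and both formulas are assumptions for illustration, not RoBMA internals):

        # Hypothetical per-study summaries (not real data)
        d  <- c(0.30, 0.55, 0.20)   # Cohen's d per study
        n1 <- c(20, 35, 50)         # group 1 sample sizes
        n2 <- c(20, 28, 50)         # group 2 sample sizes

        # Variant A: large-sample formula often attributed to Hedges & Olkin
        se_a <- sqrt((n1 + n2) / (n1 * n2) + d^2 / (2 * (n1 + n2)))

        # Variant B: same idea, with the d^2 term scaled by the degrees of freedom
        se_b <- sqrt((n1 + n2) / (n1 * n2) + d^2 / (2 * (n1 + n2 - 2)))

        round(cbind(se_a, se_b), 4)  # compare the two sets of standard errors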

    Cheers,

    Frantisek

  • Great, thank you so much for the info, everyone!
