
Wilcoxon signed rank test

I ran a Wilcoxon test on the following data: pretest: 63 75 75 74 65 63 and posttest: 70 48 65 60 68 57

I obtained a W of 17 with a p level of 0.219. In SPSS and Simstat, for the same data, I get a p level of 0.173 with a Z value of 1.362. I have 2 negative ranks and 4 positive ones. Why are the results so different? When I do a t test, the t value is the same in all three programs.
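
For reference, the W of 17 is the sum of the ranks of the absolute pretest-posttest differences that are positive. A minimal R sketch of that bookkeeping (variable names here are only illustrative):

pre <- c(63, 75, 75, 74, 65, 63)
post <- c(70, 48, 65, 60, 68, 57)
d <- pre - post        # -7 27 10 14 -3 6
r <- rank(abs(d))      # ranks of |d|: 3 6 4 5 1 2
sum(r[d > 0])          # 17, the sum of the positive ranks (W)
sum(r[d < 0])          # 4, the sum of the negative ranks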

Thanks

Comments

  • In my last comment, I forgot to raise another point. The matched rank-biserial correlation is supposed to be the number of + ranks divided by the total number of ranks minus the number of - ranks divided by the total number of ranks. The result of 4/6 minus 2/6 is supposed to give 0.333. In the output, I have 0.619.

    Thanks again for your attention

  • Hi, I suppose this question is about JASP? I will move it to the proper subforum.

    Eduard


  • Hi Statisticum,


    Which test are you running in SPSS and Simstat? Do these programs give the Z value by default for the Wilcoxon signed rank test?

    In any case, I checked the data in R, and the wilcox.test gives the same result as JASP for paired observations (see the code below).

    As for the effect size, the method you describe is for the sign test, where the ranks of the absolute differences are not taken into account. For the signed rank test, the magnitude of the difference also matters for the effect size. We follow the procedure described in Kerby (2014), who writes it as follows:

    "The result is that the matched-pairs rank-biserial correlation can be expressed r = (SF/S) – (SU/S), a difference between two proportions."

    In R, this can be calculated:

    pre <- c(63, 75, 75, 74, 65, 63) 
    post <- c(70, 48, 65, 60, 68, 57)
    
    wilcox.test(pre, post, paired = TRUE) # V = 17, p-value = 0.2188
    
    # Signed differences, their signs, and the ranks of the absolute differences
    diffs <- pre - post
    ss <- sign(diffs)
    absRanked <- rank(abs(diffs))
    
    # Rank-biserial correlation: (sum of + ranks - sum of - ranks) / total rank sum
    posDiffsRankSum <- sum(absRanked[ss == 1])
    negDiffsRankSum <- sum(absRanked[ss == -1])
    (posDiffsRankSum - negDiffsRankSum) / sum(absRanked) # 0.6190476
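    
    # For comparison, the sign-test-style calculation from the question
    # (number of + ranks minus number of - ranks, divided by the total number
    # of ranks), which ignores the rank magnitudes:
    (sum(ss == 1) - sum(ss == -1)) / length(ss) # 0.3333333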
    

    This reproduces the JASP results.
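
    As for the difference in p values: the 0.219 from JASP and R is the exact two-sided p value for W = 17 with n = 6, whereas a Z of 1.362 with p = 0.173 is what the normal approximation without continuity correction gives for the same data. A minimal sketch in R, assuming (this is only an assumption) that SPSS and Simstat report that approximation by default:

    wilcox.test(pre, post, paired = TRUE, exact = FALSE, correct = FALSE) # V = 17, p-value = 0.1729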

    Kind regards,

    Johnny
