F-test of equality of variances

MSB
edited May 2016 in JASP & BayesFactor

Hi all,

I was wondering if there was a way to conduct a Bayesian F-test of equality of variances - I have two groups and I want to show that their variances are equal (H0 is true).

Is there a way to do this with JASP? or R? or any other way (I have a calculator and I'm not afraid to use it!)?

Thanks,

M

Comments

  • EJ
    edited 3:03PM

    Hi M,

    This is on our to-do list. A recent paper on the topic is here:
    http://sci-hub.cc/10.1016/j.jmp.2015.08.001

    I'm not sure whether there is R code, but I expect there is.

    Cheers,
    E.J.

  • MSB
    edited 3:03PM

    Thanks EJ!

  • Hi all,

    I was wondering the same thing as MSB, and the link E.J. gave is now dead.

    I am looking for a Bayesian way to compare hypotheses of equal and unequal variances in two groups, in the same way as is usually done with means (by computing a Bayes factor). Does anyone know how to do this in R or in any other software?

    Thanks for your help,

    Izymil

  • If you have two groups, one strategy might be to take the F value from Levene's test, take its square root to turn it into a t value, and then use the "Summary Stats" feature within JASP.
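    For two groups, Levene's F statistic has one numerator degree of freedom, so its square root is indeed a t value. A minimal sketch in Python/SciPy (the data here are simulated for illustration; they are not from the thread):

```python
import numpy as np
from scipy import stats

# Simulated two-group data (illustrative only)
rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 30)
b = rng.normal(0.0, 1.5, 30)

# Classic Levene's test (mean-centered variant)
F, p = stats.levene(a, b, center="mean")

# With 1 numerator df, F = t^2, so t = sqrt(F)
t_from_F = np.sqrt(F)

# Cross-check: a pooled t-test on the absolute deviations
# gives the same t statistic up to sign
t_check, _ = stats.ttest_ind(np.abs(a - a.mean()), np.abs(b - b.mean()))
print(t_from_F, abs(t_check))
```

    That t (with n1 + n2 - 2 degrees of freedom) could then be entered into JASP's Summary Stats feature; whether the resulting Bayes factor is a good approximation is exactly the question raised in the next reply.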

  • Maybe that is a good approximation in some circumstances (I haven't checked), but it's not the real thing (i.e., computing a ratio of marginal likelihoods), and we need the real thing to check whether and under what circumstances the approximation is good.

    E.J.

  • @EJ - Does that mean I need something more than what is currently available in JASP? Any chance you have a clue where I can search for this "real thing"?

  • :-) The real thing is a direct comparison of variances. We have something really cool under development here (there will be a blog post and a preprint once it's done), but there is also recent work in the Tilburg lab of Joris Mulder. This is not in JASP (yet) but will give you an idea, and perhaps they have R code.

    E.J.

  • I suppose my thinking was as follows.

    If we take Levene's test in the frequentist framework, it appears to be essentially a one-way ANOVA across each group's deviations from its mean.

    Example 1: 3 separate groups.

    If we run a one-way ANOVA on these data and tick Levene's test, we get the following for the one-way ANOVA:

    F = 2.608, p = 0.114

    and for Levene's test:

    F = 1.367, p = 0.2916

    Now, if we want to run Levene's test ourselves, we look at each group and take each value minus that group's mean. The group means are:

    Group 0: 6.4

    Group 1: 4.2

    Group 2: 7.0

    When we take each value minus its group mean, we get the deviation scores. We then turn each negative into a positive (i.e., take absolute values).

    If we now run a one-way ANOVA with "all values made positive" as the dependent variable and "group" as the independent variable, we get:

    F = 1.367, p = 0.2916

    This is the same result as the original Levene's test.

    If we run it through a Bayesian one-way ANOVA instead, we get:

    BF10: 0.635
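    The steps above can be reproduced in a few lines. A sketch in Python/SciPy, using made-up values for the three groups since the original data were posted as an attachment that is no longer shown:

```python
import numpy as np
from scipy import stats

# Made-up data for three groups (the thread's original values are not shown)
g0 = np.array([5.0, 6.5, 7.0, 6.0, 7.5])
g1 = np.array([3.0, 4.5, 5.0, 4.0, 4.5])
g2 = np.array([6.0, 7.5, 8.0, 6.5, 7.0])

# Take each value minus its group mean, then make every deviation positive
disp = [np.abs(g - g.mean()) for g in (g0, g1, g2)]

# A one-way ANOVA on the "all values made positive" scores...
F_disp, p_disp = stats.f_oneway(*disp)

# ...reproduces Levene's test on the raw scores exactly
F_lev, p_lev = stats.levene(g0, g1, g2, center="mean")
print(F_disp, F_lev)
```

    A Bayesian one-way ANOVA on the same dispersion scores (e.g., anovaBF in the BayesFactor R package, or JASP's Bayesian ANOVA) then yields the kind of BF10 reported above.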




  • @jploenneke Yes. Another way to put it is: Levene's test is just a one-way ANOVA on dispersion scores (absolute deviations from the group mean) that indirectly tests the equality of variances in the untransformed scores. So a Bayesian one-factor ANOVA on dispersion scores is equivalent to a Bayesian Levene's test.


    Richard Anderson
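    As an aside: this is not what JASP's Bayesian ANOVA computes (JASP uses a JZS-style prior), but one rough, self-contained way to get a Bayes factor for the dispersion scores is the BIC approximation (Wagenmakers, 2007). Everything below, including the data and the helper function, is an illustrative sketch, not JASP's method:

```python
import numpy as np

def bic_gaussian(y, groups=None):
    """BIC of a Gaussian model with one grand mean (groups=None) or a
    separate mean per group; constant terms cancel in BIC differences."""
    y = np.asarray(y, dtype=float)
    n = y.size
    if groups is None:
        resid = y - y.mean()
        k = 2                                 # grand mean + error variance
    else:
        groups = np.asarray(groups)
        fitted = np.array([y[groups == g].mean() for g in groups])
        resid = y - fitted
        k = np.unique(groups).size + 1        # group means + error variance
    rss = np.sum(resid ** 2)
    return n * np.log(rss / n) + k * np.log(n)

# Made-up dispersion scores (absolute deviations) for two groups
rng = np.random.default_rng(0)
disp = np.concatenate([np.abs(rng.normal(0, 1.0, 25)),   # group 0
                       np.abs(rng.normal(0, 2.0, 25))])  # group 1
grp = np.repeat([0, 1], 25)

bic0 = bic_gaussian(disp)         # H0: equal dispersion (one common mean)
bic1 = bic_gaussian(disp, grp)    # H1: group-dependent dispersion
bf10 = np.exp((bic0 - bic1) / 2)  # approximate BF for unequal variances
print(bf10)
```

    The BIC approximation implies a unit-information prior, so the number will generally differ from JASP's default; it is only a quick sanity check.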

  • Hmm, this is interesting - I'll pass it on.

    Cheers,

    E.J.

  • I start to see some bright blue sky shining through the clouds :D Thank you all.

    E.J. - I would gratefully use this cool thing, let us know when you're done :)

    If I compared variances not between the whole groups but between the within-subject variances in group A and group B, would that change anything you all said?

    Participants in group A answer items x, y, and z, and so do participants in group B. My hypothesis is that participants in group A will have greater variance across x, y, and z than participants in group B.

    Best, Izymil

  • Ah, I see. Well, then you can simply compute the sample variance per subject and t-test it between the groups. Of course this ignores the uncertainty about the sample variance, but that is the approach people often use currently.

    Cheers,

    E.J.
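    A sketch of this suggestion in Python (the responses below are simulated; in Izymil's design each subject answers items x, y, and z):

```python
import numpy as np
from scipy import stats

# Simulated responses: rows = subjects, columns = items x, y, z
rng = np.random.default_rng(42)
group_a = rng.normal(5.0, 2.0, size=(20, 3))  # group A: larger item spread
group_b = rng.normal(5.0, 0.5, size=(20, 3))  # group B: smaller item spread

# One sample variance per subject, computed across the three items
var_a = group_a.var(axis=1, ddof=1)
var_b = group_b.var(axis=1, ddof=1)

# Compare the per-subject variances between the groups
t, p = stats.ttest_ind(var_a, var_b)
print(t, p)
```

    For a Bayesian version, the per-subject variances could instead be fed to a Bayes-factor t-test (e.g., ttestBF in the BayesFactor R package, or JASP's Bayesian independent-samples t-test).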

  • edited May 31

    But the variances would not approximate normal distributions, which would be problematic for a t-test. Wouldn't the most straightforward, well-established way to do this be to simply run a repeated-measures ANOVA with Condition (with levels X, Y, and Z) as a repeated measure and Group as a between-subjects factor? In the results, the interaction term would be interpretable as the degree to which the variance in the DV, across the levels of Condition, depends on group.

    Richard Anderson

  • What needs to be normal is the distribution of variances across subjects, right? I don't see why that wouldn't be (approximately) normal.

    In your RM ANOVA, I am not sure how your ultimate test involves the variance across x, y, and z. The interaction term would mean that the effects of x, y, and z differ depending on the group. This can happen without variances coming into play.

    Cheers,

    E.J.

  • E.J. wrote: "Well, then you can simply compute the sample variance per subject and t-test it between the groups. Of course this ignores the uncertainty about the sample variance, but that is the approach people often use currently."

    E.J. - Can you give an example (a paper) using the method you described? Does it have a name? Is it possible to perform it in some software, or is it necessary to write code? I am an R and Bayesian beginner; to date I rarely do anything outside the graphical interface, so I am not always aware of how things are actually computed.

    Best wishes,

    Izymil

  • Hi Izymil,

    Well, there are so many papers that use a t-test to compare the means of two groups on some dependent measure X. In this case X is the variance, but that is fine; it could have been anything else.

    Cheers.

    E.J.

  • Oh, now I understand how basic a test we are talking about - you just said that we should compute the per-subject variances instead of the raw scores and t-test them... I think this is just what I need, but I had a blind spot. Thank you for clearing this up :)

    Best wishes,

    Izymil

  • edited July 12


    In the attached file, you can see that when an ANOVA is conducted on the raw Y data, along with Levene's test for variance inequality, Levene's test shows that the variance for Group A is significantly different from the variance in Group B (F(1,8) = 23.30, p = .001). JASP calculated the Levene's test result by running an ANOVA not on the raw scores but on the dispersion scores, where each dispersion score is the absolute value of the raw score's deviation from the group mean. The attached file demonstrates this by explicitly running a second ANOVA on the dispersion scores. Notice that the F, degrees of freedom, and p for the second ANOVA (i.e., the ANOVA on the dispersion scores) are identical to those for Levene's test in the first ANOVA. Thus, it is clear that Levene's test tests the equality of variances indirectly, by testing whether the mean of the dispersion scores differs significantly between Group A and Group B. (Also note that Levene's test is always a one-factor rather than a multi-factor ANOVA.)

    -- Rich Anderson

