Bayes Factor for single case - control comparison

Dear JASP community,

In my study, I use the Crawford modified t-test to compare a single patient to a group of controls (n = 50). To my understanding, the formula is like the classical t-test, except that one group has n = 1, so the variance is estimated from the second (control) group only.
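
If it helps, here is a minimal Python sketch of the Crawford & Howell (1998) formula as I understand it (the function and variable names are mine, purely for illustration):

    import numpy as np
    from scipy import stats

    def crawford_howell_t(case, controls):
        """Modified t-test comparing a single case to a control sample
        (Crawford & Howell, 1998): the variance comes from the controls only."""
        controls = np.asarray(controls, dtype=float)
        n = controls.size
        t = (case - controls.mean()) / (controls.std(ddof=1) * np.sqrt((n + 1) / n))
        df = n - 1
        p = 2 * stats.t.sf(abs(t), df)  # two-tailed p-value
        return t, df, p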

In my study I want to quantify the amount of evidence for the null hypothesis (that the patient is NOT different from the controls). To do that I'd like to use the Bayesian t-test. I wanted to compute it in JASP, but the program does not allow a comparison when one of the groups has n = 1.
Is there a specific reason for that? Is there a way to overcome it? I will be grateful for your suggestions, Kasia

Comments

  • Hi Kasia,

    That's a remarkable test. It does make sense to me to present the quantile that the patient represents in the control population, and perhaps some uncertainty that comes with that quantile (for instance through bootstrapping or a Bayesian procedure). But a test... In the test you mention, you are still comparing the mean of group 1 (here, the score of the single patient) to the mean of group 2. But why would you want to know whether the patient differs from the mean of the control group? And H0 in this case would be "the patient does not differ from the control group mean" -- but is that a plausible or interesting null to test? I don't know, and this is a gut-level response, but I am a little puzzled.
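
    For what it's worth, here is a rough sketch of the bootstrap idea (my own illustration, not a worked-out procedure; the names are made up):

        import numpy as np

        rng = np.random.default_rng(2017)

        def case_quantile_bootstrap(case, controls, n_boot=10_000):
            # Quantile of the single case within the control sample,
            # plus a bootstrap interval to express the uncertainty in that quantile.
            controls = np.asarray(controls, dtype=float)
            boot = np.empty(n_boot)
            for b in range(n_boot):
                resample = rng.choice(controls, size=controls.size, replace=True)
                boot[b] = np.mean(resample < case)
            point = np.mean(controls < case)
            lo, hi = np.percentile(boot, [2.5, 97.5])
            return point, (lo, hi)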

    At any rate, I don't think there is a specific reason that it does not work in JASP -- the constraint was probably hard-coded, but the assumption of a common variance should make the test possible, although I am not 100% certain. I am alerting Alexander Ly to this post; he might offer some insights.

    Interesting problem.

    Cheers,
    E.J.

  • Hi Kasia,

    The default Bayes factor, which is based on a Cauchy distribution on the population effect size, is set up in such a way that it returns one whenever the data are uninformative, and data sets in which one of the groups has fewer than two observations are considered uninformative.

    The reason such a data set is perceived as uninformative is that the two-sample t-test problem is parameterised with a grand mean, an effect size parameter, and a shared standard deviation. With only one observation in group A, say, we cannot distinguish whether this observation is due to the grand mean or to the presence of a population effect size.

    Mathematically, in group A we have three unknowns (the population grand mean, the effect size, and the standard deviation) and only one known quantity (the single observation). The assumption that the two groups share a standard deviation removes only one unknown, which leaves us with two unknowns (grand mean and effect size). In effect, we have one equation with two unknowns and only one known quantity, which cannot be solved; see

    Ly, A., Verhagen, A. J., & Wagenmakers, E.-J. (2016). An evaluation of alternative methods for testing hypotheses, from the perspective of Harold Jeffreys. Journal of Mathematical Psychology, 72, 43-55.

    for a more thorough elaboration. In other words, using the default Bayesian two-sample t-test with group A consisting of only one participant returns a default Bayes factor of one.
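
    As a small numerical illustration of this non-identifiability (my own notation, not taken from the paper): with a single observation in group A, the likelihood depends on the grand mean and the effect size only through the group mean mu + sigma*delta/2, so very different combinations of the two fit that observation equally well.

        from scipy import stats

        # Single observation in group A, modelled as y_A ~ N(mu + sigma*delta/2, sigma^2).
        y_A, sigma = 3.0, 1.0
        for mu, delta in [(3.0, 0.0), (2.0, 2.0), (4.0, -2.0)]:
            like = stats.norm.pdf(y_A, loc=mu + sigma * delta / 2, scale=sigma)
            print(f"mu = {mu}, delta = {delta}: likelihood = {like:.4f}")  # identical for all three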

    To overcome this problem, we have to set up the model a bit differently, that is, assume a bit more based on what is known. There are various ad hoc ways to hack it, but it requires a bit more thought. I hope this answers your question, at least partly.

    Cheers,
    Alexander

  • Dear E.J. and Alexander,

    Thanks a lot for your responses! Indeed, I present my data graphically as categorical scatter plots and clearly show that my patient falls within the distribution of the control group.

    As for the quantification, the Crawford method is very popular in cognitive neuropsychology; in fact, it is the accepted statistical approach for comparing a single case to a group of controls (which is usually modest in size). Monte Carlo simulations have shown it to be robust against Type I errors. For details and reprints, see: http://homepages.abdn.ac.uk/j.crawford/pages/dept/SingleCaseMethodology.htm#f

    Actually, using the MATLAB functions from https://sampendu.net/bayes-factors/ I applied the two-sample Bayesian t-test to my data and it yielded meaningful results.
    I used the t2smpbf function, which gives the "BF10 for a two-sample t-test with t-statistic t and sample sizes nx and ny. (See Rouder et al., 2009 for details)". I wanted to recalculate my results with JASP to see whether it replicates the results of the MATLAB function.
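
    For reference, this is roughly how I understand such a formula, written as a Python sketch of the JZS Bayes factor described by Rouder et al. (2009); it is my own illustration, not the actual MATLAB code, and r is the Cauchy prior scale (Rouder et al. use r = 1):

        import numpy as np
        from scipy import integrate

        def jzs_bf10(t, nx, ny, r=1.0):
            # BF10 for a two-sample t-test from the t-statistic and group sizes,
            # with a Cauchy(0, r) prior on effect size (Rouder et al., 2009).
            n_eff = nx * ny / (nx + ny)   # effective sample size
            nu = nx + ny - 2              # degrees of freedom
            # marginal likelihood under H0 (effect size fixed at zero)
            m0 = (1 + t**2 / nu) ** (-(nu + 1) / 2)
            # marginal likelihood under H1: integrate over g, where the Cauchy prior
            # on effect size corresponds to g ~ Inverse-Gamma(1/2, r^2/2)
            def integrand(g):
                return ((1 + n_eff * g) ** (-0.5)
                        * (1 + t**2 / ((1 + n_eff * g) * nu)) ** (-(nu + 1) / 2)
                        * (r / np.sqrt(2 * np.pi)) * g ** (-1.5)
                        * np.exp(-r**2 / (2 * g)))
            m1, _ = integrate.quad(integrand, 0, np.inf)
            return m1 / m0

        print(jzs_bf10(t=1.5, nx=1, ny=50))  # returns a value different from 1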

    Therefore, I am wondering about your comment, Alexander, because the Bayes factors I obtained for my data were not at all equal to 1.

    Please note that I am fairly new to the Bayesian approach, so excuse me if my questions/remarks are naive.

    Thanks! Kasia

  • Hi Kasia,

    My approach was via the likelihood of the raw data in relation to the parameters, which I believe is something different from calculating a statistic and entering it into a test, because the transformations you apply to the data are not necessarily reflected in the parameters. Regardless, I'm glad that you've worked out the problem.

    Cheers,
    Alexander
