neuroimaging data and BF

Hi,

I ran a few studies with functional near-infrared spectroscopy (fNIRS), and while I think it could be a useful method, it's still early days. The results in the literature are messy: everyone reports results differently (oxy signals, deoxy signals, both, or the difference between the two). There are also various preprocessing options, widely discussed with no consensus reached. Lastly, since the typical fNIRS analysis spits out beta values that are simply analysed with t tests, cognitive studies face the issue of correcting for multiple comparisons. Some people report uncorrected p values (as if corrected p values weren't bad enough...), but that practice seems to be dying down. The problem is that correction in fNIRS seems very conservative and eradicates everything, or most of it. It's all a bit of a mess!

So I want to use Bayes factors (BFs) to evaluate all of these issues. I can run analyses comparing the different preprocessing approaches, analysing the various signals, and getting beta values for each. I do of course get p values too, simply because that's what we (sadly) are still expected to report, and in some regards they might be useful in guiding my BF analyses, keeping in mind they are uncorrected p values...?

I'm a JASP convert (and novice), and I think it would provide a useful tool here for evaluating the evidence from fNIRS data. I'm not sure my supervisor or reviewers will be okay with this, but I want to try to make the argument. Do you think that makes sense?

Thanks!

Comments

  • Hi metamorphose42,

    Yes, you can use BFs. Opinions differ on what to make of this in the case of multiple comparisons. I personally am in the Jeffreys, Scott & Berger camp, who argue that you ought to adjust your prior odds. Basically, they argue that when you are fishing for signal, you probably expect most of the results to return noise. So you can either set the prior odds to a specific number (or proportion of tests conducted), or put a prior on it. There is another camp of people who feel that the evidence is just the evidence, and that the prior odds are solely a function of relative plausibility. That may be so, but with 80,000 tests that would be difficult to determine. Bottom line: this is very useful but not so simple.

    Cheers,
    E.J.
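
    The prior-odds adjustment described above can be sketched in a few lines of Python. This is an illustration, not JASP output; the Bayes factor and the expected proportion of true effects are hypothetical numbers, and the key identity is simply posterior odds = prior odds × BF.

    ```python
    # Sketch of adjusting prior odds for multiplicity.
    # Assumption: when fishing across many tests, the researcher expects
    # only a small proportion of them (here 5%) to be true effects.

    def posterior_odds(bf10, prior_prob_h1):
        """Posterior odds for H1: prior odds times the Bayes factor."""
        prior_odds = prior_prob_h1 / (1.0 - prior_prob_h1)
        return bf10 * prior_odds

    bf10 = 10.0  # data favour H1 ten-fold

    # A single planned test: H1 judged as plausible as H0 a priori.
    single = posterior_odds(bf10, prior_prob_h1=0.5)    # prior odds 1:1

    # One of 100 exploratory tests, ~5 expected to be real effects.
    multiple = posterior_odds(bf10, prior_prob_h1=0.05)  # prior odds 1:19

    print(single, multiple)
    ```

    The same BF of 10 thus yields posterior odds of 10 for the planned test, but posterior odds close to 1 under the sceptical prior: the "correction" lives entirely in the prior odds, not in the evidence.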

  • Thanks so much. I will certainly read up on this further, but I already tend towards adjusting the prior.

    One more question: I lost a lot of data, going from 25 participants down to as few as df = 9 for some t tests, yet I still get a very strong BF. But the BF isn't independent of sample size, is it? I faintly remember from the workshop that there was a way to explore sample-size issues; was it sequential analysis?

    Many many thanks!

  • The BF simply quantifies the evidence. On average, more participants means more evidence, but it is possible to obtain decisive evidence with few observations.

    The sequential analysis shows the evidential flow as the sample grows.
    E.J.
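
    The idea can be illustrated with a toy model (a sketch, not JASP's implementation): for binary data, the Bayes factor for H1: θ ~ Beta(1,1) against H0: θ = 0.5 has a closed form, so recomputing it after every observation traces the evidential flow as the sample grows. The data sequence below is hypothetical.

    ```python
    from math import comb

    def bf10_binomial(k, n):
        """BF10 for H1: theta ~ Beta(1,1) vs H0: theta = 0.5,
        given k successes in n trials.

        Under the uniform prior the marginal probability of k successes
        is 1/(n + 1); under H0 it is C(n, k) * 0.5**n.
        """
        return (1.0 / (n + 1)) / (comb(n, k) * 0.5 ** n)

    data = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]  # hypothetical binary observations

    k = 0
    for i, x in enumerate(data, start=1):
        k += x
        print(f"n={i:2d}  BF10={bf10_binomial(k, i):.3f}")
    ```

    Plotting BF10 against n gives the same kind of picture as JASP's sequential analysis: the evidence can wander, and a study may reach a strong BF early or late depending on the data, not only on the final sample size.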

  • Thanks so much, so very helpful!

  • Hi EJ,

    I too am writing a paper in an area almost untouched by Bayesian stats: second language acquisition. Studies in this area are often underpowered and feature uncorrected multiple comparisons. Nonetheless, they have no problem drawing very definitive conclusions from comparisons with p values less than 0.05. I mainly want to use Bayes factors as a more appropriate measure of the strength of evidence for particular hypotheses. Now, I myself am going to make multiple comparisons on a relatively small dataset (n = 40, over two time points, analysed with about 10 separate repeated-measures ANOVAs, as one can't yet do a Bayesian MANOVA in JASP).

    I would like to provide a discussion of correction for multiple hypotheses. You mention that "there is another camp of people who feel that the evidence is just the evidence". Would it be possible to point me in the direction of that literature?

    Best,

    Gareth.

  • Hi Gareth,

    This is consistent with subjective Bayesianism. I know Dennis Lindley was of this opinion, for instance, but perhaps it is also mentioned in Edwards, Lindman, & Savage (1963). The idea is that as long as you have specified probabilities for the hypotheses you are planning to test, and they really reflect your belief, then Bayes' rule simply updates that knowledge, and it does not matter how many other hypotheses are in the mix.

    So in terms of fMRI, suppose researcher A measures 80,000 voxels and believes that each has a 50-50 shot of being active. Researcher B only measures 1 voxel and believes that it has a 50-50 shot of being active. Now suppose the overlapping voxel shows the same data; according to a subjective Bayesian, the inference for this voxel ought to be the same, regardless of the fact that A also measured 79,999 other voxels.

    Of course, multiple comparisons often signal a lack of prior conviction. Anyhow, the subjective opinion may also have been discussed briefly in the work on objective Bayesian solutions (e.g., Scott & Berger, 2006, 2010).

    Cheers,
    E.J.

  • In my search for how BFs and correction for multiple testing are related, I found the posts above.
    I computed Bayes factors for a group comparison on seven variables, using H1: delta ~ Cauchy(width = 0.707).
    Furthermore, I estimated BFs for seven correlations, using a stretched beta prior with width 1.

    Can I make the argument not to correct p-values for multiple testing by providing the BFs to inform the reader about the likelihood of the presence or absence of an effect? I was thinking along the following lines:

    "Importantly, when a frequentist approach (e.g., a t-test) indicates the absence of an effect (e.g., p > 0.05), the Bayesian approach can give additional information by assessing the likelihood of this absence. A Bayes factor (BF01) higher than 1 indicates that the data are more likely under the null hypothesis than under the alternative hypothesis. For example, a BF01 of 1.5 means that the data are 1.5 times more likely to occur under the null hypothesis than under the alternative hypothesis. A high BF01 (> 3) favors the hypothesis that there is no effect, while a low BF01 (< 0.33, i.e., BF10 = 1/BF01 > 3) favors the hypothesis that there is an effect. A BF01 between 0.33 and 3 indicates that the evidence is inconclusive.
    A correction for multiple testing becomes redundant when using Bayesian statistics. Frequentist approaches, e.g., t-tests, use significance levels to draw conclusions about accepting or rejecting a hypothesis. These approaches need to guard against an increase in type I errors by correcting for multiple comparisons. Conversely, the Bayesian approach allows for assessing the likelihood of a hypothesis. Therefore, type I errors do not exist with this approach and a correction for multiple comparisons is not needed."

    thanks
    NvH
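
    The 3 and 1/3 thresholds in the draft above (common rules of thumb, not a universal standard) can be sketched as a small classifier; the category labels here are illustrative.

    ```python
    def evidence_category(bf01):
        """Classify a BF01 using the 3 / (1/3) rule of thumb."""
        if bf01 > 3:
            return "evidence for H0"
        if bf01 < 1 / 3:  # equivalently, BF10 = 1/BF01 > 3
            return "evidence for H1"
        return "inconclusive"

    for bf01 in (6.0, 1.5, 0.2):
        print(bf01, evidence_category(bf01))
    ```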

  • Hi NvH,

    Yes, this has been argued, mostly by subjective Bayesians such as Lindley. The Bayesian "correction" for multiplicity is in the prior model probability. If you are testing 10 effects, do you really believe that every single test is plausible, just as plausible as if you had set out to test a single one? You are right that the BF is not affected by multiplicity. So subjective Bayesians will argue that no correction is needed (because you base your plausibility assessment on prior knowledge). Objective Bayesians such as Jeffreys and Berger believe that the prior model probability needs to be adjusted. The papers by Scott and Berger on linear regression are informative in this respect. It is an interesting issue.

    Cheers,
    E.J.
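
    The objective-Bayes adjustment E.J. mentions can be sketched as follows (an illustration in the spirit of Scott & Berger, not their exact procedure): hold the expected number of true effects fixed while the number of tests m grows, so the per-test prior probability, and hence the posterior odds for any one test, shrinks even though the BF itself is unchanged. All numbers are hypothetical.

    ```python
    # Multiplicity correction via the prior model probability.
    # Assumption: the researcher expects roughly one true effect,
    # regardless of how many tests m are run.

    def per_test_posterior_odds(bf10, m, expected_true=1.0):
        """Posterior odds for one test out of m, with a fixed
        expected number of true effects spread over the m tests."""
        p = expected_true / m  # per-test prior probability of a true effect
        return bf10 * p / (1.0 - p)

    bf10 = 5.0  # the evidence itself is unaffected by multiplicity
    for m in (2, 10, 100):
        print(m, per_test_posterior_odds(bf10, m))
    ```

    With a constant BF of 5, the posterior odds fall as m grows: the multiplicity penalty is carried entirely by the prior model probability, which is exactly the distinction between the subjective and objective camps discussed above.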

  • Hmm, I notice I basically re-entered my earlier answer. Well, it goes to show I didn't change my mind about this.
    E.J.
