neuroimaging data and BF


I ran a few studies with functional near-infrared spectroscopy (fNIRS), and while I think it could be a useful method, it's still early days. The results in the literature are therefore messy: everyone reports results in different ways (oxy signals, deoxy signals, both, or the difference between the two). There are also various preprocessing options, which are widely discussed with no consensus reached. Lastly, since the typical fNIRS analysis spits out beta values that are then analysed in t tests, cognitive studies face the issue of correcting for multiple comparisons. Some people report uncorrected p values (as if corrected p values weren't bad enough...), though that seems to be dying down. The problem with correction in fNIRS seems to be that it's very conservative and eradicates everything, or most of it. It's all a bit of a mess!

So I want to use BFs to evaluate all of these issues. I can run analyses comparing the different preprocessing approaches, analyse the various signals, and get beta values for all of them. I do of course get p values too, as that's what we (sadly) are still expected to report, and in some regards they might be useful in guiding my BF analyses, keeping in mind that they are uncorrected p values...?

I'm a (novice) JASP convert, and I think it would provide a useful tool here for evaluating the evidence from fNIRS data. I'm not sure my supervisor or reviewers will be okay with this, but I want to try to make the argument. Do you think that makes sense?



  • Hi metamorphose42,

    Yes, you can use BFs. Opinions differ on what to make of them in the case of multiple comparisons. I personally am in the Jeffreys/Scott & Berger camp, who argue that you ought to adjust your prior odds. Basically, they argue, when you are fishing for signal, you probably expect most of the results to return noise. So you can either set the prior odds to a specific number (or a proportion of the tests conducted), or put a prior on it. There is another camp of people who feel that the evidence is just the evidence, and that the prior odds are solely a function of relative plausibility. That may be so, but with 80,000 tests that would be difficult to determine. Bottom line: this is very useful, but not so simple.
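    The prior-odds adjustment described above can be sketched in a few lines. This is a minimal illustration, not JASP's implementation: it assumes you can summarise your multiplicity concern as a single prior probability of H1 (e.g., the proportion of tests you expect to be true signal), and it simply applies Bayes' rule on the odds scale.

    ```python
    def posterior_prob(bf10, prior_prob):
        """Turn a Bayes factor (BF10) and a prior probability of H1
        into a posterior probability of H1 via Bayes' rule on odds."""
        prior_odds = prior_prob / (1.0 - prior_prob)
        posterior_odds = bf10 * prior_odds
        return posterior_odds / (1.0 + posterior_odds)

    # A single planned test with even prior odds: BF10 = 10 is persuasive.
    single = posterior_prob(bf10=10.0, prior_prob=0.5)    # about 0.91

    # Fishing across many tests while expecting only ~1% true signals:
    # the very same BF10 = 10 leaves H1 less probable than H0.
    screened = posterior_prob(bf10=10.0, prior_prob=0.01)  # about 0.09
    ```

    The point of the sketch: the Bayes factor itself is unchanged by the number of tests; what the multiplicity adjusts is the prior odds you multiply it by.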


  • Thanks so much. I will certainly read up on this further, but I already tend towards adjusting the prior.

    One more question: I lost a lot of data, going from 25 participants down to as few as df = 9 for some t tests, yet I still get a very strong BF. The BF isn't independent of sample size, though, is it? And I faintly remember from the workshop that there was a way to explore sample size issues; was it sequential analysis?

    Many many thanks!

  • BF simply quantifies the evidence. On average, more participants means more evidence, but it is possible to obtain decisive evidence with few observations.

    The sequential analysis shows the evidential flow as the sample grows.
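    A sequential analysis of this kind can be sketched as follows. To keep it self-contained I use a toy normal model with known variance (H0: mu = 0 vs H1: mu ~ N(0, tau^2)), not the default JZS t-test BF that JASP actually reports, and the data are made up for illustration; the idea of recomputing the BF as each observation arrives is the same.

    ```python
    import math

    def bf10_normal(xs, sigma=1.0, tau=1.0):
        """BF10 for H0: mu = 0 vs H1: mu ~ N(0, tau^2), given
        observations x_i ~ N(mu, sigma^2) with sigma assumed known.
        A toy stand-in for the default t-test BF."""
        n = len(xs)
        xbar = sum(xs) / n
        se2 = sigma ** 2 / n  # sampling variance of the mean

        def normpdf(x, var):
            return math.exp(-x ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

        # Marginal likelihood of the observed mean under each hypothesis
        return normpdf(xbar, se2 + tau ** 2) / normpdf(xbar, se2)

    # Recompute the BF after each added observation (hypothetical data)
    data = [0.8, 1.1, 0.4, 1.3, 0.9, 1.0, 0.7, 1.2]
    trajectory = [bf10_normal(data[:n]) for n in range(1, len(data) + 1)]
    ```

    Plotting `trajectory` against n gives the familiar sequential-analysis picture: the evidence typically accumulates as the sample grows, although it can fluctuate along the way.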

  • Thanks so much, so very helpful!

  • Hi EJ,

    I too am writing a paper in an area almost untouched by Bayesian stats, second language acquisition. You often see studies in this area that are underpowered and have uncorrected multiple comparisons; nonetheless, they have no problem drawing very definitive conclusions from comparisons with p values less than 0.05. I mainly want to use Bayes factors as a more appropriate and reflective measure of the strength of evidence for particular hypotheses. Now, I myself am going to make multiple comparisons from a relatively small dataset (n = 40, over 2 time points, analysed in about 10 separate repeated measures ANOVAs, as I can't do a Bayesian MANOVA in JASP yet).

    I would like to include a discussion of correction for multiple hypotheses. You mention that "there is another camp of people who feel that the evidence is just the evidence"; would it be possible to point me in the direction of that literature?



  • Hi Gareth,

    This is consistent with subjective Bayesianism. I know Dennis Lindley was of this opinion, for instance, but perhaps it is also mentioned in Edwards, Lindman, & Savage (1963). The idea is that as long as you have specified probabilities for the hypotheses you are planning to test, and they really reflect your belief, then Bayes' rule simply updates that knowledge, and it does not matter how many other hypotheses are in the mix.

    So in terms of fMRI, suppose researcher A measures 80,000 voxels and believes that each has a 50-50 shot of being active. Researcher B only measures 1 voxel and believes that it has a 50-50 shot of being active. Now suppose the overlapping voxel shows the same data; according to a subjective Bayesian, the inference for this voxel ought to be the same, regardless of the fact that A also measured 79,999 other voxels.

    Of course, multiple comparisons often signal a lack of prior conviction. Anyhow, the subjective opinion may also have been discussed briefly in the work on objective Bayesian solutions (e.g., Scott & Berger, 2006, 2010).

