Bayesian "multiple comparisons correction" with positively dependent tests?
Dear Bayesianists,
I would like to perform the equivalent of NHST's multiple comparisons correction using Bayesian statistics. My understanding from previous answers on this message board is that one could, for instance, choose prior odds that represent one's belief about how likely it is that any given test in the family shows an effect, i.e., that H_1 is true. These prior odds are then combined with the Bayes factors from the analysis to yield posterior probabilities. So far so good.
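To make that concrete, here is a minimal sketch of how I understand the recipe, with made-up Bayes factor values and assuming independent tests; the per-test prior probability is chosen so that the prior probability of at least one true effect in the whole family stays at a fixed value (that particular choice is my own assumption, not something from the earlier answers):

```python
import numpy as np

# Hypothetical Bayes factors (BF10) for a family of m tests -- made-up numbers.
bf10 = np.array([8.2, 1.3, 0.4, 15.0, 2.1])
m = len(bf10)

# Pick the per-test prior P(H1) so that, assuming independence, the prior
# probability of at least one true effect in the family equals 0.5.
family_prior = 0.5
p_h1 = 1.0 - (1.0 - family_prior) ** (1.0 / m)

prior_odds = p_h1 / (1.0 - p_h1)
posterior_odds = bf10 * prior_odds          # posterior odds = BF10 * prior odds
posterior_prob_h1 = posterior_odds / (1.0 + posterior_odds)

for bf, pp in zip(bf10, posterior_prob_h1):
    print(f"BF10 = {bf:5.1f}  ->  P(H1 | data) = {pp:.3f}")
```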
Now, in permutation-based NHST, one popular approach to controlling the family-wise error rate is to obtain permutation distributions of the statistic of interest for each test and, within each permutation, take the maximum across tests, which yields a distribution of this _maximum statistic_. Evaluating the observed statistics against this distribution gives FWE-corrected p values. The appeal of this approach is that it takes the positive dependence between the tests in the family into account. If the tests are independent, it behaves much like a classical FWE correction such as Bonferroni. But if, for example, I have one brain area and a neighboring brain area, the signals of the two will be somewhat correlated; the classical FWE correction would be too strict in this scenario, whereas the max-statistic method remains appropriate and provides more power. For reference: https://doi.org/10.1016/B978-012264841-0/50048-2
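For readers unfamiliar with it, here is a small sketch of the max-statistic idea on simulated data (sign-flipping permutations for a one-sample test across correlated "areas"; the data and effect size are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n subjects x k positively correlated "brain areas".
n, k = 20, 6
cov = 0.6 * np.ones((k, k)) + 0.4 * np.eye(k)
data = rng.multivariate_normal(np.zeros(k), cov, size=n)
data[:, 0] += 0.8  # inject a true effect in the first area

def t_stats(x):
    return x.mean(axis=0) / (x.std(axis=0, ddof=1) / np.sqrt(x.shape[0]))

observed = t_stats(data)

# Sign-flipping permutations; record the maximum |t| across areas in each
# permutation to build the null distribution of the maximum statistic.
n_perm = 5000
max_null = np.empty(n_perm)
for i in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=(n, 1))
    max_null[i] = np.abs(t_stats(data * signs)).max()

# FWE-corrected p value per area: proportion of permutations in which the
# maximum statistic exceeds that area's observed |t|.
p_fwe = (max_null[:, None] >= np.abs(observed)).mean(axis=0)
print(np.round(p_fwe, 4))
```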
How would the multiple comparisons correction described at the beginning have to be modified to take the positive dependence between the tests in a family into account? Or would another approach be better when working with Bayes factors?
Thanks and kind regards,
Michael
Comments
Hi Michael,
Good question. The "obvious" answer would be some sort of hierarchical modeling to account for the similarity. It would be a good grant proposal :-) But it seems to me that you would have to model the dependence explicitly.
Cheers,
E.J.
Thanks for your reply, EJ. I believe there would indeed be strong interest in a solution to this problem (in cognitive neuroscience for sure). With explicit modeling of the dependence, though, it sounds difficult to converge on a model that is generically applicable across a wide range of scenarios (and test families). The appeal of the permutation approach is of course its flexibility.
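Just to make the hierarchical idea a bit more concrete for myself, here is a toy partial-pooling sketch with made-up per-area effect estimates. It uses a crude empirical-Bayes shrinkage rather than a full Bayesian hierarchical model, so it is only meant to illustrate the kind of pooling you are hinting at:

```python
import numpy as np

# Hypothetical per-area effect estimates and standard errors -- made-up numbers.
y = np.array([0.9, 0.7, 0.1, -0.2, 0.8])       # observed effects per area
se = np.array([0.30, 0.30, 0.25, 0.30, 0.35])  # per-area standard errors

# Precision-weighted grand mean and a method-of-moments estimate of the
# between-area variance tau^2 (clipped at zero).
mu = np.average(y, weights=1.0 / se**2)
tau2 = max(np.var(y, ddof=1) - np.mean(se**2), 0.0)

# Partial pooling: each area is shrunk toward the common mean, more strongly
# when its own uncertainty is large relative to the between-area spread.
w = tau2 / (tau2 + se**2)
theta_shrunk = w * y + (1 - w) * mu
print(np.round(theta_shrunk, 3))
```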
My intuition tells me that an explicit dependence model would involve estimating a covariance matrix with a Wishart prior, or perhaps a multivariate approach like a Bayesian Hotelling's T² test with pairwise post-hoc tests, or similar.
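As a rough sketch of the covariance part, here is what a conjugate update of the covariance matrix could look like, using an inverse-Wishart prior (the conjugate choice when the mean is treated as known); the data, the prior parameters, and the assumption of a known zero mean are all just illustrative:

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(1)

# Toy data: n observations from k correlated areas, mean assumed known (zero).
n, k = 30, 3
true_cov = np.array([[1.0, 0.6, 0.3],
                     [0.6, 1.0, 0.5],
                     [0.3, 0.5, 1.0]])
x = rng.multivariate_normal(np.zeros(k), true_cov, size=n)

# Weakly informative inverse-Wishart prior on the covariance matrix.
nu0 = k + 2
psi0 = np.eye(k)

# Conjugate update with a known zero mean: the posterior is again inverse-Wishart.
S = x.T @ x  # scatter matrix around the known mean
posterior = invwishart(df=nu0 + n, scale=psi0 + S)

# Posterior draws of the covariance give posterior draws of the correlations --
# exactly the dependence structure a joint model of the test family would exploit.
draws = posterior.rvs(size=2000)
corr_12 = draws[:, 0, 1] / np.sqrt(draws[:, 0, 0] * draws[:, 1, 1])
print("posterior mean correlation between areas 1 and 2:", round(corr_12.mean(), 3))
```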
Exploring the possibilities is veeery tempting. My spirit animal, however, warned me in my dream that a part-time Bayesianist like myself had better be extremely careful before choosing to go down that path... 🐰
:-)