
# Calculating Cauchy prior.

Hi folks,

I've recently been using BayesFactor with the default prior scale r. I have been advised to adjust the Cauchy width based on some pilot data rather than relying on the default value.

Can anyone advise how I'd go about that? I already have the null-hypothesis t-tests and Cohen's d calculated, if that is useful.

Thanks for your time and help,

Boo.

• Dear Boo,

Are you using a t-test? If so, you could take a look at the following two papers:
1. Informed t-test (https://arxiv.org/abs/1704.02479)
2. Replication Bayes factors (https://psyarxiv.com/u8m2s/)

Cheers,
E.J.

• Hi EJ,

Yes, I'm using a t-test (specifically ttestBF). Thank you, I'll have a look at those papers now.

Thanks,

Boo.

• edited October 2018

Hi again EJ,

So having read through the papers you pointed me towards (thanks again), I'd like to run my understanding of them by you:

If I followed the text correctly, in order to run an informed ttestBF with the aid of pilot data (or data from an earlier experiment) I would need to:

1. calculate the BF for the pilot data (call that A);
2. calculate the BF for the pilot and new data combined in the same test (call that B);
3. divide the combined-data BF by the pilot-data BF, i.e. Informed_BF = B/A?

Is this correct, and if so, does that mean that I can keep the default prior scale r for both tests?

Thanks again,

Boo.

• Yes, that's correct, but note that for this approach to work you'd have to assume that other parameters (means and variances) are the same across experiments. If that's not the case, you could simply run a first analysis, get the posterior for effect size, and then use that as a prior for the analysis of the second study (either using Josine's code or by specifying an informed prior in JASP).
Cheers,
E.J.
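
A concrete sketch of the recipe EJ confirms above, in Python. The `jzs_bf10` function below is a minimal, standard-library re-implementation of the default one-sample JZS Bayes factor (the same prior family `ttestBF` uses; Rouder et al., 2009). The t statistics and sample sizes are hypothetical placeholders; in a real analysis you would simply call `ttestBF` in R on each data set and divide the resulting BayesFactor objects.

```python
import math

def jzs_bf10(t, n, r=math.sqrt(2) / 2):
    """Default-prior (JZS) BF10 for a one-sample t-test: Cauchy(0, r)
    prior on effect size, integrated out via g ~ InverseGamma(1/2, r^2/2).
    Midpoint-rule quadrature using the substitution g = u / (1 - u)."""
    nu = n - 1

    def integrand(g):
        prior = r / math.sqrt(2 * math.pi) * g ** -1.5 * math.exp(-r * r / (2 * g))
        marginal = ((1 + n * g) ** -0.5 *
                    (1 + t * t / ((1 + n * g) * nu)) ** (-(nu + 1) / 2))
        return prior * marginal

    m = 20000  # number of quadrature points
    h = 1.0 / m
    numerator = h * sum(integrand(u / (1 - u)) / (1 - u) ** 2
                        for u in ((k + 0.5) * h for k in range(m)))
    denominator = (1 + t * t / nu) ** (-(nu + 1) / 2)
    return numerator / denominator

# Hypothetical summary statistics (substitute your own):
bf_pilot = jzs_bf10(t=2.5, n=20)      # A: pilot data alone
bf_combined = jzs_bf10(t=3.8, n=50)   # B: pilot + new data pooled
informed_bf = bf_combined / bf_pilot  # the replication BF, B / A
```

Because both BFs use the same default prior, the ratio B/A isolates the evidence contributed by the new data, given the pilot data.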

• Hi EJ,

That's great, thanks. When you say that the means and variances should be "the same", do you mean they should be in the same ballpark, i.e. that the effect is similarly reflected in both sets of data?

Also, I have one other question about the method: should this approach be nested when doing multiple replications? To clarify what I mean: say I calculate 'Informed_BF' as above. Should I then use that BF when calculating the informed BF for a hypothetical third data set, i.e. as the divisor for the new combined-data BF? Or should I use the original pilot BF (and its associated data set), or the unadjusted BF for the second data set?

I'm guessing 'Informed_BF' is the value the combined-data BF should be divided by in this hypothetical situation, but I'm not 100% sure.

Thanks,

Boo.

• Hi Boo,

About the means and variances: the data-generating process should be the same; it's OK for the sample estimates to fluctuate.

With respect to multiple replications, I think it is conceptually cleanest to compute the Replication BF separately for each replication. But what makes sense depends on your purpose. With many replications you could consider a Bayesian meta-analysis (as described, for instance, here: https://osf.io/preprints/psyarxiv/9z8ch/)

Cheers,
E.J.

• Hi EJ,

I understand now. I generate the data in the same way, so it looks like I'm good to go.

Thanks for your answer, but I suspect my question was unintentionally ambiguous (or I didn't fully grasp your response).

I have pilot data, Experiment 1 (which replicates the conditions of interest from the pilot data), and Experiment 2 (which replicates the same conditions of interest from Experiment 1). What I was wondering was: Since I calculate the informed BF in Experiment 1 via the method you pointed towards, should I then use that informed BF when applying the same method to calculate the new informed BF for Experiment 2?

Sorry about all the questions. You've been a great help and it's very much appreciated.

Boo.

• Hi Boo,

Sorry for my tardy reply. So you have used your pilot data to come up with a more informed prior. Note that this assumes you are confident that H1 holds in your pilot data set; you are, after all, trying to construct a prior on effect size under H1. So if the pilot data really come from H0, it would be a mistake to use them to define the prior under H1.

So you have a prior distribution after the pilot data, and you use it to compute an informed BF for Experiment 1. Now you can do two things: either use the same pilot-based prior for the analysis of Experiment 2, or keep updating. There are arguments for either procedure, but the traditional Bayesian way is to adjust the prior again (after seeing Experiment 1) and then use that for a test of the data from Experiment 2.

Cheers,
E.J.
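
The "keep updating" option can be illustrated with a deliberately simplified sketch: approximate each study's effect-size estimate by a normal likelihood summary and apply a conjugate, precision-weighted update, so that each posterior becomes the next prior. This is a toy stand-in for the actual informed-prior machinery (e.g. the informed t-test linked earlier in this thread), and all numbers below are hypothetical.

```python
import math

def normal_update(mu_prior, sd_prior, mu_data, sd_data):
    """Conjugate update of a normal prior on effect size by a normal
    likelihood summary (precision-weighted average of the two means)."""
    w_prior = 1.0 / sd_prior ** 2
    w_data = 1.0 / sd_data ** 2
    mu_post = (w_prior * mu_prior + w_data * mu_data) / (w_prior + w_data)
    sd_post = math.sqrt(1.0 / (w_prior + w_data))
    return mu_post, sd_post

# Hypothetical effect-size (delta) estimates with standard errors:
mu1, sd1 = normal_update(0.4, 0.3, 0.5, 0.2)    # pilot prior + Exp 1 data
mu2, sd2 = normal_update(mu1, sd1, 0.45, 0.15)  # updated again with Exp 2 data
```

Each update narrows the prior (the posterior sd shrinks), which is what "adjusting the prior again after seeing Experiment 1" amounts to.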

• Hi again E.J.,

I've rerun my BF t-test using the pilot data and the new data from Experiment 1 that followed. Now I'm moving on to calculating the adjusted BFs using the same method for Experiment 2, but I've hit a wall in my understanding.

After our previous exchange, I plan to take the adjusted BF from Experiment 1 and apply the same method to calculate the new informed BF for Experiment 2. Here's my question: when doing so, should I combine only the Experiment 1 data with the Experiment 2 data, or should I combine the pilot and Experiment 1 data with the Experiment 2 data when applying the evidence-updating method?

Sorry for pestering you about this, I just want to get it right.

Thanks,

Boo.

• Hi Boo,

I gather that you used the pilot data for the BF t-test for Experiment 1. If you use the updating method, then you ought to use the knowledge after Experiment 1 for the analysis of Experiment 2. This knowledge includes the pilot data. So it would be incorrect to suddenly drop the pilot data.

Cheers,
E.J.
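
In BF bookkeeping, EJ's point looks like the following. The BF10 values are hypothetical placeholders for what default-prior t-tests on the cumulative (pooled) data sets might return; the key line is that the divisor for Experiment 2 retains the pilot data.

```python
# Hypothetical BF10 values from t-tests on the cumulative data sets:
bf_pilot = 3.0         # pilot only
bf_pilot_e1 = 12.0     # pilot + Experiment 1 pooled
bf_pilot_e1_e2 = 60.0  # pilot + Experiments 1 and 2 pooled

# Evidence contributed by each new experiment, given all data before it:
bf_rep_e1 = bf_pilot_e1 / bf_pilot        # Exp 1 against pilot knowledge
bf_rep_e2 = bf_pilot_e1_e2 / bf_pilot_e1  # Exp 2 against pilot + Exp 1,
                                          # NOT against Exp 1 alone

# The stepwise BFs multiply back to the total evidence:
total = bf_pilot * bf_rep_e1 * bf_rep_e2
```

Dropping the pilot data from the Experiment 2 divisor would break this chain: the stepwise BFs would no longer multiply to the BF for all data combined.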

• Hi EJ,

That makes sense when you put it like that!

Thanks for all your help,

Boo.

• Hi again EJ,

I've had another thought...(I'm sure you're delighted to hear that!).

As per our last exchange, I understand that we always carry prior data forward and accumulate it across experiments, so that the evidence increases after each experiment when dealing with the same conditions, and we adjust the BF accordingly.

Here's a case that I'm considering at the moment:

I have pilot data, Experiment 1 data, and Experiment 2 data for condition A;

I also have data for a new condition B in Experiment 2;

I want to compare condition A to condition B. Is it safe and logical to take all the available evidence for condition A (namely the data from the pilot, Experiment 1, and Experiment 2) and compare that to the data for condition B in Experiment 2?

Obviously there'll be no adjustment of a BF here, since we don't have a previous BF to work with, but is this approach sound for calculating the BF for condition B? It would mean running an unpaired BF t-test.

Thanks as always for your time,

Boo.

• Hi Boo,

If the data are exchangeable between pilot, Exp1, and Exp2 (a big if!), then you can just label all of that data as "condition A" and compare it to "condition B" (for an unbalanced test, but that's OK).

It is difficult to do things otherwise, since for the unpaired t-test the test-relevant prior is on the standardized difference between the conditions. If you put priors on the raw means, then it's much easier; this is how Zoltan Dienes does it.

Cheers,
E.J.
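
A sketch of that pooled, unbalanced comparison with hypothetical data (in practice this would be a single unpaired `ttestBF` call with all condition-A observations against condition B):

```python
import math
import statistics as st

# Hypothetical observations: condition A pooled across pilot, Exp 1 and
# Exp 2; condition B measured only in Experiment 2 (unbalanced design).
cond_a = [5, 6, 7, 5, 6, 7, 5, 6, 7, 6]
cond_b = [5, 4, 6, 5, 4]
na, nb = len(cond_a), len(cond_b)

# Pooled-variance two-sample t statistic:
sp2 = ((na - 1) * st.variance(cond_a) +
       (nb - 1) * st.variance(cond_b)) / (na + nb - 2)
t_stat = (st.mean(cond_a) - st.mean(cond_b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# For the two-sample JZS BF, the effective sample size entering the
# prior integration is na * nb / (na + nb) (Rouder et al., 2009); the
# prior itself is on the standardized difference, as EJ notes above.
n_eff = na * nb / (na + nb)
```

The unbalanced group sizes are handled naturally here; exchangeability of the condition-A data across pilot, Exp 1, and Exp 2 is the assumption doing the work.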