Sorry, somehow I posted my previous message before I finished. Let me start again:

I am trying to show that there is a negligible correlation between two variables using the correlationBF function. I am using an interval hypothesis on (-.1, .1). I have been playing around with the rscale argument, but I am confused by its effect on the results. For example, here are my BFs for a couple of conditions:

No interval:

rscale = .1: BF = 1.9

rscale = .5: BF = 3.8

Null interval (-.1, .1):

rscale = .1: BF = 2.8

rscale = .5: BF = 6.5

I understand why the BF goes up when I use an interval: I have increased the area of the prior distribution that is included in the definition of a negligible relationship.

However, I don't understand why the BF (with a negligible correlation as the numerator: rho = 0 with no interval, or -.1 < rho < .1 for the (-.1, .1) interval) gets larger as rscale increases. I thought that as rscale increases, the prior distribution gets wider (but stays centred on rho = 0). If the prior distribution gets wider, then correlations different from 0 should become more likely.
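To check my understanding of the prior, I looked at its density at a few points. This is only a sketch: it assumes the stretched beta(1/k, 1/k) parameterization on (-1, 1) that I believe correlationBF uses for rscale = k.

```r
# Density of the (assumed) stretched beta prior on rho:
# rho = 2b - 1 with b ~ Beta(1/k, 1/k), where k is the width (rscale).
prior_density <- function(rho, k) {
  dbeta((rho + 1) / 2, 1 / k, 1 / k) / 2
}

prior_density(0, k = 1)    # width 1: uniform on (-1, 1), density 0.5
prior_density(0, k = .1)   # width .1: much taller spike at rho = 0 (about 1.76)
prior_density(.5, k = .1)  # width .1: far less mass out at rho = .5
```

So a smaller rscale does concentrate the prior mass near rho = 0, as I expected.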

Thanks for any help you can provide.

Here is an example of the code I am using:

```r
library(BayesFactor)

n <- 50
x <- rnorm(n)
y <- rnorm(n)

# No interval, rscale = .5:
correlationBF(x, y, rscale = .5, posterior = FALSE)

# Interval (-.1, .1), rscale = .5:
bf <- correlationBF(x, y, nullInterval = c(-.1, .1), rscale = .5, posterior = FALSE)
bf[1] / bf[2]
```

As an aside, my confusion also extends to analyses in JASP (but now with no interval). If I set the "stretched beta prior width" to 1, the BF(01) (for two random variables, rho = 0) is about 6, but if I set the "stretched beta prior width" to .1 (a much narrower prior, still centred over rho = 0) the BF(01) is reduced to about 2. Shouldn't the narrower prior result in greater support for the null (rho = 0) hypothesis?

I am clearly just misunderstanding the effect of the width of the prior here ... any help would be greatly appreciated.

The BF compares the predictive adequacy of two hypotheses:

1. H0, which says that the correlations in the sample are expected to be modest and near zero.

2. H1, which, when the width is small, also says that the correlations in the sample are expected to be modest and near zero.

So when you decrease the width, the predictions of H1 become increasingly similar to those from H0, and consequently the data become nondiagnostic and the BF approaches 1 regardless of the data.
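To make this concrete, here is a minimal numerical sketch of that behaviour. It is not what BayesFactor computes internally; it simply assumes unit-variance bivariate normal data and a stretched beta(1/k, 1/k) prior on rho, and approximates BF01 by integrating the likelihood over a grid:

```r
# Approximate BF01 = p(data | rho = 0) / p(data | H1) on a grid,
# with H1: rho ~ stretched beta(1/k, 1/k) on (-1, 1).
loglik <- function(rho, x, y) {
  sum(-log(2 * pi) - 0.5 * log(1 - rho^2) -
        (x^2 - 2 * rho * x * y + y^2) / (2 * (1 - rho^2)))
}

bf01 <- function(x, y, k) {
  rho   <- seq(-0.999, 0.999, length.out = 2001)
  prior <- dbeta((rho + 1) / 2, 1 / k, 1 / k) / 2
  ll    <- sapply(rho, loglik, x = x, y = y)
  m     <- max(ll)  # shift to the log scale's maximum to avoid underflow
  marg1 <- sum(exp(ll - m) * prior) * (rho[2] - rho[1])
  exp(loglik(0, x, y) - m) / marg1
}

set.seed(1)
x <- rnorm(50); y <- rnorm(50)
bf01(x, y, k = 1)    # wide prior: H0 and H1 make clearly different predictions
bf01(x, y, k = .01)  # narrow prior: BF01 much closer to 1, data nondiagnostic
```

As k shrinks, H1's prior piles up at rho = 0, its predictions converge on H0's, and the ratio heads to 1.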

Cheers,

E.J.

Thanks E.J., that makes a lot of sense.

Is it fair to say, then, that priors have a very different role in Bayes Factor analyses (as opposed to a Bayes estimation problem), as they appear to essentially "set" H1 in BF analyses?

Let's say my goal is to show that there is a negligible correlation (that actually is my goal, so it is no stretch). In a Bayes estimation setting, using a narrower prior provides more support for my research hypothesis by narrowing the HDI, but in a Bayes Factor setting, using a narrower prior leads to less support for my hypothesis (a lower BF01), since it results in H0 and H1 proposing similar hypotheses.

Again, thanks a lot for taking the time to reply and help me with this novice issue.

Rob

Yes, that's correct, and an interesting difference between estimation and testing.

E.J.

Thanks ... much appreciated E.J.