MSB
Comments

If you're using R, you can get a posterior CI based on the null + two-tailed + one-tailed models (all weighted by the BF). You can read more about why you should do this here: https://doi.org/10.31234/osf.io/h6pr8 Here is an example using BayesFacto…
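For instance, a minimal sketch of such a BF-weighted ("model-averaged") CI using BayesFactor's built-in sleep data. The data, the particular model set, and the equal-prior-odds weighting are illustrative assumptions, not necessarily the exact approach from the preprint:

```r
library(BayesFactor)

set.seed(1)
x <- sleep$extra[sleep$group == 1]
y <- sleep$extra[sleep$group == 2]

bf2 <- ttestBF(x, y)                                # two-tailed alternative vs point null
bf1 <- ttestBF(x, y, nullInterval = c(-Inf, 0))[1]  # one-tailed alternative vs point null

# Posterior model probabilities (equal prior odds; the null's BF against itself is 1)
bfs <- c(null = 1, two = extractBF(bf2)$bf, one = extractBF(bf1)$bf)
pp  <- bfs / sum(bfs)

# Sample delta from each model in proportion to its posterior probability;
# under the point null, delta is exactly 0
n  <- 10000
ns <- round(n * pp)
d  <- c(rep(0, ns["null"]),
        posterior(bf2, iterations = max(ns["two"], 2))[, "delta"],
        posterior(bf1, iterations = max(ns["one"], 2))[, "delta"])

quantile(d, c(.025, .975))  # BF-weighted 95% posterior CI for delta
```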

It sounds like you are setting a prior post hoc (the null interval is a type of prior, on the null). This is ill-advised. If you don't specify the null interval, a point null of 0 is used.
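To illustrate with BayesFactor's built-in sleep data (an illustrative sketch, not the poster's actual analysis), the null interval, if used at all, should be fixed a priori:

```r
library(BayesFactor)

x <- sleep$extra[sleep$group == 1]
y <- sleep$extra[sleep$group == 2]

# Default: a point null at delta = 0
ttestBF(x, y)

# An interval null (e.g., "effects smaller than |0.1| are negligible") --
# this choice must be justified BEFORE seeing the results:
ttestBF(x, y, nullInterval = c(-0.1, 0.1))
```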

The interval is on the scale of Cohen's d, the standardized difference.

I don't think so, because the second model's parameters are not the same as the parameters of the first (it has one more). But @EJ would probably know best (:

Yes, this should generally work for any K datasets from replications, but note that all data must come from exact replications for this kind of analysis to make sense. Good luck!

library(BayesFactor)
# say you have 2 data sets
iris_1 <- iris[1:75, ]
iris_2 <- iris[-(1:75), ]
# To get a replication BF you need:
## 1. BF of the first data set
BF1 <- lmBF(Sepal.Length ~ Sepal.Width + Petal.Length, data = iris_…

Of the two options, I would suggest BF incl; make sure to mark "compare across matched models". (See explanation here.) You can also check out a reporting guideline for BF incl I wrote for my students here. (Instead of citing bayestestR…
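For example, a sketch with BayesFactor's built-in puzzles data (assuming the bayestestR API, where `match_models = TRUE` mirrors JASP's "compare across matched models" option):

```r
library(BayesFactor)
library(bayestestR)

data(puzzles)
bfs <- anovaBF(RT ~ shape * color + ID, data = puzzles,
               whichRandom = "ID", progress = FALSE)

# Inclusion BFs across matched models, as in JASP
bayesfactor_inclusion(bfs, match_models = TRUE)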

Jeff has addressed this issue in this recent presentation: https://www.youtube.com/watch?v=PzHcwS3xbZ8

Use extractBF with `logbf = FALSE`:
library(BayesFactor)
data(attitude)
output <- regressionBF(rating ~ complaints + privileges, data = attitude, progress = FALSE)
output
#> Bayes factor analysis
#> --------------
#…

http://forum.cogsci.nl/uploads/842/2Q4J9C8L28XK.png Same r, different BF

Adding to EJ: on the syntax level, you would also need to specify "ID" in the formula itself: lmBF(Weight ~ height + ID, data = test, whichRandom = "ID")
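A self-contained sketch of this call (the `test` data here is simulated, since the original data frame isn't shown in the thread):

```r
library(BayesFactor)

# Simulated stand-in for the "test" data: 10 subjects, 5 observations each
set.seed(1)
test <- data.frame(ID = factor(rep(1:10, each = 5)),
                   height = rnorm(50, 170, 10))
test$Weight <- 0.4 * test$height + rnorm(50, rep(rnorm(10, 0, 5), each = 5), 2)

# "ID" appears both in the formula and in whichRandom
lmBF(Weight ~ height + ID, data = test, whichRandom = "ID")
```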

The differences between the BFs for "salience" might be explained as stemming from the fact that the other models that also include "salience" are much better than the models that do not include "salience" (which is the defin…

Thanks. I've since reached out to Jeff Rouder; I will update here if I hear from him. Thanks again and happy holidays!

Generally, BFs are supposed to converge to the "truth" as N increases. So it depends which "truth" your betas are closer to, I guess. For determining N, you might also be interested in https://github.com/nicebread/BFDA

You can use as.vector():
library(BayesFactor)
data(puzzles)
result <- anovaBF(RT ~ shape*color + ID, data = puzzles, whichRandom = "ID", progress = FALSE)
result
#> Bayes factor analysis
#> --------------
#> [1] sha…

Hi @EJ, any news on this?

I don't think this should matter. But, upon further reflection, the lme4 formula should be: percent_looking ~ Book * Condition + (1 + Book | Trial:Subject) + (1 + Book + ... | Subject) to account for the fact that the trials are nest…

Hmmm... Given your data and design, probably the most correct analysis would be a multinomial logistic regression... But let's stick to an ANOVA-like design. It seems %A and %B are (negatively) dependent. You can deal with this dependence in two way…

How dependent are A and B? If they are completely dependent (say 100% gaze = GazeA + GazeB), then there is no need to put both measurements into the model: the intercept will give an indication for both, and the main effect for condition (X/Y) will actually be …
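A minimal sketch of that point with simulated data (the names and numbers are made up): when GazeA + GazeB = 100%, modeling one of them carries all the information.

```r
library(BayesFactor)

set.seed(1)
d <- data.frame(condition = factor(rep(c("X", "Y"), each = 30)),
                gazeA = c(rnorm(30, 60, 5), rnorm(30, 45, 5)))
d$gazeB <- 100 - d$gazeA  # complete (negative) dependence

# Modeling gazeA alone is enough; evidence for a condition effect on gazeA
# is automatically evidence for the mirror-image effect on gazeB
ttestBF(formula = gazeA ~ condition, data = d)
```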

Hi Gabriel, Until the JASP R package is available 😅, you can use bayestestR::bayesfactor_inclusion() (gives the same results as JASP). Mattan

Everything seems like it should work... Sorry I couldn't be of more help...

And the JASP data format is in the wide format? Very weird indeed...

JASP uses BayesFactor under the hood, so they should produce the same results... The only thing I can think of is if the data is from a repeated-measures design, and the ID random intercept is misspecified somewhere..

Hi Lior, Bayes factors are the ratio of P(Data|M) (the likelihood), not P(M|Data) (the posterior probability of the model). Hope that helps!

Hi Flaihai, If you are interested in accounting for knowledge gained in the first study, you can use a replication Bayes factor (for a simple application you can read DOI: 10.3758/s13428-018-1092-x), assuming this is a direct / exact replication. Goo…

It would seem that the matrix is organized with rows as the numerator and columns as the denominator (@EJ this does seem counterintuitive...), so 31735.222 represents how much more likely H1 is compared to H2. You can also see that the PMP (Posterior…

What you want is to order-restrict your H1 model. This can be done in one of the following ways: The newest version of JASP has a BAIN module that seems to do just this (but it is in beta) with "model constraints". More info here >> Write some custom …

But maybe replace order of magnitude in base 10 with order of magnitude in base e (which basically gives the classical cutoffs for BFs)..

> but still human

I resent your assumptions, sir! But yes, you are right; it would have affected my interpretation somewhat 😞 (But that is a change of ×4; what you described was a change of <×2, no?)

As you mention, these fluctuations are minor (I would say that a change of less than an order of magnitude is not substantial for a BF, which is a ratio, after all). What to do? Tread lightly; I think it is reasonable to explain in the ms that due to instabi…