MSB
About
 Username: MSB
 Visits: 288
 Roles: Member
Comments

No, please see here: https://easystats.github.io/effectsize/reference/rank_epsilon_squared.html https://easystats.github.io/effectsize/articles/anovaES.html#forordinaloutcomes

Two things are going on: JASP is using either Satterthwaite or Kenward-Roger estimates of degrees of freedom, while MATLAB and lme4 use residual dfs or no dfs (in R you can use lmerTest instead to also get Satterthwaite / Kenward-Roger dfs). JAS…

You might be interested in the effectsize package for both these issues: https://easystats.github.io/effectsize/articles/bayesian_models.html#eta21 https://easystats.github.io/effectsize/articles/simple_htests.html#differenceinranks

You can do this with the R package effectsize : https://easystats.github.io/effectsize/articles/simple_htests.html#rankbasedtests1
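For instance, a minimal sketch using effectsize's rank_biserial() (the data below are made-up numbers, just for illustration):

```r
library(effectsize)

# two small illustrative samples (invented values)
x <- c(1.1, 2.3, 3.2, 4.5, 5.1, 6.0)
y <- c(0.8, 1.5, 2.1, 2.9, 3.3, 3.6)

# rank-biserial correlation: the effect size matching a Mann-Whitney / Wilcoxon test
rank_biserial(x, y)
```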

Awesome! I am the package maintainer of effectsize so feel free to contact me on https://github.com/easystats/effectsize/issues

Just like any correlation: 0 means no difference, and non-zero values indicate the size and direction: positive values mean that the first group tends to be larger than the second; negative values mean that the second group tends to be larger than …

For ANOVAs, you can calculate partial Eta squared from the F statistic. See here: https://easystats.github.io/effectsize/articles/from_test_statistics.html Calculator: https://easystats4u.shinyapps.io/statistic2effectsize/
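As a sketch of that conversion, effectsize's F_to_eta2() takes a reported F and its degrees of freedom (the numbers below are invented):

```r
library(effectsize)

# partial eta squared from a reported F(2, 57) = 4.50 (made-up values)
F_to_eta2(f = 4.50, df = 2, df_error = 57)
```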

In R, all the follow-up analyses are done with the {emmeans} package. Contrasts with emmeans::contrast Simple effects with emmeans::joint_tests Etc...
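A minimal sketch of both, assuming a simple two-way ANOVA on the built-in npk data:

```r
library(emmeans)

# two-way ANOVA on the built-in npk data
m <- aov(yield ~ N * P, data = npk)

# pairwise contrasts of N within each level of P
emm <- emmeans(m, ~ N | P)
contrast(emm, method = "pairwise")

# simple effects: F test of each factor within levels of the other
joint_tests(m, by = "P")
```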

AFAIK there is no effect size for the Bayesian Mann-Whitney test; the W is the test statistic. In the non-Bayesian version, the rank-biserial correlation is indeed given.

AFAIK JASP doesn't support adding random slopes (to indicate that the effects are RM). If you work in R, you can do this with the lmerTest package.
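A sketch with the sleepstudy data that ships with lme4; the random slope for Days is what marks the effect as within-subject (repeated measures):

```r
library(lmerTest)  # wraps lme4::lmer, adding Satterthwaite dfs

data("sleepstudy", package = "lme4")

# random intercept and random slope for Days, per Subject
m <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy)

anova(m)  # F tests with Satterthwaite degrees of freedom
```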

This type of model can be built as a linear mixed model; rmANOVA does not support it.

Definitely possible: Bayes factors are ratios, and they can be arbitrarily larger than 1, or arbitrarily smaller than 1. E.g.: <code> library(BayesFactor) BF <- generalTestBF(len ~ supp * dose, ToothGrowth) bayestestR::bayesfactor_in…

If you're using R, you can get a posterior CI based on the null + two tailed + one tailed (all weighted based on the BF). You can read more about why you should do this here: https://doi.org/10.31234/osf.io/h6pr8 Here is an example using BayesFacto…

It sounds like you are post-hoc setting a prior (the null interval is a type of prior, on the null). This is ill-advised. If you don't specify the null interval, a point null of 0 is used.

The interval is on the scale of Cohen's d, the standardized difference.

I don't think so, because the second model's parameters are not the same as the parameters of the first (it has one more). But @EJ would probably know best (:

Yes, this should generally work for any K datasets from replications, but note that all data must be from exact replications for this kind of analysis to make sense. Good luck!

<code>
library(BayesFactor)

# say you have 2 data sets
iris_1 <- iris[1:75, ]
iris_2 <- iris[-(1:75), ]

# To get a replication BF you need:
## 1. BF of the first data set
BF1 <- lmBF(Sepal.Length ~ Sepal.Width + Petal.Length, data = iris_…

Of the two options, I would suggest BF incl; make sure to mark "compare across matched models". (See explanation here.) You can also check out a reporting guideline for BF incl I wrote for my students here. (Instead of citing bayestestR…

Jeff has addressed this issue in this recent presentation: https://www.youtube.com/watch?v=PzHcwS3xbZ8

Use extractBF with `logbf = FALSE`:

<code>
library(BayesFactor)
data(attitude)
output <- regressionBF(rating ~ complaints + privileges, data = attitude, progress = FALSE)
output
#> Bayes factor analysis
#> …

http://forum.cogsci.nl/uploads/842/2Q4J9C8L28XK.png Same r, different BF

Adding to EJ, on a syntax level, you would need to also specify "ID" in the formula itself: lmBF(Weight ~ height + ID, data = test, whichRandom = "ID")

The differences between the BFs for "salience" might be explained as stemming from the fact that the other models that also include "salience" are much better than the models that do not include "salience" (which is the defin…

Thanks. I've since reached out to Jeff Rouder  I will update here if I hear from him. Thanks again and happy holidays!

Generally, BFs are supposed to converge to the "truth" as N increases. So it depends on what "truth" your betas are closer to, I guess. For determining N, you might also be interested in https://github.com/nicebread/BFDA

You can use as.vector():

<code>
library(BayesFactor)
data(puzzles)
result <- anovaBF(RT ~ shape*color + ID, data = puzzles, whichRandom = "ID", progress = FALSE)
result
#> Bayes factor analysis
#> [1] sha…

Hi @EJ, any news on this?

I don't think this should matter. But, upon further reflection, the lme4 formula should be: percent_looking ~ Book * Condition + (1 + Book | Trial:Subject) + (1 + Book + ... | Subject) to account for the fact that the trials are nest…

Hmmm... Given your data and design, probably the most correct analysis would be a multinomial logistic regression... But let's stick to an ANOVA-like design. It seems %A and %B are dependent (negatively). You can deal with this dependence in two way…
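As a rough sketch of the multinomial route using nnet::multinom (the variable names and data here are hypothetical, just to show the call shape):

```r
library(nnet)

# hypothetical data: each trial yields a choice of A, B, or C
set.seed(1)
dat <- data.frame(
  choice    = factor(sample(c("A", "B", "C"), 90, replace = TRUE)),
  condition = factor(rep(c("control", "treatment"), each = 45))
)

# multinomial logistic regression of choice on condition
m <- multinom(choice ~ condition, data = dat, trace = FALSE)
summary(m)
```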