
MSB
Visits: 288 · Role: Member

• No, please see here: https://easystats.github.io/effectsize/reference/rank_epsilon_squared.html and https://easystats.github.io/effectsize/articles/anovaES.html#for-ordinal-outcomes
Comment by MSB November 2022
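A minimal sketch of what the linked rank-epsilon-squared reference describes, using the effectsize package (the iris data and variables are illustrative, not from the original thread):

```r
# Sketch: rank epsilon squared as an effect size for an ordinal /
# rank-based (Kruskal-Wallis-style) one-way design, via effectsize.
library(effectsize)

# How much of the rank variance in Sepal.Width is associated with Species?
es <- rank_epsilon_squared(Sepal.Width ~ Species, data = iris)
es
```

Like other rank-based effect sizes in effectsize, the estimate is bounded between 0 and 1.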
• Two things are going on: JASP is using either Satterthwaite or Kenward-Rogers estimates of degrees of freedom, while MATLAB and lme4 use residual dfs or no dfs (in R you can use lmerTest instead to also get Satterthwaite / Kenward-Rogers dfs). JAS…
Comment by MSB September 2021
• You might be interested in the effectsize package for both these issues: https://easystats.github.io/effectsize/articles/bayesian_models.html#eta2-1 https://easystats.github.io/effectsize/articles/simple_htests.html#difference-in-ranks
Comment by MSB July 2021
• You can do this with the R package effectsize : https://easystats.github.io/effectsize/articles/simple_htests.html#rank-based-tests-1
Comment by MSB April 2021
• Awesome! I am the package maintainer of effectsize so feel free to contact me on https://github.com/easystats/effectsize/issues
Comment by MSB February 2021
• Just like any correlation: 0 means no difference, and non-zero values indicate the size and direction: positive values mean that the first group tends to be larger than the second; negative values mean that the second group tends to be larger than …
Comment by MSB February 2021
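The sign convention above can be sketched with effectsize::rank_biserial (the toy vectors are illustrative):

```r
# Sketch: the rank-biserial correlation's sign follows the direction of
# the difference, as described above.
library(effectsize)

x <- c(7, 8, 9)  # first group: every value larger
y <- c(1, 2, 3)  # second group: every value smaller

rb <- rank_biserial(x, y)
rb$r_rank_biserial  # +1 here: the first group is always larger
```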
• For ANOVAs, you can calculate partial Eta squared from the F statistic. See here: https://easystats.github.io/effectsize/articles/from_test_statistics.html Calculator: https://easystats4u.shinyapps.io/statistic2effectsize/
Comment by MSB February 2021
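A short sketch of the conversion the linked from_test_statistics vignette covers (the F value and dfs here are made up for illustration):

```r
# Sketch: partial eta squared from an F statistic,
# eta2_p = (F * df) / (F * df + df_error).
library(effectsize)

# e.g. F(1, 50) = 9:
F_to_eta2(f = 9, df = 1, df_error = 50)
```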
• In R, all the follow-up analyses are done with the {emmeans} package: contrasts with emmeans::contrast, simple effects with emmeans::joint_tests, etc.
Comment by MSB February 2021
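The follow-ups mentioned above can be sketched like this (the warpbreaks model is illustrative, not from the original thread):

```r
# Sketch: emmeans follow-up analyses after a factorial ANOVA-style model.
library(emmeans)

m <- lm(breaks ~ wool * tension, data = warpbreaks)

emm <- emmeans(m, ~ tension)        # estimated marginal means
contrast(emm, method = "pairwise")  # pairwise contrasts
joint_tests(m)                      # simple-effects-style joint tests
```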
• AFAIK there is no effect size for the Bayesian Mann-Whitney test - the W is the test statistic. In the non-Bayesian test, the rank-biserial correlation is indeed given.
Comment by MSB January 2021
• AFAIK JASP doesn't support adding random slopes (to indicate that the effects are RM). If you work in R, you can do this with the lmerTest package.
Comment by MSB December 2020
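A minimal sketch of the random-slopes model suggested above, using lmerTest (the sleepstudy data is just an example):

```r
# Sketch: a repeated-measures model with a random slope in lmerTest.
library(lmerTest)

# Days varies within Subject, so it gets a random slope:
m <- lmer(Reaction ~ Days + (1 + Days | Subject), data = lme4::sleepstudy)
summary(m)  # fixed effects get Satterthwaite dfs
```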
• This type of model can be built as a linear mixed model - rmANOVA does not support it.
Comment by MSB December 2020
• Definitely possible - Bayes factors are ratios, and they can be arbitrarily larger than 1, or arbitrarily smaller than 1. E.g.: library(BayesFactor); BF <- generalTestBF(len ~ supp * dose, ToothGrowth); bayestestR::bayesfactor_in…
Comment by MSB October 2020
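The ratio point above can be sketched with a simple Bayes factor from the BayesFactor package (the base-R sleep data is illustrative):

```r
# Sketch: a Bayes factor is a ratio, so BF10 and its inverse
# BF01 = 1 / BF10 describe the same evidence from opposite directions.
library(BayesFactor)

bf <- ttestBF(x = sleep$extra[sleep$group == "1"],
              y = sleep$extra[sleep$group == "2"])

bf10 <- extractBF(bf)$bf
c(BF10 = bf10, BF01 = 1 / bf10)
```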
• If you're using R, you can get a posterior CI based on the null + two tailed + one tailed (all weighted based on the BF). You can read more about why you should do this here: https://doi.org/10.31234/osf.io/h6pr8 Here is an example using BayesFacto…
Comment by MSB July 2020
• It sounds like you are post-hoc setting a prior (the null interval is a type of prior, on the null). This is ill-advised. If you don't specify the null interval, a point null of 0 is used.
Comment by MSB July 2020
• The interval is on the scale of Cohen's d - the standardized difference.
Comment by MSB July 2020
• I don't think so - because the second model's parameters are not the same as the parameters of the first (it has one more). But @EJ would probably know best (:
Comment by MSB May 2020
• Yes, this should generally work for any K datasets from replications - but note that all data must be from exact replications for this kind of analysis to make sense. Good luck!
Comment by MSB May 2020
• library(BayesFactor) # say you have 2 data sets: iris_1 <- iris[1:75, ]; iris_2 <- iris[-(1:75), ] # To get a replication BF you need: ## 1. BF of the first data set: BF1 <- lmBF(Sepal.Length ~ Sepal.Width + Petal.Length, data = iris_…
Comment by MSB May 2020
• From the two options, I would suggest BF incl - make sure to mark "compare across matched models". (See explanation here.) You can also check out a reporting guideline for BF incl I wrote for my students here. (Instead of citing bayestestR…
Comment by MSB May 2020
• Use extractBF with `logbf = FALSE`: library(BayesFactor); data(attitude); output <- regressionBF(rating ~ complaints + privileges, data = attitude, progress = FALSE); output #> Bayes factor analysis #> -------------- #…
Comment by MSB March 2020
• http://forum.cogsci.nl/uploads/842/2Q4J9C8L28XK.png Same r, different BF
Comment by MSB March 2020
• Adding to EJ: on a syntax level, you would also need to specify "ID" in the formula itself: lmBF(Weight ~ height + ID, data = test, whichRandom = "ID")
Comment by MSB February 2020
• The differences between the BFs for "salience" might be explained as stemming from the fact that the other models that also include "salience" are much better than the models that do not include "salience" (which is the defin…
Comment by MSB December 2019
• Thanks. I've since reached out to Jeff Rouder - I will update here if I hear from him. Thanks again and happy holidays!
Comment by MSB December 2019
• Generally, BFs are supposed to converge to the "truth" as N increases. So it depends on what "truth" your betas are closer to, I guess. For determining N, you might also be interested in https://github.com/nicebread/BFDA
Comment by MSB December 2019
• You can use as.vector(): library(BayesFactor); data(puzzles); result <- anovaBF(RT ~ shape*color + ID, data = puzzles, whichRandom = "ID", progress = FALSE); result #> Bayes factor analysis #> -------------- #> [1] sha…
Comment by MSB December 2019
• Hi @EJ, any news on this?
Comment by MSB December 2019
• I don't think this should matter. But, upon further reflection, the lme4 formula should be: percent_looking ~ Book * Condition + (1 + Book | Trial:Subject) + (1 + Book + ... | Subject) to account for the fact that the trials are nest…
Comment by MSB October 2019
• Hmmm... Given your data and design, probably the most correct analysis would be a multinomial logistic regression... But let's stick to an ANOVA-like design. It seems %A and %B are dependent (negatively). You can deal with this dependence in two way…
Comment by MSB October 2019