I was wondering if someone could clarify for me what assumptions JASP makes for unequal cell designs for mixed-design ANOVAs. I have 2 repeated-measures (Congruency, DemandCue) and two between-subjects measures (Feedback, Experiment). In one experiment, I had N = 60; the next experiment doubled the sample size (N = 120), so Experiment is what has the unequal cell design.

I have been trying to recreate the results that JASP outputs with many different R packages. JASP, for instance, writes that the F statistic for DemandCue is F(1,176) = 26.286, p < 0.001. The F statistic for Congruency is F(1,176) = 147.102, p < 0.001.

After reading the data with rawRTData <- read.csv('SC_ANOVA_RT.csv') (CSV file linked at the bottom) and converting subject, DemandCue, Experiment, Congruency, and Feedback to factors, I have tried the following in R:

SC_RT_runANOVA <- aov(RT ~ Feedback * Experiment * DemandCue * Congruency + Error(subject/(Congruency*DemandCue)), data = rawRTData)

summary(SC_RT_runANOVA)

AND, using the lme4 package --

anova(lmer(RT ~ (Feedback*Experiment*DemandCue*Congruency) + (1|subject) + (1|DemandCue:subject) + (1|Congruency:subject), data=rawRTData))

These two produce the same result: F(1,176) = 172.1329, p < 0.001 for Congruency and F(1,176) = 35.7272, p < 0.001 for DemandCue. I read that I might have to specify contr.sum contrasts and type III sums of squares, but that did not change the output. I also know that it is the "Experiment" factor at issue here: when I removed it from both the R code and JASP, I was able to reproduce the JASP output with the R code. I also converted the dummy coding of Experiment from 0/1 to E1/E2, and that changed the F statistics to 171.9760 for Congruency and 35.7607 for DemandCue. So it seemed to me that there is some R handling I don't understand, but that R peculiarity may not fully explain the JASP/R difference.

I tried looking at the JASP R code, but it had so many dependencies and references to other parts of the codebase that I found it hard to follow. I also trust JASP more than my simple code, because I think you have all spent more time thinking about how best to apportion the variance than I have, but I would like to know how to reproduce the JASP results. What underlying assumption am I missing here? Are there actually different assumptions, or is this merely an R peculiarity I hadn't discovered (e.g., the dummy coding)?
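For what it's worth, a common way to match JASP's type-III output in R is the afex package, which applies sum-to-zero contrasts and type-III sums of squares to all factors by default (with aov() it is easy to end up with sequential SS even after setting contr.sum). This is only a sketch under the assumption that the long-form columns are named as in the aov() call above; I can't verify it against your data:

```r
# Sketch: afex::aov_ez sets contr.sum and type-III sums of squares
# for every factor, which is what JASP's mixed ANOVA reports.
# Column names are taken from the aov() call above and may need adjusting.
library(afex)

rawRTData <- read.csv("SC_ANOVA_RT.csv")

fit <- aov_ez(
  id      = "subject",
  dv      = "RT",
  data    = rawRTData,
  between = c("Feedback", "Experiment"),
  within  = c("Congruency", "DemandCue")
)
fit
```

If this reproduces the JASP numbers, the discrepancy was the sums-of-squares type and contrast coding interacting with the unequal cell sizes, not a different statistical assumption.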

(If you want to use the same long-form data that I mention here, here is a link: https://www.dropbox.com/s/gjc1czqzeir4s2c/SC_ANOVA_RT.csv?dl=0. The wide-form JASP data is here (the first four columns indicate RM1.1, RM1.2, RM2.1, RM2.2): https://www.dropbox.com/s/ag9mmqykp8wudpn/RT_wideform_both.csv?dl=0)

Thank you!

My name is Jan and I'm currently working on my Bachelor's thesis. My task is to replicate a study. In the original study, four group means were analysed by an ANOVA with the hypothesis that the means would not differ (i.e. that the group means are comparable). In a private mail, the authors stated that they didn't focus on NHST but simply analysed the data "in a loose way" by comparing means for each item.

Now, for the replication, I have identified an item which showed quite a large, statistically significant difference and effect size (see attached file), so it seems suitable for a replication project. To determine whether there actually is a difference between groups, I'd like to use Bayes factors.

Now, here are my questions:

1) I wish to calculate the BF by hand from the regular ANOVA output. Using the equation provided by Wagenmakers (2007), I first calculate deltaBIC10 = n * ln(SSerror / (SSerror + SSeffect)) + (k1 - k0) * ln(n).

So far, after consulting further literature, I still do not understand what values to insert for k1 and k0 in my case. And what is the underlying routine in JASP for the Bayesian ANOVA / Bayes factor calculation?
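In case a sketch helps: in the BIC approximation, k1 and k0 are the numbers of free parameters of the alternative and null models. For a one-way ANOVA with g groups, the alternative estimates g group means versus 1 grand mean under the null, so k1 - k0 = g - 1. The snippet below plugs the formula from above into a small R function; the numbers in the example call are made up purely for illustration. (Note also that, as far as I know, JASP's Bayesian ANOVA is built on the BayesFactor package's default g-priors rather than this BIC approximation, so its Bayes factors will generally differ from BIC-based ones.)

```r
# BF01 via the BIC approximation (Wagenmakers, 2007).
# n = total sample size, g = number of groups (so k1 - k0 = g - 1).
bf01_from_anova <- function(n, ss_effect, ss_error, g) {
  delta_bic10 <- n * log(ss_error / (ss_error + ss_effect)) +
    (g - 1) * log(n)
  exp(delta_bic10 / 2)   # BF01; take the reciprocal for BF10
}

# made-up numbers for illustration only
bf01 <- bf01_from_anova(n = 80, ss_effect = 120, ss_error = 400, g = 4)
bf10 <- 1 / bf01
```

A large effect produces a strongly negative deltaBIC10, hence a BF01 near zero and a large BF10, which is the direction you would expect for your significant item.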

2) Running the Bayesian ANOVA in JASP for that item, with its statistically significant difference and large effect size, results in a Bayes factor BF10 of 2.49e+15.

Is this a plausible result?

Thank you for your much appreciated hints and help,

Jan Krebs

I would like to know why, when we use "residual variance" as the option for the "factor scaling" feature in lavaan, we get no p-values for the factor and its factor predictors (in the case of a second-order factor). Additionally, what is the difference between "factor loadings" and "residual variance" as options for "factor scaling"? My model only converges with the "residual variance" or "none" option.

Thank you all for your attention

best regards

cesar

I applied pre- and post-tests to a small sample (8 people) to evaluate their lexical inventory. I'm trying to interpret the results, but I'm just familiarizing myself with Bayesian statistics and I'm afraid I might make a mistake.

I'd very much appreciate the help!

By the way, it seems that the most attractive feature of such a graphical interface is that it's intuitive for data cleaning. For example, some typing errors can be caught just by glancing at the data. It's disappointing that JASP does not ease the data cleaning process.

Thanks

Mateu

I was wondering if there is a way to conduct a Bayesian F-test of the equality of variances. I have two groups and I want to show that their variances are equal (i.e. that H0 is true).

Is there a way to do this with JASP? Or R? Or any other way (I have a calculator and I'm not afraid to use it!)?

Thanks,

M

I have a question concerning the default Cauchy prior width used for Bayesian ANOVAs in JASP. For t-tests, a prior width of 0.707 is recommended by e.g. Rouder et al., and this corresponds to the default width implemented in JASP. For ANOVAs, however, the default width is set to 0.5. I've had trouble finding the reasoning behind this. Does anyone know, or can anyone point me to a corresponding paper? Any help would be greatly appreciated.

I ran a meta-analysis in JASP (using Effect size and Standard Error) and in another software package, but I found different results.

Could someone take a look at this Excel data sheet (within-group effect size and between-group effect size) and tell me whether the effect size and standard error are those requested by JASP?

I appreciate any help,

Ricardo

I am new to using JASP and I was wondering whether you could help me out with some issues I am having running a Bayesian regression analysis.

I have been asked to use Bayes factor to strengthen my analyses in order to be able to make more correct inferences about my non-significant results. I was told to do this using JASP but I have a couple of questions regarding how to correctly carry out my analysis.

First of all, I have to carry out a regression analysis that includes binary as well as continuous predictors, but when I choose Bayesian regression in JASP, it does not allow me to include nominal variables as covariates. Is there a way to get around this, i.e. can I treat the variables as scale with the levels 0 and 1? Or is it simply not possible to do Bayesian regression with binary predictors in JASP?

Secondly, if I want to set an objective prior, do I select the Uniform option for Model prior in the Advanced options?

Thank you

Thalia

I was excited to see the MANOVA option implemented in the newest JASP, because the dataset I am currently working on needs exactly that! However, I do not see any effect size or power analysis output. Am I missing something? Are there other ways of calculating such coefficients?

Thanks!

(sorry for my very bad English)

I discovered version 10 and I have a question: why was Dunn's post hoc test removed from the non-parametric ANOVA?

Hervé

Thank you

I'm running a 2 x 2 x 9 mixed-model Bayesian repeated measures ANOVA and using the Baws method (effects across matched models). In the analysis, I am getting NaN for the BFinclusion for the main effects of CS and block, but BF10 values are available in the model comparison. Any idea why I'm not able to get BFinclusion values for these factors?

Thanks:)

(This strikes me as something for the GitHub issue tracker, but since that has been closed (?) I'm posting it here.)

I'm running Ubuntu 18.04, and I've installed JASP through flatpak. That seems to work OK, in the sense that I can launch the application and also open some `.jasp` files (such as the examples). However, when loading one of my own `.jasp` files, it crashes with the attached terminal output. Unfortunately, I cannot share the actual data for the moment.

Cheers!

Sebastiaan

Or if not, are there any plans to add the functionality in the future?

Or if not that, is there any way for a willing masochist to try to add the functionality themselves?!

(not sure if this is the right forum... sorry!)

I am running an exploratory factor analysis using **oblique** rotation (Promax) and **maximum likelihood** estimation. I am not sure, though, which matrix JASP displays: the pattern or the structure matrix? We know that we should report both matrices when using oblique rotation, so this is a bit of a problem, and I am trying to complement the results from JASP with those of SPSS. However, the factor loadings reported in JASP do not match those I get in SPSS (EFA with ML and Promax). Similarly, the correlations between factors are very different between SPSS and JASP.

Since I would like to report the goodness of fit indices that JASP reports (which is great!), I would like to understand why I get these differences. The results I get in other tests, say, reliability, are identical both in SPSS and JASP, so I am confident I am using exactly the same data set.

Two questions about the effect size used in JASP:

- What is an effect size (delta)? Is it Glass's delta?
- How do you convert Cohen's d into delta? (this is straightforward if delta is Glass's delta)

Cheers,

Hannah

PS. Thank you Team JASP - the program is fantastic!!

I'm trying to analyse some new experimental data with a Bayesian repeated measures ANOVA. I have 2 repeated measures factors (with 2 levels each) and 3 between-subjects factors (also categorical with 2 levels each). Everything works fine if I use my 2 repeated measures factors and only 2 of the between-subjects factors, but if I try to include all 3 between-subjects factors, the analysis takes far too long to run. It seems like it will never finish running on my laptop. Is there a workaround for this under the Bayesian framework? Or am I stuck doing a frequentist analysis?

Many thanks for any help you can provide,

RBJ

I have some problems understanding parts of the output of a Bayesian repeated measures ANOVA. In my study, I have a between-subjects variable (condition) and a within-subjects variable (face condition). In the standard analysis I obtain a main effect of face condition; the main effect of condition and the interaction between the factors were not significant.

In the Bayesian repeated measures ANOVA I obtained the following results:

As you can see, the BF10 for the variable face condition is 1.000. What strikes me is that if I change the Bayes factor to BF01, I obtain the same value.

I am not very sure how to interpret this and it doesn't look right to me, especially when I compare these values with the null model... Any ideas?

thanks!

I see that a Mann-Whitney test is implemented in the Bayesian Independent Samples t-test, but there is no Wilcoxon alternative in the Paired Samples t-test. Is there any reason for this?

Cheers

Mark

Is it still possible to do Bayesian polynomial contrasts in your program using the following transformation?

The goal is to check the cubic contrast for 7 levels.

In this case, the cubic contrast is -1 1 1 0 -1 -1 1.

Can I:

* multiply Level 1 by -1, Level 2 by 1, Level 3 by 1, and so on,

* then add Levels 1, 2 & 3 into a new variable, for example var1,

* then add Levels 5, 6 & 7 into a new variable, for example var2 (absolute values),

* and, in the next step, use a t-test to check the difference between the two means,

and then look at the Bayes factor to give me an indication of whether there is or isn't a cubic trend?
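A slightly simpler route than summing levels into two variables, sketched below with simulated data (the column names Level1–Level7 and the sample size are made up for illustration): apply the full cubic contrast weights to each subject's seven level scores and test the resulting per-subject contrast scores against zero with a one-sample t-test, which can then be run in its Bayesian form.

```r
# Cubic contrast weights for 7 equally spaced levels,
# as given in the question: -1 1 1 0 -1 -1 1
w <- c(-1, 1, 1, 0, -1, -1, 1)

# hypothetical wide-format data: one row per subject, one column per level
set.seed(1)
dat <- as.data.frame(matrix(rnorm(20 * 7), ncol = 7,
                            dimnames = list(NULL, paste0("Level", 1:7))))

# one contrast score per subject: the weighted sum across levels
scores <- as.matrix(dat) %*% w

t.test(as.vector(scores))                     # frequentist one-sample test
# BayesFactor::ttestBF(as.vector(scores))     # Bayesian one-sample t-test
```

Because the weights sum to zero, this is the standard single-degree-of-freedom contrast test; no absolute values are needed, and the sign of the mean score indicates the direction of the cubic trend.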

I ran an ANOVA on a set of data and it showed a statistically significant effect. After that I tried a linear regression with the same data, and all of a sudden it isn't statistically significant anymore. This is a study I have to analyse for uni, and we have almost only done linear regression and correlation matrices in class, so I would be surprised if I had to do something else this time. Does anyone have an idea what the problem might be?

I would be really grateful if you could help me!!

Does anyone know whether it's possible to test for a quadratic (instead of linear) relationship in JASP? I only have two variables I want to relate this way. Also, I'd like to get Bayes factors for this quadratic relationship. Thanks in advance!
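Not a JASP answer, but in R one way to get a Bayes factor for a quadratic relationship is to compare a linear regression against a model that adds a squared predictor, for example with the BayesFactor package. This is a sketch with simulated data; the variable names are made up:

```r
# Bayes factor for a quadratic term: compare y ~ x against y ~ x + x^2
library(BayesFactor)

# hypothetical data with a genuine quadratic component
set.seed(1)
x <- rnorm(100)
y <- 0.5 * x^2 + rnorm(100)
d <- data.frame(x = x, x2 = x^2, y = y)

bf_lin  <- lmBF(y ~ x,      data = d)
bf_quad <- lmBF(y ~ x + x2, data = d)

bf_quad / bf_lin   # BF for the quadratic model over the linear one
```

The same comparison can in principle be set up in JASP's Bayesian linear regression by first computing a squared version of the variable as a new column and entering both as covariates.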

Kevin

Does JASP have anything equivalent for fitting cubic splines? I am trying to determine the periodicity, or interval length, of estrus cycles. Any other suggestions on how to do this would be appreciated. Time series analysis is something I have considered.

Thanks!

I would like to run both versions, as v0.9.2.0 runs well on our faculty's servers, and I want to explore v0.10.0.0's new features. Thanks for any tips!

Peter

I would like to conduct Kendall's tau correlation analyses. I am aware that tau-a does not account for ties, whereas tau-b and tau-c do. The article by van Doorn and colleagues (van Doorn, J., Ly, A., Marsman, M., & Wagenmakers, E.-J. (2018). Bayesian inference for Kendall's rank correlation coefficient. The American Statistician, 72(4), 303–308. http://doi.org/10.1080/00031305.2016.1264998) uses the formula for tau-a. However, JASP appears to use tau-b. I am confused, and am not sure which is actually used in the source code. May I know which formula JASP actually uses?

Thanks!

Best,

Darren
