Hello.

I would like to specify contrasts to perform post hoc tests on mixed models. Can anyone help?

I'm looking for a tutorial and/or explanation of the use of covariates in a repeated measures ANOVA. My aim is to test the impact of age (measurements at 3 different ages for each individual) and the impact of life conditions (treatment). Our idea is to use body weight as a covariate. I'm trying to understand the different options available in JASP (for example, using the covariate as a factor in the model...), and I need your help.

Thank you

Elo

I recently tried to reproduce some analyses my professor conducted in SPSS. I could reproduce the ANOVA; however, the post-hoc paired t-tests gave slightly different results. I then reproduced the post-hoc t-tests in R, which gave results identical to SPSS. After further experimentation I noticed that the t-test function in JASP also gives the same results as SPSS/R.

Therefore, I'd like to know how the calculation of the paired t-tests in JASP differs when using the t-test function compared to the ANOVA post-hoc t-tests.

The differences are quite small, but noticeable, for example t = 27.28 vs t = 27.35.
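For anyone comparing the two by hand: one plausible source of such small differences (an assumption on my part, not JASP's actual code) is that ANOVA post hoc tests can use a pooled error term from the full RM ANOVA, while a standalone paired t-test uses only the two conditions involved. A minimal sketch of the distinction on simulated data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 20
a = rng.normal(0.0, 1.0, n)          # condition A
b = a + rng.normal(1.0, 0.5, n)      # condition B
c = a + rng.normal(2.0, 0.5, n)      # condition C
data = np.column_stack([a, b, c])

# standalone paired t-test: error term from the A-B differences only
t_pair = stats.ttest_rel(a, b).statistic

# pooled approach: residual mean square from the full RM ANOVA
grand = data.mean()
subj_means = data.mean(axis=1)
cond_means = data.mean(axis=0)
resid = data - subj_means[:, None] - cond_means[None, :] + grand
ms_error = (resid**2).sum() / ((n - 1) * (3 - 1))
t_pooled = (a.mean() - b.mean()) / np.sqrt(2 * ms_error / n)
```

The two t values agree only when the pairwise difference variances are equal across all condition pairs (the sphericity assumption); small sample violations of that produce exactly the kind of third-decimal discrepancies described above.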

Best,

Max

I've installed JASP via Flatpak on my brand-new Ubuntu 20.04. This is what I see after starting it; what is needed? It worked like a charm on Ubuntu 18.04.

Thanks,

Feri

The dependent variable is the score obtained in the test, the fixed factor is the timepoint (6 timepoints: baseline, 24h, 3d, 1 week, 3 weeks, 5 weeks), and the random-effect grouping factor is the subject (only 6 subjects).

This is what I obtained for the fixed effects estimates. I would like to compare the first timepoint (baseline) with the rest of the timepoints and see whether there are significant differences. But I am not sure what the intercept means here, as it seems that the last timepoint is being compared to the rest.

Can someone help me interpret these results?
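On the intercept: with treatment (dummy) coding, the intercept is the estimated mean of the reference level, and each timepoint coefficient is that timepoint's difference from the reference. If the output looks like the last timepoint is the reference, the factor's levels are probably ordered so that it comes first; releveling the factor so that baseline is the reference gives the baseline-vs-rest comparisons directly. A toy illustration of treatment coding (hypothetical numbers, plain least squares rather than a mixed model):

```python
import numpy as np

# hypothetical scores: 4 subjects at 3 of the 6 timepoints
y = np.array([10., 12., 11., 13.,    # baseline
              15., 14., 16., 15.,    # 24h
              20., 19., 21., 20.])   # 3d
tp = np.array([0]*4 + [1]*4 + [2]*4)  # 0 = baseline (reference level)

# treatment coding: intercept column + one indicator per non-reference level
X = np.column_stack([np.ones(12),
                     (tp == 1).astype(float),
                     (tp == 2).astype(float)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# beta[0] = baseline mean; beta[1], beta[2] = differences from baseline
```

Here beta[0] recovers the baseline mean (11.5) and the other coefficients are each timepoint's shift from baseline, which is the comparison the question asks for.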

Can anyone help?

I've been trying to run a linear mixed model to compare a variable at different times (won x draw x loss).

I understand that I need to use Specify Contrasts, but I still don't know how to set up the pairwise comparisons.

My doubt is which values to specify to perform the comparisons (Win x Draw / Win x Loss / Draw x Loss).
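For pairwise contrasts of a three-level factor, each contrast is a vector of weights summing to zero, with 1 and -1 on the two levels being compared and 0 elsewhere. A sketch of the three pairwise contrasts (hypothetical means; the weights are what would go in the contrast fields):

```python
import numpy as np

means = np.array([2.1, 1.4, 0.9])   # hypothetical estimated means: Win, Draw, Loss

contrasts = {
    "Win vs Draw":  np.array([1., -1., 0.]),
    "Win vs Loss":  np.array([1., 0., -1.]),
    "Draw vs Loss": np.array([0., 1., -1.]),
}
# each contrast estimate is the weighted sum of the means
estimates = {name: float(c @ means) for name, c in contrasts.items()}
```

Note that testing all three pairs inflates the family-wise error rate, so a correction (e.g. Holm or Bonferroni) is usually applied on top.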

Thank you very much for your amazing tool.

I would like to draw your attention to one issue that I have recently faced with JASP (v0.13.1).

I attached my data. I am trying to perform a one-sample t-test to test whether the variables PersonalM, RewardM and EmotionM are significantly different from zero. Variable EmotionM violates the assumption of normality, so I used the Wilcoxon signed-rank test: BF(H1):BF(H0) = 0.197:5.079. However, if I use the Student t-test, BF(H1):BF(H0) = 425000:2.35e-06. I cannot understand why these two tests give such hugely different results. Even with different calculation procedures for the Student and Wilcoxon tests, the difference in BF should not be so dramatic.

I would greatly appreciate your help/comments/ideas.

Thank you in advance,

alla

I would like to know if it is possible to obtain a grouped scatter plot in the Descriptives module.

I performed a K-Means Clustering analysis, but I would like to represent the distribution of the clusters, as well as their density, for each variable.

Therefore, I wanted to "join" both the attached plots.

Thanks in advance,

Mateus

I have some data that still appear non-normal even after log and square-root transformation. Is there anything I can do to run some sort of non-parametric mixed ANOVA in JASP or in R? I've seen some say frequentist linear mixed models handle non-normal data better, and I see the new release of JASP has a Bayesian linear mixed model function; would that help? I'm a bit confused, as I'd seen previously that the Bayesian Mixed ANOVA was already a linear mixed model. Any tips would be greatly appreciated!
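On the R side, one common workaround (an assumption on my part, not the only option) is a rank-based approach: either a simple rank transform of the response before fitting, or the aligned rank transform as implemented in the ARTool package. A minimal sketch of the plain rank transform, shown here in Python with scipy just to illustrate the idea:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.lognormal(mean=0.0, sigma=1.0, size=30)  # heavily skewed response

# replace each observation by its rank (ties get average ranks);
# the ranks, not y, would then go into the (mixed) ANOVA
y_rank = stats.rankdata(y)
```

The plain rank transform is only safe for main effects; for interactions in factorial designs the aligned rank transform is the usual recommendation.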

All the best,

Gabriel

As per advice from https://github.com/stan-dev/stan/wiki/Prior-Choice-Recommendations I have set my coefficient priors for main effects and interaction effects to student_t(4, 0, 2.5).

I wanted to check if this was an appropriate prior for the calculation of Bayes Factors for model comparison as the BFs I am getting seem somewhat implausible from looking at the data and doing equivalent frequentist tests.

Happy to share more info if needed.

I am using Windows 10 Home edition (version 1909) on a 2017 Dell Inspiron laptop with an Intel Core i7-7500 quad-core processor.

What am I missing, please? Thank you!

I have been using JASP and am very happy with it. Perhaps it would be interesting and useful if JASP supported some kind of markdown, like R Markdown, where one could integrate the statistical results dynamically with the manuscript draft. This would be immensely helpful in avoiding wrong numbers from manual copy and paste.

(Just a suggestion - I don't know how much work it would take to make it).

Cheersies,

intan

I am using Windows 10 Home edition (version 1909) on a 2017 Dell Inspiron laptop with an Intel Core i7-7500 quad-core processor.

I exchanged posts on this forum with someone, but when I wrote back to say that his method for solving the problem did not work, I did not hear back from him.

In addition, I have now tried loading historical versions of JASP, going back as far as version 0.8 -- and all of them fail to run, just like the latest version 0.13.

As recommended, I ran Dependency Walker and sent the log file back to you on this forum.

The person I was corresponding with wrote, "Aah yeah, I see what is going wrong." He told me to delete the folder "c:\program files\adoptopenjdk\jre-11.0.7.10-hotspot\bin\" which he said was what "leads JASP astray."

I did this, but the problem still persists. I wrote him back that the solution had not worked, but have heard nothing back from him.

Can someone please work with me to help me solve this problem? Thank you!!

1 - I change the prior settings ("r scale fixed effects", "r scale random effects", "r scale covariates"), yet the P(M) column in my model comparison table does not change. I have 5 models, so each one is given a P(M) of 0.200, no matter what I change in the settings.

2 - Can you have a strong inclusion Bayes factor (BFincl > 30) for a specific predictor AND strong+ evidence for the Null (BF10 < 0.04) in the model comparison output? Because if the BFincl isn’t high, then we have to exclude it and if we do that, then the BF10 can’t be interpreted, right?

3 - Where are the repeated contrasts in JASP? I've seen people talk about them, but I can't find them in JASP.
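On (1): P(M) is the prior model probability, fixed before seeing the data (uniform across the 5 models by default), so the r-scale settings change the BFs and the posterior P(M|data), not P(M). On (2): the inclusion BF pools all models containing a predictor against all models without it, so it can disagree with any single model's BF10 and the two need not be interpreted together. A sketch of how both quantities fall out of the model-comparison table, with hypothetical BF values:

```python
import numpy as np

bf10 = np.array([1.0, 3.2, 0.5, 12.0, 0.8])  # hypothetical BFs vs the null model
prior = np.full(5, 1/5)                       # uniform P(M): set before the data

post = bf10 * prior
post /= post.sum()                            # P(M | data)

has_x = np.array([False, True, False, True, True])  # models containing predictor X
post_odds = post[has_x].sum() / post[~has_x].sum()
prior_odds = prior[has_x].sum() / prior[~has_x].sum()
bf_incl = post_odds / prior_odds              # inclusion Bayes factor for X
```

Changing the r scales changes bf10 and hence post and bf_incl, while prior (the P(M) column) stays at 0.200 by construction.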

Thanks!

I am working towards my master's degree, and lately a Bayesian reanalysis of the literature seems to be an option for developing my dissertation, given our pandemic context. However, I am struggling with the data necessary to perform a Bayesian reanalysis in JASP, as presented in the "Webinar: Theory and Practice of Bayesian Inference Using JASP". Specifically, I would like to know the feasibility of this method given that some studies report their ηp2 while others provide the η2 value.

As an example:

“Absolute error scores (cm) for practice were analyzed using a 2 (Choice: Self, Yoked) × 3 (Decision: Before, After, Both) × 6 (Block) mixed-model ANOVA with repeated measures on Block. All groups showed a reduction in AE across practice blocks, which was supported by a significant main effect, *F*(5,210) = 39.20, *p* < 0.001, **ηp2 = 0.48**”.

(https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4237043/)

“Accuracy scores (AE) on the 30 practice trials were averaged across blocks of 5 trials and analyzed in a 2 (group: self-control versus yoked) x 6 (blocks of 5 trials) analysis of variance (ANOVA) with repeated measures on the last factor. […] Both groups reduced their AEs across blocks of trials (see Figure 2, left). The self-control group had generally smaller errors than the yoked group. The main effects of both block [*F* (5, 140) = 9.37, p<0.001, **η2=.25**] and group [*F* (1, 28) = 4.70, p<0.05, **η2=.14**] were significant”.

(https://www.scielo.br/scielo.php?script=sci_arttext&pid=S1413-35552012000300004)


Are ηp2 and η2 equivalent so I can use either one in the reanalysis? And also, would it be appropriate to draw conclusions based on, for instance, the mean values of the obtained Bayes Factors?
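The two are not equivalent in multifactor designs: η2 divides by the total sum of squares, while ηp2 divides by SS_effect + SS_error only, so ηp2 ≥ η2, with equality in a one-way ANOVA. When a paper reports the F ratio and its degrees of freedom, ηp2 can always be recovered as ηp2 = (F × df1) / (F × df1 + df2), which also lets you check which statistic was actually reported. A quick check against the values quoted above:

```python
def partial_eta_sq(f, df1, df2):
    """Recover partial eta squared from an F ratio and its degrees of freedom."""
    return f * df1 / (f * df1 + df2)

# first paper: F(5, 210) = 39.20, reported eta_p^2 = 0.48
print(round(partial_eta_sq(39.20, 5, 210), 2))  # 0.48

# second paper: F(5, 140) = 9.37 and F(1, 28) = 4.70, reported as eta^2
print(round(partial_eta_sq(9.37, 5, 140), 2))   # 0.25
print(round(partial_eta_sq(4.70, 1, 28), 2))    # 0.14
```

Since this formula reproduces the second paper's reported η2 values exactly, those may in fact be partial values despite the label; that is worth verifying before pooling the two studies on a common effect-size scale.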

Thank you for your time and I look forward to hearing from you.

Best regards,

Nathalia Venancio

Cheers

Richard

But... I have a complex dataset with multiple groups (for example, adult/child). For some analyses, I want to look only at adults; for others, only at children; for others, I want to compare both. I know I can copy the data file and use different data files for different analyses, but for quick statistics I'd like to turn filters on and off to compare the different groups.

But each time I try to do that when a lot of analyses have already been run, JASP hangs as it tries to recalculate the old analyses. Is there a way to stop it re-running analyses until you want it to, or to turn off live updates (i.e., only recalculate when you tell it to)?

Thank you.

Does anyone know why JASP and R have given me different output for the same meta-analysis?

For context, I attempted a meta-regression in JASP after completing it in R using the metafor package. I uploaded my file to JASP after calculating effect sizes with the escalc function in metafor, and then plugged the ES (yi) and SE (vi) values into the model.

The meta-regression coefficients seem similar, so I am not too worried about this. However, the heterogeneity estimates differ more substantially. Here are some examples:

I^2 in R: 36.82%; I^2 in JASP: 97.052%

Tau squared in R: 0.036; Tau squared in JASP: 0.097

H^2 in R: 1.58; H^2 in JASP: 33.927

I've been trying to find some more information about this online, but I can't seem to get anywhere. Can someone explain these differences? Since JASP is built using the metafor package, I assumed these would be the same.
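One thing worth checking before anything else: escalc returns yi (effect sizes) and vi (sampling variances, not standard errors). If the JASP input field expects standard errors and receives the vi values directly (an assumption about this particular setup, but a common mix-up), every study's variance gets squared internally, which barely moves the regression coefficients but badly inflates τ² relative to the typical within-study variance, and hence I² and H². A DerSimonian-Laird sketch (hypothetical data) showing the effect:

```python
import numpy as np

def dl_meta(yi, sei):
    """Random-effects meta-analysis (DerSimonian-Laird); returns (tau2, I2 in %)."""
    vi = sei**2
    w = 1.0 / vi
    ybar = np.sum(w * yi) / np.sum(w)
    q = np.sum(w * (yi - ybar)**2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(yi) - 1)) / c)
    # "typical" within-study variance used in the I^2 formula
    s2 = (len(yi) - 1) * np.sum(w) / (np.sum(w)**2 - np.sum(w**2))
    i2 = 100.0 * tau2 / (tau2 + s2)
    return tau2, i2

yi = np.array([0.1, 0.5, 0.9, 0.3, 0.7])       # hypothetical effect sizes
vi = np.array([0.04, 0.05, 0.04, 0.06, 0.05])  # their sampling variances

tau2_ok, i2_ok = dl_meta(yi, np.sqrt(vi))  # correct: SE = sqrt(vi)
tau2_bad, i2_bad = dl_meta(yi, vi)         # vi passed where an SE is expected
```

With realistic vi values well below 1, squaring them shrinks the apparent within-study variance by an order of magnitude, reproducing the reported pattern: similar coefficients, but I² jumping from a moderate value to near 100%.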

Thank you in advance for your insights!

I noticed in the last release of JASP (0.12.2) that the 95% CIs in the RM ANOVA plots have changed (i.e., they're usually wider than they used to be). Is there a reason for that? Was there (or is there now) a bug?

Also, I am not well versed in statistics, but I was wondering how I could get the values for these CIs, as JASP does not output them in table form.

Best regards,

Martin Constant.

I have two questions:

1) When I do Bayesian tests for correlations and get a confidence interval in JASP - is that a confidence interval for the effect size or for the correlation?

2) When changing my hypothesis to be one-sided (in a certain direction), the confidence interval also changes. Would it, for this reason, be better to do a two-sided test instead, so that I get a confidence interval that better represents what the interval looks like in the population? Even if I expect the correlation to go in a certain direction, wouldn't the confidence interval for the non-directional hypothesis give me the best approximation of what the effect actually looks like "in the population"? Should the outcome really change based on whether my expectations change? I mean, my expectations could be terrible...

I have a problem with discrepancies in the Bayes Factor computed with SPSS and with JASP when running a Bayesian Linear Regression, and would be very thankful for some help.

So, here is what I did:

First, I ran Bayesian Linear Regressions in SPSS for several dependent variables. For this, I added metric covariates and stuck to the default options. The resulting BF10 values range from 0.001 to 0.005.

Second, I ran the Bayesian Linear Regression again in JASP. Here again, I used the same metric covariates and stuck to the default options. The resulting BF10 values now range from 0.028 to 0.092. So there is a difference of about one order of magnitude between the outputs of the two programs...

I already checked whether there might be a difference in the data I used, but when I run a frequentist linear regression, both programs show exactly the same output, so this is most likely not the problem. Additionally, I compared the default options of both SPSS and JASP and came to the conclusion that they should be generally the same (at least as far as I understand it with basic statistical knowledge); e.g., SPSS uses a JZS prior just like JASP.

Is anyone familiar with this problem? Is there maybe something I might have missed when either conducting the analysis with SPSS or JASP?

Thanking you very much for your help in advance!

Miriam

I have a directional hypothesis I want to test with ttestBF (R BayesFactor package) using a half-Cauchy prior. According to the documentation, the null interval can be used to test a directional hypothesis and should contain the lower and upper bounds of the interval hypothesis to test. The null interval should be in standardized units, but I'm not quite sure what is meant by standardization here.

For example, I have a dataset in which the values can theoretically be between 0 and 1. Most of the values are, however, around chance (which is 0.25) or above. The hypothesis I want to test is whether values are larger than chance (I don't want to test whether anything is smaller than chance). So the null value of the mean is 0.25. Should my nullInterval be [0.25, 1] or something else (because of the standardization)?
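As I read the BayesFactor documentation, "standardized units" means the Cohen's d scale, i.e. (mean − mu) / sd, not the raw 0-1 response scale. With mu = 0.25, "greater than chance" corresponds to d > 0, so the interval would be c(0, Inf) rather than c(0.25, 1). A small illustration of the mapping (hypothetical accuracy scores):

```python
import numpy as np

scores = np.array([0.31, 0.27, 0.35, 0.22, 0.40, 0.29])  # hypothetical accuracies in [0, 1]
chance = 0.25

# standardized (Cohen's d) scale that nullInterval refers to
d = (scores.mean() - chance) / scores.std(ddof=1)

# the raw bound 0.25 maps to 0 on the d scale; the raw bound 1 maps to a
# large positive d, which is why the directional interval is (0, Inf)
lower = (0.25 - chance) / scores.std(ddof=1)
```

So the call would look like ttestBF(scores, mu = 0.25, nullInterval = c(0, Inf)), testing "above chance" against its complement.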

Any help very much appreciated.

Cheers,

Lina

I've just installed the 0.13 version and I am using the Mixed Models module.

Can anybody help me with the following issues?

The plots section does not show the variables to transfer to the plot boxes; it does not seem to be working.

Moreover, I would like to be able to do the following:

- Analysis without random effects: of course, this would be equivalent to a linear model, but I would like to get the AIC, etc., in order to compare with models that have random effects.
- It seems a random intercept is compulsory; that is, I cannot run a model with random slopes and no random intercept.

Of course, as always, grateful for your work on improving this great software.

Cheers,

Guillermo

I just tried installing 0.13 (July 2020) to upgrade my old installation of JASP. The MSI seemed to work, but JASP was still at the old version and kept prompting me to upgrade.

Solution: During installation I chose a new path. The installer then ran fully, deleted the old version of JASP from the old folder and correctly installed the new version, which now works.

Does JASP meet these standards? If so, it will be easier for American institutions, and individual instructors at those institutions, to adopt and/or continue using JASP.
