I am new to JASP and to the use of Bayesian statistics, and I have trouble interpreting my output.

For the analysis I used a repeated measures ANOVA with two within-subject factors (3 levels per factor). The analysis was for a working memory test with 3 load levels (very low, low, and high WM load). The participants did this task in 3 different sessions (2 weeks apart). The experimental condition was between-subjects: an active versus a placebo group.

My main interest is the difference between the groups. Does the active group get a higher or lower score in the working memory task than the placebo group?

As it is my first time using this method, I am unsure how to interpret the effects (what value do I look at?) and how to report them.

In the literature I can only find examples of simpler designs, not of a complex analysis like this one.

Can you please help me?

Our reviewers want us to do a Bayesian analysis in order to see whether our results actually speak against a null effect, or whether we simply lack the power to detect a group difference. I find this very exciting because I have never done Bayesian statistics before, and neither has any previous study asking a similar research question, so I really think this could improve the field a lot!

I have read a lot of blog posts on this, but I am not really able to apply them to our present issue, so I thought I'd just try posting it here.

We have a memory test with three different item types that can be either neutral or negative, and we have two different groups doing these tests. Thus, this is a 3 (Item Type) x 2 (Emotion) x 2 (Group) ANOVA. The regular ANOVA revealed no effects whatsoever, and what I want to find out is: is this a "proper" finding revealing no group differences or interactions, or are we simply unable to make anything out of these findings?

The results of the Bayesian repeated measures ANOVA look like the attached file.

As I said, I've been trying to read some blog posts about this, but I still haven't really understood exactly how it should be interpreted. Does anyone have an easy way to get the BF01 for all the two-way interactions, as well as for the three-way term, such that each is compared to the null model?


I was wondering if there is a way to conduct a Bayesian F-test of equality of variances. I have two groups and I want to show that their variances are equal (i.e., that H0 is true).

Is there a way to do this with JASP? Or R? Or any other way? (I have a calculator and I'm not afraid to use it!)

Thanks,

M

I'd like to conduct a Bayesian re-analysis of regression coefficients using only summary statistics. I know that JASP/BayesFactor allows you to obtain a Bayes factor for the R^2 using summary statistics, but I have a few questions regarding a method for obtaining a Bayes factor for the regression coefficients themselves:

1) Is it valid to obtain the t-statistic used to evaluate the statistical significance of a particular regression coefficient, and use that to obtain a corresponding Bayes factor by using the JASP summary statistics module for a one-sample t-test (while subtracting the number of predictors from the degrees of freedom)?

2) If this approach is not valid, is there an alternative way to obtain a Bayes factor for a regression coefficient using only summary statistics?
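For what it's worth, the default (JZS) Bayes factor for a t statistic can be reproduced from t, the sample size, and an adjusted df alone (Rouder et al., 2009). Below is a minimal sketch in Python with SciPy rather than JASP itself; the function name and defaults are mine, and it is not claimed to match JASP's implementation bit-for-bit, only the standard JZS formula:

```python
import numpy as np
from scipy import integrate

def jzs_bf10(t, n, df=None, r=0.707):
    """JZS Bayes factor BF10 for a one-sample t statistic.

    df defaults to n - 1; pass a smaller df to subtract predictors,
    as in the approach described in question 1.
    """
    if df is None:
        df = n - 1
    # Likelihood kernel under H0 (delta = 0): central t density kernel
    null = (1 + t**2 / df) ** (-(df + 1) / 2)

    # Marginal likelihood under H1: integrate over g ~ InverseGamma(1/2, r^2/2),
    # which induces a Cauchy(0, r) prior on the effect size.
    def integrand(g):
        log_val = (-0.5 * np.log(1 + n * g)
                   - (df + 1) / 2 * np.log(1 + t**2 / ((1 + n * g) * df))
                   + np.log(r) - 0.5 * np.log(2 * np.pi)
                   - 1.5 * np.log(g) - r**2 / (2 * g))
        return np.exp(log_val)

    alt, _ = integrate.quad(integrand, 0, np.inf)
    return alt / null
```

With this in hand you can check the sensitivity of the result to the df adjustment directly, e.g. `jzs_bf10(2.5, 100)` versus `jzs_bf10(2.5, 100, df=95)`.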

Thanks in advance,

Adam

I performed a Bayesian linear regression in JASP, but I do not know how to read the results table. What do P(M), P(M|data), BFM, and BF10 mean? What values indicate that the model fits well?

Thank you very much in advance,

Hongxia

I am new to Bayesian analysis. I am doing a replication study; the target study reported gender differences, which were analyzed with an independent-samples t-test, and the authors provided the values of the mean, SD, t, and df.

I want to examine the evidence in light of new data by conducting a Bayesian t-test with an informed prior based on the values provided in the target article. In JASP's Bayesian t-test module, under the Prior option, there are two choices: default and informed. Under informed I selected t, and next to it there are three boxes: Location, Scale, and df. I don't understand what values to fill in. Could you please give one example based on the attached image, showing what values to enter and how those values are calculated and chosen?
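Not an official answer, but one common recipe (in the spirit of the informed-prior replication work of Gronau, Ly, and Wagenmakers) is to center the t prior on the original study's effect size: Location = the original Cohen's d, Scale = its standard error, df = the original degrees of freedom. Those quantities can be recovered from the reported t, df, and group sizes. A sketch with hypothetical numbers (not taken from your attachment), presented as one possible choice rather than the only valid one:

```python
import math

def informed_t_prior(t, n1, n2):
    """One possible way to fill JASP's Location/Scale/df boxes from an
    original independent-samples t-test. This is an assumption/recipe,
    not the only defensible prior choice."""
    df = n1 + n2 - 2
    d = t * math.sqrt(1 / n1 + 1 / n2)  # Cohen's d recovered from t
    # Approximate standard error of d (common large-sample formula)
    se_d = math.sqrt(1 / n1 + 1 / n2 + d**2 / (2 * (n1 + n2)))
    return {"Location": d, "Scale": se_d, "df": df}

# Hypothetical original study: t = 2.50 with 40 participants per group
print(informed_t_prior(2.50, 40, 40))
```

For these made-up numbers, Location comes out near 0.56 and Scale near 0.23, which would then go into the corresponding boxes.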

Thank you in advance for your help.

I am writing to ask about network analysis in JASP. My research group and I have performed a behavioral study collecting data from three groups of 50 participants (150 in total). Our data consist of accuracy and response time values, which are not normally distributed. What type of estimator do you think is best for our analysis? Is it possible to compare the network plots?

Many thanks in advance,

Elisa

Can anyone help me understand how to interpret the Durbin statistic table for a two-way repeated-measures ANOVA when using a non-parametric analysis? What is the difference between the p and the pF significance values?

Thanks

I would like to conduct Kendall's tau correlational analyses. I am aware that tau-a does not account for ties, whereas tau-b and tau-c do. The article by van Doorn and colleagues (van Doorn, J., Ly, A., Marsman, M., & Wagenmakers, E.-J. (2018). Bayesian inference for Kendall's rank correlation coefficient. The American Statistician, 72(4), 303–308. http://doi.org/10.1080/00031305.2016.1264998) uses the formula for tau-a. However, in JASP, tau-b appears to be used. I am confused and am not sure which is actually implemented in the source code. May I know which formula JASP actually uses?
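One empirical reassurance while waiting for an authoritative answer: with continuous data and no ties, tau-a and tau-b coincide exactly, so the distinction only matters when ties are present. A quick check (SciPy's `kendalltau` computes tau-b by default; the tau-a implementation below is my own straightforward pair count):

```python
import itertools
import numpy as np
from scipy.stats import kendalltau

def tau_a(x, y):
    """Tau-a: (concordant - discordant) pairs divided by n(n-1)/2,
    with no correction for ties."""
    n = len(x)
    s = sum(np.sign((x[i] - x[j]) * (y[i] - y[j]))
            for i, j in itertools.combinations(range(n), 2))
    return s / (n * (n - 1) / 2)

rng = np.random.default_rng(1)
x = rng.normal(size=30)
y = 0.5 * x + rng.normal(size=30)   # continuous values, so no ties

tau_b = kendalltau(x, y).correlation  # SciPy returns tau-b
print(tau_a(x, y), tau_b)             # identical when there are no ties
```

With tied (e.g. Likert-type) data the two statistics diverge, and which one the Bayes factor is based on then matters.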

Thanks!

Best,

Darren

I am trying to run a Bayesian regression in JASP 0.9.0.0, but the program does not allow adding categorical variables to my covariates -- only continuous ones. Does anyone know why?

Thanks!

I did a survey on the importance of corporate values. Five situations were described, and then the importance of 8 different values was rated (interval scale). The aim is to assess the importance of transparency. I am very unsure which ANOVA to use. Through the ANOVA I want to find out whether the importance of transparency differs from the importance of the other company values in the individual situations.

To find out where these differences lie, I have already run a paired-samples t-test. It compares the mean of each individual company value against the mean of transparency in the different situations.
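For concreteness, the per-situation paired comparison described above can be sketched outside of JASP as follows (the ratings are made up for illustration; in JASP this corresponds to the Paired Samples T-Test, or its Bayesian counterpart):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 40                                    # hypothetical number of respondents
transparency = rng.normal(5.5, 1.0, n)    # importance ratings for transparency
other_value = rng.normal(5.0, 1.0, n)     # ratings for one other company value

# Paired comparison: the same respondents rated both values
# within the same described situation.
t, p = stats.ttest_rel(transparency, other_value)
print(f"t = {t:.2f}, p = {p:.4f}")
```

Repeating this for each of the other 7 values within each situation reproduces the set of paired tests described, though a repeated-measures ANOVA (value as a within-subject factor) would control the overall error rate better than many separate t-tests.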

It would be very nice if someone could help me :)

I'm new to JASP and I ran an ANOVA with 12 factors. For each effect I obtain a partial eta-squared similar to the eta-squared. That should not happen; what am I doing wrong?

Thank you !

Raph

I've run some Bayesian independent-samples t-tests in JASP and found that the median and 95% CI estimates change very slightly each time I run the analysis. The two figures attached are the same analysis run on the same data. Can anyone tell me why there's always a slight variation in these estimates, please?

Thanks

Chris Brydges

I am writing to gain clarity about setting prior model probabilities. In Wagenmakers et al. (2018), the authors state: "P(M) indicates prior model probabilities (which the current version of JASP sets **to be equal** across all models at hand)." However, in the current version, when I run my own Bayesian regression for the Auction data, I find that P(M) has variable values.

QUESTION: Do users have the ability to set P(M), or is it automatically generated by JASP? How were the priors set to have variable values?

Many thanks,

Caroline

```r
library(truncnorm)
library(dplyr)  # needed for bind_cols()

seed <- round(runif(10000) * 1000000)

makedata <- function(x) {
  set.seed(x)
  # Generate condition 1 values
  a1 <- rtruncnorm(500, a = 0, b = 1, mean = 0.55, sd = 0.22)
  # Generate condition 2 values
  b1 <- rtruncnorm(500, a = 0, b = 1, mean = 0.46, sd = 0.26)
  bind_cols(as.data.frame(a1), as.data.frame(b1))
}
```

The distributions of condition means appear as expected:

..and for the effect size distribution, the results are reasonable as well:

However, where it gets weird are the corresponding Bayes Factors for the mean differences. Here are the descriptives for the BF10s:

**These values seem absurd (particularly the range/variability), at least given the seemingly reasonable values of the group means and effect sizes. I'm aware that large BFs are possible with large samples/large effects, but I'm wondering whether this much variability in BFs is par for the course for Bayes factors, or whether something is going wrong with my simulations (which I'm happy to provide further details about).** I've included the rest of the (highly unoptimized) code used to generate the datasets and means/effect sizes/BFs below:

```r
library(dplyr)        # bind_cols()
library(BayesFactor)  # ttestBF()
library(effsize)      # cohen.d()
library(psych)        # describe()

datasets <- lapply(seed, makedata)  # Make datasets

# Bayesian t-tests for each dataset
BFresults <- lapply(datasets, function(x) ttestBF(x = x$a1, y = x$b1, paired = FALSE))
BFresultsdata <- lapply(BFresults, as.data.frame)    # Make them data frames
BFs <- lapply(BFresultsdata, function(x) x[["bf"]])  # Get Bayes factors
BFajrjcont1000 <- data.frame(BF10 = unlist(BFs))     # Final data frame with all Bayes factors

# Effect sizes for each dataset
effresults <- lapply(datasets, function(x) cohen.d(d = x$a1, f = x$b1, paired = FALSE))
effsizes <- lapply(effresults, function(x) x$estimate)  # Get Cohen's ds
Effajrjcont1000 <- data.frame(d = unlist(effsizes))     # Final data frame with all Cohen's ds

# Condition means
ameanresults <- sapply(datasets, function(x) mean(x$a1))  # Condition 1 means
bmeanresults <- sapply(datasets, function(x) mean(x$b1))  # Condition 2 means
meansdataajrjcont1000 <- data.frame(a1 = ameanresults, b1 = bmeanresults)

describe(BFajrjcont1000$BF10)
describe(Effajrjcont1000$d)
describe(meansdataajrjcont1000)
```

cheers,

narcilili

Can anyone help?

I just wanted to canvass the opinions of people here who know much more than I do on 2 related issues.

**Issue 1**

Say we have 2 datasets both with the same treatment and control in. We could analyse in 2 ways:

- Perform our test on Dataset 1, with a default 0.707 prior. Use the posterior as the prior to perform the test on dataset 2

Or

- Put all the data together and perform the test with the 0.707 prior.

Are these two equivalent, are they very close, or could they feasibly be very different?
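Not an authority here, but for a simple conjugate case this can be checked numerically: with a normal prior on the mean and known sampling SD, carrying the posterior from dataset 1 forward as the prior for dataset 2 reproduces the pooled Bayes factor exactly, whereas re-using the default prior on dataset 2 and multiplying the two BFs generally does not. A sketch (a normal prior standing in for the Cauchy 0.707, so the numbers are illustrative only):

```python
import numpy as np
from scipy.stats import norm

SIGMA = 1.0  # known sampling SD, for simplicity

def bf10(xbar, n, prior_mean=0.0, prior_var=0.707**2):
    """BF10 for H1: mu ~ N(prior_mean, prior_var) vs H0: mu = 0,
    given a sample mean xbar of n observations with known SD."""
    se2 = SIGMA**2 / n
    m1 = norm.pdf(xbar, prior_mean, np.sqrt(prior_var + se2))  # marginal under H1
    m0 = norm.pdf(xbar, 0.0, np.sqrt(se2))                     # likelihood under H0
    return m1 / m0

def posterior(xbar, n, prior_mean=0.0, prior_var=0.707**2):
    """Conjugate update: posterior mean and variance for mu."""
    prec = 1 / prior_var + n / SIGMA**2
    mean = (prior_mean / prior_var + n * xbar / SIGMA**2) / prec
    return mean, 1 / prec

# Two hypothetical datasets with the same treatment-control contrast
x1, n1 = 0.30, 40
x2, n2 = 0.20, 60
x_pooled = (n1 * x1 + n2 * x2) / (n1 + n2)

bf_pooled = bf10(x_pooled, n1 + n2)

# (a) posterior of dataset 1 becomes the prior for dataset 2
m, v = posterior(x1, n1)
bf_sequential = bf10(x1, n1) * bf10(x2, n2, prior_mean=m, prior_var=v)

# (b) default prior re-used on dataset 2
bf_default_twice = bf10(x1, n1) * bf10(x2, n2)

print(bf_pooled, bf_sequential, bf_default_twice)
```

So the two options listed above are equivalent only if the full posterior (not the default prior again) is carried forward to dataset 2; with the Cauchy default the same logic should hold, though JASP does not expose posterior-as-prior updating directly.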

**Issue 2**

What are the board's thoughts on using Bayesian methods for exploratory analysis, and then using the data from which the hypotheses were generated in a more confirmatory analysis after collecting more data? My opinion is that as long as you are open about how many exploratory outcomes you tested (the more tested, the more likely findings are spurious), and as long as you collect a decent chunk of data in the more confirmatory study (maybe 2-4 times as much as in the exploratory study), this should be fine. In other words, as long as you are open about what you did, the evidence can be judged accordingly. Obviously, if you look at 10,000 outcomes and only add 2x the data to the hypothesis-generating set, your results are not going to be very credible.

Specifically, in some data collected for an MA we explored 8 possible models. We found anecdotal evidence for an effect in n=51 participants. We collected another n=79 to probe this outcome further, and we found extreme evidence for the effect. I tend to believe that the effect is not spurious based on that evidence.

Best,

Gareth.

How can I display the boxes for several different variables in a single box-plot graph, rather than in several graphs, one for each of the measures?

Thanks in advance for your help.

Is it correct that the 95% CREDIBLE intervals for the mean that are calculated in Bayesian Independent Samples t-Test when Descriptives and Descriptives Plots are selected are the same as the 95% CONFIDENCE intervals for the mean?

To illustrate, in the screen capture below, SPSS output illustrating confidence intervals is on the left and JASP 0.9.2 output illustrating credible intervals is on the right. Both sets of output were calculated using the same data, obviously.

This is probably all perfectly normal, but it just seems a bit odd to me that two different statistics should be exactly the same...
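This coincidence has a known explanation, I believe: under the Jeffreys prior on (mu, sigma^2), the posterior for the mean is a shifted-and-scaled t distribution, so the 95% central credible interval is numerically identical to the classical 95% confidence interval. A quick Monte Carlo check (made-up data, not the dataset from the screenshot):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(10, 2, size=20)           # hypothetical sample
n, xbar, s = len(x), x.mean(), x.std(ddof=1)

# Classical 95% confidence interval for the mean
tcrit = stats.t.ppf(0.975, n - 1)
ci = (xbar - tcrit * s / np.sqrt(n), xbar + tcrit * s / np.sqrt(n))

# Posterior under the Jeffreys prior p(mu, sigma^2) ∝ 1/sigma^2:
# sigma^2 | x ~ (n-1) s^2 / chi2(n-1),  mu | sigma^2, x ~ N(xbar, sigma^2/n)
draws = 200_000
sigma2 = (n - 1) * s**2 / rng.chisquare(n - 1, size=draws)
mu = rng.normal(xbar, np.sqrt(sigma2 / n))
cri = np.quantile(mu, [0.025, 0.975])    # 95% central credible interval

print(ci, cri)  # the two intervals agree, up to Monte Carlo error
```

Whether JASP's descriptives use exactly this noninformative prior is an assumption on my part, but it would explain why the numbers match SPSS's confidence intervals despite the different interpretation.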

Regards, Peter Allen

Could you please help me understand why there is a NaN?

thank you

martin

I have to exclude about 30 participants because of their number of missing values. I have a variable named "exclusion" to distinguish them, but I can't make the filter work to select the participants I want (those with a value of 0).

Thanks!

Here is a screenshot from the exported .html results file.

However, there are no legend labels for the two levels of the 2nd factor. On the other hand, if I make two separate plots, the labels appear.

Thanks in advance for your help.

For a study I would like to do a hierarchical linear regression, in which in the first step I enter my control variables (e.g., age, gender, etc.) and in the second step the remaining three variables of interest. I have done this only in SPSS previously, so I was wondering how to approach this from a Bayesian perspective using JASP. Should I add my control variables to the null model and then report which model (of the remaining 3 variables) explains the data best, or add nothing to the null model and just report which model explains the data best (for example, the model including the control variables and one of the variables of interest)?

Thank you very much in advance,

Mila

I've been using JASP for a while and I think it is a really great tool.

I ran both classical and Bayesian two-way RM-ANOVAs. Using the **classical** analysis I got a main effect of one variable ("cong") but no main effect of the second variable ("go_nogo"), and no interaction.

However, when I ran the **Bayesian** analysis, I got evidence for the existence of a main effect of the second variable ("go_nogo").

As far as I understand Bayesian statistics, the pattern should be the same. I understand that an effect can disappear in a Bayesian analysis, but I'm not really sure how it is possible to find different effects.

To be sure I didn't miss anything, I ran 2 more tests:

- one-way (both classical and Bayesian) RM-ANOVA on the 'cong' variable.
- paired samples t-test (both classical and Bayesian) for the 'go_nogo' task.

The results for these 2 tests were the same (at least in pattern):

There was a difference between the levels of the 'cong' variable (F = 22.16, p < .001, BF10 = 21,142.844) and no difference in the 'go_nogo' variable (t = 1.385, p = .184, BF01 = .539).

However, the results for the two-way ANOVA (as I described before) in the Bayesian analysis:

is different from the classical one:

In addition, the "behaviour" of the Bayesian analysis also differs from the results of analyzing each variable separately (as given by the one-way ANOVA and the paired samples t-test).

I can't find any mistake in my steps, and I'd really appreciate your comments.

In any case, the jasp file (that includes the latest analysis) is attached as a zip file: https://github.com/jasp-stats/jasp-issues/files/2872991/jasp_inc_ttest_anova.zip

Thanks a lot in advance,

Ronen.

lavaan uses `inspect(fit, 'r2')` for R-squared, but the "estimates" table in JASP does not show R-squared.

Is there a problem with my model syntax, or does JASP simply not provide an output for R-squared?

Then, how can I calculate R-squared for endogenous latent variables using factor loadings (estimates) and path coefficients?
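On that last question: R-squared for an endogenous latent variable is 1 minus its residual (disturbance) variance divided by its total model-implied variance. For a simple structural equation eta = beta * xi + zeta this is just arithmetic; a sketch with made-up estimates (lavaan's `inspect(fit, 'r2')` performs the analogous calculation from the full model-implied covariance matrix):

```python
def r2_endogenous(beta, var_xi, psi):
    """R-squared for eta in eta = beta * xi + zeta:
    explained variance over total model-implied variance of eta."""
    explained = beta**2 * var_xi
    return explained / (explained + psi)

# Hypothetical standardized estimates: beta = 0.6, Var(xi) = 1, residual psi = 0.64
print(round(r2_endogenous(0.6, 1.0, 0.64), 4))  # 0.36
```

With standardized estimates (total variance 1), this reduces to R^2 = beta^2, i.e., 1 minus the standardized residual variance.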

Thank you!
