I have conducted a repeated-measures Bayesian ANOVA and stated the default prior (r scale fixed effects = 0.5; r scale random effects = 1). I have also cited and read the appropriate articles (e.g., Rouder et al., 2012). But the reviewer wants a specific justification. I do not expect big effects from the experimental condition when I compare the models.

Any idea how to formulate this justification without going into the math? The point is that it is a default prior that should fit most cases in experimental psychology. But the reviewer wants more than that.

Best regards,

Ester

I have been dipping my toes into Bayesian statistics and have seen some strange anomalies between the normal and Bayesian regression coefficients (see attached file). This is a very simple regression to predict kicking distance from right-leg strength; both methods show that the model is good, but some of the coefficients differ by an order of magnitude.

Linear regression intercept = 57.1; Bayesian intercept = 486.1 (the same as the mean of the outcome variable!)

Linear regression R_strength = 6.425; Bayesian = 5.497

Using these in a simple linear prediction equation, y = b0 + b1*x, the normal coefficients are fine, but the predictions are way out using the Bayesian coefficients.
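One plausible explanation (an inference from the fact that the Bayesian intercept equals the outcome mean, not something confirmed from the attached file): the Bayesian routine centers the predictor before fitting, so its intercept is the expected outcome at the mean of strength, and predictions must use y = b0 + b1*(x - mean(x)). A sketch with made-up data:

```python
import random, statistics

random.seed(1)
# toy data loosely mimicking the kicking example (numbers are made up)
strength = [random.uniform(40, 80) for _ in range(30)]
distance = [57.1 + 6.4 * s + random.gauss(0, 10) for s in strength]

def ols(x, y):
    """Simple least-squares fit returning (intercept, slope)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope

b0, b1 = ols(strength, distance)                      # raw predictor
mx = statistics.fmean(strength)
b0c, b1c = ols([s - mx for s in strength], distance)  # centered predictor

# b0c equals mean(distance) while the slope is unchanged, so predictions
# from the centered fit need y = b0c + b1c * (x - mean(x))
```

The slope is identical either way; only the intercept changes its meaning under centering.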

Am I interpreting this incorrectly?

Cheers

Mark

I was trying to plot a BF object and I got this error

plot(myanova)

Error in seq.default(floor(rng[1]), ceiling(rng[2]), 1) :

'from' must be a finite number

Could anyone point me in the right direction to solve this?

Thanks.

I'm new to Bayesian statistics and JASP, and have a question about how to interpret the output from a multiple linear regression. Looking at the BF10, the model with openness as the only predictor is the strongest.

What does the BFM represent? There are three models with BFM > 3, so I am wondering whether I need to interpret this, and if so, how to do it. When I add the other predictors to the null model, the model with openness has a BF10 = 225, but the models with q1sum and q2sum each have BF10s < 1.
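For what it's worth, BFM in JASP's model-comparison table is the change from prior to posterior model odds for each model. A minimal sketch (the 0.2 and 0.6 probabilities below are made up for illustration):

```python
def bf_m(prior_p, posterior_p):
    """Change from prior to posterior model odds for a single model."""
    prior_odds = prior_p / (1 - prior_p)
    posterior_odds = posterior_p / (1 - posterior_p)
    return posterior_odds / prior_odds

# e.g., 5 candidate models with equal prior probability 0.2; suppose one
# model reaches posterior probability 0.6 after seeing the data
bfm = bf_m(0.2, 0.6)  # ~6: the data raised this model's odds sixfold
```

So BFM > 3 just means the data made that model several times more plausible than it was a priori, relative to all competitors.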

Thanks,

Sarah

I have a document in which a Pearson correlation is reported, but I am interested in the corresponding BFs. However, I do not have access to the data file, so I cannot reanalyze it in JASP (or R, or whatever). Do you have any suggestions on how to assess the BFs in such cases?
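One option worth noting: a Bayes factor can be approximated from the reported r and n alone via the BIC approximation (Wagenmakers, 2007). This is a unit-information-prior approximation, not the default JZS Bayes factor JASP would report, so treat it only as a rough cross-check:

```python
import math

def bf10_from_r(r, n):
    """BIC approximation to the Bayes factor for a Pearson correlation
    (Wagenmakers, 2007): BF10 ~= (1 - r^2)^(-n/2) / sqrt(n)."""
    return (1 - r ** 2) ** (-n / 2) / math.sqrt(n)

strong = bf10_from_r(0.5, 50)   # large r, BF10 well above 1
weak = bf10_from_r(0.05, 50)    # tiny r, BF10 below 1 (favors the null)
```

With only a summary statistic available, an approximation like this (or a tool that accepts summary statistics directly) is about the best one can do.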

Many thanks in advance

Mila

I am writing to ask about network analysis in JASP. My research group and I have performed a behavioral study collecting data from three groups of 50 participants (150 in total). Our data consist of accuracy and response-time values, which are not normally distributed. What type of estimator do you think is best to use for our analysis? Is it possible to compare the network plots?

Many thanks in advance,

Elisa

My ultimate goal is to understand how the scale parameter impacts the prior probability allocated to various standardized effect sizes.

Here's my issue: my math (which is probably wrong) suggests a different relationship between the scales for ttestBF, anovaBF, and generalTestBF. Here's Richard's tweet:

"if X is the t-test prior scale, then X/sqrt(2) is the ANOVA scale, and X/2 is the corresponding regression scale."

However, here are the scales that I get:

1) t-test scale: (m1-m2)/sigma | g ~ Normal(0, g)

2) ANOVA scale: (m1-m2)/sigma | g ~ Normal(0, 2g)

3) generalTest scale: (m1-m2)/sigma | g ~ Normal(0, 4g)

Here's my math (assume equal cell sizes throughout).

1) t-test follows directly from definition of t-test with prior defined on effect size.

2) ANOVA codes the two factor levels as +sqrt(2)/2 and -sqrt(2)/2.

The difference in means is therefore: m1 - m2 = (alpha + betaANOVA * sqrt(2)/2) - (alpha - betaANOVA * sqrt(2)/2) = sqrt(2)*betaANOVA. So betaANOVA = (m1-m2)/sqrt(2). The ANOVA parametrization is:

betaANOVA | g ~ Normal(0, sigma^2 * g)

therefore, (m1-m2)/sigma | g ~ Normal(0, 2g) since betaANOVA*sqrt(2) / sigma | g ~ Normal(0, 2g)

3) Suppose I manually code as (+1, -1). Recall, the regression approach doubly normalizes the effect sizes.

m1 - m2 = (alpha + betaGT) - (alpha - betaGT) = 2*betaGT

so betaGT = (m1 - m2)/2

betaGT | g ~ Normal(0, sigma^2 * (XtX)^-1 * N * g)

With equal cell size,

betaGT | g ~ Normal(0, sigma^2 * g)

(m1-m2)/sigma | g ~ Normal(0, 4 g) since (2 betaGT)/sigma | g ~ Normal(0, 4 g)

Of course, the BayesFactor package's output won't tie to my equations, because it is programmed to take into account the relationship between the scales the way that Richard has defined them. It seems like with Richard's approach there are terms that should be squared but are not (since Var(a*x) = a^2 * Var(x)).

What am I doing wrong? Please help!
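One way to reconcile the two statements (a sketch, not an official answer): Richard's factors act on the *scale* while the derivation above acts on the *variance*, and scale^2 = variance. If delta | g ~ Normal(0, c*g) with g ~ InverseGamma(1/2, r^2/2), the marginal on delta is Cauchy(0, sqrt(c)*r); so Normal(0, 2g) with scale r/sqrt(2), and Normal(0, 4g) with scale r/2, both collapse to the same Cauchy(0, r) as the t-test. A Monte Carlo check of the first pair:

```python
import random, statistics

random.seed(7)

def sample_delta(scale, c, n=200_000):
    """Draw delta where delta | g ~ Normal(0, c*g), g ~ InvGamma(1/2, scale^2/2).
    Using g = scale^2 / Z^2 for Z ~ N(0,1), delta = sqrt(c)*scale*W/|Z| with
    W ~ N(0,1), which is marginally Cauchy(0, sqrt(c)*scale)."""
    draws = []
    for _ in range(n):
        w, z = random.gauss(0, 1), random.gauss(0, 1)
        draws.append((c ** 0.5) * scale * w / abs(z))
    return draws

r_t = 2 ** 0.5 / 2            # t-test scale: delta | g ~ Normal(0, g)
r_a = r_t / 2 ** 0.5          # "corresponding" ANOVA scale, per the tweet

d_t = sample_delta(r_t, 1)    # t-test parametrization
d_a = sample_delta(r_a, 2)    # ANOVA parametrization: Normal(0, 2g)

# the median of |delta| estimates the implied Cauchy scale; both match r_t
m_t = statistics.median(abs(d) for d in d_t)
m_a = statistics.median(abs(d) for d in d_a)
```

The same check with c = 4 and scale r_t/2 reproduces the generalTest case, so no squares are missing once scales and variances are kept apart.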

I'm a bit confused about the rscaleFixed value that is applied to an interaction. The interaction is coded as the product of the two main-effect columns; the main effects are coded as sqrt(2)/2, which implies the interaction is coded as 1/2. Does that mean that, for a fixed rscaleFixed value entered into the BayesFactor package, different scales are applied to the main effects and the interactions?

I know that this isn't the case when whichModels is set to "bottom", because the BayesFactor package defaults to a t-test in this case (so the difference in coding doesn't matter). But what if whichModels is set to "top"?

I've wrapped some custom functions I use in R alongside `BayesFactor` into a mini-package that you can find here>>.

This (for now) includes two functions:

1. `inclusionBF`: computes inclusion BFs (an R implementation of what can be found in JASP; produces identical results).

2. `restrict`: computes BFs for restricted models. See Richard's excellent post on the subject here>>.

Hope you find this useful!

(from www.jasp-stats.org)

But when I compare mean values calculated by clicking Descriptives and by using rmANOVA -> Descriptives, I find that those values are not the same.

The Descriptives values are identical to values calculated by Excel's AVERAGE function.

Can anybody explain this?

Descriptives

rmANOVA -> Descriptives

Thanks again for this marvelous software !

Can you tell me if I am right or wrong before submitting something? :)

I want to examine potential differences between groups on a dependent variable (LTPA) while controlling for another variable (AP_GLTQ) through an ANCOVA.

Here are the results.

Is it correct to say: "Bayesian ANCOVA indicated that the data were approximately 12 times more likely to be observed under the null hypothesis than the alternative (BF01 = 11.569), which indicates strong evidence in favor of the null hypothesis"?

Or, as I am interested in the "GROUPE" effect (controlling for AP_GLTQ), should I report the BF01 = 5.189?

Thanks in advance for your help

I have 2 independent variables, A and B. I have done a Bayesian ANOVA to compare the following models:

1. A vs Null
2. B vs Null
3. A + B vs Null
4. A + B + A*B vs Null

So far so good! But I also need to compare the following:

5. A + B + A*B vs A + B
6. A + B vs B
7. A + B vs A

I am assured this is easily achievable in JASP! I tried adding variables to the null hypothesis under the 'Model' tab, but I'm unsure this is the correct way. I can obtain Bayes factors using this method for (6) and (7), but adding A, B, and A*B to the null model for (5) produces an error about nuisance variables.
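In case it helps while waiting for a JASP-specific answer: because every BF10 in the table is computed against the same null, the comparisons in (5)-(7) can be obtained simply by dividing BF10 values. A sketch with placeholder numbers (not H.'s actual results):

```python
# Bayes factors against a shared null are transitive: the BF for one model
# over another is the ratio of their BF10 values (numbers below are made up)
bf10 = {"A": 4.2, "B": 1.3, "A + B": 6.0, "A + B + A*B": 18.0}

bf_interaction = bf10["A + B + A*B"] / bf10["A + B"]  # (5) full vs. additive
bf_a_given_b = bf10["A + B"] / bf10["B"]              # (6) A + B vs. B
bf_b_given_a = bf10["A + B"] / bf10["A"]              # (7) A + B vs. A
```

So no re-fitting is needed: the ratios can be read straight off the default model-comparison table.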

Any pointers greatly appreciated!

H.

I don't see a function to output individual factor scores from a factor analysis. Is this possible?

TIA

ftr

I have administered an attachment test to the two partners of married couples and of couples in LTR; this test yields four attachment styles (nominal scale). I wish to test:

1. Whether there is an association between the attachment styles of the two partners. I guess I will do this by calculating a contingency coefficient (or Cramer's V) for the 4x4 table (rows: attachment styles for partner A; columns: attachment styles for partner B). But how can I test these coefficients for statistical significance?

2. Whether the association is the same for married partners and for partners in LTRs. I guess I will do as in (1), but include the relationship status as a layer. But how can I compare the contingency coefficients with each other (the way a Fisher z-test would for Pearson r coefficients)?

As for the rest, congratulations on this amazing program! A fresh way of (studying again and) doing statistics!

For example, I have a dataset that has gender and income data, as well as the more pertinent info. Can I run a correlation or even just the descriptives where it splits the data into gender groupings, and then those groupings into income bands before calculating?

Does this make sense?

Thanks!

I'm a psychologist and statistics isn't my best field, so I'm looking for advice.

Let's say a t-test gave me a result saying that men and women do not significantly differ in their level of marriage satisfaction (p > .05). If I get this result with a normal t-test analysis, can I enrich my results chapter with a Bayesian t-test showing whether there is more evidence towards the null hypothesis or the alternative hypothesis?

I'm relatively new to JASP and Bayesian statistics. Is there a way to do a Bayesian hierarchical regression?

Kind regards

Mvs

I am teaching a graduate research-and-design course and I am trying to teach my students parametric and non-parametric tests. When I click on the ANOVA tab, I don't see an option for a Kruskal-Wallis test. Does anyone know if JASP is capable of running a Kruskal-Wallis test? Also, I am wondering how to run a multivariate ANOVA (MANOVA) for multiple dependent variables, as opposed to a multi-way ANOVA for multiple independent variables. Any assistance would be greatly appreciated.

Thanks.

This is the first time I am analyzing data with JASP. Thank you very much for this intuitive tool! I have a question concerning Bayesian linear regression. In an exploratory fashion, I computed 3 hierarchical linear regressions: as predictor variables, I included all four subscales of my construct of interest (S1-S4), and I have 3 different DVs (DV1, DV2, DV3).

For DV1, I suppose that the models S2+S4, S3+S4, or S2+S3+S4 work best, as they show the highest BF10 (still, there is no big difference between the three models). But concerning DV2, I am a little bit lost. First, there were no supported correlations (in preliminary analyses) between DV2 and the four subscales. Still, I computed the regression, which revealed the following output. Now I am wondering how to interpret the output. Can somebody help me out?

Thank you very much!

Alexa

If I run a classical RM ANOVA in SPSS, it produces a table for Mauchly's test of sphericity by default, allowing one to check the assumption and, where it is violated, to apply Greenhouse-Geisser corrections, for instance. When I run a Bayesian RM ANOVA in JASP, this assumption check is not displayed in the output and there is no option for it. My question is: does Bayesian RM ANOVA make an assumption of sphericity? If it does, how can it be tested? Secondly, does it make assumptions of normality? It would be useful to point out what assumptions there are for using Bayesian tests (e.g., ANOVA, t-test, etc.). My colleagues and students have asked me, and I have no answer for them, nor anywhere to refer them to. I truly love JASP and have been encouraging them to use Bayesian statistics as opposed to frequentist statistics.

Thanks for your response.

Tom

I am very new to Bayesian statistics and have some questions about how to interpret the results of a three-way mixed Bayesian ANOVA (including the factors TIME, GROUP, and TASK), in particular in direct comparison to an "orthodox" 3-way mixed ANOVA.

Since my results from an "orthodox" 3-way mixed ANOVA revealed significant as well as non-significant main effects and interactions, I have been asked by reviewers to perform a Bayesian repeated-measures ANOVA. To do so I used JASP, which is a very nice and intuitive tool. Thanks to the developers!

The results of the 3-way mixed ANOVA revealed a significant main effect of TIME (p < .001), but no significant main effect of GROUP or TASK (p > .05). Further, a significant three-way interaction emerged between TIME * GROUP * TASK (p = .024), as well as a significant two-way interaction between TASK * GROUP (p = .005). All other interactions were not significant (p > .05).

However, the Bayesian repeated-measures ANOVA revealed a slightly different result, as indicated in the following table:

JASP Team (2018). JASP (Version 0.8.6) [Computer software].

Am I right that the BFInclusion indicates decisive evidence for HA for the factor TIME (6.846e+40) as well as for the two-way interaction TIME * TASK (1.064e+6), but that the small BFInclusion for the three-way interaction (0.142) indicates substantial evidence for H0 (and conversely substantial evidence against HA)?

And if my description is right, how do I report and interpret this conflicting result (a significant 3-way interaction vs. substantial evidence against that 3-way interaction, as indicated by BFInclusion = 0.142)?
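For reference, the BFInclusion JASP reports is the change from prior to posterior inclusion odds, where the inclusion probability sums over all models containing the effect. A toy sketch (the model space and probabilities are made up, not Nils's results):

```python
def inclusion_bf(models, effect):
    """Posterior inclusion odds divided by prior inclusion odds, summing
    model probabilities over every model that contains the effect."""
    prior = sum(m["prior"] for m in models if effect in m["terms"])
    post = sum(m["post"] for m in models if effect in m["terms"])
    return (post / (1 - post)) / (prior / (1 - prior))

# illustrative two-factor model space with equal prior model probabilities
models = [
    {"terms": set(), "prior": 0.25, "post": 0.05},
    {"terms": {"TIME"}, "prior": 0.25, "post": 0.60},
    {"terms": {"TASK"}, "prior": 0.25, "post": 0.05},
    {"terms": {"TIME", "TASK"}, "prior": 0.25, "post": 0.30},
]

bf_incl_time = inclusion_bf(models, "TIME")  # ~9: posterior 0.90 vs prior 0.50
```

An inclusion BF below 1, as for the three-way interaction above, means the data lowered the summed probability of the models containing that term.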

It would be great if someone could help me!

Cheers

Nils

I've just started to look at introducing JASP to my undergraduate students (Psychology, Uni of York, UK) as a companion to SPSS. The interface and output are both so much more usable.

One thing that took me a while to find was how to check normality. I found it via the t-test, which works fine, but it is a bit odd when I can get the plots from the Descriptive Stats menu. It would be nice to be able to check that box in the same place (DS) and get a Shapiro-Wilk test at the same time.

Cheers for an awesome package.

Rob

I would like to ensure that I correctly interpret and report the results of a one-way Bayesian ANOVA (independent samples, not repeated measures). I attach the output of my analysis (using JASP and SPSS). My questions are:

1. If I read the output correctly, in JASP I get a Bayes factor (BF10) of 0.175. Given that it is < 0.33, can I say that this supports H0?

2. When I report my JASP analysis, I say that the priors were based on a Cauchy distribution. This is what I understood from this paper: https://link.springer.com/article/10.3758/s13423-017-1323-7 Would that be a correct statement?

3. For comparison, I ran the same analysis in SPSS (Jeffreys-Zellner-Siow method). In SPSS the Bayes factor is 0.022. Is the discrepancy between the two programs because of different priors?
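On question 1: BF10 and BF01 are reciprocals, so a BF10 of 0.175 can equivalently be reported as evidence for H0 (a trivial conversion, shown for completeness):

```python
bf10 = 0.175
bf01 = 1 / bf10  # ~5.7: the data are about 5.7 times more likely under H0
```

Reporting the BF01 form is often clearer when the evidence favors the null.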

Thank you very much,

Vadim

To me, JASP now covers pretty much everything; the only important test I see still missing is a non-parametric one-way ANOVA. Could you please add a Friedman ANOVA? That would be very much appreciated. Thanks!

Mat

Would it be possible to know how the R-squared for Bayesian linear regression is calculated in JASP?

Can I use the R-squared for a Bayesian linear regression to compute an effect size or should I stick to the traditional R-squared?
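I can't speak to JASP's exact implementation (the JASP team would have to confirm), but one widely used definition for Bayesian regression is the posterior-draw R-squared of Gelman, Goodrich, Gabry, and Vehtari (2019): the variance of the fitted values divided by that variance plus the residual variance, computed per posterior draw. A sketch with made-up numbers for a single draw:

```python
import statistics

# fitted values and residuals from one (hypothetical) posterior draw
fitted = [3.1, 4.0, 5.2, 6.1, 7.0]
residuals = [0.4, -0.3, 0.2, -0.5, 0.1]

var_fit = statistics.pvariance(fitted)
var_res = statistics.pvariance(residuals)

# bounded in (0, 1) by construction, unlike a per-draw classical R-squared
r2_bayes = var_fit / (var_fit + var_res)
```

Averaging this quantity over draws gives a posterior distribution for R-squared rather than a single point estimate.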

Thank you very much,

Best,

Sophie

I was wondering if it is possible for JASP to perform a Bayesian linear regression with repeated measures. If so, how do you recommend performing this analysis?

I am new to Bayesian analysis, but I am hoping to use JASP for some analyses of contingency tables. Jamil et al. (2017) has been really helpful with this; however, all the examples I can find are for 2x2 contingency tables. Is it still possible to use the Bayesian contingency-tables option in JASP with tables that are 2x3 or larger?

Thank you for your help, and for this great piece of software!

M
