I have run a repeated-measures (RM) ANOVA for my study with 2 groups and 4 conditions. Three effects are non-significant, whereas one is significant. I have therefore decided to also apply a Bayesian RM ANOVA to the data. Since I'm new to Bayesian statistics, I'm hoping for some guidance to make sure I'm on the right track. Below is the p-value vs. Bayes factor comparison:

**My questions are:**

1. The Bayes factor for Variable 3 is extremely large compared to Variables 1 & 2. Is this unusual? What is the reason behind such a large Bayes factor?

2. Variable 4 has a significant p-value, but its Bayes factor shows no evidence. What is the explanation for this?

Thanks for your help.

N_C

Here is the output from JASP

---

I am running a Bayesian independent-samples t-test where I have specified the hypothesis that Group 2 scored higher than Group 1. The sequential analysis plot looks as attached. Initially I thought the first ten participants do not move the evidence in either direction because they are all in Group 1, but Participant 10 is in Group 2. Does this mean that the first 10 participants do not contribute to the evidence, and only the information from the second group contributes? Is that a good thing, or does this imply something may be wrong with my sample?

Kind regards

eniseg2

---

Why is the prior flat for Pearson's r at width 1, but only flat for Kendall's tau at width 2?

---

I ran a few studies with functional near-infrared spectroscopy (fNIRS), and while I think it could be a useful method, it's still early days. The results in the literature are therefore messy: everyone reports results in different ways (oxy signals, deoxy signals, both, or the difference between the two). There are also various preprocessing options, which are widely discussed with no consensus reached. Lastly, since the typical fNIRS analysis spits out beta values that are then analysed with t-tests, cognitive studies face the issue of correcting for multiple comparisons. Some people report uncorrected p-values (as if corrected p-values aren't bad enough...), but that practice seems to be dying down. The problem is that correction in fNIRS is very conservative and wipes out all or most effects. It's all a bit of a mess!

So I want to use Bayes factors to evaluate all of these issues. I can run analyses comparing the different preprocessing pipelines, analysing the various signals, and getting beta values for all of them. Of course I also get p-values, since that's (sadly) still what we are expected to report, and in some regards they might be useful in guiding my Bayes-factor analyses, keeping in mind they are uncorrected p-values...?

I'm a JASP novice/convert, and I think it would provide a useful tool here for evaluating the evidence from fNIRS data. I'm not sure my supervisor or reviewers will be okay with this, but I want to try to make the argument. Do you think that makes sense?

Thanks!

---

Hey all,

I am running a one-way RM ANOVA with one independent variable (pair type) and two covariates (T_Sc_Diff, N_Items_Diff).

I know that in ANCOVA a main assumption is that the interaction between the IV and the covariate is non-significant (homogeneity of regression slopes). Does this hold in RM ANOVA too?

Put simply: should I be worried that the within-subject interaction of PairType × T_Sc_Diff is significant (yellow box), or should I be looking at the between-subject effect, where T_Sc is non-significant (green box)?

Thanks

Yoni

---

Is there any way to do that?

Thank you

---

If it were just a simple 2-way repeated design (with no c), then the BF10 for the a×b interaction would simply come from comparing the (a + b + a×b) model against the (a + b) model.

However, we have read that the principle of marginality requires that one should not speak of a "pure" interaction effect in the absence of main effects. This always strikes me as puzzling, since a perfect 90-degree crossover interaction would yield no main effects but could very well yield an interaction in a standard ANOVA. (1. Comments?)

Assuming for now that the interaction BF10 (noted above) by itself is 2, meaning the model with the interaction is twice as likely as the matched model without it, what happens when one expands the model to include the between-groups c effect? Sebastiaan has described "Baws factor" procedures to deal with this issue, which entail dividing the sum of all the BF10s for models containing the interaction by the sum of the BF10s for models lacking it. He notes that the result is often similar to the BF10 for the interaction based just on a and b. I have observed the same thing: the Baws factor and the simple Bayes factor rarely differ by an order of magnitude. That is, they rarely differ by an amount that would change their interpretation from, say, "anecdotal" to "substantial" (Jeffreys, 1961).
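To make sure I have the arithmetic right, here is a toy sketch of that Baws-factor computation with made-up BF10 values (the model names and numbers are hypothetical, for illustration only):

```r
# Hypothetical BF10 values (each model vs. the null model).
# Any model whose name contains "a:b" includes the a-by-b interaction.
bf10 <- c("a + b + a:b"     = 2.0,
          "a + b + c + a:b" = 1.9,
          "a + b"           = 1.0,
          "a + b + c"       = 0.9)

with_int    <- grepl("a:b", names(bf10), fixed = TRUE)

# Baws factor: sum of BF10s for models with the interaction,
# divided by the sum of BF10s for models without it
baws_factor <- sum(bf10[with_int]) / sum(bf10[!with_int])
baws_factor  # close to the simple interaction BF10 of 2 in this example
```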

It therefore seems to me that one should stick with the simple BF10 for (a + b + a×b) versus (a + b), even when the design includes the between-groups c factor.

(2. Comments?)

---

How can I report effect sizes for the Bayesian repeated measures ANOVA?

Thanks

Andrew

---

Today I found out that in the recent release of JASP 0.9 there is no effect-size calculation option in the regression analysis.

I wonder if you can add that function.

Thank you.

---

We are new to Bayesian statistics and JASP, and got a result that looks weird to us - we assume that we have a problem with our interpretation.

Our design is a 3×3×2 factorial design, all within subjects.

We performed both a repeated measures ANOVA and a Bayesian repeated measures ANOVA, and the results seem to differ.

In order to estimate the interaction, we divided the BF of the simplest model that includes the interaction by the BF of the model without it.

For example, the interaction between Ron and Hermione (which was significant, p < 0.001):

BF = BF(Ron + Hermione + Ron×Hermione) / BF(Ron + Hermione) = 3.26e64 / 3.2e65 ≈ 0.1,

which we interpret as evidence against the interaction.
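Spelled out as plain arithmetic (the two BF10 values are copied from our JASP output above):

```r
# BF10 values (each model vs. the null) from the JASP model-comparison table
bf_with_interaction    <- 3.26e64   # Ron + Hermione + Ron:Hermione
bf_without_interaction <- 3.2e65    # Ron + Hermione

# Ratio of the two: the Bayes factor for adding the interaction
bf_interaction <- bf_with_interaction / bf_without_interaction
bf_interaction  # about 0.1, i.e. roughly 10:1 against including the interaction
```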

(We attached the JASP results)

This seems strange to us. Did we estimate the effect correctly, or do we need to do something else?

Thank you very much!

S&T

---

    ID       <- rep(c("1","2","3","4","5","6","7","8","9"), times = 1, each = 9)
    training <- rep(c("high","medium","low"), times = 3, each = 9)
    time     <- rep(c("day","week","month"), times = 9, each = 3)
    item     <- rep(c("i1","i2","i3","i4","i5","i6","i7","i8","i9"), times = 9, each = 1)
    score    <- c(2,2,1,2,0,0,1,0,0,2,2,1,2,0,0,0,0,1,1,2,0,1,0,0,0,0,0,2,2,2,1,1,1,0,
                  0,0,0,1,0,1,0,0,0,0,0,2,2,1,2,0,0,0,0,1,1,2,0,1,0,0,0,0,0,2,2,2,1,1,
                  1,0,0,0,2,0,2,1,2,2,2,0,0)
    my.data  <- data.frame(ID, training, time, item, score)

My dependent variable is score. Each item was a phrase, and I scored them with 0, 1 and 2 (incorrect, partially correct, and correct, respectively).

I could sum the scores and make a continuous variable ranging from 0 to 6 (0 to 32 in the real dataset). I found that each item has a different intercept. Can I use two variables as random factors with anovaBF?

I assume that "training" is categorical, but it is actually the number of repetitions subjects had before being tested, so it could also be treated as an interval variable, and then I should use the lmBF function.

Or I could leave score as 0s, 1s and 2s, but then it would be ordinal (multinomial) data, and I couldn't find a function in the package that handles that. I will appreciate any advice you can give me.
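To make the question concrete, this is the kind of call I have in mind (a sketch only; I'm not sure it is the right approach). It assumes the BayesFactor package and the toy data above, with the grouping variables converted to factors:

```r
library(BayesFactor)

# anovaBF needs factors, not character vectors
my.data$ID       <- factor(my.data$ID)
my.data$item     <- factor(my.data$item)
my.data$training <- factor(my.data$training)
my.data$time     <- factor(my.data$time)

# training and time as fixed effects; ID and item both as random factors
bf <- anovaBF(score ~ training * time + ID + item,
              data = my.data, whichRandom = c("ID", "item"))
bf
```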

---

I need to explain the extent to which alcohol use (the mediating variable) explains the associations between negative mood, impulsivity, and academic performance (the DV).

When I run the analysis, negative mood is not significant at the second of the two stages of the model (i.e., when I entered alcohol use).

How would you interpret these findings in relation to the research question?

Any help would be appreciated!

---

I was just wondering if there are any Bayesian ways to carry out such an analysis, and if so, whether they are possible in either JASP or the BayesFactor package? Would it be feasible to simply carry out Bayesian contingency-table tests between Group 1/Group 2, Group 1/Group 3, and Group 2/Group 3, or would that lead to multiple-comparison issues?
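For concreteness, one of the pairwise tests I have in mind would look something like this in the BayesFactor package (a sketch; the counts are made up):

```r
library(BayesFactor)

# Hypothetical 2 x 2 counts: rows are Group 1 vs Group 2, columns an outcome
counts <- matrix(c(30, 20,
                   15, 35),
                 nrow = 2, byrow = TRUE,
                 dimnames = list(group   = c("group1", "group2"),
                                 outcome = c("yes", "no")))

# Row margins fixed, since the group sizes are set by the design
bf <- contingencyTableBF(counts, sampleType = "indepMulti", fixedMargin = "rows")
bf
```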

Many Thanks!

---

Thanks,

marios

---

I don't know many terms, and what's also super-difficult is learning JASP's language and tools, and how they work.

I am mining data on tennis. I have a lot of data about a tennis player.

I want to find some correlations.

I am focusing on tennis players. I want to know if there is a correlation between a tennis player's victories and the number of aces made during his last 30 matches. Can you help me set this up in JASP?

What distribution should I choose -- Poisson? What should go on the abscissa (x-axis) and what on the ordinate (y-axis)? What are the variables in this case? How do I write this in statistical terms?

Also: how could I build a model that predicts what would happen if the number of aces were below a certain threshold? Thank you so much if you can answer me.

Thank you

---

I am trying to run a Bayesian regression in JASP 0.9.0.0, but the program does not allow adding categorical variables to my covariates -- only continuous ones. Does anyone know why?

Thanks!

---

I have some questions regarding JASP's Bayesian post-hoc tests for ANOVA.

**How are the priors / posteriors / BFs computed?**

Using the ToothGrowth sample data, I conducted the same t-tests in R using BayesFactor and found the BFs to be the same:

    > list("500 vs 1000"  = df %$% ttestBF(len[dose == "500"],  len[dose == "1000"]) %>% extractBF(F, T),
    +      "500 vs 2000"  = df %$% ttestBF(len[dose == "500"],  len[dose == "2000"]) %>% extractBF(F, T),
    +      "1000 vs 2000" = df %$% ttestBF(len[dose == "1000"], len[dose == "2000"]) %>% extractBF(F, T))

    $`500 vs 1000`
    [1] 81800.12

    $`500 vs 2000`
    [1] 142002125644

    $`1000 vs 2000`
    [1] 953.5515

Does this mean that posterior odds are calculated as BF × prior odds? In that case, the correction for multiple comparisons is not on the BF itself but on the posterior odds -- wouldn't we want the correction on the BFs themselves?
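To spell out the arithmetic I mean (the prior-odds number here is made up for illustration):

```r
# Bayes' rule in odds form: posterior odds = Bayes factor * prior odds
bf10       <- 81800.12   # the "500 vs 1000" comparison above
prior_odds <- 0.41       # hypothetical multiplicity-corrected prior odds
posterior_odds <- bf10 * prior_odds
posterior_odds
```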

Also, I've tried looking up Westfall, Johnson, and Utts's paper, but I still don't understand how the prior odds are calculated.

**Constrained / Restricted models vs. post-hoc tests**

Richard has previously detailed in his (old) blog how to calculate BFs for specific hypotheses using restricted/constrained models.

When should one conduct these tests as opposed to the methods used in JASP for post-hoc tests?

---

I need to perform a logistic regression with two categorical predictor variables (two levels each). I am trying to figure out whether I can use the `lmBF()` function from the `BayesFactor` package to do this. I could not find any information on this in the documentation. Bringing up `?regressionBF` in R gives me this information:

The vector of observations y is assumed to be distributed as: y ~ Normal(α·1 + Xβ, σ²I).

This suggests to me that binomial `y`s are not appropriate.

I went ahead and tried it anyway, and `lmBF` will happily fit the models and give me results; I just don't know whether they actually *mean* anything. Specifically, I compared the output of

`glmer(y ~ f1 + f2 + f1:f2+ (1|subj) + (1|item), data=data, family = binomial)`

with the output of

`lmBF(y ~ f1 + f2 + f1:f2, whichRandom = c("subj", "item"), data=data)`

and they corresponded quite closely.

I constructed a simpler example (without the random effects etc.) to test whether the outcomes converge. And they seem to:

```
library(BayesFactor)  # for lmBF() and posterior()

set.seed(3)
data <- data.frame(y  = rbinom(100, 1, .5),
                   f1 = as.factor(sample(rep(LETTERS[1:2], 50))),
                   f2 = as.factor(sample(rep(letters[1:2], 50))))
# Traditional logistic regression:
m.trad <- glm(y ~ f1 + f2, family = binomial(link = 'logit'), data = data)
# Using lmBF (terms spelled out, since BayesFactor formulas may not expand "."):
m.bf   <- lmBF(y ~ f1 + f2, data = data)
chains <- posterior(m.bf, iterations = 10000)
coeff.est <- colMeans(chains)
# Comparing parameter estimates for an observation with f1 = B and f2 = b
# Traditional glm (mapped back to the probability scale):
invlogit <- function(X) { 1 / (1 + exp(-X)) }
invlogit(sum(coef(m.trad)))
# lmBF (directly on the 0/1 response scale):
coeff.est['mu'] + coeff.est['f1-B'] + coeff.est['f2-b']
```

Can someone put my mind at ease and confirm that I am doing this right, and that `lmBF` does return meaningful parameter estimates (etc.) for binomially distributed `y`s?

Thanks a lot!

- Florian

---

So I analyzed my data (a simple two-group comparison) with JASP following both NHST and Bayesian hypothesis testing (BHT). NHST gives me a p of .047, while the BF is 1.0 (the evidence favors neither hypothesis). How can I best interpret this? I want to report both BHT and NHST in my paper. Do you have any good papers on this topic that could help me explain it to my readers (who are usually not statisticians or methodologists)?

---

I understand the objective prior, and my implementation looks identical to JASP's. But I'm not clear on the likelihood function, and the likelihood I calculate appears different from JASP's. I have tried a few; here was my closest:

`sample.likelihood = @(delta) tpdf( (sample.mean - (delta * sample.se * sqrt(sample.n))) / sample.se, sample.n - 1 );`

JASP doesn't provide any likelihood info, and the R code may as well be hieroglyphics for all I can make of it. So I can really only judge by comparing the graphs in JASP to the ones I produce (example below). It appears that my prior matches, but my posterior is slightly taller and narrower, presumably because of the likelihood calculation.

So, is my likelihood calculation off? Thanks if anyone can help.

By the way, probably aggravating this confusion is that I'm not clear what the x-axis represents in this analysis (labelled "Effect Size delta"). Am I correct in thinking this is Cohen's *d*? But since we use t distributions, I get confused about whether it relates to the *t* statistic or the delta parameter of a noncentral t distribution.
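For what it's worth, my current working assumption (not verified against the JASP/R source) is that the likelihood treats the observed t statistic as noncentral-t distributed with df = n − 1 and noncentrality δ√n, as in the Rouder et al. (2009) default one-sample t-test. In R that would be:

```r
# Likelihood of effect size delta given an observed one-sample t statistic,
# assuming t ~ noncentral t(df = n - 1, ncp = delta * sqrt(n))
likelihood <- function(delta, t, n) dt(t, df = n - 1, ncp = delta * sqrt(n))

# Hypothetical numbers: t = 2.5 from n = 20 observations
likelihood(0.5, t = 2.5, n = 20)
```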

---

I'm interested in using Bayesian linear regression and am trying to understand how to correctly interpret BF10, the 95% credible intervals, and the model comparisons when I add predictors to the null model. I have been reading Rouder and Morey (2012; as referred to in other posts on this forum) for an example of how to write up a Bayesian linear regression, and testing the covariates as described there.

In the attached example, the best model contains all three predictors, but the 95% credible interval for one of them, IQ, includes zero. When I compare each predictor against a null model containing the other two, the Bayes factors indicate very strong evidence for openness as a predictor, but only anecdotal evidence for IQ and leisq1_sumfreq.

I'm not sure how to interpret this when the best model includes all three predictors. Rouder & Morey say that testing covariates this way doesn't allow for correlations between them, and there are some significant correlations between my predictors (openness and IQ, Pearson's r = 0.12; openness and leisq1_sumfreq, Pearson's r = 0.39). Should I therefore ignore the results of the covariate testing and focus on the main model comparisons? And how should I treat a 95% credible interval that includes zero?

Thanks in advance for any help you can give

Sarah

---

I'm a bit new to Bayesian statistics, but I was wondering why the posterior mean for a predictor variable (in a Bayesian linear regression) has the same value as the unstandardized beta (in the frequentist linear regression) for that same predictor?

My second question is about the plot of the posterior distribution. A small horizontal line, with a value on its left and on its right, is shown above the posterior distribution (not the grey line). I was wondering what these two values represent. They are not the same as the credible interval. My guess is that they mark 2 standard deviations above and below the posterior mean of that variable. Is that correct, or do they represent something different?

Hope you can give me an answer to these questions.

If you would like me to add the relevant output, let me know!

---

I do have a JASP question that I hope someone can help me with.

I have performed a 2

---

Thanks to all for the dedication and creativity applied to this project. It seems I cannot load any data set without a near-immediate crash. The error report gives no code or status.

Thanks in advance for weighing in on the issue. I'm running 64-bit Windows 10.

-Kris

---

Group A's plot renders the Y axis as "Density".

Group B's plot renders the Y axis as "Frequency".

Is this a bug?

---

Why is that? Can someone explain what might be going on? Many thanks!

---

https://forum.qt.io/topic/65435/qt-qml-renderig-bad-quality-windows/3

The bad rendering occurs only in the data window; the tools window and the reports window work fine.

With JASP 0.8.6 everything works fine.

I would like to try to solve the problem (as suggested in the Qt forum) by adding the line `QCoreApplication::setAttribute( Qt::AA_UseSoftwareOpenGL );` (an option for Windows, to be set before instantiating QApplication) to main.cpp, but I don't know in which file of the JASP sources I should add it.

I'm not a programmer and I don't know much about this.

Could anyone help me please?

Thank You very much

Oliviero