Does that mean the p-value was so small that JASP rounded it down to 0?

Or should I be concerned about something?

I'm trying to run a generalized linear mixed model with 5 fixed variables (one of them a scale score, the rest nominal), one random nominal variable, and a binary outcome variable (see below). Last time I ran a similar model, it took all night to calculate; this time, however, it just keeps "thinking" after calculating the first variable (see below). I tried several different computers and let it run overnight. Any ideas, please?

"Errors were detected in 9 analyses. These analyses have been removed for the following reasons:

1: Module "jaspRegression" had a problem: Module is not available, to load this JASP file properly you will need to install it first and then retry."

Is there any way I can recover this analysis?

Thank you

This message keeps popping up when I go to Network Analysis - Graphical Options - Group Name and then match the groups with variables :) ... How can I fix this one?

Even though I give names to the group variables, they just don't appear in my results.

How can I fix it?

The data were collected from a survey in which the participants were given 4 options, as illustrated in Table 1. For the first question, every option was selected by one or more respondents, so the contingency table looks right and I believe the data were analysed correctly.

However, in the next question only 2 of the 4 options were selected across all participants, so 2 were selected by no one. The contingency table produced doesn't even display the options that were not selected, so I worry that the test was run incorrectly and the results are skewed. How can I let JASP know that there should be a total of 4 options on the horizontal axis?
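In JASP the usual fix is to make sure all four options stay defined as levels of the nominal variable (via the label editor), so that empty categories still appear in the contingency table and in any expected counts. The underlying idea, sketched in Python/pandas with made-up responses:

```python
import pandas as pd

# Hypothetical survey responses: only options A and B were ever chosen
answers = pd.Series(["A", "B", "A", "B"],
                    dtype=pd.CategoricalDtype(["A", "B", "C", "D"]))

# Because the dtype declares all four options, the unchosen ones
# appear with a count of 0 instead of vanishing from the table
counts = answers.value_counts(sort=False)
```

The same principle applies in JASP: as long as the unused options exist as levels of the variable, they show up as zero-count cells rather than disappearing.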

1. Is it appropriate to use JASP to conduct Bayesian linear regression on non-stationary (it cannot be transformed), small (N = 21) time series data? The purpose of the estimation is to see the possible impact of a certain exogenous variable on the dependent one. Furthermore, I have included three additional variables (also potentially related to the dependent variable) for robustness.

2. If yes, how should I choose the priors and model priors in the setup section? (For now, I'm leaning towards AIC with beta-binomial(a = 1, b = 1), as I've found information suggesting that AIC helps select a model with enough complexity to capture the data patterns in a non-stationary time series, and the beta-binomial accounts for model size, on the assumption that the whole combination of variables can explain the dependent one more robustly.)

Thanks in advance!

I am new here and new to Bayesian statistics - however, I'm loving JASP!

Here is my problem:

I am looking at a student's dataset and have run the Bayesian Paired T-test (from the JASP default menu and settings), and I get a BF10 of 6402.23 (frequentist output, if of help: t = -11.72, sample size = 8).

However, in the Summary Statistics module, inputting the results from the frequentist output (t = -11.72, sample size = 8, with the same default prior), the BF10 is 2413.02.

And again, if I run this in R (I am new to R) with the same frequentist inputs (t = -11.72, sample size = 8, the same default prior), the BF10 is 7.78.

Can anyone help?

(I know there are limitations with small sample sizes - here, n=8; however, it's more for teaching purposes. Perhaps this is the issue?)
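For cross-checking the three numbers, the default two-sided JZS Bayes factor can be computed directly from t and n by integrating the t-likelihood over the Cauchy prior on effect size (Rouder et al., 2009). A sketch in Python, assuming the paired/one-sample design and the default prior scale r = 0.707; if one of your tools uses a one-sided prior or a different scale, its number will differ:

```python
import math
from scipy import integrate

def jzs_bf10(t, n, r=0.707):
    """Two-sided JZS Bayes factor for a one-sample / paired t-test."""
    df = n - 1
    # likelihood of the data under H0 (up to a constant shared with H1)
    null_like = (1 + t**2 / df) ** (-(df + 1) / 2)

    def integrand(g):
        k = 1 + n * g * r**2
        # t-likelihood given g, weighted by the inverse-gamma(1/2, 1/2)
        # mixing density that makes the effect-size prior a Cauchy(0, r)
        return (k**-0.5
                * (1 + t**2 / (k * df)) ** (-(df + 1) / 2)
                * (2 * math.pi) ** -0.5 * g**-1.5 * math.exp(-1 / (2 * g)))

    alt_like, _ = integrate.quad(integrand, 0, math.inf)
    return alt_like / null_like
```

Comparing jzs_bf10(-11.72, 8) against the three outputs should reveal which tool is using which prior; the sign of t does not matter here, because only t² enters the formulas.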

Many thanks,

Jon

I would like to create a trajectory plot with multiple variables as boxplots. Is that possible in JASP?

I'm only able to create boxplots for each variable separately, but I want them to be in the same diagram:

x axis: variables (e.g. v192, v191 ...) [equal to time in hours]

y axis: boxplot with 'mean' and 'sd'

Better explanation in the following picture!

Thanks in advance for your help :)

Sebastian

I'm working on my undergraduate dissertation with eye-tracking data, and I'd like to analyse the following gaze data (fixation duration) with a linear mixed model in JASP.

But I'm not sure whether its residuals are (approximately) normally distributed, based on the Q-Q plots. Since I have finished the initial data cleaning, I'm not sure whether I should exclude further data. Any advice on this would be very much appreciated.

Thanks in advance for your help!

]]>I am completely new to code and structural equation modelling.

I am investigating relationships between mindfulness, psychological capital, stress and several workplace outcomes (engagement, burnout, turnover and job satisfaction)

This is the code I am trying to use:

Mind =~ Q1_1 + Q1_2 + Q1_3 + Q1_4 + Q1_5 + Q1_6 + Q1_7 + Q1_8 + Q1_9 + Q1_10 + Q1_11 + Q1_12 + Q1_13 + Q1_14 + Q1_15

Psycap =~ Q2_1 + Q2_2 + Q2_3 + Q2_4 + Q2_5 + Q2_6 + Q2_7 + Q2_8 + Q2_9 + Q2_10 + Q2_11 + Q2_12 + Q2_1_A + Q2_2_A + Q2_3_A + Q2_4_A + Q2_5_A + Q2_6_A + Q2_7_A + Q2_8_A + Q2_9_A + Q2_10_A + Q2_11_A + Q2_12_A

Stress =~ Q3_1 + Q3_2 + Q3_3 + Q3_4 + Q3_5 + Q3_6 + Q3_7 + Q3_8

Eng =~ Q5_1 + Q5_2 + Q5_3 + Q5_4 + Q5_5 + Q5_6 + Q5_7 + Q5_8 + Q5_9

# regressions

Eng ~ Psycap + Stress + Mind

Psycap ~ Mind

I am getting the following warning message:

lavaan ERROR: unordered factor(s) detected; make them numeric or ordered

Could you please help me with this?

Thanks a lot!

Is there some information here or is it just random?

For me, their vertical ordering is just as I predict them to be in a pathway. But perhaps this is just happenstance?

I'm conducting a moderation analysis with risk as the predictor, change as the moderator, and ethics as the outcome variable for four different scenarios (testing this moderation effect of change four times, corresponding to four levels of risk and ethics).

I have two questions:

1. The H1 model includes the predictor, the moderator, and the interaction (predictor*moderator); to obtain the R-squared change and report the interaction effect, should the null model (H0) include the predictor and the moderator, or only the predictor?

2. How should I address the possible Type I error inflation resulting from conducting the moderation analysis four times? Each analysis tests the relationship between risk, change, and ethics within one scenario, with responses provided by the same participants across all four scenarios.
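For what it's worth, here is the bookkeeping the two questions imply, sketched in Python with made-up numbers: an H0 containing both the predictor and the moderator, so that the R-squared change isolates the interaction, plus a plain Bonferroni adjustment for the four analyses.

```python
# Hypothetical R-squared values from the two hierarchical models
r2_h0 = 0.20   # predictor + moderator
r2_h1 = 0.26   # predictor + moderator + interaction
r2_change = r2_h1 - r2_h0   # variance uniquely explained by the interaction

# Simple Bonferroni adjustment for running the analysis four times
alpha_adjusted = 0.05 / 4
```

Bonferroni is only one option; whether a correction is needed at all for four planned, theory-driven tests is itself a judgment call worth discussing in the write-up.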

Could you please suggest a video or other learning material for moderation in JASP?

Thank you,

I'm interested in conducting a Bayesian regression analysis. I have various continuous predictors, which I can add as "Covariates". However, I would also like to add Gender, which should be a factor if I'm correct (it's a nominal variable). In the regular regression I can do this, but not in the Bayesian regression.

Hence, I started programming it in R. However, here too I did not succeed in adding Gender as a factor, haha. I hope someone can help :).

I used the 'bas.lm' function to analyze the posterior probabilities. The Italic items are "covariates"...

**BLR <- bas.lm(RT.CIT.effect ~ First.LSRP.score + Second.LSRP.score + BIS.11.score + nogo.errors, data = data, prior = "JZS", modelprior = beta.binomial(1,1), method = "BAS", alpha = 0.125316)**

And this is what I wrote to calculate the BF inclusion

**pip_vector <- c(0.1729, 0.1547, 0.2247, 0.2250)**

**prior_odds_inclusion <- 1**

**prior_odds_exclusion <- 1**

**bf_inclusion <- pip_vector / (1 - pip_vector) * (prior_odds_inclusion / prior_odds_exclusion)**
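Assuming a beta-binomial(1, 1) model prior (under which each predictor's marginal prior inclusion probability is 0.5, so the prior inclusion odds are indeed 1), the inclusion-BF arithmetic can be cross-checked like this:

```python
# Posterior inclusion probabilities reported by bas.lm for the four predictors
pips = [0.1729, 0.1547, 0.2247, 0.2250]

# Under a beta-binomial(1, 1) model prior, the marginal prior inclusion
# probability of each predictor is 0.5, so the prior inclusion odds are 1
prior_odds = 0.5 / (1 - 0.5)

# Inclusion BF = posterior inclusion odds / prior inclusion odds
bf_inclusion = [(p / (1 - p)) / prior_odds for p in pips]
```

With these PIPs every inclusion BF comes out below 1, i.e., the data would favour excluding each predictor.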

All help will be greatly appreciated!

Nathalie

For example, I have a Group IV with two levels and a Condition IV with two levels, and one DV.

I can only get descriptives using "Split" for one IV at a time, for example for the Group IV. If I want the "cell" information for Group-by-Condition (e.g., M/SD/IQR/Shapiro-Wilk), I have to filter the data at one level of Condition, copy the result to Word, and then repeat for the other level of Condition.

Is there a workaround, or is any work being done on an option so that I do not need to filter at one level of one IV?

Thanks,

paul

The "mode" should be translated as "modalna" or "dominanta", not "tryb".

Is there a way I could pass it to the translation team, or correct the translation myself?

Regards,

Grzegorz

My question relates to multiple comparisons and sample size planning. As I understand it, when we have multiple comparisons we can adjust the prior probability of H0 depending on the number of comparisons. For example, if one has five comparisons, the prior probability of H0 = (1/2)^(2/5) = 0.758 and the prior probability of H1 = 1 - 0.758 = 0.242. Then the corrected posterior odds P(H1|y)/P(H0|y) can be reported as evidence for H1 relative to H0, obtained by multiplying the uncorrected BF10 by the corrected prior odds (= 0.242/0.758).

Now I am designing an experiment with an "open-ended sequential BF design" and multiple comparisons. For such a design, one needs to set thresholds for the evidence for H0 and H1 at which data collection stops; for example, we stop when BF10 exceeds 3 or drops below 0.3. But with multiple comparisons, should these thresholds be corrected?

If so, increasing the prior probability of H0 makes testing more conservative. That is problematic, because evidence for H0 becomes easier to obtain as the number of comparisons increases. So do we need to judge the hypotheses by two values, one based on correcting the prior probability of H0 and the other based on correcting the prior probability of H1?

For example, if we have five comparisons and want thresholds of BF10 = 3 and 0.3, two values are evaluated: one is the uncorrected BF10 multiplied by 0.242/0.758, judged against 3, and the other is the uncorrected BF10 multiplied by 0.758/0.242, judged against 0.3. Data collection stops when either value crosses its threshold.

Is such a method wrong for an open-ended sequential BF design?

Thank you very much,

Whenever I try to use the filter function to exclude rows that have NA in all 5 question columns, it doesn't seem to exclude the correct rows. I only need this to look at the participant demographics. How should I proceed? Thanks.
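One common cause of a filter excluding the wrong rows is combining the is.na() checks with OR (row has NA in any column) when AND (NA in all columns) is meant. The distinction, sketched with pandas on made-up data (JASP's own filter would use the equivalent R expression with & between is.na() calls):

```python
import pandas as pd

df = pd.DataFrame({
    "Q1": [1.0, None, 3.0],
    "Q2": [None, None, 2.0],
})

# Keep rows unless ALL question columns are NA; how="any" would wrongly
# drop the first row (which has one answer) as well
kept = df.dropna(how="all", subset=["Q1", "Q2"])
```

Only the middle row, which is NA in every question column, is removed.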

I may have noticed a small discrepancy with the g-prior function while working through a Bayesian linear regression exercise. I was originally working in an older, 2020 version of JASP (version 14), and when I went to the advanced options and selected the g-prior option, the alpha value box next to the Hyper-g option was greyed out.

However, when I updated to JASP version 18 and selected g-prior, there is now an active alpha value box immediately adjacent to the g-prior selection; its position seems to have shifted slightly in the newer update.

The alpha value box restricts inputs to the range 2 to 4, with a recommended value of 3, which is what I thought Liang et al. (2008) recommended for the Hyper-g prior option.

I have seen on (https://cran.r-project.org/web/packages/BAS/BAS.pdf) some text that pertains: "g-prior", Zellner's g prior where 'g' is specified using the argument 'alpha'. Does this mean that the alpha value box in the 2024 JASP version 18 is really taking a g input? And if so, why is it limited to a range of 2-4? Or is alpha some sort of weighting factor? I thought the value of g was usually determined by the references from Consonni et al. (2018) shown below.

In summary, did the newer release of JASP add an option to adjust the g-prior distribution manually (i.e., what is the alpha box, and what is its function?), or did something glitch on my end?

Thanks,

TJ

First of all, I'm well aware that the methods are different and therefore not necessarily comparable, but shouldn't the results agree in the majority of cases? Otherwise wouldn't there be problems of replication and generalization?

To sum up, frequentists focus on the reliability of the procedures generating their conclusions (the p-value), while Bayesians are interested in the credibility of the hypotheses (the Bayes factor). Bayesians look at the H0/H1 ratio to quantify and discuss it.

Do you think that simple one-factor models (which make more accurate predictions) are to be favored for Bayesian RM-ANOVAs, and that, for RM-ANOVAs with more than two factors, the frequentist model is currently more reliable/fair?

In that case, we could use Bayesian mixed models rather than a Bayesian RM-ANOVA when we have more than 2 IVs in repeated measures. But what are the advantages and disadvantages of using one rather than the other?

What's more, Bayesian analysis takes priors into account. How do you define them for a good analysis, and how were the default parameters you propose in 'Additional Options' chosen?

What do you think of the examples "where the use of prior distributions and Bayes factors suggests different conclusions than the classic p-values"?

If the results differ between the frequentist and Bayesian analyses, what can and should be reported in articles, and/or how can we justify our choices? Both of the totally different interpretations?

Furthermore, in the article by van den Bergh et al. (2020) you specify that "the residuals must be normal" (pp. 80-81), but is this the only assumption for a Bayesian RM-ANOVA? Or do Bayesian sphericity tests exist (and would they be useful in this approach?) but are not yet available in JASP?

We also noticed earlier (see the forum discussion) that Bayesian RM-ANOVA is more sensitive to outliers; how can we check for this quickly in JASP?

Thank you for your feedback and your hard work on JASP, it's always very interesting to have your opinion!

Johan

I would like to view a Pareto chart for summary data, such as counts or averages, in the quality domain.

I would appreciate it if you could let me know whether this feature is available and, if so, how to access it.

(I have already checked the availability of a Pareto chart for the frequency of categorical data.)

Thank you very much.

]]>L ~ D

D ~ Q

Q ~ M

M ~ P

P ~ I

I ~ A

A ~ M

All of these variables are linearly interrelated; they're log-transformed variables to make this so.

I didn't standardize the data before estimation, and I'm using the unstandardized regression coefficients output of this model to predict output from input, in R, using the following R code:

------------------------------

x = USER SPECIFIED   # input value
x = log10(x)         # log the input
x = x * -0.375
x = x * -0.875
x = x * -0.617
x = x * 1.174
x = x * 0.661
x = x * 0.971
x = x * -0.947
x = 10^(x)           # de-log the output
x

---------------------------------

The problem is that I'm not getting output within the expected range (and I do de-log the output, and the inputs tested are within the range of the model's data). Are the regression coefficients usable in this way, or is what I'm trying to do just nonsensical?

The baseline test of the model has a small p-value (9e-29), and the p-values of the regression coefficients are all < 0.05; indeed, many are so small that they're reported in JASP as 0 (and the 95% confidence intervals are good, with zero inside none of them). All the +/- signs of the regression coefficients are as expected. The GFI is 0.969, so I presume the model is good? Albeit the CFI is only 0.828, which is a worry?

P.S. The data set is quite sparse, and I've used the FIML setting within JASP to handle missing data values.
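On the prediction question above: the product of slopes alone only reproduces the model if every intercept happens to be zero. Each link in the chain is a full equation y = a + b·x, so the intercepts from JASP's unstandardized estimates must be carried through at each step. A sketch (the slopes are those quoted above; the zero intercepts are placeholders to be replaced with the real estimates):

```python
import math

# (intercept, slope) for each regression in the chain; the slopes are those
# quoted above, the zero intercepts are placeholders
chain = [
    (0.0, -0.375),
    (0.0, -0.875),
    (0.0, -0.617),
    (0.0, 1.174),
    (0.0, 0.661),
    (0.0, 0.971),
    (0.0, -0.947),
]

def predict(x_raw):
    x = math.log10(x_raw)   # log the input
    for a, b in chain:
        x = a + b * x       # apply each full regression equation
    return 10 ** x          # de-log the output
```

With all intercepts at zero this collapses to the original coefficient product, which is why omitting them pushes the output out of the expected range.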

First of all, thanks a lot for providing the computer program JASP, which we used to analyze our data set with a growth curve model.

We prepared a manuscript testing the change in students' ratings over four consecutive semesters.

The data gave an acceptable fit to the linear model. We submitted the manuscript to a journal for publication.

One of the reviewers asked us to include effect size values for the model and tests of the assumptions of the growth curve analysis, such as normality and homoscedasticity.

Would you please help us determine whether these tests, including the effect size, are available in JASP? If so, we need to know how to carry out the analysis. If there is no option for these analyses in JASP, we would appreciate references for carrying out the aforementioned analyses.

Thanks a lot for your cooperation.

Giray Berberoglu

In earlier versions, the MSE was between 0 and 1, which is correct.

Is it an error, or a different way of calculating it?

Most JASP books, written for earlier versions of JASP, discuss Machine Learning regression output with MSE between 0 and 1.

I'm very new to Bayes and a little confused about the interpretation of the BF, specifically for interactions. I would like to present the BF alongside the traditional ANOVA (actually it is an ANCOVA I am running; age is a covariate). I understand BF10 and BF01, but the interaction models in the 'Model Comparison' table are not analogous to the interaction effects in the traditional ANOVA, since they also contain the main effects (TargetAge + Pgroup + TargetAge*Pgroup), and the BF tells a pattern quite different from the interaction effect in the traditional ANOVA.

I have compared against the null model, which I feel makes the most sense for what I need. I've also got the Analysis of Effects table, but I'm not sure how that compares directly to the traditional ANOVA. Essentially, I'd like to report BF10 alongside the ANOVA.

Below I report the BF10 but got stuck on what to report for the interaction. So my question is: what should I report here? If I need to use BFincl instead, how would I need to rephrase what I report below?

*Mixed-factor ANCOVA and the analogous Bayesian analysis were conducted and are reported. We report Bayes factor BF10 to demonstrate comparisons of the alternative model to the null model. The covariate for age was found to be significant, F(1, 110) = 9.63, p = 0.002, ηp² = 0.08, BF10 = 31.24; therefore, when participant age is controlled for, this analysis found no main effect of group, F(2, 110) = 0.70, p = 0.489, ηp² = 0.013, BF10 = 0.26, and no interaction between group and stimulus age, F(8, 440) = 0.40, p = 0.919, ηp² = 0.007 [BF10 = ?????]. However, a main effect of stimulus age was revealed, F(4, 440) = 4.15, p = 0.003, ηp² = 0.036, BF10 = 2.24 x 10^11.*
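One common way to get a BF for the interaction alone is to divide the BF10 of the model containing the interaction by the BF10 of the matched main-effects-only model (both taken relative to the null); the numbers below are made up for illustration:

```python
# Hypothetical BF10 values from the Bayesian ANCOVA model-comparison table
bf_full = 0.80    # Age + TargetAge + Pgroup + TargetAge*Pgroup vs. null
bf_main = 4.00    # Age + TargetAge + Pgroup vs. null

# BF for adding the interaction on top of the main effects
bf_interaction = bf_full / bf_main
```

Here a value of 0.2 would mean the data are 5 times more likely under the main-effects-only model, i.e., evidence against the interaction, which is the quantity analogous to the interaction's F-test.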

*Many thanks!*

The first problem is that I got a "waiting for the module to initialize" message, so the module was in fact not running. A quick look at the forum suggested that I just needed to close JASP and try again, so I saved my JASP file, closed the program, and then re-opened it.

It did bring back the prior module specification, but with a warning that in part seemed to say there were too many variables in the file. While the CFA I had specified had only 36 variables in total, the overall data file has a couple of thousand. Although I couldn't grasp why the CFA module would care about variables not in the model, I decided to use a different file that contained only the 36 variables plus an ID variable.

This time I got an error message saying that it could not use categorical data with FIML (or with two-stage or robust two-stage), so I should select a different method for handling missing data. I could do that, but the original data imported from SPSS were defined as numeric scale data (ordinal, 1-6, strongly disagree to strongly agree), and JASP classifies each of my variables as scale.

So I'm troubled by JASP treating the data as nominal. If I switch missing-data handling to listwise (or pairwise) deletion, the output appears, but with a caution that "the test statistic is scaled shifted [sic] because there are categorical variables in the data," so this misconception persists across all missing-data specifications. I've been able to run descriptive statistics on the same variables with no problem. There are missing data, because these results come from three waves of a longitudinal panel survey, but that should affect neither how the data are classified nor the CFA, as only those people who appear in all 3 waves for these variables should be included in the latter.

Any suggestions on resolving this?

Branden

1. How do I know whether the 'centrality measures per variable' table is a meaningful result? For example, is a betweenness of 2.134 large?

I got a figure too; is it a meaningful result if the points are skewed to the right?

2. What is the weight matrix?

I have been trying to install JASP on my Mac for two weeks now; it downloads and installs, but as soon as I try to open the app it crashes and kicks me out, with a malfunction message from my system.

I don't know what to do. Has anyone had the same problem and can suggest something?

Thank you!

I have been following the tutorial on Longitudinal Data Analysis that can be found here:

I wanted to replicate these results in JASP using the Mixed Models module.

For the first model, in R, I would write:

raneff_int <- lmer(rasch_FIM ~ 1 + (1|subID), data=DATA, REML=FALSE)

How do I add the initial "1" (the intercept) as a fixed effect in JASP? If I try to brute-force it in the R console and hit Ctrl+Enter, it does not work either.

Best regards,

Robbin
