
## Comments

Hi Kindred,

those are very good questions, let me answer them:

1) You would need to run a meta-regression to properly compare the effect sizes of different study types. Unfortunately, neither of the Bayesian Meta-Analysis analyses supports the feature (yet). You can, however, use the Classical Meta-Analysis, include the study type as a "Factor", and test whether it is a statistically significant predictor.

2) I'm not aware of any reason why the number of studies should be a problem here.

3) Let me quickly illustrate with an example:

Imagine that you have 12 models: the first six assume an effect and the latter six assume the absence of the effect. Furthermore, the data might not be very informative with respect to the remaining model types (random/fixed effects and no/some publication bias). So, the log marginal likelihood is slightly better for the first six models, which assume an effect - let's say by 4 - than for the remaining six models, which assume the absence of the effect. Moreover, there is no difference among the first six models, since the data do not distinguish them well enough. The following commands outline the calculations in R, where marg_lik stands for the log marginal likelihood, prior_prob for the prior model probability, post_prob for the resulting posterior model probability, bf_effect for the overall BF for the effect size (you can simplify it to the posterior odds since the prior odds equal 1), and bf_incl for the inclusion BF of each model:

```r
marg_lik   <- c(rep(4, 6), rep(0, 6))   # log marginal likelihoods
prior_prob <- rep(1/12, 12)             # equal prior model probabilities

post_prob  <- bridgesampling::post_prob(marg_lik, prior_prob = prior_prob)

bf_effect  <- sum(post_prob[1:6]) / sum(post_prob[7:12])

bf_incl    <- sapply(post_prob, function(p) (p / (1 - p)) / ((1/12) / (11/12)))
```

If you run the code, you will see that each of the first six models (assuming the effect) has the same posterior model probability of around 0.164, while each of the remaining six models (assuming the absence of the effect) has a posterior model probability of around 0.003. The Bayes factor for the effect size equals the change from the prior to the posterior odds, in this case 54.6, since about .98 of the posterior model probability belongs to the models assuming the presence of the effect. However, the inclusion Bayes factor for each of the first six models is only around 2.15. That's because their posterior probability increased only from 0.083 to 0.164 - the change was much smaller for any individual model than for the set of models assuming the presence of the effect taken together. This is a very nice example of the usefulness of the Bayesian model averaging used in RoBMA. With classical inference, you would have based your conclusion on only one of those six models even though the data support all of them; here, you can seamlessly incorporate inference from all of them within one procedure.
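If the bridgesampling package is not at hand, the same numbers can be reproduced directly in base R - this is just the arithmetic behind the posterior model probabilities, nothing package-specific:

```r
# Posterior model probabilities from log marginal likelihoods, by hand.
marg_lik   <- c(rep(4, 6), rep(0, 6))   # log marginal likelihoods
prior_prob <- rep(1/12, 12)             # equal prior model probabilities

# Bayes rule across models: weight each model by exp(log marglik) * prior
w         <- exp(marg_lik) * prior_prob
post_prob <- w / sum(w)

round(post_prob[1], 3)    # ~0.164 for each model assuming an effect
round(post_prob[12], 3)   # ~0.003 for each model assuming no effect

# Overall BF for the effect: posterior odds (prior odds equal 1)
bf_effect <- sum(post_prob[1:6]) / sum(post_prob[7:12])   # ~54.6

# Inclusion BF for each individual model
bf_incl   <- (post_prob / (1 - post_prob)) / ((1/12) / (11/12))
round(bf_incl[1], 2)      # ~2.15
```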

See our preprint on the theoretical side of RoBMA (https://psyarxiv.com/u4cns/) as well as the tutorial paper for the JASP implementation (https://psyarxiv.com/75bqn/).

4) The inclusion BFs for the individual models are not of main inferential interest (that is also why they are not part of the default output). It can be useful to look at them to check whether one of the models is favored much more than the others - you could, e.g., write that the model-averaged ensemble was based mainly on the random-effects model (BF = xx, post. prob = yy). Or one of them might be of particular interest for a specific theoretical reason.

Cheers,

Frantisek

Hello,

Thanks for the detailed response! I am still new at this, so apologies if some of my follow-up questions aren't clear.

Let me detail the data a bit more to give you better context: none of the studies used had "assessing attention in mTBI" as a main objective; they simply included some attention measures among their outcomes. Some studies have more than one measure (one of them has 3 selective attention measures, for example). So in total, I have 13 selective attention measures that come from 7 studies.

1) I spoke to a statistician (he is not in the field of psychology) and he suggested I select one selective attention measure from each study (so I end up with 7 measures from 7 different studies), then run a selection model analysis to see whether there is a difference between the regular effect size and the publication-bias-adjusted one in the random-effects model. Then I would look at the funnel plot and, if there is asymmetry (presence of bias), run a Bayesian meta-analysis to see which model is best. What do you think?

2- 4) amazing! Ok this makes a lot of sense to me. Thanks!

Hi Kindred,

thank you for the additional details. I did not realize that you have multiple estimates from a single study. There are two possible ways to deal with this situation:

a) As your statistician suggested, select a single estimate from each study so you don't violate the independence assumption (multiple estimates from a single study are more likely to be similar, leading to underestimated standard errors, etc.).

b) Account for the nesting of multiple estimates within a single study by fitting a "3-level" meta-analysis. Unfortunately, this type of meta-analysis is not yet supported in JASP.

If you feel comfortable selecting a single estimate from each study, you can follow option a). I would further recommend analyzing all estimates together (ignoring the violation of independence) and using that as a robustness check (you can report it in a footnote or appendix).
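For readers comfortable stepping outside JASP, option b) can be sketched in R with the metafor package - this is only an illustration, not part of the JASP workflow discussed above, and the data values and column names (yi, vi, study, esid) are made up:

```r
# Sketch: a "3-level" meta-analysis with estimates nested within studies,
# using the metafor package (assumed to be installed).
library(metafor)

dat <- data.frame(
  yi    = c(0.20, 0.30, 0.10, 0.40, 0.25, 0.15, 0.35),  # effect size estimates
  vi    = c(0.04, 0.05, 0.04, 0.06, 0.05, 0.04, 0.05),  # sampling variances
  study = c(1, 1, 1, 2, 2, 3, 4),                       # some studies contribute several estimates
  esid  = 1:7                                           # estimate ID
)

# Random effects for studies AND for estimates within studies
fit <- rma.mv(yi, vi, random = ~ 1 | study/esid, data = dat)
summary(fit)
```

The `random = ~ 1 | study/esid` term is what accounts for the dependency of multiple estimates from the same study.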

Regarding the use of selection models etc., that would indeed correspond to the classical approach to this situation. However, it is a very suboptimal solution (as we showed in the previously linked articles). Selection models often have convergence difficulties, and the selection of p-value cutoffs can influence the results. This is even more problematic when the number of original studies is low, as in your case -- the likelihood ratio test used to decide between selection models and a random-effects meta-analysis is severely underpowered. The RoBMA module, on the other hand, bypasses this issue by considering all models and weighting the inference according to their ability to predict the data. We provide a more detailed treatment of the topic in this preprint: https://psyarxiv.com/u4cns/.

Cheers,

Frantisek

Hello

I read the paper regarding the issues of selection models and indeed, they are flimsy in my case. Are you suggesting I just use RoBMA after selecting a single effect size from each study?

When you say analyzing all estimates together and using it as a robustness check, do you mean analyze them with RoBMA as well?

Cheers,

PS: I really appreciate the help in these murky waters, thank you so much

Yes, that would be my suggestion.

I should also note that I'm a co-author of the method, so there is a conflict of interest. Nevertheless, I believe that RoBMA is the best available option in your situation.

Cheers,

Frantisek

Great!

I find RoBMA to be a solution to my issue as well. It's more intuitive for me.

I'm interested in hearing more about what you mean by a "robustness check", though. What exactly am I looking for? Do you have a paper where RoBMA is used this way? I've read the papers, but I might have missed it.

Thanks!

By robustness check, or sometimes called "sensitivity analysis", I mean verifying that your specific analytic choices did not affect the results.

For example, some people could argue that you should use all estimates, even though some of them come from the same paper, because selecting a single estimate from each paper omits a lot of data. Other people might argue the opposite - that only a single estimate per paper should be used, because the dependency of multiple estimates from a single study violates the independence assumption. The best thing you can do in this situation is to decide which approach you will use for the main result, then run the alternative anyway and report it in a footnote/appendix. You would hope for similar mean effect size estimates, which would verify that the results are robust to the choices you made during the analysis. (You are likely to observe different Bayes factors, since you will have much more evidence if you include more estimates.)

Cheers,

Frantisek

Hi Frantisek,

Re your first response, I'd like to follow up and ask about the possibility of conducting the meta-regression using JASP's Bayesian Linear Regression functionality under 'Regression'.

According to Cochrane Handbook for Systematic Reviews of Interventions:

"Meta-regressions are similar in essence to simple regressions, in which an outcome variable is predicted according to the values of one or more explanatory variables."

With a meta-analysis, since we are usually working with summary data, does it make sense to conduct a Bayesian 'meta-regression' using this Bayesian Linear Regression function in JASP, with continuous and categorical (dummy-coded) variables, so as to complement the main analysis, i.e., after we have performed the Bayesian meta-analysis (e.g., with RoBMA)?

I am new to the field and area of meta-analysis, so pardon me if I do not understand whether there is a genuine difference between a meta-regression and an 'ordinary' regression for Bayesian analysis. Thanks in advance!

Hi jber3175,

currently, the Bayesian Meta-Analysis analysis does not support meta-regression. However, you are correct that you can obtain similar functionality from the Bayesian Linear Regression analysis - it's important to keep two things in mind, though:

1) You have to use the "WLS Weights" argument to pass the weighting of the studies (usually 1/se^2). Otherwise, you would discard information about the precision of each study's effect size estimate.

2) This will result in a weighted least squares (WLS) meta-regression that differs a bit from the fixed/random effects meta-regression models regularly used in psychology. Nevertheless, some authors (e.g., Stanley and Doucouliagos) argue that WLS meta-regression has better properties than fixed/random effects meta-regression.
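For intuition, the same WLS idea can be sketched in a couple of lines of base R - all data values and variable names here are made up for illustration:

```r
# Sketch: weighted least squares (WLS) meta-regression in base R.
# yi = effect size estimates, sei = their standard errors,
# mod = a dummy-coded moderator; values are illustrative only.
yi  <- c(0.30, 0.10, 0.45, 0.20, 0.05, 0.35)
sei <- c(0.10, 0.15, 0.12, 0.08, 0.20, 0.11)
mod <- c(1, 0, 1, 0, 0, 1)

# The weights argument plays the role of JASP's "WLS Weights" (1/se^2):
# more precise studies pull the regression line harder.
fit <- lm(yi ~ mod, weights = 1 / sei^2)
summary(fit)$coefficients
```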

Best,

Frantisek

Hi Frantisek,

Thanks for your guidance and clarification on the specific resultant models. I am excited to learn that there is such a thing called WLS meta-regression!