Why does a bipolar scale need indifference at zero for regression?

We have a bipolar scale that asks whether you agree more with A or with B.


It happened to be coded this way

(1) completely agree with A <--> (7) completely agree with B

So (4) is indifference


It should have been coded this way:

(-3) completely agree with A <--> (3) completely agree with B

So (0) is indifference


In a regression, our agreement scale is a control to see whether the difference between two conditions can be explained by agreement. E.g. person A seems to be liked significantly more; is this just because most people agree with person A more than with person B? (Agree is the control and Like is the DV.)


If coded the first way, there is a significant main effect of condition (p < .001); coded the second way, the main effect disappears (p = 0.546), i.e. it is explained by agreement.


I know the second way is correct, but I didn't realise this would make a difference in a regression. Can anyone explain why I need to make sure indifference is at 0 on bipolar scales? The only thing I can think of is that -3 and 3 have the same absolute value but 1 and 7 don't, though I am not sure why the absolute value would matter.

Comments

  • Hmm, that is strange indeed. The analysis probably does not take into account that the scale is ordinal, but still, this is a counterintuitive outcome. I'll discuss this with some others. We may just not believe you and ask for the data :-)

    E.J.

  • Hi EJ,

    Here is a super simple dataset!

    Two very basic linear regressions; Decision Condition is categorical, Agree is continuous:

    Like ~ Agree Original Coding * Decision Condition

    Like ~ Agree Correct Coding * Decision Condition

    I just redid the coding by hand to double check. The p-values are what I said above. Everyone I have asked has said this shouldn't happen; I feel like I am going crazy 😅 (see the R sketch just below).
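
    For anyone who wants to poke at this outside JASP, here is a minimal R sketch of those two regressions (the data frame, its values, and the column names are made-up placeholders, not the posted JASP file):

    # Minimal sketch of the two regressions described above.
    # `d` is a made-up data frame: Like is the DV, Agree_1_7 is the 1-7 rating
    # (4 = indifference), and Condition is the two-level decision condition.
    d <- data.frame(
      Like      = c(5, 6, 3, 4, 7, 2, 5, 4),                   # illustrative values only
      Agree_1_7 = c(1, 2, 4, 5, 6, 7, 3, 4),                   # original coding: 1-7
      Condition = factor(c("A", "A", "A", "A", "B", "B", "B", "B"))
    )
    d$Agree_centered <- d$Agree_1_7 - 4                        # "correct" coding: -3..3, 0 = indifference

    m_original <- lm(Like ~ Agree_1_7      * Condition, data = d)
    m_centered <- lm(Like ~ Agree_centered * Condition, data = d)

    summary(m_original)                                        # original coding
    summary(m_centered)                                        # 0-indifference coding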

  • Here is the Jasp file if it saves you time

    And here is an image of the two sets of results.


  • And here is a plot if it is helpful

    (Obviously they are the same, just with different coding on the x-axis)

  • Hi @Whirly123 ,


    I just took a look (always excited for such little puzzles!), and I think something went wrong in the encoding because there is not a perfect correspondence between the two ways of encoding:

    As you can see, there is a level 4 (and a missing observation) in the correct coding. After correcting the data, the discrepancy is still there, though a bit smaller:

    I think the problem lies in the interaction, where the product of condition and agreement differs for the different codings and influences the estimate of the main effect of condition, although it does not affect the predictive performance of the whole model...

  • Thanks so much for spotting the error Jonny!

    I just checked and it's not there in the original dataset (the error crept in while prepping the data for posting here) :)!

    Of course this doesn't explain things though.

    "I think the problem lies in the interaction, where the product of condition and agreement differs for the different codings and influences the estimate of the main effect of condition"

    I agree with this but I can't understand why it would differ just across codings.


    The predictive performance of the model is the same but the conclusions one draws are different. In the original coding there is a difference between conditions when accounting for agreement but in the "correct" coding that isn't the case. I know I should trust the "correct" coding but I am not sure why.

  • I think interpreting a main effect in isolation while there is a significant interaction effect is pretty tricky to do, and your case highlights why. Because of the interaction effect, if we look at what our model predicts for someone in condition B with an agreement score of 4 on the original coding (0 on the correct coding), we can see how the main effect differs:

    for original coding: 3.212887 + 0.2882073 * 4 + conditionCoef * 1 + -0.5749412 * 4 = 4.141533

    for correct coding:  4.365716 + 0.2882073 * 0 + conditionCoef * 1 + -0.5749412 * 0 = 4.141533

    In order to still predict the same value, the conditionCoef needs to be different between the 2 models (since the difference in the calculation for the main effect of agreement can just be added to the intercept). Namely, the two condition coefficients differ by 0.5749412 * 4 = 2.299765 (the interaction coefficient times the shift of 4), which is what we see in the results. I am not sure how illuminating this rambling is for you, because it still leaves us with the undesirable fact that the standard error does not scale in a similar way, leading to different t and p-values. I will also ask around a bit in the team, to see if anyone has a more satisfying conclusion.
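
    A small R sketch with simulated data (not the posted dataset; names are placeholders) that reproduces this algebra: shifting the agreement ratings by 4 leaves the fitted values untouched but shifts the condition coefficient by 4 times the interaction coefficient:

    # Simulated data: shifting Agree by 4 changes the condition coefficient by
    # shift * interaction coefficient, while the fitted values stay identical.
    set.seed(1)
    n         <- 200
    cond      <- factor(rep(c("A", "B"), each = n / 2))
    agree_1_7 <- sample(1:7, n, replace = TRUE)
    like      <- 3 + 0.3 * agree_1_7 + 2 * (cond == "B") -
                 0.6 * agree_1_7 * (cond == "B") + rnorm(n)

    m1 <- lm(like ~ agree_1_7 * cond)             # original 1-7 coding
    m2 <- lm(like ~ I(agree_1_7 - 4) * cond)      # shifted so that 4 -> 0

    all.equal(fitted(m1), fitted(m2))             # TRUE: identical predictions
    coef(m1)["condB"] - coef(m2)["condB"]         # difference between condition coefficients
    -4 * coef(m1)["agree_1_7:condB"]              # the same number, from the interaction term

    Comparing summary(m1) and summary(m2) also shows the standard error of the condition coefficient changing with the coding, which is where the different t and p-values come from.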

  • Thanks for the response, it is useful! Something I can do (and have done) is, instead of controlling for the interaction effect, to control for the main effect of "agreement with the person you saw".

    So I would reverse code Agreement for all participants in condition B (which is just flipping the orange line in the plot).

    Note that this is (as far as I can tell) analytically identical to controlling for the interaction effect above. Importantly, the p-values are the same! (Well, now the p-values for the interaction and for the main effect of agreement are swapped, as you would expect.) But note this is only the case when using the 0-indifference coding! The p-values are different with the original coding!

    To me, this is another argument for why it needs to be the 0-indifference coding and the other coding is simply *incorrect*. But again - dunno why 😂 (the recoding is sketched in R below)
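
    A minimal R sketch of that recoding step (simulated data; variable names are placeholders), in case it helps anyone following along:

    # Recode 0-centred "agreement with A vs. B" into "agreement with the person
    # you saw" by flipping the sign for participants in condition B.
    set.seed(4)
    n     <- 200
    cond  <- factor(rep(c("A", "B"), each = n / 2))
    agree <- sample(-3:3, n, replace = TRUE)                  # 0 = indifference
    like  <- 4 + 0.3 * agree - 0.6 * agree * (cond == "B") + rnorm(n)

    agree_seen <- ifelse(cond == "B", -agree, agree)          # agreement with the person seen

    m_bipolar <- lm(like ~ agree      * cond)                 # 0-indifference bipolar coding
    m_seen    <- lm(like ~ agree_seen * cond)                 # recoded version

    # Both interaction models span the same design space, so they make identical
    # predictions; only the labelling (and testing) of the coefficients changes.
    all.equal(fitted(m_bipolar), fitted(m_seen))              # TRUE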

  • The inference about the main effect changes because changes to the coding change the question you are asking.

    Remember that the main effect estimates the difference between the decision categories, when the agreement rating is 0. If you look back at your two plots, you can see that this difference is estimated to be 2.04 at a rating of 0 (which is outside the original scale). When you center your ratings at 4 changes, you are now estimating the difference between the decision categories, for indifferent agreement ratings, which is 0.21. I think this is what you care about.

    You will observe a similar effect when you use zero-sum rather than dummy coding for the decision category factor. Currently, you are using dummy coding, so the main effect for agreement ratings is the change in liking with a one unit change in agreement in decision category A (i.e. decision_condition = 0, red line in the above plot), which is 0.288, p < .001. If, instead, you use zero-sum coding, you will estimate the change in liking with a one unit change in agreement averaged across both decision categories, which is 0.006, p = .885.

    These codings also have downstream effects on your interaction terms.

    You can explore this further with the attached R script (change the file extension to .R to run it; .R files are not allowed as uploads); see also the stand-alone sketch below.
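
    This is not the attached script, but a minimal stand-alone R sketch of the dummy vs. zero-sum contrast point (simulated data; names are placeholders):

    # How dummy (treatment) vs. zero-sum coding of the condition factor changes
    # what the "main effect" of agreement estimates. Simulated data only.
    set.seed(2)
    n     <- 200
    cond  <- factor(rep(c("A", "B"), each = n / 2))
    agree <- sample(-3:3, n, replace = TRUE)                  # 0 = indifference
    like  <- 4 + 0.3 * agree + 0.2 * (cond == "B") -
             0.6 * agree * (cond == "B") + rnorm(n)
    d     <- data.frame(like, agree, cond)

    # Dummy coding: the agreement slope is the slope in the reference category (A)
    m_dummy <- lm(like ~ agree * cond, data = d,
                  contrasts = list(cond = contr.treatment))

    # Zero-sum coding: the agreement slope is the slope averaged over both categories
    m_sum   <- lm(like ~ agree * cond, data = d,
                  contrasts = list(cond = contr.sum))

    summary(m_dummy)$coefficients["agree", ]
    summary(m_sum)$coefficients["agree", ]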


  • "Remember that the main effect estimates the difference between the decision categories, when the agreement rating is 0."


    Ah! Now it makes sense! And 0 means something different depending on the coding! Zero doesn't really mean anything in the original coding, whereas zero actually means indifference in the correct coding! This makes sense and is very helpful!


    "When you center your ratings at 4 changes, you are now estimating the difference between the decision categories, for indifferent agreement ratings, which is 0.21. I think this is what you care about."

    Is this a typo? I assume you mean center at 0 for indifference?


    "If, instead, you use zero-sum coding, you will estimate the change in liking with a one unit change in agreement averaged across both decision categories, which is 0.006, p = .885."


    Very cool! Thanks frederikaust. This is definitely filling holes in my regression knowledge. Just based on this one example it seems like "zero-sum coding" would be a better default than dummy coding. What is the disadvantage of coding this way?


    This is also such a good example of why we (psychologists) need to better understand what these stats actually mean. I bet this sort of thing will be hard for people to spot (keeping things at the defaults and using a bipolar scale that starts at 1). I only spotted it because I was playing with the reverse-coding example!

    Bipolar scales and mean-centering (something that is the default in GAMLj in Jamovi) are surely going to mess things up further if people don't know this! And people generally mean-centre when they do mixed models, which is going to mess up interpretation for bipolar scales as well.
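
    As a footnote to that worry, a minimal R sketch of mean-centring vs. centring at the indifference point (simulated data; names are placeholders; GAMLj itself is not used here):

    # Mean-centring puts 0 at the sample's average rating, centring at 4 puts 0 at
    # indifference; the two only coincide if the sample mean happens to be exactly 4,
    # so the condition effect is otherwise evaluated at different agreement levels.
    set.seed(3)
    n     <- 200
    cond  <- factor(rep(c("A", "B"), each = n / 2))
    agree <- sample(1:7, n, replace = TRUE, prob = c(4, 3, 2, 1, 1, 1, 1) / 13)   # sample leans towards A
    like  <- 3 + 0.3 * agree + 2 * (cond == "B") -
             0.6 * agree * (cond == "B") + rnorm(n)

    m_meancentred <- lm(like ~ I(agree - mean(agree)) * cond)   # 0 = sample mean (here below 4)
    m_indiff      <- lm(like ~ I(agree - 4)           * cond)   # 0 = indifference

    coef(m_meancentred)["condB"]   # condition difference at the average rating
    coef(m_indiff)["condB"]        # condition difference at indifference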

  • Is this a typo? I assume you mean center at 0 for indifference?

    Yes, this was supposed to say

    When you center your ratings at 4, you are now estimating the difference between the decision categories, for indifferent agreement ratings

    and I hoped it would be interpreted as shifting the scale by 4 points such that 4 turns into 0 ("Centered coding" in the plot above).

    Just based on this one example it seems like "zero-sum coding" would be a better default than dummy coding. What is the disadvantage of coding this way?

    What coding to use really depends on what you are interested in. In some cases you may be most interested in the association between the continuous variables (agreement and liking) in the reference category (the one coded with 0).

    Happy to hear this was helpful. :)
