Linear regression - casewise diagnostics - how to remove outliers?

Hi everyone,

I have outliers in the casewise diagnostics of the linear regression in JASP.

Here is my problem: as soon as I filter out the flagged case number (e.g. 48), the diagnostics flag the next case number as an outlier (e.g. 47).

What can I do about it? Where is my mistake?

Thank you so much for your help!

Best,

Laura

Comments

  • Hi Laura,

    This is probably not a mistake. When you filter out case 48, this reduces the variance, and that may cause another case to be identified as an outlier. I am not sure this is problematic -- there are a number of ways in which you can detect outliers, and it seems prudent to try a number of different ones and confirm that your general conclusion is robust (a small numerical sketch below illustrates the effect).

    Cheers,

    E.J.
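
    Not JASP output, just a minimal Python sketch of the effect E.J. describes, with made-up numbers and an assumed |z| > 3 cutoff: removing the most extreme case shrinks the standard deviation, which can push the next case over the threshold.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    y = rng.normal(50, 5, size=100)   # 100 well-behaved observations
    y[47] = 95                        # one extreme case (index 47)
    y[46] = 69                        # a milder case, initially masked by the extreme one

    def zscore_outliers(values, cutoff=3.0):
        """Return indices of cases whose |z| exceeds the cutoff."""
        z = (values - values.mean()) / values.std(ddof=1)
        return np.where(np.abs(z) > cutoff)[0]

    print(zscore_outliers(y))            # flags only case 47: the extreme value inflates the SD
    y_filtered = np.delete(y, 47)        # filter out case 47
    print(zscore_outliers(y_filtered))   # the SD shrinks, so case 46 now crosses the cutoff
    ```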

  • Hi EJ,

    Apologies if I should have opened a new topic; I was not sure whether to start a new discussion or ask here, since it is on the same subject.


    Here is my question: when checking for outliers before performing multiple regression, the boxplots in the descriptive analysis show a few outliers for two of my independent variables. However, when I use z-scores (following the video below, at minute 8:05), the minimum and maximum z-scores are within the acceptable range (this is without removing the outliers detected by the boxplot).


    Is this expected, since each method works differently (the boxplot is based on the median and IQR, the z-score on the mean and SD)? (See the sketch after this comment for how the two rules can disagree.)

    Since regression is sensitive to outliers, would it be valid to use only the z-scores to detect outliers, without using the boxplot?


    Best Regards,

    Sameha
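
    On the boxplot versus z-score question above, here is a small Python sketch with made-up numbers (not JASP syntax), assuming Tukey's 1.5 * IQR fences for the boxplot and a |z| > 3 cutoff for the z-scores:

    ```python
    import numpy as np

    # A small illustrative variable; the last value (19) sits well apart from the rest
    x = np.array([10, 11, 11, 12, 12, 12, 13, 13, 14, 19], dtype=float)

    # Boxplot (Tukey) rule: flag values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    box_flags = np.where((x < q1 - 1.5 * iqr) | (x > q3 + 1.5 * iqr))[0]

    # z-score rule: flag |z| > 3, using the mean and SD (which the extreme value itself inflates)
    z = (x - x.mean()) / x.std(ddof=1)
    z_flags = np.where(np.abs(z) > 3)[0]

    print("Boxplot flags indices:", box_flags)   # [9] -> the value 19
    print("z-score flags indices:", z_flags)     # []  -> the z of 19 is only about 2
    ```

    The quartiles barely move when one value is extreme, while the mean and especially the SD are pulled toward it, so that value's z-score stays modest. This is why a boxplot can flag cases that the z-score rule does not.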

  • Yes, I'd go with the z-scores.

    E.J.

  • ...although you could remove the outliers as a robustness check.
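
    A minimal sketch of such a robustness check (hypothetical data; plain least squares via numpy rather than JASP): refit the regression without the flagged cases and compare the estimates.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 60
    x = rng.normal(0, 1, n)
    y = 2.0 * x + rng.normal(0, 1, n)   # true slope is 2
    y[5] += 15.0                        # make one case clearly aberrant

    def ols_slope(x, y):
        """Ordinary least squares slope of y on x (with an intercept)."""
        return np.polyfit(x, y, 1)[0]

    z = (y - y.mean()) / y.std(ddof=1)
    keep = np.abs(z) <= 3               # drop cases with |z| > 3 on the outcome

    print("slope, all cases:       ", round(ols_slope(x, y), 2))
    print("slope, outliers removed:", round(ols_slope(x[keep], y[keep]), 2))
    # If both fits support the same substantive conclusion, the result is robust to the outliers.
    ```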

  • Hi EJ,


    Thank you very much. Your answers have been helpful for my studies.

    It's appreciated!


    Best Regards,

    Sameha
