
Eye data processing flow

Does anyone have any information/know where I can find information about preprocessing eye data? For example, interpolation methods, filtering methods, general artifact detection, etc. I have developed a pretty rudimentary flow, but think the data quality could still be improved. Thanks!

Comments

  • sebastiaan Posts: 2,811

    Hi,

    There is very little standardization here. In part, this is because there are many different eye trackers that all have their own properties, and thus require different kinds of preprocessing. And the analysis also depends on the experiment and the question that you want to answer.

    So I would first narrow the question down:

    • What eye tracker are you using?
    • What does your experiment look like?
    • What kind of data do you want to extract?

    One book that may interest you is Eye Tracking: A Comprehensive Guide to Methods and Measures by (among others) Kenneth Holmqvist. I haven't read it myself, but it's a technical guide to eye tracking.

    Cheers!
    Sebastiaan

    Thanked by tsummer2

    There's much bigger issues in the world, I know. But I first have to take care of the world I know.
    cogsci.nl/smathot

  • tsummer2 Posts: 20

    We are using the EyeTribe.

    During the experiment, the participant uses a chin rest while they perform a go/no go task.

    We are primarily interested in the baseline pupil size before each stimulus presentation.

    We are relatively new to Python and pupil data, and we're using Jupyter and pandas to process the data. Here is a link to the current version of the flow. (Warning: it's very ugly/inefficient and there is a lot of stuff I am just fiddling around with in there; I will neaten it up once we have a finalized process.) Both the eye-tracking data and the task performance are logged to the same tsv.

    As a summary, we currently do something like the following:

    1. Identify blinks (zeros in a somewhat arbitrary column)
    2. Clear some samples around the blinks in the pupil-size series (about 200 ms)
    3. Linearly interpolate all zeros in pupil size
    4. Filter pupil size using SciPy's filtfilt
    5. z-score pupil size
    6. Calculate the average z-score for each trial

    There are a few other things done, but those are the most important steps.
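    In case it makes the discussion more concrete, the steps above look roughly like this in pandas/SciPy. This is only a minimal sketch, not our actual script: the 60 Hz sampling rate (the EyeTribe default), the 4 Hz Butterworth cutoff, and the assumption that missing samples are logged as 0 in the pupil column are all simplifications on my part.

```python
import numpy as np
import pandas as pd
from scipy.signal import butter, filtfilt

SAMPLE_RATE = 60              # Hz; EyeTribe default (assumption)
PAD = int(0.2 * SAMPLE_RATE)  # ~200 ms of samples to clear around blinks


def preprocess(pupil: pd.Series) -> pd.Series:
    # Steps 1-2: treat zero samples as blinks and clear ~200 ms on either side
    is_blink = (pupil == 0).astype(float)
    padded = is_blink.rolling(2 * PAD + 1, center=True, min_periods=1).max() > 0
    cleaned = pupil.mask(padded)
    # Step 3: linearly interpolate across the cleared gaps
    interpolated = cleaned.interpolate(method='linear', limit_direction='both')
    # Step 4: low-pass filter (3rd-order Butterworth, 4 Hz cutoff as an example)
    b, a = butter(3, 4 / (SAMPLE_RATE / 2), btype='low')
    filtered = pd.Series(filtfilt(b, a, interpolated.to_numpy()),
                         index=pupil.index)
    # Step 5: z-score the whole series
    return (filtered - filtered.mean()) / filtered.std()
```

    Step 6 would then just be a per-trial mean over the returned series.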

    Thanks for the book suggestion, I will check it out.

    Thanks a lot for the help, Sebastiaan!

  • sebastiaan Posts: 2,811
    edited March 16

    Hi,

    That looks pretty sensible to me, although the script could indeed do with some cleanup and structure (modules, functions, etc.).

    Two things come to mind:

    • You can sort of remove blinks. But because blinks have long-lasting effects on pupil size (on the order of seconds), even removed blinks can be confounds. So it also makes sense to check whether the blink rate differs between conditions.
    • You z-score pupil size (per participant?). But do you have a reason to? If you do this to remove between-subject variability, there's no need: your statistical analysis will do that for you, just as it would for reaction-time data, etc.
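    For the blink-rate check, something like the sketch below would do. It's untested and assumes your tsv has 'condition' and 'trial' columns and that zeros in the 'pupil' column mark blinks; adapt the names to your own log file.

```python
import numpy as np
import pandas as pd


def blink_count(pupil: pd.Series) -> int:
    # A blink episode = a run of consecutive zero samples;
    # count the 0 -> 1 rising edges of the zero mask.
    zero_mask = (pupil.to_numpy() == 0).astype(int)
    return int(np.sum(np.diff(zero_mask, prepend=0) == 1))


def blink_rates(df: pd.DataFrame) -> pd.Series:
    # Mean number of blink episodes per trial, per condition
    per_trial = df.groupby(['condition', 'trial'])['pupil'].apply(blink_count)
    return per_trial.groupby('condition').mean()
```

    If the resulting rates differ clearly between conditions, the blink-related distortion may differ between conditions too, even after interpolation.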

    Cheers!
    Sebastiaan

    Thanked by tsummer2

