
[solved] Recording only fixations

edited June 2016 in PyGaze

Hello again*...

So, using a Tobii eye tracker, the pygaze log records all samples from the eye tracker. This is great, but to make my life simpler later on, I am wondering whether only recording fixations (or indeed, separately recording fixations) might be possible? I note the eyetracker.wait_for_fixation_start() and eyetracker.wait_for_fixation_end() methods, so wonder whether using these in a script (within a coroutine), then writing to a logfile using eyetracker.log() is a feasible idea? Has anyone done anything like this before?
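
To make the idea concrete, here is roughly what I have in mind (a sketch only; the tracker below is a mock stand-in, and I'm assuming `wait_for_fixation_start()` and `wait_for_fixation_end()` each return a `(timestamp, gazepos)` pair - please correct me if that's wrong):

```python
class MockTracker:
    """Stand-in for a PyGaze EyeTracker, for illustration only."""

    def __init__(self):
        # two fake fixations: ((start_time, start_pos), (end_time, end_pos))
        self._events = [((100.0, (512, 384)), (350.0, (515, 380))),
                        ((400.0, (200, 300)), (700.0, (198, 305)))]
        self._i = 0
        self.logged = []

    def wait_for_fixation_start(self):
        return self._events[self._i][0]

    def wait_for_fixation_end(self):
        event = self._events[self._i][1]
        self._i += 1
        return event

    def log(self, msg):
        # a real tracker would write this to the log file
        self.logged.append(msg)


def record_fixations(tracker, n_fixations):
    """Log only fixation onsets and offsets, ignoring raw samples."""
    for _ in range(n_fixations):
        t0, (x0, y0) = tracker.wait_for_fixation_start()
        tracker.log("FIXSTART t=%.1f x=%d y=%d" % (t0, x0, y0))
        t1, (x1, y1) = tracker.wait_for_fixation_end()
        tracker.log("FIXEND t=%.1f x=%d y=%d" % (t1, x1, y1))


tracker = MockTracker()
record_fixations(tracker, 2)
```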

Any suggestions appreciated.

Neon

*PS. Apologies to clobber the forum with my questions, but I thought it would be worse to put multiple questions in a single post...

Comments

  • edited 3:02AM

    Hi Neon,

    As we write in the paper on PyGaze, the online (i.e. while the experiment is running) event detection is meant to be as quick as possible at detecting fixations, saccades, and blinks. This comes at the expense of accuracy. There are more elegant event-detection algorithms, which require more samples around an event, and which do not necessarily work in a way that is compatible with online detection (some, for example, go back and forth along your data with a sliding window).

    The main issue is this: Fixation detection is a science in itself, and doing it while the experiment is running is bound to leave you with inaccurate fixation detection. Logging only the fixations would require you to detect them online, and thus would be detrimental to data quality.

    In addition, detecting fixations offline is relatively easy once you have a fixation detection algorithm (either write your own, or download somebody else's). You simply run it on your raw data file and use it to produce a data file containing only fixations.
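
    For example, a bare-bones dispersion-based pass over the raw samples could look like this (just a sketch I'm improvising here, not PyGaze code; the thresholds are arbitrary):

```python
import numpy as np

def detect_fixations(t, x, y, max_disp=25.0, min_dur=0.1):
    """Dispersion-based fixation detection (I-DT style) on raw samples.

    t is in seconds, x/y in pixels; returns (start, end, cx, cy) tuples.
    """
    fixations = []
    i, n = 0, len(t)
    while i < n:
        j = i
        # grow the window while horizontal + vertical dispersion stays small
        while j + 1 < n and ((x[i:j + 2].max() - x[i:j + 2].min()) +
                             (y[i:j + 2].max() - y[i:j + 2].min())) <= max_disp:
            j += 1
        if t[j] - t[i] >= min_dur:
            # long enough: store start, end, and centroid, then move on
            fixations.append((t[i], t[j], x[i:j + 1].mean(), y[i:j + 1].mean()))
            i = j + 1
        else:
            i += 1
    return fixations
```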

    I think the industry standards are now Engbert & Kliegl, or Nyström & Holmqvist.

    Good luck!

    Edwin

    PS: No worries! One thread for each question is the way we prefer it, so thanks for doing it like this. It makes it easier for other people to find solutions to specific issues :)

  • edited 3:02AM

    Thanks Edwin - your response is much appreciated. I had a slight suspicion that it might be this way, as all the 'one-package-fits-all' software (i.e. software that runs the experiment and does the analysis) I've encountered does this post hoc.

    Thanks for the heads up on industry standards; I'd previously looked at the I-VT filter of Salvucci and Goldberg (2000) but guess this has been superseded by Engbert etc.
    Hopefully I can find some opensource implementation of these online somewhere and tailor them to my needs. Of course, if anyone reading this has any immediate suggestions, this would also be appreciated!
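
    In case it is useful to anyone else reading, my understanding of the core I-VT idea is just point-to-point velocity thresholding, roughly like this (my own sketch; the default threshold is made up):

```python
import numpy as np

def ivt_labels(t, x, y, v_thresh=100.0):
    """Label each sample 'fix' or 'sac' by point-to-point velocity (I-VT idea).

    v_thresh is in x/y units per second (arbitrary default).
    """
    dt = np.diff(t)
    v = np.hypot(np.diff(x), np.diff(y)) / dt
    labels = np.where(v < v_thresh, "fix", "sac")
    # the first sample has no preceding velocity; reuse its neighbour's label
    return np.concatenate([labels[:1], labels])
```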

    Best wishes!

    Neon

  • edited 3:02AM

    Hi Neon,

    I haven't actually used Salvucci and Goldberg before, but a quick look at their paper (specifically Table 2) suggests that they do something very similar to what everyone else does. Their method is a bit less sophisticated, as it seems to look at point-to-point velocity in the raw data without any smoothing or corrections. It should work fine, though. Eye tracking isn't rocket science, and most methods will give you roughly the same results. (Don't ever say that to eye-tracking nerds, though; they might verbally lynch you.)

    If you're looking for an existing implementation of an algorithm like the one you referred to, you could have a look at PyGaze Analyser. It has some very basic detectors, including one for fixations that seems very similar to S&G (2000).

    Good luck!

  • edited 3:02AM

    Hi Edwin,

    Cheers for that. Yes - the pseudo-code in S&G (2000) looks straightforward. I was thinking of incorporating a sliding average to act as a low-pass filter and smooth things out a bit. Guessing a window size of about 3 to 5 samples?
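
    I.e. something as simple as this (window of 5 as a first guess):

```python
import numpy as np

def moving_average(signal, window=5):
    """Boxcar low-pass filter; mode='same' keeps the output length equal
    to the input, at the cost of edge effects in the outermost samples."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")
```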

    Very grateful for the link to the PyGaze Analyser code which I'll take a closer look at shortly. I need to brush up on python (and Numpy) a bit, but that can only be a good thing ;-)

    Hopefully I can hack something together that does the job!

    On another note (and just out of curiosity), the settings for the pygaze_init item include saccade velocity and acceleration threshold parameters - are these used in the online event detection?

    Thanks again,

    Neon

  • edited 3:02AM

    Hi Neon,

    Sounds all right, but you might want to play with the window size and type of smoothing a bit. I've grown quite fond of Hampel filtering, but primarily for pupil size signal; haven't actually tried it on gaze location yet.
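
    In case it helps: the core Hampel idea is to replace a sample by the local median whenever it sits too many scaled MADs away from that median. A quick sketch (my own, parameters arbitrary):

```python
import numpy as np

def hampel(signal, half_window=3, n_sigmas=3.0):
    """Hampel filter: replace outliers by the local median.

    A sample counts as an outlier if it lies more than n_sigmas
    scaled MADs from the median of its surrounding window.
    """
    out = signal.astype(float)
    k = 1.4826  # scales the MAD to a std-dev estimate for Gaussian noise
    for i in range(len(signal)):
        lo, hi = max(0, i - half_window), min(len(signal), i + half_window + 1)
        window = signal[lo:hi]
        med = np.median(window)
        mad = k * np.median(np.abs(window - med))
        if np.abs(signal[i] - med) > n_sigmas * mad:
            out[i] = med
    return out
```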

    The thresholds are indeed for event detection, but they are only used when the tracker doesn't have any native event detection. So if you're using an EyeLink, PyGaze will use the EyeLink's own methods for waiting for an event (fixation, saccade, or blink).

    Cheers,

    Edwin

  • edited 3:02AM

    Hi Edwin,

    Yes - I expect I'll spend a fair bit of time trying to optimise the smoothing. I hadn't come across Hampel filtering before, so that's something interesting to look into (as long as I understand it!).

    Having had a peek at your code, if I understand correctly, consecutive missing values would be classified as fixations (as long as the 'missing' values were consistent, and there were enough of them to exceed the minimum fixation time) - am I right here? If so, does this ever cause problems, or does PyGaze Analyser deal with it further 'upstream'?

    Thanks again for all the advice,

    Neon

  • edited 3:02AM

    Hi Neon,

    Yes, you're right! I usually chuck those out before I start, or replace them by NaNs. You could also remove all (0,0) fixations afterwards. But I'd recommend the NaN thing.
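
    I.e. something like this before running any detection (a sketch; it assumes the tracker reports missing samples as (0, 0)):

```python
import numpy as np

def mask_missing(x, y):
    """Replace (0, 0) 'missing' samples with NaN before event detection,
    so that runs of missing data cannot be mistaken for fixations."""
    # astype(float) returns copies, so the caller's arrays are untouched
    x, y = x.astype(float), y.astype(float)
    missing = (x == 0) & (y == 0)
    x[missing] = np.nan
    y[missing] = np.nan
    return x, y
```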

    Good luck!
