[solved] timing in OpenSesame and EyeLink


Dear all,
I'm sorry if this post replicates a previous one; I wasn't able to find it!
I would like to know whether there is (or should be) a correspondence between the time associated with events in OpenSesame and in EyeLink.
I ran an eye-movement experiment and now I would like to check whether certain events (e.g., stimulus display) were accurately marked by the triggers that I sent from OS to EL while the experiment was running.
I see that in the EDF file (converted to ASC) time is coded as an integer, which if I understand correctly has 1 ms precision (the EyeLink was set to a 1000 Hz sampling rate).
The time recorded by OpenSesame for the same event, however, is not only a different number (which I can understand, since OS and EL run on different computers and each has its own clock), but also has a different format, with 6 decimals. Is this real, or is my spreadsheet editor showing me something else?
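
For reference, this is roughly how I send a trigger and log the corresponding time on the OpenSesame side (a simplified sketch, not my exact code; the message text and variable name are just examples, and it assumes a pygaze_init item has been run and a logger item writes the variable to the csv):

```python
# Simplified sketch of an OpenSesame inline_script with PyGaze.
# Assumes `eyetracker`, `clock` and `var` are available (pygaze_init has run);
# the message text and variable name are only examples.

t_onset = clock.time()                # OpenSesame clock, in milliseconds
eyetracker.log('stimulus_display')    # becomes a MSG line in the EDF / ASC
var.time_stimulus_display = t_onset   # becomes a column in the csv log
```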

Hypothetically, if I make the two times comparable, for instance by subtracting one from the other at the start of each trial, would I be able to have a common measure of time in the two files?

Sorry for the weird question, thanks in advance! :)

Chiara

Comments


    Hi Chiara,

    I don't completely understand your question. What do you hope to gain from subtracting one time from the other? You want to check whether there's a discrepancy between the moment the trigger was logged in the EDF file and the moment of the actual display change, right?

    Cheers,

    Josh


    Yes, the subtraction would only serve to set a common starting point, since the two data files seem to have very different timescales.
    For instance, for a given event such as "stimulus display", the time reported in the MSG trigger in the EDF file can be 4853434, while the "time_stimulus_display" variable in the csv file is 251890.585673.
    How do I know whether they are actually coding the same moment in time?
    That was only an idea; I suppose I could just "normalize" the time of each event to the starting time of the trial, and then compare the two (something like the sketch below)...
    My question arose mainly from the idea of using OS timing to determine task events and EL timing to determine response events, and then merging the two, if that makes any sense...
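
    A minimal sketch of that normalization idea, with made-up trial-start times:

    ```python
    # Express each event relative to the start of its trial, so that both
    # files end up on a common, within-trial timescale. The trial-start
    # times below are made up for illustration.

    # EyeLink (ASC file): integer milliseconds on the tracker clock
    el_trial_start      = 4853034        # made up
    el_stimulus_display = 4853434        # the MSG time mentioned above

    # OpenSesame (csv file): milliseconds on the stimulus PC clock
    os_trial_start      = 251490.585673  # made up
    os_stimulus_display = 251890.585673  # the csv value mentioned above

    el_offset = el_stimulus_display - el_trial_start   # 400
    os_offset = os_stimulus_display - os_trial_start   # 400.0

    # If both triggers were logged without delay, the two offsets should
    # agree to within a millisecond or so.
    print(el_offset, round(os_offset, 3))
    ```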

    Anyway, I was also a bit puzzled by the numbers I see in the csv: are they really so "precise"? Are those decimals fractions of milliseconds?

    Thanks,
    Chiara


    Alright, Chiara. About determining response events: yes, using the EyeLink data file would be the way to go if the response is based on a fixation or saccade; otherwise it's better to stick to the OpenSesame logfile for your analyses. But in the case of a fixation/saccade response, note that the eye tracker received a "start recording" trigger from the OpenSesame computer as well, and your response times will be based on that. So in that case you might as well base everything on the EyeLink data file.
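
    For what it's worth, here is a rough sketch of how you could pull the latency of the first saccade after your trigger out of the converted ASC file (the file name and message text are placeholders):

    ```python
    # Rough sketch: latency of the first saccade following a trigger message,
    # read from an edf2asc-converted ASC file. The message text and file name
    # are placeholders; adapt them to your own experiment.

    def saccade_latencies(asc_path, trigger='stimulus_display'):
        latencies = []
        t_trigger = None
        with open(asc_path) as asc_file:
            for line in asc_file:
                tokens = line.split()
                if not tokens:
                    continue
                # Trigger lines look like: MSG <time> <text>
                if tokens[0] == 'MSG' and trigger in line:
                    t_trigger = float(tokens[1])
                # Saccade-end lines look like: ESACC <eye> <start> <end> <dur> ...
                elif tokens[0] == 'ESACC' and t_trigger is not None:
                    t_start = float(tokens[2])
                    if t_start >= t_trigger:
                        latencies.append(t_start - t_trigger)
                        t_trigger = None  # only the first saccade per trigger
        return latencies

    print(saccade_latencies('subject01.asc'))
    ```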

    In all honesty, I don't know about the decimals in your logfile, but I doubt they actually indicate something insanely precise. I checked a logfile from one of my own experiments, and this particular variable wasn't there. Did you create it yourself?

    Cheers,

    Josh


    Thank you Josh, I use saccades as responses and I will do that!

    In the meantime I tested some random trials, and the times of the events (within each trial) do seem to correspond.

    The super-precise numbers were in my csv, automatically generated by the OS experiment. Maybe they depend on the general settings of the computer I was using?
    The milliseconds are to the left of the decimal separator, so at least on that computer the values to the right really were fractions of a millisecond! Isn't that impressive ;)

    Thank you again!
    Cheers
    Chiara


    Hi Chiara,

    Some minor points, to add to @Josh's answers:

    1) The timing differs between the EyeLink and stimulus PCs. EyeLink timestamps count from the moment the tracker software was started (I believe), and the OpenSesame time usually counts from the onset of the experiment. Subtracting one from the other would be a solution, provided you could be sure that the clocks run at the same pace on both PCs. You can't be sure of this, so minor errors might sneak in. The best way to control for timing issues is to log events to your EDF file.

    (You could actually test this by looking at the relation between the timestamps of events logged on both PCs; see the sketch after this list.)

    2) You mention that you use triggers and want to check their timing. That might not be a very easy thing to do, actually. This paper describes a way to test the system latency of a complete gaze-contingent setup, but I think that's beyond what you want. As far as I understand, you would like to know the latency between the execution of the command (on the stimulus PC) to log a message and the message actually being logged in the EDF? @sebastiaan, any ideas?

    3) Some (most) timing libraries can actually do microsecond-accurate timing, but this is relatively useless for our purposes. Just round off to the nearest millisecond (it should be an integer multiple of the screen refresh time anyway).
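
    As for the test mentioned under point 1, something along these lines would do (a sketch; the paired timestamps below are made up, in practice you would collect them from the same events logged on both PCs):

    ```python
    # Check whether the two clocks run at the same pace by fitting a straight
    # line through paired timestamps of the same events. A slope of ~1.0 means
    # no drift; the residuals show the jitter of the triggers.
    # The numbers below are made up for illustration.

    import numpy as np

    eyelink_ms    = np.array([4853434.0, 4913434.0, 4973434.0, 5033434.0])
    opensesame_ms = np.array([251890.6, 311890.9, 371891.1, 431891.6])

    slope, intercept = np.polyfit(eyelink_ms, opensesame_ms, 1)
    residuals = opensesame_ms - (slope * eyelink_ms + intercept)

    print('slope (should be close to 1.0):', slope)
    print('largest residual (ms):', np.abs(residuals).max())
    ```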

    Cheers,

    Edwin
