
Two timing questions

LJGS Posts: 25


I am running an eye-tracking experiment through OpenSesame and am concerned about the accuracy of my RT measurements. I have two concerns that I'm hoping the devs could help with:

1) I seem to be stuck with the legacy back-end: the PsychoPy back-end crashes when I launch the calibration, and the Expyriment back-end's timing seems very bad on my system, with the experiment stalling for around half a second after each response. In contrast, the pygame timing 'looks' good, in that everything in the experiment happens instantly.

From what I have read in Mathôt et al. (2012), the pygame back-end introduces about 20 ms of imprecision in timing. Does this 20 ms carry over to the response time measurements? I don't care so much if there is a little variation in stimulus onset per se, but I do care if that variability leads to bad measurement of the interval between stimulus presentation and the recorded response. That is, if the stimulus presentation were off by 10 ms (due to the lack of the vsync that the other back-ends have), but the response timer somehow knew that and adjusted for it, I'd be totally fine.
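One way to keep onset variability out of the RT itself (a minimal sketch, not OpenSesame's own API; `present_stimulus`, `collect_response`, and `measured_rt` are made-up names) is to take the onset timestamp only after the presentation call returns, so the RT is measured from as close to actual onset as the back-end allows:

```python
import time

def present_stimulus():
    """Hypothetical stand-in for a (blocking) display flip."""
    time.sleep(0.01)

def collect_response():
    """Hypothetical stand-in for waiting on a response."""
    time.sleep(0.05)

def measured_rt(present, respond):
    # Timestamp *after* the present call returns, so jitter in when the
    # call was issued does not contaminate the RT.
    present()
    onset = time.perf_counter()
    respond()
    return (time.perf_counter() - onset) * 1000.0  # RT in ms

rt = measured_rt(present_stimulus, collect_response)
```

With a non-blocking flip (as in pygame), the timestamp can still precede the actual refresh, which is exactly the imprecision in question.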

2) Currently I am logging RT by calling time.time() at the start of a trial and again after the eye movement response comes in. Is there a more precise method? Perhaps one that accounts for the stimulus onset variability described in (1).

Relatedly, it seems that time.time() and clock.get_time() are somehow not that reliable on my system. I found that the eyetracker.wait_for_saccade_start() function from PyGaze would crash because sometimes there would be no difference in time between two valid samples (with a 500 Hz SMI eye tracker), leading to a division by zero in the velocity calculation. I tried both the clock.get_time() function in PyGaze and swapping it out for time.time(), and got the same result. Could this be due to imprecision in the time.time() or clock.get_time() functions themselves?
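Timer resolution is easy to check directly. A quick diagnostic (a sketch; `timer_resolution` is a made-up name) samples the clock in a tight loop and reports both the smallest nonzero step and how often two consecutive calls return an identical value, which is the condition that triggers the division by zero:

```python
import time

def timer_resolution(n=10000):
    """Sample the clock in a tight loop: report the smallest nonzero
    step it takes and how many consecutive calls returned the same value."""
    ticks = [time.perf_counter() for _ in range(n)]
    deltas = [b - a for a, b in zip(ticks, ticks[1:])]
    identical = sum(1 for d in deltas if d == 0)
    smallest = min(d for d in deltas if d > 0)
    return smallest, identical

step, repeats = timer_resolution()
```

If the same check run with time.time() shows many repeated values, the clock's granularity (rather than the eye tracker) is the likely cause; time.perf_counter() is generally the highest-resolution clock Python exposes.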



  • eduard Posts: 875


    1) Hard to say. I use the PsychoPy back-end and there it works just fine (with an EyeLink, though). I recommend you just test the timing: simulate some responses and see how reliable they are.
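    Such a simulation can be as simple as firing responses with a known latency and looking at the measurement error (a sketch under made-up names; a real test would substitute a hardware response box or scheduled key event for the sleep):

```python
import statistics
import time

def simulate_responses(true_rt_ms=100.0, n=10):
    """Fire simulated responses with a known latency and report the mean
    and spread of the measurement error (measured minus true RT)."""
    errors = []
    for _ in range(n):
        t0 = time.perf_counter()
        time.sleep(true_rt_ms / 1000.0)  # stand-in for the participant responding
        measured = (time.perf_counter() - t0) * 1000.0
        errors.append(measured - true_rt_ms)
    return statistics.mean(errors), statistics.stdev(errors)

bias, jitter = simulate_responses()
```

    A systematic positive bias is usually harmless (it subtracts out across conditions); it is the jitter that limits how well RT differences can be resolved.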

    2) Sounds good generally. I would probably use the onset of the eye movement as the response time, rather than the end of it. But that also depends on your specific question. You can also define your final response times offline, once you have the data.


  • LJGS Posts: 25
    edited September 10

    Thanks very much Eduard. The open source support I have gotten from you and others has made the difference between this project being doable and it crashing and burning.

    Bit of good news: I got PsychoPy working by changing the monitor to '0' instead of 'testMonitor'. I have no idea why that worked. It is a bit laggy at start-up, but I think it is worth it overall to use the back-end that implements a blocking flip and doesn't lag during the trials.

    Yes, I agree the start of the movement is really what I want, rather than just the time gaze arrived somewhere. The problem I am having is that wait_for_saccade_start() (code chunk below) doesn't play nicely with the SMI500. It sort of works, but before long it throws a divide-by-zero error. I have zeroed in a bit on the source of this error: somehow t0 and t1 will be equal despite a difference between the two eye-tracker samples. Do you have any insight into why that could happen? I suspect that sometimes the tracker delivers a unique sample microseconds before the sample switches, then evaluates and resamples microseconds later, such that gazepos != newpos with a minuscule change in time, but I have had no luck digging deeper into that. It does seem to happen less (but still eventually happens) if I run iView at a lower eye-tracker sampling frequency (although that also reduces the resolution of my RT measures).

    Ideally I would sort this out so it works at run time, but I may settle for your suggestion of re-scoring offline using the saved eye-tracker data.

    def wait_for_saccade_start(self):
        """Returns the starting time and starting position when a saccade
        starts; based on Dalmaijer et al. (2013) online saccade detection.

        returns
        stime, spos -- stime in milliseconds (from expbegintime);
                       spos is an (x,y) gaze position tuple
        """
            # # # # #
            # SMI method
            if self.eventdetection == 'native':
            # print warning, since SMI does not have saccade detection
            # built into its API
                print("WARNING! 'native' event detection has been selected, \
                    but SMI does not offer saccade detection; PyGaze \
                    algorithm will be used")
            # # # # #
            # PyGaze method
            # get starting position (no blinks)
            newpos = self.sample()
            while not self.is_valid_sample(newpos):
                newpos = self.sample()
            # get starting time, position, intersampledistance, and velocity
            t0 = clock.get_time()
            prevpos = newpos[:]
            s = 0
            v0 = 0
            # get samples
            saccadic = False
            while not saccadic:
                # get new sample
                newpos = self.sample()
                t1 = clock.get_time()
                if self.is_valid_sample(newpos) and newpos != prevpos:
                    # check if distance is larger than precision error
                    sx = newpos[0]-prevpos[0]; sy = newpos[1]-prevpos[1]
                    if (sx/self.pxdsttresh[0])**2 + (sy/self.pxdsttresh[1])**2 > self.weightdist: # weigthed distance: (sx/tx)**2 + (sy/ty)**2 > 1 means movement larger than RMS noise
                        # calculate distance
                        s = ((sx)**2 + (sy)**2)**0.5 # intersampledistance = speed in pixels/ms
                        # calculate velocity
                        v1 = s / (t1-t0)
                        # calculate acceleration
                        a = (v1-v0) / (t1-t0) # acceleration in pixels/ms**2
                        # check if either velocity or acceleration are above threshold values
                        if v1 > self.pxspdtresh or a > self.pxacctresh:
                            saccadic = True
                            spos = prevpos[:]
                            stime = clock.get_time()
                        # update previous values
                        t0 = copy.copy(t1)
                        v0 = copy.copy(v1)
                # update previous sample
                    prevpos = newpos[:]
            return stime, spos
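    One possible run-time workaround for the divide-by-zero (a sketch, not PyGaze's own code; `safe_velocity` and `min_dt_ms` are made-up names): skip the velocity/acceleration update whenever the two timestamps are (near-)identical and simply wait for the next sample instead:

```python
def safe_velocity(dist_px, t0, t1, min_dt_ms=0.001):
    """Hypothetical guard: return None instead of dividing when two valid
    samples carry (near-)identical timestamps, so the caller can fetch
    another sample rather than crash."""
    dt = t1 - t0
    if dt < min_dt_ms:
        return None
    return dist_px / dt  # velocity in pixels per millisecond
```

    In the loop above, this would mean leaving t0, v0, and prevpos untouched whenever the guard fires, so the distance keeps accumulating until the clock has measurably advanced.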