Two timing questions
Hi,
I am running an eye-tracking experiment in OpenSesame and am concerned about the accuracy of my RT measurements. I have two questions that I'm hoping the devs can help with:
1) I seem to be stuck with the legacy backend: the PsychoPy backend crashes when I launch the calibration, and the Expyriment backend's timing seems very poor on my system (the experiment stalls for around half a second after each response). In contrast, the legacy (pygame) timing 'looks' good, in that everything in the experiment happens instantly.
From what I have read in Mathôt et al. (2012), the pygame backend introduces about 20 ms of timing imprecision. Does this 20 ms carry over into response-time measurements? I don't mind a little variation in stimulus onset per se, but I do care if that variability leads to poor measurement of the interval between stimulus presentation and the recorded response. In other words, if stimulus presentation were off by 10 ms (because pygame lacks the blocking vsync flip that the other backends have), but the response timer somehow knew about that offset and corrected for it, I would be fine with that.
2) Currently I am logging RT by calling time.time() at the start of the trial and again when the eye-movement response comes in. Is there a more precise method, perhaps one that accounts for the stimulus-onset variability described in (1)?
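For concreteness, the kind of thing I'm wondering about is sketched below: computing RT relative to the timestamp that canvas.show() returns, rather than a separately sampled time.time(). This is only a sketch for an inline_script, and it assumes the PyGaze timestamps and the OpenSesame clock are on the same time base, which I would need to verify on my setup.

```python
# Sketch only: RT measured from the flip timestamp instead of time.time().
# Assumes an OpenSesame inline_script context where canvas() and the PyGaze
# `eyetracker` object are available; `my_canvas` is just an illustrative name.

my_canvas = canvas()
my_canvas.fixdot()

# With a blocking-flip backend (e.g. PsychoPy), show() returns a timestamp
# tied to the actual screen refresh, in ms.
stim_onset = my_canvas.show()

# Wait for the eye-movement response; wait_for_saccade_start() returns a
# timestamp and a starting position.
t_start, start_pos = eyetracker.wait_for_saccade_start()

# RT relative to stimulus onset (assuming both timestamps share a time base).
var.saccade_rt = t_start - stim_onset
```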
Relatedly, time.time() and clock.get_time() seem not to be that reliable on my system. I found that the eyetracker.wait_for_saccade_start() function from PyGaze would crash because sometimes there was no difference in time between two valid samples (with a 500 Hz SMI eye tracker), leading to a division by zero in the velocity calculation. I tried both PyGaze's clock.get_time() and time.time() and got the same result. Could this be due to imprecision in time.time() or clock.get_time() themselves?
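To illustrate what I mean, a quick granularity check like the one below (plain Python, nothing OpenSesame-specific; just a sketch) would show whether consecutive calls to a timer can actually return identical values on this machine, and what the smallest non-zero step is.

```python
# Rough check of timer granularity: count identical consecutive timestamps
# and find the smallest non-zero step. Purely illustrative.
import time

def timer_granularity(timer, n=100000):
    identical = 0
    smallest = float('inf')
    prev = timer()
    for _ in range(n):
        now = timer()
        dt = now - prev
        if dt == 0:
            identical += 1
        elif dt < smallest:
            smallest = dt
        prev = now
    return identical, smallest

identical, smallest = timer_granularity(time.time)
print('identical consecutive timestamps: %d' % identical)
print('smallest non-zero step: %.6f s' % smallest)
```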
Thanks
Comments
Hi,
1) Hard to say. I use the PsychoPy backend and there it works just fine (with an EyeLink, though). I recommend you just test the timing: simulate some responses and see how reliable they are.
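For example, something along these lines in an inline_script would give a rough idea (just a sketch; it checks the jitter added by the clock and sleep calls around a known, simulated response delay, not whether the flip itself is vsync-locked):

```python
# Sketch: replace the real response with a known delay and see how close
# the measured RT comes to it across repetitions. Assumes an OpenSesame
# inline_script context (canvas() and clock available); names illustrative.
import random

n_trials = 100
errors = []
my_canvas = canvas()
my_canvas.fixdot()

for i in range(n_trials):
    onset = my_canvas.show()              # flip timestamp, in ms
    simulated_rt = random.randint(200, 600)
    clock.sleep(simulated_rt)             # pretend the response arrives now
    measured_rt = clock.time() - onset
    errors.append(measured_rt - simulated_rt)

print('mean error: %.2f ms' % (sum(errors) / float(len(errors))))
print('max error: %.2f ms' % max(errors))
```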
2) Sounds good generally. I would probably use the onset of the eye movement as the response time, rather than the end of it. But that also depends on your specific question. You can also define your final response times offline, once you have the data.
Eduard
Thanks very much, Eduard. The open-source support I have gotten from you and others has made the difference between this project being doable and it crashing and burning.
A bit of good news: I got PsychoPy working by changing the monitor setting to '0' instead of 'testMonitor'. I have no idea why that worked. It is a bit laggy on startup, but I think it's worth it overall to use a backend that implements a blocking flip and doesn't lag during the trials.
Yes, I agree that the start of the movement is really what I want, rather than just the time the gaze arrived somewhere. The problem I am having is that wait_for_saccade_start doesn't play nicely with the SMI 500. It sort of works, but before long it throws a division-by-zero error. I have zeroed in a bit on the source of this error: somehow t0 and t1 will be equal despite a difference between the two eye-tracker samples. Do you have any insight into why that could happen? I suspect that sometimes the tracker returns a unique sample microseconds before the sample switches, then evaluates and resamples microseconds later, so that gazepos != newpos with a minuscule change in time, but I have had no luck digging deeper into that. It does seem to happen less often (but still eventually happens) if I run iView at a lower sampling frequency, although that also reduces the resolution of my RT measures.
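In the meantime I am considering simply guarding against the zero time difference, roughly like the sketch below. This is not the actual PyGaze code, just an illustration of the workaround I have in mind, and the velocity threshold is only a placeholder.

```python
# Sketch of a guard against dt == 0 in the velocity calculation. Assumes a
# PyGaze-style tracker with sample() -> (x, y) and a clock with get_time()
# in ms; the threshold (px/ms) is illustrative.
import math

def wait_for_saccade_start_safe(eyetracker, clock, velocity_threshold=0.35):
    t0 = clock.get_time()
    x0, y0 = eyetracker.sample()
    while True:
        t1 = clock.get_time()
        x1, y1 = eyetracker.sample()
        dt = t1 - t0
        if dt <= 0:
            # Identical timestamps: wait for the clock to advance instead
            # of dividing by zero.
            continue
        distance = math.hypot(x1 - x0, y1 - y0)
        if distance / dt > velocity_threshold:
            return t1, (x1, y1)
        t0, x0, y0 = t1, x1, y1
```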
Ideally I would sort this out so it works at run time, but I may settle for your suggestion of scoring the saccades offline from the saved eye-tracker data.
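If I do go the offline route, I am imagining something simple like a velocity threshold over the logged samples, roughly as below. The sample format and threshold are placeholders for whatever the iView output actually contains.

```python
# Sketch of offline saccade-onset detection from logged (t_ms, x, y) samples
# using a simple velocity threshold (px/ms). Format and threshold assumed.
import math

def saccade_onset(samples, velocity_threshold=0.35):
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue  # duplicate timestamps are simply skipped offline
        velocity = math.hypot(x1 - x0, y1 - y0) / dt
        if velocity > velocity_threshold:
            return t1
    return None

# RT would then be saccade_onset(trial_samples) - stimulus_onset_time.
```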