Where on screen are calibration points drawn?
Hey all,
So I've finished collecting an eyetracking dataset using an EyeLink 1000, which partway through switched from one screen to another... not good practice, I know. More importantly, it was only after finishing data collection that we realised the screen resolution (as input in OpenSesame) was incorrect for the entire data collection. Despite this, the experiment ran fine in full screen, so perhaps OpenSesame noticed the error and autocorrected? In any case, this has made the eyetracking data more difficult to work with, as it seems that the pixel values in eyetracker space are based on the incorrect screen resolution (at least this seems to be the case, but it isn't entirely clear; is there a way to definitively check this?).
In any case, I would like to do my eyetracking analysis based on the relative distance between fixation and certain items on the screen (so as % deviation towards an item rather than in pixels). This is a bit problematic, as participants never actually move their eyes all the way to the items, so in retrospect it is quite difficult to say what the pixel location of the items was in eyetracker space.
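For what it's worth, one possible way to express "% deviation towards an item" in a resolution-independent manner is to project the fixation onto the axis running from a reference point (e.g. the central fixation dot) to the item, and report the fraction of that distance covered. This is only a sketch of one such metric, not something from PyGaze itself; the coordinates are hypothetical and all three points must be in the same coordinate space:

```python
def deviation_fraction(fix, origin, item):
    """Fraction of the origin-to-item distance covered by the fixation's
    projection onto that axis: 0.0 = still at the origin, 1.0 = on the
    item. Scale-invariant, so the same value comes out regardless of
    which pixel space the three points are expressed in."""
    # Vector from origin to item, and from origin to fixation.
    vx, vy = item[0] - origin[0], item[1] - origin[1]
    fx, fy = fix[0] - origin[0], fix[1] - origin[1]
    # Scalar projection of the fixation vector onto the item vector,
    # normalised by the squared origin-to-item distance.
    return (fx * vx + fy * vy) / (vx * vx + vy * vy)

# Example with made-up coordinates: a fixation halfway along the axis.
frac = deviation_fraction(fix=(50.0, 0.0), origin=(0.0, 0.0), item=(100.0, 0.0))
```

Because the fraction is a ratio of distances, it is unaffected by a uniform rescaling of the coordinates, which may sidestep part of the resolution problem as long as the scaling error is the same in x and y.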
I know where the items were drawn on the actual screen, so if I know where the calibration points were drawn in eyetracker pixel space, then I can convert between the two to determine item location. So my question is: where can I see the pixel values at which the calibration points were drawn? I assume the eyetracker takes some percentage of the recorded screen resolution and draws the fixation dots relative to that, but I can't seem to find the actual calibration function in the PyGaze GitHub repository, so I can't find these values. Any help would be appreciated!
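If the tracker coordinates really are just a linear scaling of the resolution that was configured, the conversion from actual-screen pixels to eyetracker pixels would be a simple per-axis rescale. A minimal sketch, where both resolutions are hypothetical placeholders (you would substitute the resolution the screen actually ran at and the one entered in OpenSesame):

```python
# Hypothetical resolutions -- replace with your own values.
TRUE_RES = (1920, 1080)       # resolution the screen actually ran at
CONFIGURED_RES = (1024, 768)  # resolution entered in OpenSesame

def screen_to_tracker(x, y, true_res=TRUE_RES, configured_res=CONFIGURED_RES):
    """Map a point from actual-screen pixels to eyetracker pixels,
    assuming both coordinate spaces span the same physical display
    and differ only by a linear scale factor per axis."""
    sx = configured_res[0] / true_res[0]
    sy = configured_res[1] / true_res[1]
    return x * sx, y * sy

# Example: the screen centre should map to the configured-space centre.
cx, cy = screen_to_tracker(960, 540)
```

One way to validate the assumption is exactly what the question suggests: check whether the known calibration-point positions, pushed through this mapping, line up with the fixation clusters in the recorded data.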
Cheers
Comments
Hi,
Have you seen this EyeLink tool? Maybe it helps.
Eduard