Weird pygaze calibration reports (constant distance value & absurdly high acc values)
Hi,
My pygaze calibration reports always turn out quite weird. For one thing, the screen distance is ALWAYS 57 cm, no matter which eye-tracking device I'm using. I've tried to find out why by inspecting the relevant files, but to no avail. The other issue is the accuracy, which is always around 700 px horizontally and 500 px vertically, no matter how well the device was calibrated. In the end, the calibration report doesn't matter all that much in my experiment specifically, since I've created a custom validation by which I judge the need for a recalibration. However, I would still like to find out why the results end up like this, since it'd be more convenient to report the pygaze calibration results in an article than to have to explain my custom validation :)
Can you give me a tip where to look for potential culprits? I think the coordinate settings may be to blame, but that's just a hunch.
Thanks in advance!
Comments
Dear @Edwin
I noticed the same problem as @ChrIm pointed out. For example, this is a calibration report on the Tobii T60:
While I'm pretty sure that I was following the targets closely, the accuracy report seems to show otherwise. It is interesting that those values were so close to the center of the screen. I was wondering whether the reports show not the absolute deviation from the targets, but some kind of average sample position?
Thanks,
Han
Hi everyone,
I had the same problem using Tobii Spectrum, any advice would be helpful.
Cheers,
Ge
Hi all,
Screen distance can only be picked up by systems that support distance sensing. If yours doesn't, PyGaze will try to fall back on a user-defined value, or (in the absence of that) on the default of 57 cm. To define your screen distance, obviously measure it first, and then add this to your constants.py script:
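(A minimal sketch: SCREENDIST is PyGaze's viewing-distance constant, in centimetres; 60.0 stands in for whatever distance you measured.)

```python
# Viewing distance from the participant's eyes to the screen, in centimetres.
SCREENDIST = 60.0
```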
The same holds true for the screen size, which should be user-defined. For example, if it's 40.3x30.0 cm, include this in your constants.py:
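(Here SCREENSIZE is PyGaze's constant for the physical screen size, as a (width, height) tuple in centimetres.)

```python
# Physical screen size in centimetres (width, height).
SCREENSIZE = (40.3, 30.0)
```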
High values for "accuracy (in pixels)" mean that your eye tracker thought you were looking at a different place on the screen than where the central marker was presented. One very common reason for this is that people don't set the correct display resolution. For example, if your display is 1024x768 pixels, but you set 800x600, the tracker will report in the 1024x768 frame whereas your experiment will display in the 800x600 frame. This will lead to mismatches. (The only exception is when you deliberately run a resolution other than your display's native one, and set that same resolution in both the eye-tracking software and your experimental software.) To set your resolution correctly, include the following in constants.py (obviously adjusted to your own resolution):
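(A sketch using the example resolution above; DISPSIZE is PyGaze's constant for the display resolution, as a (width, height) tuple in pixels.)

```python
# Display resolution in pixels (width, height); this must match the
# resolution that your operating system and eye-tracker software use.
DISPSIZE = (1024, 768)
```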
Cheers,
Edwin
PS. VERY IMPORTANT: The values for the speed and acceleration thresholds are ONLY used when event detection is set to "pygaze". Hence, you will not notice these values being off if you don't use the "wait_for_*" event functions, and even if you do use them, you'll only be affected if you're using the "pygaze" event detection as opposed to "native".
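In constants.py, that setting and the thresholds look roughly like this (the constant names are PyGaze's; the values shown are, as far as I'm aware, its defaults, so adjust to your needs):

```python
# Online event detection: "pygaze" uses the thresholds below, while
# "native" uses the manufacturer's own algorithms where available.
EVENTDETECTION = "pygaze"
SACCVELTHRESH = 35    # saccade velocity threshold (degrees/second)
SACCACCTHRESH = 9500  # saccade acceleration threshold (degrees/second**2)
```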
Hi @Edwin ,
Thank you for your reply! I have some follow-up questions. I'm now using the PyGaze plug-in in OpenSesame with EyeLink.
Thanks a lot for your help!
Han
Perhaps @sebastiaan also knows answers to those questions above? :)
Hi there,
I'm using Tobii TX300, and apparently my Distance value in the log file is not getting updated by the calibration procedure. What should I do now?
Art
Just to clarify: calibration procedure for Tobii seems to detect the visual distance correctly, it's just that the calibration value in the eyetracker log file is not getting saved accordingly.
Hi Art,
If you're not using any of the implemented online event detection functions, you don't have to do anything.
If you'd like for Tobii to implement the distance sensing, you could file a request on GitHub: https://github.com/esdalmaijer/PyGaze/issues (tag the Tobii devs, @grebdems and @pedrotari7, if you file an issue there).
Hi Edwin,
Thanks for the reply. I am confused: during calibration (the initial step, where Tobii positions the subject's eyes in its bounding box), the distance is estimated correctly. This value is simply not saved by pygaze in its log file. I wonder how to retrieve it and have it stored in the log file. I thought this was all done by pygaze?
Hi Art,
The question is whether it is saved and used later on. Examples:
My SMI implementation in PyGaze estimates distance using the tracker, saves it to the log file, and uses it in event detection: https://github.com/esdalmaijer/PyGaze/blob/master/pygaze/_eyetracker/libsmi.py#L364
The Tobii implementation takes the default value (https://github.com/esdalmaijer/PyGaze/blob/master/pygaze/_eyetracker/libtobii.py#L47), and it doesn't look like this is updated before being logged and used (https://github.com/esdalmaijer/PyGaze/blob/master/pygaze/_eyetracker/libtobii.py#L590). This is also true in my earlier Tobii implementation (tobii-legacy).
I referred you to GitHub, because Tobii developers maintain the Tobii implementation in PyGaze. They would be much quicker to implement the functionality you're requesting. (Also, I don't have access to a Tobii, so wouldn't be able to test any changes I make.)
Cheers,
Edwin
OK, I think we managed to dig into the appropriate portions of Pygaze code.
Just one question: is there any way of making Pygaze detect saccades for Tobii output? I can't see how to enable it in Pygaze settings.
Bump.
I dug into the libtobii.py code in the meantime, but still cannot figure out how to make it log saccades as events in the Tobii log file. I mean, the saccade detection code in libtobii seems OK, and I have eventdetection set to the "pygaze" option, but it seems not to work. I am not a Python expert, so I cannot really find the reason for this, or check whether it detects saccades online and just doesn't save them, or whether the saccades are not being detected at all.
I was also wondering whether any low-pass filtering is applied to the gaze data, as I see nothing like this in the code, apart from the noise calibration parameters being applied to saccade detection thresholds etc.
As I said before, this stuff is really only for online event detection. I don't really see why you'd want to continuously detect events and log them to the data file: you can run much better event detection algorithms offline, on the logged gaze samples.
The same is true for filtering. Why would you want to do this during recording? While sensible for some analyses, ideally your raw data is as raw as possible, leaving you to do whatever you need offline.
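To illustrate the offline route, here's a rough sketch of simple velocity-threshold saccade detection on logged samples (this is not PyGaze's own algorithm; the function, the pixels-per-degree value, and the 35 deg/s threshold are all just assumed examples):

```python
import numpy as np

def detect_saccades(t, x, y, px_per_deg=35.0, vel_thresh=35.0):
    """Detect saccades in logged gaze data with a velocity threshold.

    t is in seconds, x and y in pixels. px_per_deg should be computed
    from your own screen size, resolution, and viewing distance.
    Returns a list of (start_time, end_time) tuples.
    """
    t, x, y = np.asarray(t), np.asarray(x), np.asarray(y)
    # Sample-to-sample gaze velocity in degrees per second.
    amplitude = np.hypot(np.diff(x), np.diff(y)) / px_per_deg
    velocity = amplitude / np.diff(t)
    fast = velocity > vel_thresh
    saccades, start = [], None
    for i, is_fast in enumerate(fast):
        if is_fast and start is None:
            start = t[i]
        elif not is_fast and start is not None:
            saccades.append((start, t[i]))
            start = None
    if start is not None:
        # Close a saccade that runs to the end of the recording.
        saccades.append((start, t[-1]))
    return saccades
```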
Right. Is there any code available for analysing saccade kinematics for Tobii output with Opensesame?
Regarding filtering, I was basically wondering whether what the Tobii SDK outputs is already filtered, but this seems not to be the case.
But anyway, I also thought of using some online processing for fixation control. Do you know if there is anything available along those lines? I mean, code that could be used with OpenSesame?
Not that I'm directly aware of, but there are more general toolboxes out there for eye movement analysis.
Good question! There's likely some filtering going on, i.e. some smoothing, and there's also the head model they seem to use to compute samples. All proprietary implementation, though, so not sure whether we'll ever know exactly what filtering is going on. (Whether that's an issue highly depends on your actual research question, though.)
You could use the built-in "wait_for_fixation_end" etc. functions, or do something like the following:
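A rough sketch (assuming `tracker` is your pygaze.eyetracker.EyeTracker instance, that `fix_x` and `fix_y` hold the fixation target in display coordinates, and that 100 px is an acceptable tolerance; all of these are placeholders for your own setup):

```python
# Online fixation control: loop until gaze strays too far from the target.
max_deviation = 100  # maximum allowed deviation from the target, in pixels

while True:
    gaze_x, gaze_y = tracker.sample()  # newest gaze sample as (x, y)
    # Euclidean distance between the current sample and the target.
    deviation = ((gaze_x - fix_x)**2 + (gaze_y - fix_y)**2)**0.5
    if deviation > max_deviation:
        break
```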
You might want to build in a timeout, and it might also be good to include a sample counter. (To break only if e.g. >5 samples were over max. deviation.)
Hi Edwin,
Thanks! I think I will ask directly at Tobii whether they can provide this info about filtering. As soon as I have it I will post it here.
Maybe it's a lame question, but can you recommend any toolboxes that you know people have used for Tobii + Pygaze analysis? I tried GazeAlyze, but it spits out a long list of errors; it's probably no longer compatible with newer Matlab versions, though I haven't looked into it in detail.
I was just wondering if it's more efficient to write my own tools or use something that's there already.
Two more things:
Thanks.
Hi,
where do I insert the code you suggested for on-line event detection?
Wherever you need it in your experiment. The code itself goes into an inline_script item (if you use OpenSesame), which should be placed in the experimental sequence. Where exactly depends on your specific case.
Does that help?
Eduard
Hi Eduard,
Thanks, that sounds clear 👍️