Display calibration results (Gazepoint)


In a previous experiment, I used the Gazepoint eye tracker without properly inspecting the calibration data, which backfired somewhat (probably quite an understatement!). For my latest experiment I'm trying to remedy that, but I can't manage to display the desired calibration data. The calibration itself works fine and produces an image showing the average distances between the calibration points and the recorded point of gaze, but I'd like to see the values as well; my goal is to reject calibrations with accuracies worse than one degree.

My first instinct was to load the calibration results from the log file during the experiment (using loops to prevent the sketchpad from preparing too early), but that obviously doesn't work, since the log file is only completed after the experiment has concluded, rendering that approach futile (duh!).

My next instinct was to modify either the … or … file, but I'm not sure how to approach that, because my knowledge of Python is somewhat limited at the moment. I can read the code thanks to Edwin's awesome book, but strategizing is a whole other story: I have no idea where to add the appropriate code snippets. Or is it possible to directly extract that information from those files within OpenSesame, which would be much easier than modifying the original files?

Thanks in advance!


  • Hi,

    That's a very good question! In short, it's impossible to present those values (in degrees of visual angle) without knowing more about your monitor. You would need to enter the screen distance and screen size in centimeters, and those would need to be used within PyGaze's GazePoint calibration.
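
    To illustrate why that information is needed: once you know the physical screen width, the horizontal resolution, and the viewing distance, converting a pixel distance to degrees of visual angle is a one-liner. This is a minimal sketch (the monitor values in the example are made up, not from any particular setup):

    ```python
    import math

    def px2deg(pixels, screen_width_cm, screen_width_px, screen_dist_cm):
        """Convert a distance in pixels to degrees of visual angle.

        Assumes square pixels; all physical quantities in centimetres.
        """
        # Size of a single pixel in centimetres
        px_size_cm = screen_width_cm / screen_width_px
        # Visual angle subtended by the distance, in degrees
        return math.degrees(
            2 * math.atan((pixels * px_size_cm / 2.0) / screen_dist_cm))

    # Example: 50 px on a 53.1 cm wide, 1920 px display viewed from 60 cm
    print(round(px2deg(50, 53.1, 1920, 60.0), 2))  # → 1.32
    ```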

    Perhaps more to your point, it would be possible to compute the accuracy (how close the data points are to the calibration targets) and display it as text (in pixels, and in degrees only if the aforementioned info is available and accurate). In addition, it would be possible to compute and present the precision (how consistent the data points are).

    It would require either a change in the PyGaze source, or a little additional (inline) scripting in your own experiment. For example, you could run a calibration, followed by a custom validation. Here you simply show a number of points, and compute the distance between those points and the samples you stream (using the EyeTracker class' sample method). In addition, you can compute the distance between consecutive samples, which gives you an indication of the precision (RMS noise, essentially).
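
    A minimal sketch of such a custom validation, assuming `tracker` is a connected PyGaze `EyeTracker` whose `sample()` method returns an `(x, y)` gaze position in pixels; drawing each target and waiting for fixation are left out for brevity:

    ```python
    import math

    def validate(tracker, points, n_samples=50):
        """For each target point, collect gaze samples and compute
        accuracy (mean distance to the target, in pixels) and
        precision (RMS of sample-to-sample distances, in pixels)."""
        results = []
        for (tx, ty) in points:
            # ... draw the target at (tx, ty) and wait for fixation here ...
            samples = [tracker.sample() for _ in range(n_samples)]
            # Accuracy: mean Euclidean distance from each sample to the target
            acc = sum(math.hypot(x - tx, y - ty)
                      for (x, y) in samples) / len(samples)
            # Precision: RMS of distances between consecutive samples
            d2 = [(x1 - x0) ** 2 + (y1 - y0) ** 2
                  for (x0, y0), (x1, y1) in zip(samples, samples[1:])]
            rms = math.sqrt(sum(d2) / len(d2))
            results.append(
                {'point': (tx, ty), 'accuracy_px': acc, 'precision_px': rms})
        return results
    ```

    You could then reject the calibration (and re-run it) whenever any point's `accuracy_px`, converted to degrees for your monitor, exceeds your one-degree criterion.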

