[open] Arrington ViewPoint Eye Tracker
I am planning an fMRI study using the Arrington ViewPoint eye tracker: http://www.arringtonresearch.com/
So far I have run all pilots with OpenSesame, and would prefer to use OpenSesame in the scanner too, but the ViewPoint eye tracker is not supported as far as I can see. Although Arrington provides some Python code, this code is very minimal. The connection is made through an ethernet cable, and since I am not an expert (on neither the hardware nor the software side) I am not sure how to get it to work with OpenSesame.
Will OpenSesame also support the ViewPoint eye tracker at some point, or does anyone have some advice on how to make it work? Any advice is highly appreciated!

Comments
If Arrington has some example code for how to use the tracker through Python, then it may essentially be a matter of copy-pasting this code into Python inline scripts. I'm not familiar with the Arrington though. What do you want to do exactly, and what kind of example code do you have?
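To make that copy-pasting concrete: if Arrington's example code exposes some kind of "send a string command to the tracker" function, one way to keep the inline_script readable is a thin wrapper around it. Everything in this sketch is an assumption — the command string `dataFile_InsertString` is a placeholder, and `send_fn` stands in for whatever function the vendor library actually provides:

```python
# Sketch of wrapping vendor eye-tracker calls for use in an OpenSesame
# inline_script. The command name below ("dataFile_InsertString") is a
# placeholder: check Arrington's documentation for the real one.

class TrackerLink:
    """Thin wrapper around a vendor 'send command' function, so the
    experiment code stays readable and the transport can be swapped out."""

    def __init__(self, send_fn):
        # send_fn is whatever function the vendor library exposes for
        # sending string commands to the tracker (an assumption here).
        self._send = send_fn
        self.sent = []  # keep a local copy of everything sent, for debugging

    def command(self, cmd):
        self.sent.append(cmd)
        self._send(cmd)

    def message(self, msg):
        # Hypothetical command name; replace with the real VPX command.
        self.command('dataFile_InsertString "%s"' % msg)
```

In the experiment you would construct this once in a prepare-phase inline_script, passing the vendor's real send function, and call `.message()` from run-phase scripts whenever the display changes.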
Cheers,
Sebastiaan
Thanks for your reply! Too bad I only read it now, but I still have the problem!
Below is the Python code I got from Arrington Research.
We would like to track eye movements (not just control for fixation), so we need saccade and fixation locations and timing to be logged. Ideally we would also send messages indicating what is on the screen, but that is not strictly necessary, as long as I have a defined start point in the data. As far as I can see, the provided code doesn't give me this type of info.
What else would I need to get this info?
--
Hi,
What the above script seems to be doing is polling the tracker for transitions of the point of fixation between different areas/regions of interest (AOIs or ROIs, depending on which term you prefer).
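For reference, that kind of AOI-transition polling can be expressed in plain Python along these lines. This is a generic sketch, not VPX code: `samples` stands in for whatever gaze stream the API gives you, and the rectangle format is made up:

```python
def in_rect(point, rect):
    """Return True if an (x, y) point falls inside rect = (left, top, w, h)."""
    x, y = point
    left, top, w, h = rect
    return left <= x < left + w and top <= y < top + h


def poll_aoi_transitions(samples, aois):
    """Given a stream of (timestamp, (x, y)) gaze samples and a dict of
    named AOI rectangles, return (timestamp, aoi_name) events whenever
    gaze enters a different AOI (aoi_name is None when outside all AOIs)."""
    current = None
    events = []
    for t, point in samples:
        hit = None
        for name, rect in aois.items():
            if in_rect(point, rect):
                hit = name
                break
        if hit != current:
            events.append((t, hit))
            current = hit
    return events
```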
What it does not do, crucially, is tell us which command (or series of commands) in the VPX API starts a calibration, which commands start or pause recording, which command we need to log a message to the data file, and which command we can use to poll the current gaze position.
These are all important for keeping track of what happens with the participant (e.g. are they fixating, or making a saccade? - you need to poll the gaze position for this at the very least), and for keeping track of what happens in the experiment (logging screen transitions to the gaze data file). Some API documentation would be very helpful here.
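Once you can poll the gaze position, fixation-versus-saccade classification can be done with a simple velocity threshold over consecutive samples. This is a generic sketch (real implementations typically use degrees of visual angle per second rather than the pixels-per-millisecond used here):

```python
import math

def classify_samples(samples, velocity_threshold):
    """Label each consecutive pair of (timestamp, (x, y)) samples as
    'saccade' or 'fixation' using a simple velocity threshold.
    Units here are pixels per millisecond, purely for illustration."""
    labels = []
    for (t0, (x0, y0)), (t1, (x1, y1)) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue  # skip duplicate or out-of-order timestamps
        velocity = math.hypot(x1 - x0, y1 - y0) / dt
        labels.append((t1, 'saccade' if velocity > velocity_threshold else 'fixation'))
    return labels
```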
There are roughly two options:
1. You integrate the ViewPoint code in a new sub-class of PyGaze's EyeTracker class. This has been done before for new trackers, and it sounds a lot more complicated than it actually is.
2. You use the code in custom inline_scripts within your experiment. This is less sustainable, but it might be a bit easier than modifying an existing class (which requires a bit more programming experience).
Both options are contingent on knowing what commands there are in the VPX API, though.
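Option 1 could start from a skeleton like this. The method names follow PyGaze's tracker interface as I recall it (check the PyGaze source for the exact signatures), every VPX command string is a placeholder, and the import falls back to `object` so the sketch can run without PyGaze installed:

```python
try:
    # PyGaze's abstract tracker class; the exact module path may differ
    # between PyGaze versions.
    from pygaze._eyetracker.baseeyetracker import BaseEyeTracker
except ImportError:
    BaseEyeTracker = object  # fallback so the skeleton runs without PyGaze

class ViewPointTracker(BaseEyeTracker):
    """Skeleton for a ViewPoint back-end. Every command string below is a
    placeholder to be replaced with the real VPX API call."""

    def __init__(self, display, send_fn):
        self.display = display
        self._send = send_fn  # vendor function that sends string commands
        self.recording = False

    def calibrate(self):
        self._send('calibration_start')  # placeholder command

    def start_recording(self):
        self._send('dataFile_record_start')  # placeholder command
        self.recording = True

    def stop_recording(self):
        self._send('dataFile_record_stop')  # placeholder command
        self.recording = False

    def log(self, msg):
        self._send('dataFile_insert "%s"' % msg)  # placeholder command

    def sample(self):
        # Should return the current (x, y) gaze position from the tracker;
        # raises until the real VPX polling call is filled in.
        raise NotImplementedError('fill in the VPX gaze-polling call')
```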
NOTE: I have no experience with the library, and I don't have their tracker around to test code. I'm happy to provide help with specific things, but I won't be able to implement and/or test stuff on my end.
Cheers,
Edwin