[open] Arrington ViewPoint Eye Tracker

jsneuro Posts: 6
edited March 2016 in Miscellaneous

I am planning an fMRI study using the Arrington ViewPoint eye tracker: http://www.arringtonresearch.com/
So far I have run all pilots with OpenSesame, and I would prefer to use OpenSesame in the scanner too, but as far as I can see the ViewPoint eye tracker is not supported. Arrington provides some Python code, but this code is very minimal. The connection is made through an ethernet cable, and since I am not an expert (on either the hardware or the software side) I am not sure how to get it to work with OpenSesame.

Will OpenSesame also support the ViewPoint eye tracker at some point, or does anyone have some advice on how to make it work? Any advice is highly appreciated!

Comments

  • sebastiaan Posts: 2,737
    edited 7:28PM

    Although Arrington provides some Python code, this code is very minimal. The connection is made through an ethernet cable, and since I am not an expert (neither on the hardware or software side) I am not sure how to get it to work with OpenSesame.

    If Arrington has some example code for how to use the tracker through Python, then it may essentially be a matter of copy-pasting this code into Python inline scripts. I'm not familiar with the Arrington though. What do you want to do exactly, and what kind of example code do you have?
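
    For example, if their ctypes demo works, a first test in OpenSesame could be as simple as the sketch below in the prepare phase of an inline_script. This is untested on my end; the DLL path is just whatever your installation uses, and note that under Python 3 ctypes wants a bytes string for char* arguments.

    # Minimal, untested sketch: load the ViewPoint DLL from an
    # OpenSesame inline_script and send a test command.
    from ctypes import CDLL

    vpx = CDLL("C:/ARI/VP/VPX_InterApp.dll")  # <<<< CHANGE AS NEEDED

    # ViewPoint must already be running for commands to have any effect.
    if vpx.VPX_GetStatus(1) < 1:  # 1 = VPX_STATUS_ViewPointIsRunning
        print("ViewPoint is not running")
    vpx.VPX_SendCommand(b'say "Hello from OpenSesame"')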

    Cheers,
    Sebastiaan

    There's much bigger issues in the world, I know. But I first have to take care of the world I know.
    cogsci.nl/smathot

  • jsneuro Posts: 6
    edited March 2016

    Thanks for your reply! Too bad I'm only reading it now, but I still have the problem!

    Below is the Python code I got from Arrington Research.
    We would like to track eye movements (not just control for fixation), so we need saccade and fixation locations and timing information logged. Ideally we would also send messages about what is on the screen, but that is not strictly necessary, as long as I have a defined start point for the data. As far as I can see, the provided code doesn't give me this type of information.
    What else would I need to get this info?

    --

    # ViewPoint EyeTracker (R) interface to Python 3 (vpxPython3_Demo_09.py)
    #       Verify the sections below marked:       # <<<< CHANGE AS NEEDED
    #   Paths must use either (a) Forward Slashes, or (b) Double Back Slashes !!!
    #   To use Python 2 change: print("x") --to--> print "x"
    #   07-Dec-2010 : kfa : changed to Python3 and added vpxDll access check.
    #
    #   To run this, either put this file in the Python root directory, or do:
    #       import sys          # to set where to look for modules
    #       sys.path.append("C:/ARI/VP")    # <<<< CHANGE AS NEEDED
    #       import vpxPython3_Demo      # <<<< CHANGE AS NEEDED, without .py
    #
    #   This demo prints a line whenever an ROI is newly entered or exited.
    #   Nothing is printed while the gaze point remains inside an ROI.
    #   Example:    [3,19] means the gaze point just entered ROI#3 and ROI#19,
    #           [-3] means the gaze point has just exited ROI#3
    #
    
    from ctypes import *
    import os
    
    #  CONSTANTS (see vpx.h for full listing)
    VPX_STATUS_ViewPointIsRunning = 1
    EYE_A = 0
    VPX_DAT_FRESH = 2
    
    #
    #  Load the ViewPoint library
    vpxDll = "C:/ARI/VP/VPX_InterApp.dll"   # <<<< CHANGE AS NEEDED
    if ( not os.access(vpxDll,os.F_OK) ):
        print("WARNING: Invalid vpxDll path; you need to edit the .py file")
    cdll.LoadLibrary( vpxDll )
    vpx = CDLL( vpxDll )
    vpx.VPX_SendCommand(b'say "Hello from Python" ')   # Python 3: ctypes needs bytes for char*
    if ( vpx.VPX_GetStatus(VPX_STATUS_ViewPointIsRunning) < 1 ):
        print("ViewPoint is not running")
    #
    #  Create needed structures and a callback function
    class RealPoint(Structure):
        _fields_ = [("x",c_float),("y",c_float)]
    
    # Declare a RealPoint variable to receive gaze data
    gp = RealPoint(1.1,1.1)
    
    VPX_CALLBACK = CFUNCTYPE( c_int, c_int, c_int, c_int, c_int )
        #   The first argument is the return type; the rest are the parameter types.
    
    def ViewPointMessageCallback( msg, subMsg, p1, p2 ):
        if ( msg == VPX_DAT_FRESH ):
            roiList = []
            for ix in range(5):
                roiNumber = vpx.VPX_ROI_GetEventListItem( EYE_A, ix )
                if (roiNumber != -9999):
                    roiList.append(roiNumber)
                else:
                    break
            if (len(roiList)>0):
                print(roiList)
        return 0
    #
    #  Register the Python callback function with the ViewPoint DLL
    vpxCallback = VPX_CALLBACK(ViewPointMessageCallback)
    vpx.VPX_InsertCallback(vpxCallback)
    
  • Edwin Posts: 635
    edited 7:28PM

    Hi,

    What the above script does, essentially, is register a callback that reports transitions of the point of fixation between different areas/regions of interest (AOIs or ROIs, depending on which term you prefer).

    What it does not do, crucially, is tell us which command (or series of commands) in the VPX API starts a calibration, which commands start or pause recording, which command logs a message to the data file, and which command polls the current gaze position.

    These are all important for keeping track of what the participant is doing (e.g. are they fixating or making a saccade? For that you need to poll the gaze position, at the very least), and for keeping track of what happens in the experiment (logging screen transitions to the gaze data file). Some API documentation would be very helpful here.
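
    To make that concrete, here is a rough, untested sketch of the kind of calls to look for. The function and command names (VPX_GetGazePoint2, VPX_GetDataTime2, dataFile_InsertString) are my assumptions about the VPX API; verify them against vpx.h and the ViewPoint manual before relying on them.

    # Rough, UNTESTED sketch: poll the current gaze position and its
    # timestamp, and write a marker into ViewPoint's data file.
    # VPX_GetGazePoint2, VPX_GetDataTime2 and dataFile_InsertString are
    # assumed names; check vpx.h and the ViewPoint manual.
    from ctypes import CDLL, Structure, c_float, c_double, byref

    class RealPoint(Structure):
        _fields_ = [("x", c_float), ("y", c_float)]

    EYE_A = 0
    vpx = CDLL("C:/ARI/VP/VPX_InterApp.dll")  # <<<< CHANGE AS NEEDED

    gp = RealPoint()
    t = c_double()
    vpx.VPX_GetGazePoint2(EYE_A, byref(gp))  # most recent gaze position
    vpx.VPX_GetDataTime2(EYE_A, byref(t))    # timestamp of that sample
    print(gp.x, gp.y, t.value)

    # Mark an event (e.g. stimulus onset) in the ViewPoint data file:
    vpx.VPX_SendCommand(b'dataFile_InsertString "stimulus_onset"')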

    There are roughly two options:

    1. You integrate the ViewPoint code in a new sub-class of PyGaze's EyeTracker class. This has been done before with new trackers, and it sounds a lot more complicated than it actually is (see the skeleton after this list).
    2. You use the code in custom inline_scripts within your experiment. This is less sustainable, but it might be a bit easier than modifying an existing class (which requires a bit more programming experience).

    Both options are contingent on knowing what commands there are in the VPX API, though.
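
    For option 1, a back-end skeleton could look roughly like the sketch below. I am assuming PyGaze's tracker interface here (method names such as calibrate, log and sample should be checked against the PyGaze source), and the VPX calls are placeholders for whatever the manual actually specifies.

    # Rough, UNTESTED skeleton for a ViewPoint back-end. In PyGaze this
    # would sub-class the EyeTracker base class; the method names and
    # VPX calls below are assumptions and must be verified.
    from ctypes import CDLL, Structure, c_float, byref

    class RealPoint(Structure):
        _fields_ = [("x", c_float), ("y", c_float)]

    class ViewPointTracker:

        def __init__(self, dll_path="C:/ARI/VP/VPX_InterApp.dll"):
            # ViewPoint itself must already be running when the DLL loads.
            self.vpx = CDLL(dll_path)
            self.eye = 0  # EYE_A

        def calibrate(self):
            # Placeholder: trigger ViewPoint's own calibration routine
            # (the actual command name must come from the manual).
            self.vpx.VPX_SendCommand(b'calibration_Snap')

        def log(self, msg):
            # Placeholder: write a message to the ViewPoint data file.
            cmd = 'dataFile_InsertString "%s"' % msg
            self.vpx.VPX_SendCommand(cmd.encode('ascii'))

        def sample(self):
            # Poll the most recent gaze position (function name assumed).
            gp = RealPoint()
            self.vpx.VPX_GetGazePoint2(self.eye, byref(gp))
            return gp.x, gp.y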

    NOTE: I have no experience with the library, and I don't have their tracker around to test code. I'm happy to provide help with specific things, but I won't be able to implement and/or test stuff on my end.

    Cheers,

    Edwin
