Using OpenSesame/Pygaze for dual eye tracking?

edited January 2016 in OpenSesame

Hey folks,

I was wondering whether OpenSesame could support running two eye trackers (ideally EyeTribes) simultaneously. The EyeTribe website says that the EyeTribe server only supports operating one tracking device at a time. But maybe we could use the "parallel" function to create two eye-tracking sequences, so that each of them runs its own tracker? Is this even theoretically possible?

Thanks,
Han

Comments

  • edited 12:51AM

    Hi Han,

    Not an expert here, but I do know that the EyeTribe server needs to run in the background in order for you to be able to use it in OpenSesame; so I think the EyeTribe server rather than OpenSesame will be the bottleneck.

    I can imagine there may be solutions where you connect one EyeTribe to another computer with its own EyeTribe server. You create a simple experiment for the second EyeTribe, where you just trigger the recording module upon receiving a signal from your first computer (through the parallel port; see for instance the EEG section on the documentation site). The first computer will be where you run your actual experiment.

    Cheers,

    Josh

  • edited 12:51AM

    Just to elaborate on what @Josh said (which is correct): The EyeTribe has a server that runs in the background, and which OpenSesame (or rather PyGaze) connects to. Usually, the server runs on the same computer as OpenSesame, but it doesn't need to. It's perfectly possible to have multiple servers on multiple computers, using multiple EyeTribes, and connecting to all of them from a single OpenSesame experiment.

    However, this will need a bit of inline scripting, so let us know if you want to try this route.
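    To give a rough idea of what such inline scripting might involve: as far as I know, the EyeTribe server exposes a JSON-over-TCP API (by default on port 6555), so a single experiment could in principle query several servers by their addresses. A minimal Python 3 sketch, with made-up IP addresses; this is an illustration of the idea, not the PyGaze implementation:

```python
import json
import socket

EYETRIBE_PORT = 6555  # default port of the EyeTribe server

def frame_request():
    """Build the JSON request that asks an EyeTribe server for one gaze frame."""
    return json.dumps({"category": "tracker",
                       "request": "get",
                       "values": ["frame"]})

def get_frame(host, port=EYETRIBE_PORT, timeout=1.0):
    """Connect to one EyeTribe server and request a single gaze frame."""
    sock = socket.create_connection((host, port), timeout=timeout)
    try:
        sock.sendall((frame_request() + "\n").encode("utf-8"))
        reply = sock.recv(4096).decode("utf-8")
        return json.loads(reply)
    finally:
        sock.close()

if __name__ == "__main__":
    # Hypothetical addresses: one EyeTribe server per tracking PC.
    for host in ("192.168.1.10", "192.168.1.11"):
        try:
            print(get_frame(host))
        except OSError as exc:
            print("could not reach %s: %s" % (host, exc))
```

    The same loop could run inside an inline_script item, polling each server in turn.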

    There's much bigger issues in the world, I know. But I first have to take care of the world I know.
    cogsci.nl/smathot

  • edited January 2016

Thank you so much @Josh and @sebastiaan! I am really interested in the idea of connecting multiple EyeTribe servers to a single OpenSesame experiment. I guess this could be set up as a main computer with a mirror computer, like Josh said, or as a main computer for the experimenter plus two data-collection computers.

I will certainly do some research on this. Meanwhile, do you have any suggestions about where I should look first? I have some experience using Python for manipulating data, but zero experience using it for programming... and I know almost nothing about setting up communication between two computers or the like...

  • edited 12:51AM

    Hi,

It is always a good idea to work through the tutorials on the documentation website. If you want to learn more, you can also check some of the example experiments that ship with OpenSesame and try to understand why they are implemented the way they are. In doing so, you should get a good understanding of creating experiments with Python.

As for your second point, setting up communication between computers, I can't recommend a specific resource, but I am sure Google knows more.

  • edited 12:51AM

    Hi,

    I have some updates on the dual eye tracking:

I managed to make the two eye trackers start recording at (roughly) the same time. The computers do not have to be physically connected as long as they are on the same LAN. We have a sender computer and a receiver computer, each running its own experiment. The two computers first calibrate their eye trackers separately. Then the receiver waits for the sender to send a trigger, so that both can start recording. Here is the design:

    The sender:

    from socket import socket, AF_INET, SOCK_DGRAM
    from openexp.canvas import canvas
    from openexp.keyboard import keyboard
    
    # Show a start prompt
    my_canvas = canvas(exp)
    my_canvas.text('<b>Hit the space bar to start simultaneous recording...</b>')
    my_canvas.show()
    
    # Wait for the space bar (keylist restricts input to the space bar)
    key_press = keyboard(exp, keylist=['space'])
    key, end_time = key_press.get_key()
    
    # Send the start trigger to the receiver over UDP
    host = "35.2.15.136"  # the IP address of the receiver
    port = 13000
    addr = (host, port)
    UDPSock = socket(AF_INET, SOCK_DGRAM)
    UDPSock.sendto("1", addr)
    

    The receiver:

    from socket import socket, AF_INET, SOCK_DGRAM
    from openexp.canvas import canvas
    
    # Show a waiting message
    my_canvas = canvas(exp)
    my_canvas.text('<b>Waiting for the main computer...</b>')
    my_canvas.show()
    
    # Listen on all interfaces for the start trigger
    host = ""
    port = 13000
    buf = 1024
    addr = (host, port)
    UDPSock = socket(AF_INET, SOCK_DGRAM)
    UDPSock.bind(addr)
    
    while True:  # Wait for the trigger from the sender
        (data, addr) = UDPSock.recvfrom(buf)
        if data == "1":
            break
    

    My computers do not have parallel ports, so I tried connecting them over the network instead. Luckily it worked. However, I wonder whether this method yields longer delays than the parallel-port method and makes the synchronization less accurate...
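    To get a feel for the delay, one could time the round trip of a UDP packet and halve it. Here is a minimal Python 3 sketch (not the code from the experiment above): it echoes a packet over the loopback interface, but the same two halves could run on the two LAN computers; the port is picked by the OS here:

```python
import socket
import threading
import time

def make_echo_socket():
    """Bind a UDP socket to an OS-assigned free port (the receiver side)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
    return sock

def echo_once(sock):
    """Receiver side: echo the first datagram back to its sender."""
    data, addr = sock.recvfrom(1024)
    sock.sendto(data, addr)
    sock.close()

def measure_rtt(addr):
    """Sender side: time one UDP round trip; half of it approximates the one-way delay."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    t0 = time.time()
    sock.sendto(b"ping", addr)
    sock.recvfrom(1024)
    rtt = time.time() - t0
    sock.close()
    return rtt

if __name__ == "__main__":
    echo = make_echo_socket()
    t = threading.Thread(target=echo_once, args=(echo,))
    t.start()
    rtt = measure_rtt(echo.getsockname())
    t.join()
    print("round trip: %.3f ms" % (rtt * 1000))
```

    Running the sender half against the real receiver would tell you how many milliseconds of trigger delay to expect on your network.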

    I also managed to draw a circle on the receiver's screen to indicate the sender's eye position:

    sender:

    while True:
        # Get the position of the most recent fixation and send it as "x y"
        pos_tuple = eyetracker.wait_for_fixation_start()[1]
        data = str(pos_tuple[0]) + ' ' + str(pos_tuple[1])
        UDPSock.sendto(data, addr)
    

    receiver:

    from openexp.canvas import canvas
    
    while True:
        # Receive an "x y" sample and draw a circle at that position
        (data, addr) = UDPSock.recvfrom(buf)
        x, y = data.split()
        fixpoint = canvas(exp)  # create a canvas and draw the fixation point
        fixpoint.circle(float(x), float(y), 30)
        fixpoint.show()
    

    My goal is to superimpose the fixation indicator on the stimuli that the participants are looking at: for example, a mother and her child read the same story, with the child's fixation position superimposed on the mother's screen. However, the current code only draws a circle on a black screen. I wonder whether it is possible to achieve this?

    I am really bad at coding, so the code might seem ugly and unprofessional. I would really appreciate any suggestions you may have, and I would love to keep working on this project.

  • edited 12:51AM

    You've come quite a long way, congrats!

    My computers do not have parallel ports, so I tried connecting them over the network instead. Luckily it worked. However, I wonder whether this method yields longer delays than the parallel-port method and makes the synchronization less accurate...

    No, that's fine. Over a good connection, that is, a wired Ethernet link, UDP sockets are really fast. Over WiFi it will be a bit less reliable, but we're still talking about milliseconds in most cases.

    My goal is to superimpose the fixation indicator on the stimuli that the participants are looking at: for example, a mother and her child read the same story, with the child's fixation position superimposed on the mother's screen. However, the current code only draws a circle on a black screen. I wonder whether it is possible to achieve this?

    There are a few things you need to take into account here:

    1. For OpenSesame, (0,0) is the center (at least in OpenSesame 3.0). For most eye trackers, it is the top-left. So you need to do a simple coordinate transformation.
    2. Right now, you're not drawing any image onto the canvas. If you want to show the same display on the mother's and child's PCs, then you need to explicitly draw it to the canvas on every frame. (Probably using canvas.image().)
    3. Your code assumes that every socket.recvfrom() returns exactly one intact gaze sample. This may often work, but it's not guaranteed: datagrams can be lost, duplicated, or arrive out of order, and if a datagram is larger than your receive buffer, the excess is silently discarded. Ideally, the sender adds markers that indicate where one message (here one gaze sample) begins and ends, and the receiver continuously appends incoming data to a buffer, without assuming that each socket.recvfrom() collects exactly one message. Then, when the receiver detects a complete message in the buffer, it processes it. Also, socket.recvfrom() can time out, in which case an exception is raised that you'll probably want to catch. Does this make sense? It sounds quite complicated, and it kind of is. But, looking at what you've accomplished so far, I think you can do it.
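    To make points 1 and 3 a bit more concrete, here is a minimal Python 3 sketch (the names and the 1024x768 resolution are made up) of the coordinate transformation and of newline-delimited message framing with a receive buffer:

```python
def tracker_to_canvas(x, y, width=1024, height=768):
    """Map tracker coordinates (origin top-left) to OpenSesame 3
    coordinates (origin at the screen center)."""
    return x - width / 2.0, y - height / 2.0

class SampleBuffer(object):
    """Accumulate raw socket data and yield only complete gaze samples.
    Assumes the sender terminates each "x y" sample with a newline."""

    def __init__(self):
        self._buf = ""

    def feed(self, data):
        """Append incoming data; return the list of completed (x, y) samples."""
        self._buf += data
        samples = []
        while "\n" in self._buf:
            line, self._buf = self._buf.split("\n", 1)
            x_str, y_str = line.split()
            samples.append((float(x_str), float(y_str)))
        return samples
```

    The receiver would then call feed() with whatever recvfrom() returns, and only draw a circle for each complete sample, after passing it through tracker_to_canvas().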

    There's much bigger issues in the world, I know. But I first have to take care of the world I know.
    cogsci.nl/smathot
