Using OpenSesame/PyGaze for dual eye tracking?
Hey folks,
I was wondering whether OpenSesame could support running two eye trackers (ideally EyeTribes) simultaneously. The EyeTribe website says that the EyeTribe server only supports operating one tracking device at a time. But maybe we could make use of the "parallel" item to create two eye-tracking sequences, so that each of them runs its own eye tracker? Is this even theoretically possible?
Thanks,
Han
Comments
Hi Han,
Not an expert here, but I do know that the EyeTribe server needs to run in the background for you to be able to use it in OpenSesame, so I think the EyeTribe server, rather than OpenSesame, will be the bottleneck.
I can imagine there may be solutions where you connect one EyeTribe to another computer with its own EyeTribe server. You create a simple experiment for the second EyeTribe, where you just trigger the recording module upon receiving a signal from your first computer (through the parallel port; see for instance the EEG section on the documentation site). The first computer will be where you run your actual experiment.
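If you do go the parallel-port route, a minimal sketch of such a trigger with the pyparallel package, purely for illustration, could look like this (the trigger value and the wiring between the two ports are up to you):

```python
# Illustrative sketch (untested) of a parallel-port trigger with pyparallel.
import parallel

port = parallel.Parallel()  # opens the first parallel port
# On the first computer: raise the data lines to signal "start recording".
port.setData(1)
# On the second computer: poll an input line until it changes, then start
# the recording module. Which line to read (e.g. port.getInAcknowledge())
# depends on how the two ports are wired together.
```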
Cheers,
Josh
Just to elaborate on what @Josh said (which is correct): The EyeTribe has a server that runs in the background, and which OpenSesame (or rather PyGaze) connects to. Usually, the server runs on the same computer as OpenSesame, but it doesn't need to. It's perfectly possible to have multiple servers on multiple computers, using multiple EyeTribes, and connecting to all of them from a single OpenSesame experiment.
However, this will need a bit of inline scripting, so let us know if you want to try this route.
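For example, here is a rough, untested sketch of pulling a single gaze sample from a remote EyeTribe server over its JSON-over-TCP API, assuming that server runs on its default port 6555 and accepts connections from other machines on your network (the IP address below is a placeholder; you may need to adjust the server's configuration to allow remote connections):

```python
# Rough sketch (untested): request one gaze frame from a remote EyeTribe server.
import json
import socket

REMOTE_HOST = '192.168.1.12'  # placeholder IP of the second computer
REMOTE_PORT = 6555            # default EyeTribe server port

sock = socket.create_connection((REMOTE_HOST, REMOTE_PORT), timeout=1.0)
# Ask the server for the most recent gaze frame
request = {'category': 'tracker', 'request': 'get', 'values': ['frame']}
sock.sendall(json.dumps(request).encode('utf-8'))
reply = json.loads(sock.recv(4096).decode('utf-8'))  # assumes one full reply per recv
frame = reply['values']['frame']
print('Smoothed gaze position:', frame['avg']['x'], frame['avg']['y'])
sock.close()
```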
Check out SigmundAI.eu for our OpenSesame AI assistant!
Thank you so much @Josh and @sebastiaan ! I am really interested in the idea of connecting multiple EyeTribe servers to a single OpenSesame experiment. I guess this could be set up as a main computer with a mirror computer, like Josh said, or as a main computer for the experimenter with two data-collection computers.
I will certainly do some research on this. Meanwhile, do you have any suggestions about where I should look first? I have some experience using Python for manipulating data, but zero experience using it for programming experiments... and I know almost nothing about how to set up communication between two computers or the like...
Hi,
It is always a good idea to work through the tutorials on the documentation website. If you want to learn more, you can also check some of the example experiments that come with OpenSesame and try to understand why they are implemented the way they are. In doing so, you should get a good understanding of creating experiments with Python.
For your second point, setting up communication between computers, I can't recommend a specific resource, but I am sure Google knows more.
Hi,
I have some updates on the dual eye tracking:
I managed to make the two eye trackers start recording at (roughly) the same time. The computers do not have to be physically connected as long as they are on the same LAN. We have a sender computer and a receiver computer, each running its own experiment. The two computers first do their eye-tracker calibrations separately. Then the receiver waits for the sender to send a trigger, so that both of them start recording at the same moment. Here is the design:
The sender:
The receiver:
I found that my computers do not have parallel ports, so I tried connecting them over the network instead. Luckily it worked. However, I wonder whether this method yields longer delays than the parallel-port method and makes the synchronization less accurate...
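Since the screenshots above are not reproduced here, this is roughly the idea of such a UDP start trigger (IP address and port are placeholders):

```python
# Rough sketch (untested) of a UDP start trigger between the two machines.
import socket

RECEIVER_IP = '192.168.1.13'  # placeholder address of the receiver PC
PORT = 5005                   # placeholder port

# --- sender, right before recording should start ---
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b'start', (RECEIVER_IP, PORT))

# --- receiver, waiting for the trigger ---
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(('', PORT))            # listen on all interfaces
msg, addr = receiver.recvfrom(1024)  # blocks until the trigger arrives
if msg == b'start':
    pass  # start the eye-tracker recording here
```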
I also managed to draw a circle on the receiver's screen to indicate the sender's eye position:
sender:
receiver:
My goal is to superimpose the fixation indicator on the stimuli that the participants are looking at. For example, a mother and her child read the same story, with the child's fixation position superimposed on the mother's screen. However, my current code only draws a circle on a black screen. I wonder if there is any way to achieve this?
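For reference, a stripped-down sketch of what the sending side of such a loop might look like in an inline_script, assuming a pygaze_init item has created the eyetracker object (IP address and port are placeholders):

```python
# Rough sketch (untested): sample gaze with PyGaze and push each sample over UDP.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
RECEIVER = ('192.168.1.13', 5005)  # placeholder address and port

while True:  # replace with your actual stopping condition
    gx, gy = eyetracker.sample()   # PyGaze: current gaze position
    sock.sendto(('%f,%f' % (gx, gy)).encode('utf-8'), RECEIVER)
    clock.sleep(10)                # roughly 100 samples per second
```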
I am really bad at coding, so the code might look ugly and unprofessional. I would really appreciate any suggestions you may have, and I would love to keep working on this project.
You've come quite a long way, congrats!
No, that's fine. If you have a good network connection, that is, through an Ethernet cable, UDP sockets are really fast. If you go through WiFi it will be a bit less reliable, but we're still talking about milliseconds in most cases.
There are a few things you need to take into account here:
- In OpenSesame, (0,0) is the center of the display (at least in OpenSesame 3.0). For most eye trackers, it is the top-left corner. So you need to do a simple coordinate transformation.
- Drawing is done on a canvas. If you want to show the same display on the mother's and child's PCs, then you need to explicitly draw it to the canvas on every frame. (Probably using canvas.image().)
- socket.recvfrom() collects exactly one message. Then, when the receiver detects a complete message (here one gaze sample) in the buffer, it processes it. Also, socket.recvfrom() can time out, in which case an Exception is triggered that you'll probably want to catch.
Does this make sense? It sounds quite complicated, and it kind of is. But, looking at what you've accomplished so far, I think you can do it.
Check out SigmundAI.eu for our OpenSesame AI assistant!
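To illustrate these points, here is a rough, untested sketch of one way the receiver's drawing loop could look in an OpenSesame 3 inline_script (the port, image file name, circle size, and stopping condition are placeholders):

```python
# Rough sketch (untested): receive gaze samples over UDP and redraw the
# stimulus plus a gaze marker on every frame (OpenSesame 3 inline_script).
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('', 5005))    # same placeholder port as the trigger example
sock.settimeout(0.005)   # don't let recvfrom() block the drawing loop for long

my_canvas = canvas()     # OpenSesame canvas factory (available in inline_script)

while True:  # replace with your actual stopping condition
    try:
        msg, addr = sock.recvfrom(1024)  # one gaze sample per datagram
        gx, gy = [float(v) for v in msg.decode('utf-8').split(',')]
    except socket.timeout:
        continue  # no new sample yet; try again
    # Eye trackers report (0,0) at the top-left; OpenSesame 3 uses the
    # screen center, so shift by half the resolution.
    gx -= var.width / 2
    gy -= var.height / 2
    my_canvas.clear()
    my_canvas.image(pool['story.png'])  # redraw the stimulus every frame
    my_canvas.circle(gx, gy, 20)        # gaze marker on top of it
    my_canvas.show()
```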