EyeGaze contingencies
Hello, I am a masters student and I would like to create a gaze-contingent attention bias modification task using PyGaze and OpenSesame.
The display consists of two word stimuli (one correct and one incorrect) displayed in two rectangles.
I would like the rectangles’ outline to turn red when the participant looks at the incorrect word, and green when the participant looks at the correct word.
Also, I would like to play an ‘incorrect’ sound when the participant is not looking at any of the stimulus rectangles, and a ‘correct’ sound when the participant fixates on the correct word for a given amount of time.
Trials should begin with a gaze-contingent fixation dot, where the participant has to fixate on the dot for a given amount of time before resuming the trial. The documentation shows how to create this fixation dot, but I do not know how to set this required fixation duration.
I am also wondering how to set up the parameters for the drag-and-drop eyetracker items.
As you probably guessed, I am an absolute beginner. I learned OpenSesame basics by watching the video tutorials on the OpenSesame website, but I could not find any eyetracking videos. The PyGaze documentation is helpful, but not nearly enough at my level…
Are there any online resources to learn how to use PyGaze and OpenSesame ‘from scratch’?
Comments
Hi,
Are there any online resources to learn how to use PyGaze and OpenSesame ‘from scratch’?
Not that I am aware of. You could try browsing online repositories like OSF for projects that used PyGaze with OpenSesame.
I am also wondering how to set up the parameters for the drag-and-drop eyetracker items.
Essentially, start recording before a trial/block and stop recording after a trial/block by dragging and dropping the respective item to that position in the script. You can also record the entire experiment. In any case, you need to make sure that you know when things happen in your experiment, so you will probably need to send custom log messages to be able to sync experimental events with eye events.
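For example, a one-line inline_script placed right after pygaze_start_recording could send such a message (a sketch; the message format and the sequence name trial_sequence are assumptions):
```python
# Write a custom message to the eye-tracker log so that eye events can be
# synced with experimental events during offline analysis.
# count_trial_sequence is OpenSesame's built-in execution counter for an
# item named 'trial_sequence' (adjust to your sequence's actual name).
exp.pygaze_eyetracker.log('start_trial %d' % var.count_trial_sequence)
```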
Hello, I am a masters student and I would like to create a gaze-contingent attention bias modification task using PyGaze and OpenSesame.
The key function you are looking for is
exp.pygaze_eyetracker.sample()
which gives you the eye coordinates every time you call it. The idea is then to write a little loop that tracks eye position during a trial, detects whether a predefined event happens, and adjusts the experimental settings accordingly. Here is a link to one of my own projects where I did essentially that. It is a little convoluted (because of reasons), but if you work through the script, you may even learn something about OpenSesame/PyGaze ;)
https://osf.io/349xz/
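To give a rough idea of what such a loop could look like, here is a minimal sketch (the ROI coordinates and the 2000 ms limit are made up):
```python
# Gaze-polling sketch for an OpenSesame inline_script. Assumes pygaze_init
# has run and recording has started.
roi_left, roi_top, roi_right, roi_bottom = 312, 284, 712, 484
t0 = clock.time()
var.looked_at_roi = 'no'
while clock.time() - t0 < 2000:  # poll for at most 2000 ms
    gaze_x, gaze_y = exp.pygaze_eyetracker.sample()
    if roi_left <= gaze_x <= roi_right and roi_top <= gaze_y <= roi_bottom:
        var.looked_at_roi = 'yes'  # gaze entered the ROI: stop polling
        break
```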
Feel free to ask if you can't figure out what I have done.
Eduard
Thank you, Mr. Ort, for your kind reply and impressive code :)
I was able to use some of it to program almost the entire task (I think). I am now stuck on the last gaze contingency where the participant has to fixate on one of the stimuli to proceed to the next trial. I used the drag and drop items, and so I am wondering how to send the command to OpenSesame to clear the current display and proceed to the next trial.
Do you have any idea how I can code this?
Hi @mrhmatar ,
Could you upload your experiment here?
Cheers,
Lotje
Hi Lotje,
Here is the latest version of it. It is not working yet...
Best,
Mariah
Hi Mariah,
What exactly do you mean with:
I used the drag and drop items, and so I am wondering how to send the command to OpenSesame to clear the current display and proceed to the next trial.
Literally, how to stop a trial and change the screen to black?
You can show an empty screen by calling the
clear()
function on a canvas object and then showing it. Normally, you shouldn't need this, though. Because of the block/trial loop structure, the end of one trial is the start of the next: once the last code in your trial sequence has executed, the first code in that sequence starts again. If this does not happen, you have probably created an infinite loop in your Python code, meaning a situation in which it is impossible for the participant to proceed in the experiment. This can happen if you have some condition in your code that says: "Move to the next trial if fixation is at [this] location", where [this] is some location that is impossible to fixate (e.g. outside the screen).
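For completeness, a minimal sketch of blanking the screen from an inline_script, using the canvas API of OpenSesame 3:
```python
# Draw something, then blank the screen again.
my_canvas = canvas()   # the canvas() factory is available in inline scripts
my_canvas.fixdot()
my_canvas.show()
clock.sleep(500)       # keep the dot on screen for 500 ms
my_canvas.clear()      # remove all elements from the canvas
my_canvas.show()       # the display is now empty
```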
I haven't test-run your code in detail, but I can see two problems:
1)
if dist(xy, var.success_pos) == 0:
This will cause a problem because the distance will never be exactly zero. There will always be some deviation, so it is best to define a small epsilon that works as an upper bound for values that are treated as essentially zero. For example:
if dist(xy, var.success_pos) < 50:
The further away your stimuli are from each other, the more liberal you can be here.
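For illustration, a hypothetical dist() helper matching this usage (Euclidean distance between a gaze sample and a target position):
```python
import math

# Hypothetical helper matching the usage above: Euclidean distance between
# a gaze sample xy = (x, y) and a target position (x, y).
def dist(xy, target):
    return math.hypot(xy[0] - target[0], xy[1] - target[1])

# Anything within 50 px of the target counts as 'on target'.
xy = exp.pygaze_eyetracker.sample()
if dist(xy, var.success_pos) < 50:
    print('gaze is on the target')
```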
2)
while fixation_duration > 100:
This loop executes for as long as the fixation duration is larger than 100 ms. That is a strange construction, because within the while loop you never recompute the fixation duration. Once it has its value from before the loop, that value will never change, which leads either to an infinite loop, or to the loop never executing at all (if the fixation duration happens to be less than 100 ms to begin with).
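One way to fix this, as a sketch reusing the dist() check from above and assuming you want 100 ms of continuous fixation: recompute the elapsed time on every iteration, and reset the timer whenever the gaze leaves the target.
```python
# Wait until the gaze has stayed within 50 px of var.success_pos for 100 ms.
fix_start = clock.time()
while clock.time() - fix_start < 100:
    xy = exp.pygaze_eyetracker.sample()
    if dist(xy, var.success_pos) >= 50:
        fix_start = clock.time()  # gaze off target: restart the timer
```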
Maybe there are more problems with your code; I haven't checked it in detail. Generally, I recommend you start simple: implement a basic piece of gaze-contingent functionality first and, once it works, extend it step by step.
I hope this helps. Good luck!
Eduard
Here is my new code
I am getting this traceback:
Error while executing inline script
Details
item: fixdot
phase: run
item-stack: experiment[run].practice_loop[run].practice_block_seq[run].practice_block_loop[run].practice_trial_seq[run].fixdot[run]
time: Thu Mar 18 14:09:02 2021
exception type: TypeError
exception message: unsupported operand type(s) for -: 'Legacy' and 'float'
Traceback (also in debug window)
File "C:\Program Files (x86)\OpenSesame\Lib\site-packages\libopensesame\inline_script.py", line 116, in run
self.workspace._exec(self.crun)
File "C:\Program Files (x86)\OpenSesame\Lib\site-packages\libopensesame\base_python_workspace.py", line 124, in _exec
exec(bytecode, self._globals)
Inline script, line 9, in <module>
Inline script, line 6, in dist
TypeError: unsupported operand type(s) for -: 'Legacy' and 'float'
What does this mean?
Hello,
Update: This issue is resolved, but I have a new one...
I want to play an incorrect sound when the participant is not looking in one of the rectangles in my stimuli sketchpad. I created an inline script and a sampler for this, but if I place them before the sketchpad in the trial sequence, the sound doesn't play at all, and if I place them after it, the sound always plays, but not before the sketchpad starts fading.
There might be an error in the code, but to find out what it is, I need to make sure that the code in my inline script only runs while the right sketchpad is displayed. How can I do this?
Hi @mrhmatar ,
Sorry for the delay and glad to hear that you already resolved part of the issue.
Indeed, OpenSesame runs all items in a sequence sequentially, so you will probably need to play the sound file from an inline script (for example in a Python loop where you are evaluating eye position). You can do this, for example, by:
- adding a sampler item (for example called "my_sampler") to the trial sequence;
- setting its run-if statement to "never";
- playing it from an inline_script with code like the sketch below.
If this doesn't help, could you upload the most recent version of your experiment here?
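A minimal sketch of that inline code (items.execute() prepares and runs an item by name):
```python
# Play the 'never'-run sampler from Python whenever the sound is needed,
# e.g. inside a gaze-polling loop.
items.execute('my_sampler')
```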
Cheers,
Lotje
Hello Lotje,
No luck...
Also, something strange is happening where I cannot see my mouse moving once the experiment gets past the fixation dot. This does not happen when I run another experiment on my very same computer.
In all cases, here is the latest version of my code
Thank you very much for your help,
Mariah
Hi @mrhmatar ,
Also, something strange is happening where I cannot see my mouse moving once the experiment gets past the fixation dot. This does not happen when I run another experiment on my very same computer.
If you add a pygaze_start_recording (and pygaze_stop_recording) item to your trial sequence, the cursor should appear (when running in advanced dummy mode).
Unless I'm missing something (@sebastiaan or @eduard ?), I think the only way to apply gaze contingencies in the way you described (checking whether participants look at an area of 40 px around the fixation dot for at least 400 ms) is to use a while loop in a Python inline_script item. Herewith an example where a beep is played when participants look away from the fixation dot; see the comments for explanation. I implemented this fixation check in a simplified version of your experiment.
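A sketch of that kind of check (the sampler name 'beep_sampler' is a hypothetical stand-in; this is not the exact code from the attached experiment):
```python
import math

# Require 400 ms of continuous gaze within 40 px of the fixation dot.
max_dist = 40                                  # px around the fixation dot
min_fix_dur = 400                              # required fixation duration (ms)
dot_x, dot_y = var.width / 2, var.height / 2   # dot at the screen center

outside = False
fix_start = clock.time()
while clock.time() - fix_start < min_fix_dur:
    gaze_x, gaze_y = exp.pygaze_eyetracker.sample()
    if math.hypot(gaze_x - dot_x, gaze_y - dot_y) > max_dist:
        if not outside:
            items.execute('beep_sampler')      # beep once when gaze leaves
        outside = True
        fix_start = clock.time()               # restart the fixation timer
    else:
        outside = False
```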
Hope this helps!
Cheers,
Lotje
Hello Lotje,
Thank you for your code, I adapted it and it helped me a lot!
But my error sound is still not playing at quite the right time... I want the sound to play when the participant is not looking at one of the word stimuli, but it seems to play every time I make a big saccade with my mouse, regardless of the end position.
So sorry that this is dragging so much ... it's my first time.
UPDATE: this is solved!
It seems like the inline_script cannot read coordinates from the sketchpad when they are written as (0, 160), etc.
Hi,
Good to hear that it works! Just for reference though, could you clarify what you mean with your last post?
Eduard
ps, sorry for having been absent the last week
Hello,
I am having trouble reading the coordinates of items from my sketchpad canvas in my code, even though I named the items and followed the tutorials.
For example, in the gaze_cont inline script, I want to give feedback to my participants according to whether they are looking at the target (coloring the rectangle containing the target green) or the opposite stimulus (coloring the rectangle containing the neutral stimulus red). But when I write, for example:
my_canvas = items['practice_stimuli'].canvas
var.x_targ = my_canvas['targ'].width/2
var.y_targ = my_canvas['targ'].height/2
var.x_neut = my_canvas['neut'].width/2
var.y_neut = my_canvas['neut'].height/2
eye_x, eye_y = exp.pygaze_eyetracker.sample()
dist_target = getDist(eye_x,eye_y,var.x_fix, var.y_targ)
dist_neutral = getDist(eye_x,eye_y,var.x_fix, var.y_neut)
if dist_target < var.threshold and (eye_x,eye_y) in my_canvas['rect_u']:
feed_tug = copy_sketchpad('target_ug')
feed_tug.show()
break
The canvas attributes (my_canvas['targ'].width, etc.) do not seem to be working. When I replace them with:
var.x_fix = var.width/2
var.y_fix_1 = var.height/4
var.y_fix_2 = 3*var.y_fix_1
the desired sketchpads are displayed, but since the target position changes in my experiment, it's not what I want.
Sorry for the loooong reply. Here is the updated version if you'd like to see for yourself:
Hi,
Sorry, but I had a somewhat hard time reading your code and seeing what the problem was, so I took the liberty of rewriting it my way, based on what I assumed you wanted to accomplish. The script is attached, along with some notes.
I hope this is useful to you, and that I didn't misinterpret your goals too much.
Good luck,
Eduard
Hello,
Again, the script was very helpful. We're getting somewhere!
I understand your confusion; it's a complicated task... here is a step-by-step explanation of what I would like it to look like:
My latest code is very close to what I would like to achieve, except that it is bugging like crazy.
Thanks again for your help :)
Thanks for clarifying!
play incorrect sound if participant is not looking at one of the stimuli
I'm not sure about this part, though. The two boxes essentially cover the whole screen, and it is hard to look anywhere that is not a stimulus without passing through one of them. In particular, the fixation dot is placed outside the two boxes, so it would already trigger the negative feedback. The only (easy) solution I can see is to treat the entire central area, including the two stimuli and the area in between, as one region of interest, and to give negative feedback when someone looks outside that box.
Would that work?
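As a rough sketch of that idea (the ROI coordinates and the sampler name are made up):
```python
# One big region of interest spanning both stimulus boxes and the area
# in between; negative feedback when the gaze falls outside it.
roi_left, roi_top, roi_right, roi_bottom = 112, 184, 912, 584
gaze_x, gaze_y = exp.pygaze_eyetracker.sample()
if not (roi_left <= gaze_x <= roi_right and roi_top <= gaze_y <= roi_bottom):
    items.execute('my_sampler')  # play the 'incorrect' sound
```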
Sorry, this is the latest code.
Starting to get lost with all these versions...
Hello Eduard,
Thank you for replying on a Saturday; I did not expect that and actually only just saw your response!
You are right about the boxes being really big. What I actually did in my code is play the sound (which works on my PC, by the way) whenever the participant is not looking at the center of either rectangle, and I increased my threshold to 100 px (roughly half the height of a rectangle). This gives a circular AOI with a diameter equal to the height of a rectangle, which seems to be working.
The code I just sent seems to do what I want, except that the words are not always coloring when they should, or returning to black when they should. It seems like the eye position is not always detected quickly enough. The sounds are playing correctly.
Hi,
Yeah, your code is overly complicated, which I tried to improve on in my last post, but as I can see, you prefer yours ;)
Well, I don't give up, so I simplified it again. The colors now seem to behave the way you intend. About the sound I am not sure, but since you say it works correctly, I didn't change its behaviour.
Hope this solves it then.
Eduard
Thanks A MILLION!!! This is perfect.
Thank you so much for your help.
Sincerely,
Mariah
Hello,
Thanks again for your help in the past few days.
I am trying to log the number of times the participant fixates on either the target or the opposite stimulus in my experiment.
For this, I created two new variables, var.targ_fix and var.oppstim_fix, which I increment in exactly the same way right after each if condition is fulfilled.
I kept everything else pretty much the same as it was before.
var.targ_fix is behaving normally (i.e. it increments the first time the gaze is detected on the target and stops incrementing if the gaze stays there), but var.oppstim_fix keeps incrementing for as long as the gaze is on the opposite stimulus, even if the gaze does not move. It looks like it is computing the fixation time rather than counting fixations. Attempts at fixing the value with a while loop have failed...
Best regards,
Hi @mrhmatar ,
Could you upload the most recent version of your experiment and indicate where counting the number of fixations goes wrong?
I do want to point out that analyses such as these are customarily done offline, on the eye-tracking output data, so that you can, for example, apply corrections (for blinks, drift, etc.) and keep the coding in the run phase in OpenSesame to the necessary minimum (for example, for the gaze-contingent parts).
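That said, if you do want to count online, a counter that keeps incrementing usually means it is incremented on every gaze sample rather than only at fixation onset. A sketch of an onset-based count, reusing variable names from your earlier code (the loop duration is an assumption):
```python
import math

# Increment only on the transition from 'not on the opposite stimulus'
# to 'on the opposite stimulus', instead of on every gaze sample.
on_opp = False
t0 = clock.time()
while clock.time() - t0 < 2000:  # poll for the duration of the display
    gaze_x, gaze_y = exp.pygaze_eyetracker.sample()
    now_on_opp = math.hypot(gaze_x - var.x_neut,
                            gaze_y - var.y_neut) < var.threshold
    if now_on_opp and not on_opp:
        var.oppstim_fix += 1  # a new fixation on the opposite stimulus
    on_opp = now_on_opp
```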
Hope this helps a little bit.
Cheers,
Lotje.