Measuring proportion of fixations to each picture during specific time windows
Hi! I am new to OpenSesame and, unfortunately, I have zero knowledge of Python. That is why I am writing here to ask for help with an experiment that I am going to conduct using a GazePoint eye tracker. I did my best and built the main structure, but I still have difficulties with some things that (I guess) require inline Python scripts.
The experimental design consists of 4 pictures on the screen and an audio sentence that accompanies each picture set. I need to measure the proportions of fixations to each picture, averaged over the following time windows:
- the duration of the whole audio sentence, from the onset of the subject to the end
- the duration of the sentence from the subject until 200 ms after the object is mentioned
Could you please help me with this? I have attached what I've built so far. I used to build similar experimental designs using both EyeLink and Tobii software, but those didn't require any coding skills.

Comments
I couldn't attach the file since it was too big. Here is the link to the Google Drive with the file: https://drive.google.com/file/d/1FyL69VotrqOnm0HmJjCuyIBPZPP_tLQs/view?usp=share_link
I would also like to move to stimulus presentation only once participants have fixated a central fixation cross for at least 500 ms. I would appreciate it if anyone could help me with this as well.
Hi Natasha,
I need to measure the proportions of fixations to each picture averaged over the following time windows
Do you need this information during the experiment or only for later analyses? If the latter, I would recommend you leave this out of the experiment and include it in your analysis scripts instead; that is more efficient. Then you only need to keep in mind to send log messages to the tracker, so that you know which samples belong to which phases of the experiment.
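For example, a one-liner in an inline_script at the start of each phase is enough (a minimal sketch; it assumes a pygaze_init item has run earlier in the experiment, which makes the eyetracker object available in inline scripts):

# Write a marker into the tracker's data stream; the tracker
# timestamps the message itself, so you can later cut the samples
# into phases around these markers.
eyetracker.log('pictures_onset')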
I would also like to move to stimuli presentation only when the participants fixated their gaze for at least 500ms on a central fixation cross.
The code is probably not entirely accurate any more (the study was conducted 7 years ago), but the functionality still exists in OpenSesame. So you can check this experiment here and get inspired (the relevant bit is in the inline_script stim_presentation, around lines 151 ff). Hope this helps,
Eduard
Hi Eduard, thank you so much for the prompt response!
I need the proportion of fixations only for the analysis, indeed. I am unsure where I should include the inline scripts with the messages, and how to specify that I want to send a timestamp as well.
For example, where should I include the script if I want to send a log message about the beginning of the object in the spoken sentence (in the practice loop, I have a variable "object_time" with all the object times in milliseconds)?
Concerning the fixation dot, thank you so much for sharing this example with me. I tried to adapt the code to my experiment and told the program to start the audio and picture stimuli only after my conditions for the fixation cross are satisfied. However, when I run the experiment in mouse-simulation mode, the stimuli aren't shown, which means there is some mistake in the script. This is the first time I am writing a Python script, so I have probably made a mistake somewhere.
Here is the script for a loop that repeats until the participant fixates on the central dot for at least 500 ms. If the participant fails to do so within a specified time limit (10 seconds), drift correction is performed and the loop repeats. In addition, in the trial coroutines I set the condition: run if fixation_achieved == 1.
import math
import time

def dist(point1, point2):
    x1, y1 = point1
    x2, y2 = point2
    return math.sqrt((x2 - x1)**2 + (y2 - y1)**2)

# Define the fixation dot coordinates and diameter
fixation_dot_x = var.width * 0.5
fixation_dot_y = var.height * 0.5
fixation_dot_diameter = 20  # in pixels

# Calculate the radius of the fixation dot
fixation_dot_radius = fixation_dot_diameter / 2

# Set the fixation threshold to be slightly larger than the radius of the fixation dot
fixation_threshold = fixation_dot_radius * 1.5  # Adjust as needed

# Function to check fixation on fixation dot
def check_fixation():
    start_time = time.time()
    while True:
        # Eye tracking data sampling
        eye_x, eye_y = exp.pygaze_eyetracker.sample()
        # Check if the distance between eye position and fixation dot is within the threshold
        distance_to_fixation_dot = dist((eye_x, eye_y), (fixation_dot_x, fixation_dot_y))
        if distance_to_fixation_dot < fixation_threshold:
            # Check if the participant has been fixating for at least 500ms
            if time.time() - start_time >= 0.5:
                return True
        else:
            start_time = time.time()  # Reset the start time if fixation is broken

# Function to perform drift correction
def perform_drift_correction():
    exp.pygaze_eyetracker.log("Performing drift correction")
    exp.pygaze_eyetracker.drift_correct()
    exp.pygaze_eyetracker.log("Drift correction completed")

# Main loop for fixation and audio stimuli presentation
while True:
    start_time = time.time()  # Record the start time of the loop
    # Check fixation on the fixation dot for at least 500ms
    fixation_start_time = time.time()
    while time.time() - fixation_start_time < 10:  # Check for fixation for up to 10 seconds
        if check_fixation():
            fixation_end_time = time.time()
            if fixation_end_time - fixation_start_time >= 0.5:
                # Fixation achieved for at least 500ms
                exp.pygaze_eyetracker.log("Fixation achieved for 500ms")
                fixation_achieved == 1
                # Present audio stimuli
                exp.pygaze_eyetracker.log("Presenting audio stimuli")
                break
    # If fixation not achieved within 10 seconds, perform drift correction and repeat the loop
    if time.time() - start_time >= 10:
        perform_drift_correction()
        continue
    # Break out of the loop if fixation achieved and audio stimuli presented
    break

For example, where should I include the script if I want to send a log message about the beginning of the object in the spoken sentence (in the practice loop, I have a variable "object_time" with all the object times in milliseconds)?
Generally, right before the corresponding items. If all your events in the experiment happen in separate sketchpads, then you should send the trigger in separate inline_scripts right before each sketchpad (in the run phase). If you present everything in inline_scripts, then you would send the trigger in the same inline_script, just before the lines of code that change the stimulation (stuff like canvas.show() or audio.play(), etc.). Does that make sense?
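For instance, a run-phase inline_script placed right before the sampler could look something like this (a sketch; eyetracker comes from the pygaze_init item, and object_time is the loop variable mentioned above):

# Mark the audio onset in the tracker's data file; the tracker adds
# its own timestamp to the message.
eyetracker.log('audio_onset')
# Also log the per-trial object onset (in ms relative to audio onset),
# so it is available in the data for later alignment.
eyetracker.log('object_time %s' % var.object_time)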
Concerning the fixation dot,
For this it is important to learn how to present the information that goes into the algorithm in a way that you can make sense of it. In your case, you have time (delays, durations, etc.) and coordinates. I suspect the coordinates are wrong (my example was written at a time when OpenSesame used a different coordinate system). So you can try to add print() statements to your loop that print out (into the debug window) the current values of eye_x and eye_y and the distance to the fixation dot. That way you can check whether the logic of your algorithm holds up. So yeah, I would make sure the fixation dot and the eye position are represented in the same coordinate system. If this is correct and the problem persists, we can have a closer look.
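For example, inside the sampling loop (a sketch using the variable names from the script above):

eye_x, eye_y = exp.pygaze_eyetracker.sample()
distance_to_fixation_dot = dist((eye_x, eye_y), (fixation_dot_x, fixation_dot_y))
# Print to the debug window to verify that the eye position and the
# dot position are expressed in the same coordinate system
print('eye = (%.0f, %.0f), dot = (%.0f, %.0f), dist = %.1f'
      % (eye_x, eye_y, fixation_dot_x, fixation_dot_y,
         distance_to_fixation_dot))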
Little sidenote: I don't think it is related to your problem, but OpenSesame measures time in milliseconds, not seconds. So your if statement should check > 500, not > 0.5.
Eduard
Dear Eduard, thank you so much for your help!
Concerning sending messages to the log file: I present my stimuli using sketchpads due to my limited coding skills. It does make sense to put the code for a log message in an inline script before each sketchpad. However, I am uncertain how to implement this correctly in my situation. I have only one sampler item containing the audio sentence, but within each audio sentence the timing of when the object is heard varies slightly. For my study it is important to analyze gaze within specific time frames (prior to the participant hearing the object of the sentence). For instance, if a participant hears the sentence "A cat is chasing a rat" while viewing four pictures on the screen, I am specifically interested in eye movements toward the objects only within the time window preceding the object being heard, i.e., during "A cat is chasing a". I have recorded the time at which the sentential object occurs in each sentence and included these times as a variable in the practice loop. Is there a way to log a message along with a timestamp indicating the start of the sentential object in each audio sentence?
Natasha
Hi Natasha,
Sorry for the late reply.
If you want a precise trigger whenever a specific word in a sentence is being presented, there is no other way than to split up the recording into multiple recordings (one per utterance) and send the trigger before the utterance that is important to you. Even this option is not ideal, because audio is generally a bit less precise in terms of timing than visual information. Alternatively, you could use a heuristic: send a trigger before the audio, and then add the delay until the interesting word is said during postprocessing. Over the course of the experiment there is a good chance that there won't be a systematic difference between conditions. But depending on some hard-to-control factors (operating system, hardware, your experiment, etc.), your precision, and therefore your SNR, will suffer a bit.
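A rough sketch of that postprocessing step (hypothetical file and column names; it assumes an 'audio_onset' message and the object_time value were logged per trial, as in the earlier example):

import pandas as pd

# Hypothetical files: one row per logged message with its tracker
# timestamp, and one row per trial with the object_time variable (ms).
messages = pd.read_csv('messages.csv')
trials = pd.read_csv('trials.csv')

# Tracker timestamp of the audio onset in each trial
onsets = messages.loc[messages['message'] == 'audio_onset',
                      ['trial', 'timestamp']]

# Object onset = audio onset + per-trial delay until the object is heard
aligned = trials.merge(onsets, on='trial')
aligned['object_onset'] = aligned['timestamp'] + aligned['object_time']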
Hope this helps a bit.
Eduard
Hi Eduard, thank you for your help! I think I managed to solve the problem!