Video timing and RT precision at 25 FPS on a 60 Hz display (OpenSesame / Pygame)

edited January 8 in OpenSesame

I am developing an experiment in OpenSesame that presents short video stimuli using Pygame. Because accurate stimulus timing and reaction time measurement are critical for my study, I would like to ensure that my current implementation achieves sufficient temporal precision, especially given potential future applications in EEG.


OpenSesame version

[e.g. 4.1.0]

Operating system

[Windows 10, Display refresh rate: 60 Hz]

Backend

[legacy]

Expected behavior

  1. Fixation cross: 500 ms
  2. Video stimulus:
    • Two videos are presented to the left and right of the screen center
    • Duration: 1 second
    • Implemented as 25 frames (intended 25 FPS, 40 ms per frame)
    • Frames are drawn manually using the Pygame backend
    • Participants are allowed to respond during video playback
    • If a response occurs, the video is immediately terminated
  3. Blank screen response window:
    • Duration: 800 ms
    • Used only if no response occurred during the video

Reaction time is always measured relative to the onset of the first video frame.

Actual behavior (what goes wrong)

The experiment appears to run as intended at the behavioral level. However, because the video is presented at 25 FPS on a 60 Hz display, I would like to confirm whether my implementation is correct and whether the resulting stimulus timing is sufficiently precise, especially if I later extend this paradigm to an EEG experiment.

Specifically, I would like to make sure that:

  1. The code is correct in general. I implemented the video presentation by following the official OpenSesame tutorial: https://osdoc.cogsci.nl/3.2/manual/stimuli/video/
  2. Reaction time (RT) is recorded correctly. I intend to record RT relative to stimulus onset, defined as the moment when the first video frame is flipped to the screen. To do this, I record clock.time() immediately after the first pygame.display.flip() call and compute RT relative to that timestamp.
  3. The actual presentation time of each video frame is correct and well-defined. Given that my monitor refresh rate is 60 Hz, I am confused about:
    • how long each frame is actually presented on the screen when targeting 25 FPS, and
    • whether the resulting timing variability (due to refresh-rate quantization) is acceptable for EEG experiments, where precise stimulus onset timing is critical.
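For concreteness, the refresh-rate quantization can be worked out arithmetically. Assuming each flip locks to the next vertical refresh (~16.67 ms grid at 60 Hz, which depends on vsync actually being enabled in the backend), a 40 ms frame interval spans 2.4 refreshes, so frames end up displayed for an alternating mix of 2 and 3 refresh cycles (≈33.3 ms and 50 ms), averaging 40 ms:

```python
import math

refresh_ms = 1000 / 60   # one vertical refresh ≈ 16.667 ms
frame_ms = 1000 / 25     # nominal frame duration = 40 ms

# Onset of each frame, snapped to the first refresh at or after its deadline.
onsets = [math.ceil(i * frame_ms / refresh_ms) * refresh_ms for i in range(6)]
durations = [round(b - a, 1) for a, b in zip(onsets, onsets[1:])]
print(durations)  # a mix of 3-refresh (50 ms) and 2-refresh (33.3 ms) frames
```

So individual frame durations jitter by ±1 refresh around the 40 ms target even when everything else is perfect; only a 25 FPS-divisible refresh rate (e.g. 50, 75, 100 Hz) would remove this.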


The Prepare part:


import cv2
import numpy as np
import pygame

video1 = var.video1
video2 = var.video2
video_path1 = pool[video1]
video_path2 = pool[video2]

def load_video_to_frames(video_path):
    """Decode all frames of a video into Pygame surfaces ahead of time."""
    video = cv2.VideoCapture(video_path)
    fps = video.get(cv2.CAP_PROP_FPS) or 25.0
    frames = []
    while True:
        ret, frame = video.read()
        if not ret:
            break
        # OpenCV delivers BGR arrays of shape (height, width, 3); convert
        # to RGB and rotate to match Pygame's (width, height) surface layout.
        frame = np.rot90(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        frames.append(pygame.surfarray.make_surface(frame))
    video.release()
    return frames, fps

frames1, fps1 = load_video_to_frames(video_path1)
frames2, fps2 = load_video_to_frames(video_path2)

fix_cnvs_pre = Canvas()
fix_cnvs_pre.fixdot()
fix_cnvs = Canvas()
fix_cnvs.fixdot()
blank_cnvs = Canvas()

my_keyboard = Keyboard(keylist=['f', 'j'])
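Since everything above runs in the prepare phase, the intended frame onsets could also be precomputed here, so that the run loop can schedule each flip against an absolute deadline instead of chaining fixed 40 ms sleeps (chained sleeps let small errors accumulate over the 25 frames). A minimal sketch, assuming the 25-frame / 25 FPS structure described above:

```python
N_FRAMES = 25
FRAME_MS = 1000.0 / 25  # nominal 40 ms per frame

# Intended onset of each frame, in ms relative to video start. In the run
# phase, frame i would be flipped once clock.time() reaches
# start_v + frame_deadlines[i], rather than after a fixed 40 ms sleep.
frame_deadlines = [i * FRAME_MS for i in range(N_FRAMES)]
```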

Run part:

screen_w, screen_h = exp.surface.get_size()
fix_cnvs.show()

clock.sleep(495) # fixation dot
frame_duration = 1.0 / 25  # 0.04 s per frame at 25 FPS
frame_times = []
width_dis = 260

my_keyboard.flush()  # clear any pending responses before the video
frame_idx = 0
flip_times = []

while frame_idx < 25:  # the video is 25 frames long
    vid_w, vid_h = frames1[frame_idx].get_size()
    left_x = screen_w // 2 - width_dis - vid_w // 2
    right_x = screen_w // 2 + width_dis - vid_w // 2
    y = screen_h // 2 - vid_h // 2


    exp.surface.blit(frames1[frame_idx], (left_x, y))
    exp.surface.blit(frames2[frame_idx], (right_x, y))


    pygame.display.flip()

    t_flip = clock.time()
    if frame_idx == 0:
        start_v = t_flip  # stimulus onset = flip time of the first frame
    flip_times.append(t_flip)


    key, end_time = my_keyboard.get_key(timeout=0)

    if key is not None:
        var.response = key
        var.response_time = end_time - start_v
        break  # stop video playback immediately

    # Wait the nominal 40 ms before the next frame. Note: responses cannot
    # be collected during this sleep, so RTs during the video are quantized
    # to roughly one frame interval (~40 ms).
    clock.sleep(frame_duration * 1000)
    frame_times.append(clock.time() - start_v)
    frame_idx += 1

end_v = clock.time() # record the end of the videos

if key is None:
    my_keyboard.flush()

    t_b = blank_cnvs.show()

    key, end_time = my_keyboard.get_key(timeout=800)
    end_res_win = clock.time()



print('========================================')
print('video duration', end_v - start_v)

if key is None:
    print('blank window duration', end_res_win - t_b)
    print('first flip to blank onset', t_b - flip_times[0])

print('physical duration', flip_times[-1] - flip_times[0])

if key is not None:  # end_time is only a valid timestamp when a key was pressed
    print('RT', end_time - start_v)
print('========================================')

print(flip_times)
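On the RT-resolution point raised above: because clock.sleep(40) blocks between flips, a key press that lands mid-sleep is only detected on the next loop iteration, so RTs collected during the video are quantized to roughly 40 ms steps. One common fix is to replace the fixed sleep with a short polling loop that keeps checking the keyboard until the next frame deadline. A backend-agnostic sketch of the idea (time.monotonic stands in for OpenSesame's clock; poll would wrap my_keyboard.get_key(timeout=0)):

```python
import time

def wait_until(deadline_s, poll):
    """Call poll() repeatedly until it returns a non-None response or
    deadline_s (seconds on the time.monotonic clock) passes.
    Returns (response, timestamp)."""
    while True:
        now = time.monotonic()
        if now >= deadline_s:
            return None, now  # deadline reached without a response
        response = poll()
        if response is not None:
            return response, time.monotonic()

# Example: no response arrives, so the full 50 ms wait elapses.
start = time.monotonic()
resp, t = wait_until(start + 0.05, lambda: None)
print(resp, round(t - start, 3))
```

In the run loop, the 40 ms clock.sleep would become something like wait_until(next_frame_deadline, lambda: my_keyboard.get_key(timeout=0)[0]), which brings RT resolution down to the polling granularity rather than the frame interval.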




