Back-ends and sound renderers
Hi,
I'm running an experiment in OpenSesame that involves using the media_player_mpy to present 3-second-long audiovisual clips (.mp4).
Currently, I'm using the legacy back-end with the PyGame sound renderer option, but I noticed that some of the video clips play without sound. This seems to happen at random. I double-checked the video files, and they work fine when I play them outside the experiment. I tried using different sound renderers with the legacy back-end but got an error and a suggestion to use a different sound renderer.
Since the temporal precision of AV stimuli is important to this project, I tried using the Expyriment back-end with PyGame, but it only runs two trials before crashing unexpectedly, and the other sound-renderer options don't seem to be compatible with it.
I'm wondering what the difference is between the sound renderers. Is it possible to use Expyriment with PyGame to present videos and ensure maximal temporal precision?
Any help is greatly appreciated.
Thank you
Gazelle
Comments
Hi Gazelle,
It sounds like the issue might stem from how your video files are encoded. To clarify, when you say the issue occurs "at random," do you mean that the same video sometimes plays with sound and sometimes doesn't, or that certain videos consistently lack sound while others don't? If it's the latter, it's likely an encoding issue where different videos have inconsistent formats or audio tracks.
I recommend ensuring all your videos are encoded with the same settings (e.g., using a consistent codec like H.264 or H.265 for video). Tools like HandBrake or FFmpeg can help standardize this.
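For example, here is a minimal FFmpeg batch re-encode sketch. The folder names (`clips`, `clips_fixed`) are just placeholders for illustration; it assumes `ffmpeg` is installed and your clips are `.mp4` files. It re-encodes every clip with the same video codec and, importantly, an identical audio track configuration, which rules out missing or inconsistently encoded audio streams as the cause:

```shell
#!/bin/sh
# Re-encode all clips with uniform settings (placeholder folder names).
mkdir -p clips_fixed
for f in clips/*.mp4; do
  # -c:v libx264 -pix_fmt yuv420p: H.264 video in the most widely
  #   supported pixel format.
  # -c:a aac -ar 44100 -ac 2: stereo AAC audio at 44.1 kHz, so every
  #   clip carries an audio track with identical parameters.
  ffmpeg -y -i "$f" -c:v libx264 -pix_fmt yuv420p \
         -c:a aac -ar 44100 -ac 2 "clips_fixed/$(basename "$f")"
done
```

You can also inspect a clip's streams with `ffprobe "$f"` first; if some clips show no audio stream at all, that would explain why those particular videos play silently.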
Additionally, note that video playback isn't designed to be millisecond-accurate. Video is optimized for smooth playback, which should be sufficient unless your experiment explicitly requires high temporal precision.
Let me know if this helps,
Claire
Check out SigmundAI.eu for our OpenSesame AI assistant! 🤖