Trouble with backends and OpenGL
I am really enjoying OpenSesame so far; it seems to be very well suited for my purposes, and I was able to get a lot done with the inline script modules. However, I have run into a few problems with my experiment. Right now I am using the xpyriment backend with OpenGL enabled.
The experiment is supposed to do the following things:
1. Present a video stimulus and record the respondent's input from a gamepad. I got that working pretty well with media_player_mpy: it checks the joystick device every frame and writes its state to a CSV file. Performance is good and accuracy is sufficient.
2. Output real-time 2D graphics that respondents have to react to, and that change depending on their input. In this case an analog axis of the joystick controls the length of a box. I got that working with a constantly updating canvas on which a rect is drawn. From what I understand, the canvas object was not intended for real-time output: performance and accuracy are OK with one rect being drawn, but multiple objects, and especially running at HD resolution, push the update rate down to 15 Hz on a pretty beefy machine. Accuracy and performance are therefore not sufficient.
In a perfect world these two things (video and interactive 2D) would work simultaneously, with the 2D shapes overlaid on the video, but for now it is sufficient to have them run in sequence.
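To make the second part concrete, the gist of my per-frame loop looks roughly like this. This is a simplified pure-Python sketch: `axis_to_length` and `log_frame` are made-up names, and the actual pygame joystick read and OpenSesame canvas calls are replaced by comments.

```python
import csv

def axis_to_length(axis_value, max_length=400):
    """Map an analog axis value in [-1, 1] to a box length in pixels."""
    return int((axis_value + 1) / 2 * max_length)

def log_frame(writer, frame, axis_value):
    """One frame of the loop: record the joystick state to the CSV log,
    then the canvas would be redrawn with a rect of the computed length."""
    length = axis_to_length(axis_value)
    writer.writerow([frame, axis_value, length])
    # In the real script something like this follows:
    # my_canvas.rect(x, y, length, height, fill=True)
    # my_canvas.show()
    return length
```

The per-frame redraw via `canvas.show()` is exactly the part that drops to 15 Hz once more objects are added.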
Solutions I tried to get both video playback and good real-time 2D:
1. Switching to the legacy backend: this seems to work in principle, but in practice the video playback is stuttery; not unusable, but unpleasant to look at. Also, a lot of my scripting no longer works, for example the real-time canvas does not update as it should. The scripts can surely be fixed, but I would really prefer smooth video playback, so investing time in adapting the scripts probably does not pay off.
2. Switching to the psychopy backend: this seems able to perform well, but I think major adaptations to my code would be needed. That alone would be OK; however, the PsychoPy documentation is an absolute mess for an outsider like me. I tried setting up a video file as a stimulus and, after long hours of searching through unmaintained documentation, gave up because ffmpeg was not found. With enough time this might be fixable, but when a basic task is already such a hassle, I anticipate the rest of the way to be hell.
3. The xpyriment backend with OpenGL disabled: this increases the performance of the real-time canvas updates drastically, but video playback seems to be broken when OpenGL is disabled. The experiment simply closes instantly when a video player item is added. So I tried...
4. Switching to OpenCV (cv2) for video presentation: I was not able to install OpenCV successfully, neither with pip nor manually; the documentation is not clear to me.
Solutions I would like to try but don't know how to:
Switching OpenGL on and off as needed. This would currently be the ideal scenario: my code would stay intact, and I could use the smooth video playback with OpenGL and the fast canvas updating without it.
Chaining experiments. If on-the-fly toggling of OpenGL is not possible, chaining experiment files might be a workaround, so that at the end of the first experiment the second would automatically be started.
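For the chaining idea, I imagine something like the following at the very end of the first experiment. This is only a sketch under assumptions I cannot verify: it assumes the `opensesamerun` command-line runner is on the PATH, and `"second.osexp"` is a placeholder file name.

```python
import subprocess

def launch_next(experiment_path, dry_run=False):
    """Start a follow-up experiment with OpenSesame's command-line
    runner (assumed to be available on the PATH). With dry_run=True,
    just return the command that would be executed."""
    cmd = ["opensesamerun", experiment_path]
    if dry_run:
        return cmd
    # Popen returns immediately, so the first experiment can shut down
    # while the second one starts up.
    return subprocess.Popen(cmd)

# At the very end of the first experiment's last inline_script:
# launch_next("second.osexp")
```

Whether this plays nicely with how OpenSesame tears down its window is exactly what I don't know.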
Some other way to draw 2D shapes with better performance: maybe with pygame? But from what I understand, pygame cannot draw onto canvas objects. Maybe there is another way?
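What I mean by drawing with pygame directly is something like this. This is an off-screen demonstration only; the part I don't know how to do is making it target the actual experiment window instead of a plain `pygame.Surface` (the function name `draw_box` and all sizes are made up).

```python
import pygame

def draw_box(surface, length, height=40, color=(255, 255, 255)):
    """Draw a horizontally centred box whose length follows the axis."""
    w, h = surface.get_size()
    rect = pygame.Rect((w - length) // 2, (h - height) // 2, length, height)
    pygame.draw.rect(surface, color, rect)
    return rect

# Off-screen surface for demonstration; in the experiment this would
# have to be the backend's display surface, which is my open question.
surf = pygame.Surface((640, 480))
box = draw_box(surf, 200)
```

`pygame.draw` calls like this are cheap, so if they could reach the screen the update rate should be much better than with the canvas.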
I know my problem is a little abstract; I am not a programmer, and at the same time I am trying to do things that go beyond the normal applications of OpenSesame. But maybe someone has an idea of how I can present smooth video with simultaneous response recording and real-time 2D shapes, all in one go. I would be very happy.