[open] xpyriment backend seems slow
Hi Sebastiaan,
During teaching, the xpyriment backend seems terribly slow on the university's PCs. I always assumed this was because of the PCs themselves (terribly slow) and because OpenSesame Portable was running from USB. Recently, however, I have been trying out some things on my laptop as well, and I run into the same problem: the xpyriment backend seems very slow compared to the other backends. This is something to worry about, as my laptop is really quite fast.
Examples of the slowness: form plugins (any of them) lag a lot, and even a simple while loop that continuously clears and updates the screen stutters (for an example, see here; a sketch of the kind of loop I mean is below).
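Roughly, the loop in question looks like this. This is a minimal sketch assuming the OpenSesame 3.x inline_script API (the Canvas factory and clock object are available without imports); the coordinates and timings are purely illustrative:

```python
# Minimal animation loop: repeatedly clear, redraw and flip the canvas.
# Assumes an OpenSesame inline_script with the 3.x API.
my_canvas = Canvas()
start = clock.time()
while clock.time() - start < 3000:  # animate for ~3 seconds
    x = -200 + (clock.time() - start) * 0.1  # move a dot from left to right
    my_canvas.clear()
    my_canvas.circle(x, 0, 10, fill=True)  # (0, 0) is the screen center
    my_canvas.show()  # each show() is where the xpyriment back-end lags
```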
Do you notice any differences between the backends? If not, what could I be doing wrong? If so, what should be done about the xpyriment backend?
Thanks in advance!

Comments
Hi Edwin,
Slowness is an issue with the xpyriment back-end. It's especially noticeable with forms, and when you present stimuli in rapid succession.
The slowness is due to the use of OpenGL. If you disable OpenGL (under back-end settings), the slowness will be gone. But so, unfortunately, will the increased temporal precision. It's a tradeoff.
That being said, in your example the slowness is probably worse than it needs to be, because you can disable auto_prepare for canvas objects, as described here. Is the psycho back-end faster in your experience?
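A minimal sketch of what that looks like in an inline_script (assuming the OpenSesame 3.x Canvas API; the drawing calls here are just placeholders):

```python
# With auto_prepare=False the canvas is not converted to an OpenGL surface
# after every drawing operation, only when prepare() is called explicitly.
my_canvas = Canvas(auto_prepare=False)
my_canvas.fix_dot()
my_canvas.text('Hello world', y=100)
my_canvas.prepare()  # do the (expensive) preparation once, after all drawing
my_canvas.show()
```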
Cheers!
Sebastiaan
Check out SigmundAI.eu for our OpenSesame AI assistant!
Hi Sebastiaan,
Thanks for the tip! I wasn't aware of the auto_prepare setting yet. Just tried it and there is some improvement, but the movement still isn't smooth.
The weird thing is that the psycho back-end runs very smoothly on the exact same machine. I was under the impression that the OpenGL functionality should be the same for both the xpyriment and psycho back-ends (hence this topic). Is this not the case?
Best!
Edwin
No, they use different principles. PsychoPy uses OpenGL for everything, at least as far as I know. That's why it can generate all these fancy stimuli on the fly. Expyriment works pretty much the same way as the legacy back-end, with the exception that it converts everything to an OpenGL surface at the end. This is necessary to allow a 'blocking flip', which in turn is necessary to get accurate display timestamps (as explained on their Wiki). But it's also slow and inefficient. So PsychoPy is "real" OpenGL, whereas xpyriment uses OpenGL only to get the blocking flip.
Maybe I should run another poll one of these days, to see what people's experiences are with the various back-ends. It can vary a lot from machine to machine, so it's difficult to draw conclusions from your experience.
Good point and good idea. To me, it seems to make more sense that the legacy or psycho backend should be the default, due to their respective stability and overall awesome functioning (and timing). But you are right, this is personal.
Hi there, I have some issues with speed as well, especially with form_text_input. The form_base and multiple-choice forms run smoothly, but the open-question format is terrible. Interestingly, this is only the case if the question for the item is too long. Asking for the age of the participants works very well, but if we need two sentences for the question it breaks down.
Psycho runs even worse than xpyriment, whereas legacy works wonderfully (but then you have the problem with timing accuracy). We already put parts of the instructions in another form, so there are just two sentences on top and the text field left, but the problems remain.
Any ideas?
Oh sorry, by the way: I also tried turning off OpenGL in xpyriment, but that is not a big improvement. I varied the question and the title, and the delay really is proportional to the length of the instructions.
In my case, one word as a question (such as: age) works fine, but more than five words leads to a tremendous delay for the text input, no matter whether I use good hardware or not.
Hi Johannes,
Forms can indeed be really slow, and this is particularly obvious when using the text_input widget. Improving performance is certainly on the to-do list, but it's not trivial to do so while keeping forms compatible with all back-ends and devices. What you could do is use the expyriment.io.TextInput widget in an inline script. This should work much more smoothly, but it obviously works only with the xpyriment back-end and also does not match the look of the forms. The example below shows how to collect a text response with this class (for more information, see the expyriment.io.TextInput documentation).
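A minimal sketch of such an inline script (the prompt text and the logged variable name are illustrative, and the var object assumes the OpenSesame 3.x inline_script API):

```python
# Works only with the xpyriment back-end, where Expyriment is initialized.
from expyriment.io import TextInput

# Create an input box with a prompt shown above the text field.
text_input = TextInput(message='Please type your answer and press Enter:')

# get() blocks until the participant confirms with Enter and returns the string.
response = text_input.get()

# Store the response as an experimental variable so it is written to the log.
var.text_response = response
```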
Cheers!
Sebastiaan