
Question about stability of the PsychoPy back-end

Hi

I am creating an eye-tracking experiment in OpenSesame. The design started out very simple, with on-site testing on an SMI tracker, and at that stage I had everything working fine. However, over time the design has become very complex, and the plan now is to test it off-site with an EyeLink:

https://www.dropbox.com/s/99uuob5l3a7prhe/SS_fulldesign.osexp?dl=0

Note that participants will do 3 separate sessions, and that is all coded into this one file (session 3 is not operational yet).

It is crucial that we can accurately measure RT - the time it takes the eyes to move left or right after a stimulus is presented - so I chose the PsychoPy back-end. I have had no problems with it on my end, but my collaborator who is testing on the EyeLink is seeing some strange behaviour. He noticed stimuli disappearing off screen without warning when they should not have; for example, a screen that should wait for a {space} press moved on to the next screen without him pressing space. The experiment also froze once and he had to ctrl+alt+delete out of it. We had no such problems in earlier testing with the pygame back-end.

I am wondering if the instability my colleague is experiencing is due to the PsychoPy back-end preloading my deeply nested structure of sequences inside sequences inside sequences. Could that be the case? Is there a better way for me to set things up to avoid that kind of lag? For example, would it help to split my 3 daily sessions into 3 separate files?

Thanks

Comments

  • eduard Posts: 1,115

    Hi,

    I am wondering if the instability my colleague is experiencing is due to the PsychoPy back-end preloading sequences inside sequences inside sequences due to my structure

    That is rather unlikely, because it would then have happened with other experiments as well. I have never had problems of that kind with PsychoPy.

    For example if I split my 3 daily sessions into 3 files, would that help?

    The more complex your experiment gets, the easier it is to lose track of its internal logic. So if the sessions run separately anyway (on different days, right?), I'd recommend having three different scripts.

    In case there are internal dependencies between sessions, you can always write some variables to a file and load that file in the next session.
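    A minimal sketch of that idea in plain Python (the file name and variable names here are invented for illustration; inside an OpenSesame inline_script you would read the actual values from the experiment's variables rather than hard-coding them):

```python
import json
import os

STATE_FILE = 'session_state.json'  # hypothetical filename

def save_state(state):
    """Write the session's variables to disk at the end of a session."""
    with open(STATE_FILE, 'w') as f:
        json.dump(state, f)

def load_state():
    """Load the previous session's variables, or start fresh."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {'session': 0, 'condition': None}

# End of session 1: remember where this participant left off.
save_state({'session': 1, 'condition': 'left_first'})

# Start of session 2 (a separate script): pick up the stored values.
state = load_state()
print(state['session'])  # → 1
```

    With one state file per participant (e.g. keyed by subject number), each of the three scripts can stay independent while still sharing whatever it needs from the previous day.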

    Does that make sense?

    Eduard

  • LJGS Posts: 34
    edited December 2017

    Thanks Eduard,

    If I recall correctly, you run eye-tracking experiments on an EyeLink computer with the PsychoPy back-end, right? It's strange that yours work just fine whilst mine have problems... I wonder if it's the particular system my colleague is working on. Would you mind sharing one of your .osexp files so I can check whether my code has anything extra that could be causing the problems?

    I would have preferred to maintain one script, to keep everything in one place and so I could easily modify the shared trial procedure etc.

    However, I tried splitting the experiments into the three daily sessions and found a couple of things:

    1) the initial load time went from about 15 seconds to about 5 seconds
    2) overall RAM usage for the Python processes dropped by about 100 MB

    So there does seem to be some extra demand from preloading many items. However, I have no insight into whether those extra load times and RAM demands are innocuous, or a sign of something that would cause the instability I'm hearing about...

  • eduard Posts: 1,115

    Hi,

    Yes, you remember correctly: I use the EyeLink together with PsychoPy. You can find a couple of my experiments online on OSF.io: https://osf.io/qknug/files/

    Good luck,

    Eduard
