[solved] Issues regarding [acc] %

edited November 2012 in OpenSesame

Hi all,

I've been using feedback in my experimental design and have discovered two issues.

Firstly, when I had feedback at the end of the block it reported the overall average in % terms without fault.

However, once I started providing instant feedback at the trial level (which worked perfectly), the overall average at the end of the block ceased to work. Instead it showed the following: undefined%

Secondly, once I began using keyboard_response at the sequence stage before the trial block began I started to see a really bizarre issue:

If the response for the first trial is incorrect the feedback reports it as 50% accurate instead of the usual 0% (which denotes an incorrect response) or 100% (which denotes a correct response).

Can anyone please help me solve these, as they're the final two hurdles before I begin running the experiment?


p.s. Opensesame is excellent. Keep up the good work.


  • edited 12:56PM

    Hi Boo,

    Welcome to the forum and thank you for your interest in OpenSesame!

    • Regarding your first question, I would suggest unticking the "Reset feedback variables" box in your trial feedback item. If this box is ticked, feedback variables such as 'acc' and 'avg_rt' are reset every time the feedback item is called, which in your case is after every trial. Therefore, the variables do not exist after your last trial, causing the above-described problem in your block feedback item. (Note that resetting feedback variables is convenient when providing participants with block feedback, because this guarantees a 'fresh start' for a new block.)

    • Regarding your second question, I would suggest appending a 'reset_feedback' item just before your experimental block (see the 'Extended template' for an example). This way you make sure that the feedback will not be confounded by key presses that participants made during, for example, the instruction phase: each block of trials starts by resetting the feedback variables.
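    A minimal Python sketch of the reset behaviour described in the first point; this is a hypothetical illustration of how a running accuracy variable like [acc] behaves, not OpenSesame's actual internals:

```python
# Hypothetical model of a running feedback variable such as [acc].
# Resetting after every trial leaves no responses logged at the end
# of the block, so the block average comes out as 'undefined'.

class Feedback:
    def __init__(self):
        self.reset()

    def reset(self):
        # No responses logged yet, so the average is undefined.
        self.n_responses = 0
        self.n_correct = 0

    def log(self, correct):
        self.n_responses += 1
        self.n_correct += int(correct)

    @property
    def acc(self):
        if self.n_responses == 0:
            return 'undefined'
        return round(100.0 * self.n_correct / self.n_responses)

fb = Feedback()
for correct in [True, False, True, True]:
    fb.log(correct)
    fb.reset()                        # reset after every trial...

print('block acc: %s%%' % fb.acc)     # ...so block feedback is 'undefined'
```

    With the reset taken out of the trial loop, fb.acc would instead accumulate across trials, which is exactly what the block feedback item needs.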

    Does this help? Don't hesitate to ask any further questions!

    Best wishes,


  • edited October 2012

    Hi Lotje,

    Thank you very much for your help.

    Having followed your instructions I corrected the first issue I had.

    Unfortunately I've not had as much luck with the second issue. In fact, it not only reports the first incorrect response as 50%, but is also doing some odd maths, reporting values like 80%, 40%, and 30% throughout the entire block.

    I placed the 'reset_feedback' item as shown in the Extended template, as suggested.

    Sorry for being a pain but do you have any further advice on how to address this?


    EDIT: I have a new strange problem. I've selected lshift, rshift and space as allowed responses for each trial but, for reasons unknown to me, OpenSesame has started to detect 'rshift' as 'lshift'. This is clearly a huge problem for response collection.

    I did turn off shortcuts for StickyKeys (to prevent a pop-up if the participant pressed one of the shift keys 5 times in a row); could this have something to do with it? Although I undid that change, the problem still exists.

  • edited October 2012

    Hi Boo,

    I'm glad to hear the block feedback works fine now!

    Trial feedback:

    Regarding the trial feedback, you say that you placed the reset_feedback item as in the 'Extended template', that is, at the start of the block_sequence. However, if you have an item collecting keyboard responses, such as an instruction screen, in the block_sequence but after the reset_feedback (which the 'Extended template' doesn't have) there will still be carry-over effects from these responses. So in this case you need to place the reset_feedback item after the response-collecting item in the block_sequence.

    Also, note that the variable [acc] contains the average accuracy percentage calculated from the current and all previous trials in a given block, such that, for example,

    • after a correct response on the first trial, the trial feedback should be 100% accurate
    • if this is followed by an error on the second trial, accuracy drops to 50%
    • and after another error on the third trial, even to 33%
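    The running average above is simply correct responses so far divided by trials so far; a plain-Python illustration (not OpenSesame code):

```python
# Running accuracy after each trial: correct so far / trials so far.
responses = [1, 0, 0]  # correct, error, error
for i in range(1, len(responses) + 1):
    acc = round(100.0 * sum(responses[:i]) / i)
    print('trial %d: acc = %d%%' % (i, acc))
# trial 1: acc = 100%
# trial 2: acc = 50%
# trial 3: acc = 33%
```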

    So values like 40% are in themselves not surprising. But you're right, of course, in saying that after the first trial accuracy should be either 0% or 100%.

    Does this make sense? If this doesn't solve the 'accuracy = 50% on the first trial' problem we'll need some more information about the structure of your experiment to see what's going on. So in this case, would it be possible to upload either your experiment or a screenshot of the overview of it?

    Left and right shift:

    Regarding your question about the left and right shift keys, do I understand correctly that all shift responses are always detected as coming from the left shift? Did you see this in your output file?

    Again, it's difficult to understand what is causing this without some additional information, such as which version of OpenSesame you're using, which operating system you are running it on (e.g. Windows, Ubuntu, Mac OS), and which back-end you use for your experiment (for the latter, see the 'General properties' tab of your experiment).

    So, for both issues, if you would provide us with some more details about your experiment, I'm sure we can solve it!

    Best wishes,


  • edited 12:56PM

    Hi Lotje,

    I think I have already placed the reset_feedback as you described above.

    Regarding the shift responses, yes, the output reports all shift responses as 'lshift'.

    I'm using OpenSesame 0.26 on Windows XP with the PsychoPy back-end for the increased temporal accuracy.

    I have a folder set up on DropBox with the OpenSesame file inside and I have sent you the link.

    Thanks again for your help.


  • edited 12:56PM

    Hi Boo,

    Thanks for the script. When I run your experiment on Ubuntu it works perfectly. However, when running it on Windows, right-shift responses are indeed detected as "lshift". This occurs only with the PsychoPy back-end and probably has something to do with the low-level libraries PsychoPy uses for keyboard-response detection. In other words, it's a feature of this back-end and outside the control of OpenSesame.

    Obviously, this also explains why your trial feedback doesn't perform as you expect it to: all trials on which correct_response is set to "rshift" are (falsely) considered incorrect, even when the participant did press the right shift.

    A few suggestions to make your experiment work as desired:

    • Of course you could use other key presses than the left and right shift (but perhaps you have a good reason not to do so?).
    • You could use the legacy back-end, but, as you said, in this case the time stamps will be slightly less accurate than with the PsychoPy back-end. Note, however, that this extra accuracy is only advantageous for verification purposes, such as getting response times and verifying the timing of your displays post hoc. The timing of display presentation itself does not differ between back-ends (and is...)
    • Alternatively, you could use the latest pre-release (currently 0.27-pre22), since the Expyriment back-end is available as of OpenSesame 0.27. This back-end has excellent timing properties too, and is in this sense equivalent to PsychoPy. The latest pre-release can be obtained as described here.

    Note that for the latter two suggestions you'll have to change the values of the variable 'correct_response' in your block_loop. This is because with the legacy and the Expyriment back-ends shift presses are detected as "left shift" and "right shift" instead of "lshift" and "rshift". (See also the 'List available keys' button in the keyboard_response item.)
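    To keep those key names straight, here is a hypothetical lookup table; the names follow this thread and the 'List available keys' button, so verify them in your own setup:

```python
# Hypothetical mapping of physical shift keys to the key name each
# back-end reports (and thus the value 'correct_response' must hold).
KEY_NAMES = {
    'psychopy':   {'left shift': 'lshift',     'right shift': 'rshift'},
    'legacy':     {'left shift': 'left shift', 'right shift': 'right shift'},
    'expyriment': {'left shift': 'left shift', 'right shift': 'right shift'},
}

def correct_response_for(backend, key):
    # Return the string to use as correct_response for this back-end.
    return KEY_NAMES[backend][key]

print(correct_response_for('psychopy', 'right shift'))   # rshift
print(correct_response_for('legacy', 'right shift'))     # right shift
```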

    I hope this helps! And please feel free to ask any further questions! That's what the forum is for! :)



  • edited 12:56PM

    Hi again,

    I've opted for the simple solution of changing the keys I use for response collection so that problem is in the can.

    However, I still have the problem that I don't know how to report the overall average for each block at the end of said block. Since I am using reset_feedback after each trial, [acc] is reset every time, so by the end of the loop it has just been reset again and shows up as undefined%.

    I can't really see a way around this at the moment.

    Is there perhaps some way I could incrementally assign each value of [acc] at the end of each trial to a new variable and calculate the average and report the new variable at the end of the loop?

    Thanks again for your help,


  • edited November 2012

    Hi Boo,

    I think there's a slight misunderstanding here: What Lotje meant is that you should not reset the feedback variables after each trial (either by a reset_feedback plug-in, or by ticking the 'Reset feedback variables' box in a feedback item), because (as you experienced) this will prevent the accuracy from accumulating.

    So, my suggestion is to reset the feedback variables after or right before each block (which you probably do already), but not after each trial. Perhaps at the root of the confusion is the difference between the accuracy / acc variable, and the correct variable:

    • accuracy → the running percentage of correct responses since the last reset of the feedback variables
    • correct → 0 or 1 depending on whether the last response was correct.

    So if you want to give feedback after each trial, you probably want the correct variable. After each block, you generally want the accuracy.
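    A plain-Python sketch of that division of labour (hypothetical variable names, not OpenSesame's API):

```python
# Per trial: use 'correct' (0 or 1) for instant feedback.
# Per block: use the running 'accuracy' percentage.
responses = [1, 1, 0, 1]   # the 'correct' value for each trial
for i, correct in enumerate(responses, start=1):
    message = 'Correct!' if correct else 'Incorrect!'
    print('trial %d: %s' % (i, message))

accuracy = round(100.0 * sum(responses) / len(responses))
print('end of block: %d%% correct' % accuracy)   # 75% correct
```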

    Does this make sense?


  • edited 12:56PM

    Hi all,

    I seem to have everything working as it should be.

    Went with using the 'correct' variable and it made life a lot easier.

    Thanks for all your help Lotje and Sebastiaan.
