
Reversal Learning Task

edited June 2016 in OpenSesame


I'm creating a reversal learning task in which participants select one of two stimuli (say, a blue box and a red box). During the first phase of the experiment, one stimulus is probabilistically associated with a particular outcome: 80% of the time, selecting the blue box leads to one outcome (say, reward), while 20% of the time it leads to the other (no reward). In contrast, selecting the red box leads to reward 20% of the time and to no reward 80% of the time.

I want this association to switch every 15 trials during one block of the task and to stay constant during another block. So I'd have 3 x 15 trials in one block (where the associations switch twice), then 3 x 15 trials in another block where the associations stay constant, and then a final block of 3 x 15 trials where they again switch every 15 trials.

My issue is that I also want to randomly vary where the blue and red boxes appear onscreen, so that participants don't simply associate one side of the screen (left or right) with the desired outcome. I also plan to implement two versions of the task, one with reward vs. no reward and one with loss vs. no loss. So I have to counterbalance which color box is associated with which outcome, and pseudo-randomize how often the stimuli are presented on the left or right of the screen. I worked out how to do the counterbalancing with multiple conditions, but I'm struggling a bit with how to pseudo-randomize location and how to compute the probabilistic association. Any advice would be most appreciated.

Warm regards,



    Hi Nicholas,

    Assuming you have all your experimental variables in a loop item, the easiest solution would be to add a variable like position_red, with two values 'left' and 'right'. You may further want to use 3 loop items per block, each containing a trial sequence that is run 15 times. You can then take care of the probabilistic association in each individual loop item.
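    As a minimal sketch of how such a mini-block could be built in a Python inline script, assuming hypothetical variable names (`position_red`, `blue_outcome`) and exact outcome proportions per 15-trial mini-block:

```python
import random

def build_miniblock(n_trials=15, p_reward_blue=0.8):
    """Build one mini-block: balanced left/right positions for the red box,
    and exactly 80% reward outcomes for choosing blue (names are illustrative)."""
    # Pre-fill the outcome list so exactly 12 of 15 blue-choices are rewarded,
    # then shuffle the order.
    n_reward = round(p_reward_blue * n_trials)  # 12 of 15
    blue_outcomes = (['reward'] * n_reward
                     + ['no reward'] * (n_trials - n_reward))
    random.shuffle(blue_outcomes)

    # Balance where the red box appears: roughly half left, half right.
    positions = (['left'] * (n_trials // 2)
                 + ['right'] * (n_trials - n_trials // 2))
    random.shuffle(positions)

    return [{'position_red': pos, 'blue_outcome': out}
            for pos, out in zip(positions, blue_outcomes)]

trials = build_miniblock()
```

    For the reversal blocks you could call `build_miniblock()` three times with the outcome mapping flipped between calls; for the constant block, three times with the same mapping.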

    Mind the difference between getting a reward 80% of the time and having an 80% chance of getting a reward on each trial. In the former case, if I continuously choose red (when it is associated with 80% reward), I am guaranteed a reward on exactly 80% of trials, whereas in the latter case I might get nothing at all (even though the chance of that happening is very tiny).
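    The two schemes can be contrasted in a few lines of Python (illustrative only; variable names are assumptions):

```python
import random

n = 15

# Scheme 1: exact proportion. Exactly 12 of the 15 trials are rewarded;
# only the order is random.
exact = ['reward'] * 12 + ['no reward'] * 3
random.shuffle(exact)

# Scheme 2: independent 80% chance per trial. The reward count varies
# from block to block; any count from 0 to 15 is possible.
bernoulli = ['reward' if random.random() < 0.8 else 'no reward'
             for _ in range(n)]

print(exact.count('reward'))      # always 12
print(bernoulli.count('reward'))  # usually near 12, but not guaranteed
```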


