
Dual task timing


I have used OpenSesame to build a free recall experiment in which participants see lists of words that they attempt to verbally recall either under a serial RT dual task or under no dual task. In the dual task, one of four coloured dots appears randomly in 3 of 4 possible positions in a horizontal frame (a dot cannot appear in the position that is congruent with the relevant response key).

At the moment this is achieved via a 4 colour x 3 position loop that contains the dual task sequence. The dual task sequence primarily consists of 4 sequences that are executed conditionally (one for each colour of dot), with each sequence consisting of a sketchpad item and a response logger. The sketchpad for each coloured dot contains conditional statements that display that dot in each of the 3 positions it can appear in.

The issue I am having is that I am unable to maintain the presentation timing for the dots (currently set via the response collection timeout in each dot sequence), which is calibrated via response times for the task taken from a training phase. There appears to be an approx. 300ms preparation time associated with each sketchpad (i.e., coloured dot), which results in a presentation rate of 1000ms per dot actually taking 1300ms. This is problematic for a number of reasons:

1) the presentation is not appropriately calibrated to training phase performance (slower than it should be); and
2) the length of the dual task is based on the number of times the DT sequence needs to be called in order to approximate a 30 second (auditory) free recall window. On the basis of a 1000ms presentation rate, the experiment calls the DT sequence 30 times. However, due to the preparation time, the presentation rate is actually around 1300ms, which results in the free recall window lasting ~40s (not the intended 30s).
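    For concreteness, the arithmetic behind these two points (numbers from above, as a plain-Python sketch):

```python
# Numbers from the post: a ~300 ms sketchpad preparation cost per dot
# inflates both the effective presentation rate and the recall window.
intended_rate_ms = 1000   # intended presentation rate per dot
prep_time_ms = 300        # approximate sketchpad preparation time
n_calls = 30              # DT sequence calls for a 30 s window at 1000 ms/dot

actual_rate_ms = intended_rate_ms + prep_time_ms   # 1300 ms per dot
actual_window_s = n_calls * actual_rate_ms / 1000  # ~39 s instead of 30 s
```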

I tried a couple of ways to fix this issue:

  • manually adjusting the presentation rate and/or the number of times the DT sequence is called. While I was able to approximate a 30s window, the response times recorded are not correct, i.e., achieving a 1000ms presentation rate (30s recall window = DT sequence x 30) by setting the response timeout to 600ms + 400ms preparation time records timed-out responses as 600ms (rather than 1000ms).
  • due to the lack of preparation time associated with loops, replacing the dot sequences with loops. This actually made performance worse - 30 x 1000ms trials taking >60s.
  • inserting an advanced timing delay into the DT sequence to take into account the preparation time for each sketchpad. However, adding a 400ms advanced delay simply increased the presentation rate by 400ms (I was under the impression that the next sketchpad could be prepared during the advanced delay?).

Any suggestions on how best to maintain timing for this particular kind of dual task? In the advanced tutorial you demonstrate how to use the prepare/run strategy to prepare all stimuli for the attentional blink experiment in advance (i.e., having a list of canvas items that is iterated through). However, the stimuli for that experiment were single letters, so I am not sure how easily my dual task stimulus could be prepared in a similar way.

All the best,


  • Hi Marton,

    Would it be possible to have a look at your experiment? It is difficult to understand how it is set up just by reading how all the items are put together. As I understand it, the sequence for a single trial is as follows:

    1 list of words
    2a recall the words (typing? voice controlled? recognition?)
    2b recall words + look at colored dots
    3 some sort of response
    4 logger

    The issue might be that, since there seems to be some sort of nested structure in the experiment, some of the preparations of the stimuli happen at a time that you ideally would not want them to happen, but we'll need to see how the experiment is actually constructed to give a more meaningful answer.


  • Hi Roelof,

    Thanks for the response. The structure of a trial is as follows:
    1) present word via sketchpad x 8
    2) verbally recall words (via soundrecorder plugin) while either:
    - under dual task (responding to coloured dots on screen via keypress)
    - under no dual task (just verbally recalling words)

    The dual task is primarily coded as follows:
    response loop:
      response sequence:
        dual task loop (4 colours x 3 locations):
          dual task sequence (i.e., single dual task trial):
            4 x dot sequences (one for each colour), each consisting of:
            - sketchpad (shows dot in 1 of 3 possible locations via conditionals)
            - response logger

    the dual task sequence is called as many times as needed to approximate a 30s recall window via training phase performance.

    Below are the relevant parts of the general script (sorry if this is not what you need):

    define loop DT_exp
    set source_file ""
    set source table
    set repeat 3
    set order random
    set description "Repeatedly runs another item"
    set cycles 12
    set continuous no
    set break_if_on_first no
    set break_if "[DT_count] > [break_DT]"
    setcycle 0 colour blue
    setcycle 0 position 1
    setcycle 1 colour red
    setcycle 1 position 1
    setcycle 2 colour yel
    setcycle 2 position 1
    setcycle 3 colour grn
    setcycle 3 position 1
    setcycle 4 colour blue
    setcycle 4 position 2
    setcycle 5 colour red
    setcycle 5 position 2
    setcycle 6 colour yel
    setcycle 6 position 2
    setcycle 7 colour grn
    setcycle 7 position 2
    setcycle 8 colour blue
    setcycle 8 position 3
    setcycle 9 colour red
    setcycle 9 position 3
    setcycle 10 colour yel
    setcycle 10 position 3
    setcycle 11 colour grn
    setcycle 11 position 3
    constrain colour maxrep=1
    run DT_exp_seq

    define sequence DT_exp_seq
    set flush_keyboard yes
    set description "Runs a number of items in sequence"
    run new_advanced_delay_1 always
    run blue_dots_exp "[colour] = blue"
    run red_dots_exp "[colour] = red"
    run yel_dots_exp "[colour] = yel"
    run grn_dots_exp "[colour] = grn"
    run get_pause_dur always
    run pause "[last_RT] < [pres_dur]"
    run update_DT_counter always
    run logger always

    (code for 1/4 colours)
    define sequence blue_dots_p
    set flush_keyboard yes
    set description "Runs a number of items in sequence"
    run blue_p always
    run resp_blue_p always

    define sketchpad blue_exp
    set duration 1
    set description "Displays stimuli"
    set background white
    draw image center=1 file="DTframe.png" scale=1 show_if=always x=0 y=0 z_index=0
    draw image center=1 file="DTblue.png" scale=1 show_if="[position]=1" x=-110 y=0 z_index=0
    draw image center=1 file="DTblue.png" scale=1 show_if="[position]=2" x=115 y=0 z_index=0
    draw image center=1 file="DTblue.png" scale=1 show_if="[position]=3" x=340 y=0 z_index=0
    draw textline center=1 color=black font_bold=no font_family=mono font_italic=no font_size=50 html=yes show_if=always text=RECALL x=0 y=-256 z_index=0

    I can link you to the GitHub repo for the experiment if it makes things easier.

    All the best,

  • If I understand correctly, you need the response window to be limited to 30 seconds, always, regardless of individual participant,
    but in order to achieve this limit you check the average response time in the training phase and use this as an indication for the rest of the experiment?

    If this interpretation is correct, there are other ways to keep checking how much time has passed and build in a conditional stop after 30 seconds (however many repetitions of the dual task cycle this takes), for example:
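    A minimal plain-Python sketch of such a conditional stop (time.monotonic stands in for OpenSesame's clock.time(), and run_trial is a placeholder for one dual-task cycle):

```python
import time

def run_recall_window(run_trial, window_s=30.0):
    """Repeat the dual-task cycle until window_s seconds have elapsed,
    however many repetitions that takes."""
    start = time.monotonic()
    n_trials = 0
    while time.monotonic() - start < window_s:
        run_trial()
        n_trials += 1
    return n_trials
```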

    But a link would be perfect,


  • edited November 2017

    Hi Roelof,

    The above solution would necessarily limit the dual task to the required ~30s, though would the conditional stop be based on how many trials are needed to achieve <30s or >30s? E.g., if trials should take 1100ms, would the conditional stop occur after 27 or 28 trials?

    However, the timing issue will still remain, in the sense that the cognitive load associated with the dual task will not be consistent with performance during the practice phase. The presentation time for the dual task stimulus during the experiment would still be artificially inflated by the preparation time, i.e., if on the basis of practice phase performance the dual task stimulus presentation rate was intended to be 1000ms, the stimuli would actually appear every ~1300ms. This would make the dual task "easier" (i.e., less cognitively demanding) than intended.

    While I can account for this preparation time within the intended presentation time (i.e., to achieve a 1000ms presentation rate, set the presentation rate to 1000ms minus the ~300ms prep time), this throws out the RTs being recorded (i.e., a "timed out" response being recorded as ~700ms rather than ~1000ms, and I suspect any response after 700ms not being recorded appropriately despite being a valid response, in the sense of being faster than the intended presentation time of 1000ms).

    Ultimately what I would like to achieve is a way to code the dual task that avoids incurring a ~30% timing cost due to stimulus prep. Given the nature of the stimuli (essentially 4 x horizontal frames, each with 3 coloured dots), I suspect this should not be unreasonable to achieve.

    Here is a link to the exp on my github:

    All the best,

  • FYI, use the test version (it does not include the practice phase and hardcodes the presentation rate for the dual task). Sorry for the repost; I couldn't find where to edit the previous post?


  • edited November 2017

    Okay, the git example makes things a lot clearer: "timing can only be accurately controlled within a sequence". The structure you have here always runs into preparation time, since every sequence will need to be prepared. What needs to be done is to put everything required for one trial in a single sequence. I have attached an example of how this could be achieved with an inline script (note this is an incomplete example). Another option is to make three images per color (3*4 = 12 images total), and present these images depending on the position and the color; this might be a bit more concise. A third way would be to create a loop table that actually only matches the correct positions to the correct colors, but this can only be done through coding, e.g.:
    blue - 2,3,4
    red - 1,3,4
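    Generating such a table through code might look like this (the colour-to-congruent-position mapping below is illustrative, not taken from the experiment):

```python
# Pair each colour with every position except the one congruent with
# its own response key (the `congruent` mapping here is hypothetical).
congruent = {'blue': 1, 'red': 2, 'yel': 3, 'grn': 4}
positions = [1, 2, 3, 4]

table = [(c, p) for c in congruent for p in positions if p != congruent[c]]
# 4 colours x 3 allowed positions = 12 rows
```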
    hope this helps

    edit: The conditional stop can be completely controlled: you could opt for an inclusive type, which would start a new trial as long as there is any time left, or an 'exclusive' type, which would check whether there is enough time left for a new trial (on an individual participant basis or based on some other condition)
    edit2: editing is only allowed for a certain amount of time; after that a new post is needed, so it makes sense you could not find it :-)
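    The inclusive/exclusive distinction (and the 27-vs-28-trials question above) can be checked with a few lines (illustrative sketch):

```python
def count_trials(window_ms, trial_ms, exclusive):
    """Inclusive: start a new trial while any time remains.
    Exclusive: start only if a whole trial still fits in the window."""
    elapsed, n = 0, 0
    while (window_ms - elapsed >= trial_ms) if exclusive else (elapsed < window_ms):
        elapsed += trial_ms
        n += 1
    return n

# 1100 ms trials in a 30 s window: exclusive stops at 27, inclusive at 28.
```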

  • edited March 22

    Hello again Roelof,

    I have finally gotten around to implementing your fix - I have recoded the dual task via a single canvas item that is run and generates the appropriate stimuli via the position and colour variables. While this method has tightened up the presentation timings, I am still unable to maintain a stimulus presentation rate within 100ms of the intended rate.

    One added complication is that, in order to maintain a consistent presentation rate regardless of response times, I have had to add a pause between stimuli, with a duration calculated as the difference between the last RT and the intended presentation interval - so if the intended presentation interval is 1000ms and a participant responds in 500ms, a 500ms 'pause' (a blank frame) is presented before the next stimulus.
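    That pause rule reduces to a one-liner (sketch, with a floor at zero for RTs longer than the interval):

```python
def pause_duration(last_rt_ms, interval_ms=1000):
    """Blank-frame duration needed to keep a constant presentation rate:
    the intended interval minus the last RT, never negative."""
    return max(interval_ms - last_rt_ms, 0)
```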

    Attached below is the example code + the file pool images it uses to maintain a 1000ms presentation rate.

    With this code, the best timing performance I can get (using PsychoPy @ 640x480) is an average true presentation rate of ~1130ms.

    I have also tried stripping out the pause-related components and just having essentially all stimuli time out due to no response, resulting in an average true presentation rate of ~1100ms.

    Is there any way to get this presentation rate closer to the intended rate, or is this stimulus preparation time unavoidable? If it is unavoidable, I can use a lower cut-off from the training phase RTs to compensate (i.e., target ISI = 1000ms but ~100ms prep time, so set the actual ISI to 900ms to approximate a 1000ms ISI).

    All the best,

  • edited March 22

    edit - here is a zip with the individual images used in the DT code (I wanted to replace the individual files above, but edit won't allow me to change attachments?)

  • Hi Marton,

    I've been away from the forum for a while, hence the tardy response. Your experiment still makes use of timings that loop over the sequence item. All the recall canvases and button responses should be prepared in advance and shown in 1 run of 1 sequence, not in multiple runs of the same sequence. For this we need some inline coding. I have rewritten the task, see attachment, in the way that I think you need the stimuli to be presented, although do check the correct responses etc. to be sure.
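    In outline, the prepare/run split looks like this (the Canvas class below is a stand-in for OpenSesame's canvas(), so the sketch stays runnable outside the experiment; in the actual inline_script the canvases would be drawn in the Prepare tab):

```python
class Canvas:
    """Stand-in for OpenSesame's canvas(); drawing happens at creation."""
    def __init__(self, colour, position):
        self.colour, self.position = colour, position

def prepare_canvases(colours, positions):
    """Pre-draw one canvas per colour x position combination, so the run
    phase only flips already-prepared canvases (no per-trial prep cost)."""
    return [Canvas(c, p) for c in colours for p in positions]

canvas_list = prepare_canvases(['blue', 'red', 'yel', 'grn'], [1, 2, 3])
# run phase: canvas_list[i].show() per trial, with no preparation cost
```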

    some notes:
    -I have not put in the voice recorder
    (-resolution changed to 1000 * 1000)
    (-added text with response buttons to the main canvases)
    (-removed unused stimuli from file pool)
    (-removed break dt --> was not defined as far as I could see)
    -the order of the task is now:
    1 present 'get ready canvas'
    2 present dots in continuous loop, every dot/canvas for 1 second, or until response
    3 show placeholder if correct response was made
    4 show next canvas after 1 second
    5 show fixation dot after 10 seconds

    There is a chance that a canvas shows at the same position twice, which is currently unclear in the experiment: you could opt for a brief presentation of a different canvas to indicate there is a new dot, or
    organise your list in such a way that there are no repetitions in dot colour + location. Also, responses and response times are not saved yet; this still needs to be done.

    Let me know if this works, or if you have any other questions,

  • Thanks Roelof, I'll have a look and get back to you.

    FYI, break dt is a variable used to calculate (on the basis of the dual task presentation rate) how many dual task trials will be needed in order to achieve a 30s recall window.

  • Roelof, one quick question - would it be easier/more appropriate to set up the dual task as a coroutine (using a canvas/sketchpad item and a keyboard response item)?

  • Hi Marton, you could probably use a coroutine, but this would also have to be customized, so it would not necessarily make life a lot easier. I also have not worked with coroutines before, so I cannot provide clear insights here. And as far as I understand it, the break DT would now no longer be needed, since we are simply hardcoding the duration of the task, regardless of previous RT (if that is the correct interpretation). Hope it helps, good luck.

  • Hi Roelof,

    Had to adjust a couple of things, and for the most part it is working as intended. However, there are a couple of issues that I am not sure how to solve:

    • canvas_counter seems to be incrementing twice when a response is made, leading to an out-of-range index error when calling canvas_list.

    • I am getting some ghosting of the DT stimulus between presentations. After the placeholder canvas is shown following a response, the previous DT stimulus seems to appear briefly in its last location before the new DT stimulus appears. While this is not necessarily a huge issue, it is somewhat disorientating.

    (Again, thanks for all your help so far)
    All the best,

  • Hi Marton,

    You have to add the line start_time = clock.time() to the part of the while loop (in stimulus_presentation) that is executed when a response is given. This will solve both issues.
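    A plain-Python simulation of why that line fixes both issues (canvas_list / canvas_counter follow the thread's naming; the clock is a simple millisecond counter here rather than OpenSesame's clock):

```python
def present_stream(canvas_list, response_at, duration=1000):
    """Step through canvas_list; each canvas shows for `duration` ms or
    until a response (response_at maps canvas index -> RT in ms).
    Resetting start_time in the response branch too means the counter
    advances exactly once per stimulus and no stale frame lingers."""
    t = 0                     # simulated clock in ms
    shown = []                # (canvas, onset, rt or None)
    canvas_counter = 0
    start_time = t
    while canvas_counter < len(canvas_list):
        rt = response_at.get(canvas_counter)
        if rt is not None and rt < duration:
            t = start_time + rt          # response before timeout
        else:
            rt = None
            t = start_time + duration    # timeout
        shown.append((canvas_list[canvas_counter], start_time, rt))
        canvas_counter += 1              # exactly one increment per stimulus
        start_time = t                   # the fix: reset after a response as well
    return shown
```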


  • Thanks Eduard, that worked perfectly.

    Can I ask what is the easiest way to save specific variables of interest from stimulus_presentation?

    Tried using log.write_vars() but:

    • After, for example, initialising canvas_counter as an experimental variable, using something like log.write_vars(var.canvas_counter) throws an error - either 'int' object is not iterable, or 'int' object has no attribute 'replace'.
    • using something like log.write_vars('canvas_counter') results in having a variable for each letter in 'canvas_counter' logged.
    • putting var.canvas_counter in a list and then passing it to log.write_vars results in a similar error to using log.write_vars(var.canvas_counter).
    • using log.write_vars() to log all variables ends up logging far too many unnecessary variables.

    Alternatively, I managed to use var.write(canvas_counter) to save the necessary value, but using this across multiple variables is problematic - they are not logged under a variable name, and they are either each written on a new line or all together on the same line (rather than having var1 and var2 appear separately on the same line).

    All the best,

  • Hi Marton,

    the function log.write_vars() takes as input a list of the variables that you want to write. So, if you call it like this: log.write_vars(), all variables are stored. If you only want a subset, then you have to use log.write_vars([var_1, var_2, ..., var_n]). Generally, I recommend calling this command only once per observation (i.e. once per trial). Otherwise your logfile can get a little messy.


  • Hi Eduard,

    Thanks for the suggestion but I have already tried this and receive the following error:

    item-stack: experiment[run].response_loop[run].response_seq[run].DT_exp_seq[run].stimulus_presentation[run]
    exception type: AttributeError
    exception message: 'int' object has no attribute 'replace'

    and the following traceback:

    File "C:\Program Files (x86)\OpenSesame\lib\site-packages\libopensesame\", line 96, in run
    File "C:\Program Files (x86)\OpenSesame\lib\site-packages\libopensesame\", line 174, in _exec
    exec(bytecode, self._globals)
    Inline script, line 44, in
    File "C:\Program Files (x86)\OpenSesame\lib\site-packages\openexp_log\", line 73, in write_vars
    l = [u'"%s"' % var.replace(u'"', u'\"') for var in var_list]
    AttributeError: 'int' object has no attribute 'replace'

    I have attached the current version of the script - is there something I am missing?


  • Hi Marton,

    The log needs a list of strings with the variable names (not the variable values), like so:
    var.bropper = 1
    log.write_vars([u'bropper'])

    I agree the documentation is somewhat unclear, hope this helps though
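    For reference, the reason the earlier calls failed is visible in the traceback above: write_vars quotes each entry by calling str.replace on it, so every entry must be a variable name (a string), not an int value. A minimal reproduction of that quoting step:

```python
# The quoting step shown in the traceback: .replace() is called on each
# entry of var_list, so passing an int raises AttributeError.
def quote_names(var_list):
    return [u'"%s"' % v.replace(u'"', u'\\"') for v in var_list]

quote_names(['canvas_counter'])   # a list of name strings works
try:
    quote_names([3])              # an int (the variable's value) fails
except AttributeError:
    pass                          # 'int' object has no attribute 'replace'
```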

