
How can I make a Continuous Performance Test without coroutines?

Hello, I am new to OpenSesame and Python. I've already read the beginner tutorial and some other tutorials, but I have only started to read about Python. I intend to make a basic Conners Continuous Performance Test (CCPT), in which a series of letters shows up on the screen. Each letter is shown for 500 ms, with a new letter every 2 seconds. Participants need to press the space bar for every letter except 'X'.

I've already made the experiment in OpenSesame using coroutines instead of sequences. I use coroutines because the keyboard response needs to run in parallel with the stimulus letters. I tried using sequences once, but I couldn't make the keyboard response measure the reaction time accurately, and when the participant pressed the space bar the experiment immediately advanced to the next stimulus.

The problem I am facing right now with coroutines is that I can't run the experiment online, because the coroutines script isn't compatible with OSWeb. Is there any solution to my problem? Can I make the experiment without coroutines, or is there a way to make my experiment compatible with OSWeb?

Thank you

Note: I made the CPT using only the interface tools, without any Python inline scripts. Also, the instructions are in Indonesian.


  • FabFab
    edited April 2021

    Hi @arkasukmaa,

    I'm not very experienced with OS yet, but I think I might be able to help you. Other users or the moderators might be able to provide a better method, but what I describe below should do the job and will run on OSWeb.

    First, note that coroutines and Python inline_script objects are not supported in OSWeb (i.e., when you run your task in a browser from OS). You can check the features that are currently supported here: So, if you plan to run your experiment online, you need to stick to what is described in the link above and use Javascript instead of Python.

    So, the aim is to find a way to simulate a coroutine without actually using one. If I understand your task, you want to present a target and a transition screen, both with fixed durations, irrespective of when the subject produces a response (and subjects are allowed to respond to the target while it is on screen but also during the transition; if not, you could edit the task accordingly). The trick is to use two target screens, two transition screens, and two keyboard responses, and to use Javascript code to set the duration of the second target screen and the second transition screen based on when participants produce a response. You can find useful information in a previous thread:

    I've had a look at your task and made some changes so that it runs following a method similar to that described above. Note first that I simplified it a little by creating a single trial sequence instead of two as you had, because it seemed redundant to have two identical copies of the same sequence (you already have a variable that identifies practice and test trials anyway). The main idea can be illustrated like this:

    The pre-R1 and post-R1 intervals present the target on the screen. The duration of the Target sketchpad is set to 0 and the duration of the R1 response object to [targeton] (defined early in the task to be equal to 495 ms). The duration of the post-R1 interval is set using Javascript (in the Run tab) to a value equal to [targeton] minus R1's response time. This ensures that the target stays visible for 495 ms regardless of when, or whether, the subject produces a response. The same logic is applied to the transition screens, with the difference that the R2 keyboard object is only run if no R1 response has been detected (this is checked using Javascript and the "Run if" condition in the trial_sequence loop).
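    To make the timing logic concrete, here is a rough sketch of my own (an illustration, not the exact Run-tab code from the attached task), assuming [targeton] = 495 ms and that the keyboard object times out at [targeton] when no key is pressed:

```javascript
// Sketch: keep the target on screen for a fixed total of `targeton` ms by
// giving the post-response screen whatever time is left after the first
// keyboard object returned.
function postResponseDuration(targeton, responseTime) {
    // If no key was pressed, the keyboard object times out at `targeton`,
    // so the remainder is 0; clamp to avoid negative durations.
    var remainder = targeton - responseTime;
    return remainder > 0 ? remainder : 0;
}

// A response at 230 ms leaves 265 ms of target display time.
console.log(postResponseDuration(495, 230));  // 265
// No response: the keyboard timed out at 495 ms, nothing left to show.
console.log(postResponseDuration(495, 495));  // 0
```

    In the actual Run tab you would write the equivalent with `vars.*` variables, but the arithmetic is the same.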

    Note that the second target and second transition screens are presented using feedback objects instead of sketchpads, because feedback objects allow you to modify their duration on the fly using Javascript code in the "Run" tab. Note also that, in order not to mess up the calculation of the performance feedback displayed after a block, you need to make sure to uncheck the "reset feedback variables" option on the feedback screens used to present the target and transition screens.

    One thing to take into account is the calculation of the response time when the subject does not produce a response during the target presentation but does produce one during the transition screen. In that case, since I imagine you'd want to measure the response time from the onset of the target and not from the onset of the transition screen, you have to use Javascript to update the response time to that measured by the 2nd keyboard object plus the duration of the target presentation ([targeton]).

    if (vars.collectedR1 == "no") {
        vars.response_time = vars.response_time + vars.targeton
    }

    I've added lots of console.log code so that you can track responses and RTs on the fly while running the task within OS.

    Also, note that I moved your "reset feedback" objects; they were initially placed right before the feedback display, which would not work (you want to reset the feedback variables before a block starts, not just before displaying the feedback).

    Finally, when trying the modified task in a browser, I got a "Cannot read property 'length' of null" error message. It took me a while to figure out where it came from. It looks as if it came from the keyboard objects you had in your program. I think they might somehow have kept some properties they inherited from the coroutine you initially set up, and these properties did not disappear when I took them out of the coroutines, so I simply deleted the original keyboard objects and replaced them with new ones. That fixed the problem.

    Have a look at the modified task attached. I think it does what you were after, and it is fully compatible with OSWeb and would run as a JATOS experiment. Just edit/change it as needed. Double-check that the data are correctly recorded (I believe they are, but I'm not familiar with the task, and I noticed that your correct response is always the SPACE bar... so I'm not quite sure what it's all about; bottom line: make sure to check that the data are recorded as you want).

    Good luck!



  • edited April 2021


    Thank you so much. I've checked the attachment you modified. It's brilliant; it now works well without using coroutines.

    But I've found that the feedback slide doesn't show [acc] properly. If I quick-run the test, it always shows 0 accuracy. Does this happen because there are 2 keyboard responses available, or because I quick-ran it in OpenSesame rather than on the web? I've also checked the log file; the correct response works properly, but [acc] still shows either 0, 50, or 100.

    I also want to add variation in the interval between stimuli (usually called the interstimulus interval, or ISI). I intend to use ISIs of 1000 ms, 2000 ms, and 4000 ms. Should I add more script for vars.trial_duration, like vars.trial_duration1, vars.trial_duration2, and vars.trial_duration3?

    I intend to make the experiment loop have 6 blocks, with 3 sub-blocks within each block. The 3 sub-blocks represent the ISI variation, and each block has a different order of sub-blocks. To put it more clearly, I made it into a picture here. I'm sorry if my explanation is confusing.

    Thank you so much for your help.

  • FabFab
    edited April 2021

    Hi @arkasukmaa,

    I'm glad you found my modification of your task useful.

    I think that the problem with the accuracy displayed on the feedback after each block is due to a few things I missed the first time...

    (1) I suspect that the "reset feedback variables" on the feedback object right after R2 should be unchecked.

    (2) I just realized that some of the correct responses in the practice and experimental loops were set to "None" instead of "space". Change these back to "space":

    However, I think that because there are two responses, failure to produce a correct R1 response leads the task to count an error and include it in the calculation of the average statistics per block. In fact, I think that if you respond correctly on every trial while the target is on the screen (R1), you get 100% average accuracy (because if R1 is detected, the second keyboard never runs, which means that R2 is never missing). In contrast, if no R1 is detected, the accuracy for R1 is 0 and the task then expects an R2. I've run a test and it confirms my suspicion. The way around that would be to use code to set these variables manually. I'll have a look at this issue soon and will get back to you.

    In the meantime, I have some questions about your second query regarding the various blocks and ISIs: are blocks A-F within-subject or between-subjects? That is, is every subject going to do blocks A to F (and if so, in what order? random? sequential?)? Or are you going to have 6 groups of subjects, each doing only one of the blocks? If you can clarify your design, I'll have a go at solving it.

    Let me know and I'll try to free some time to look into this and suggest an economical way to modify the task.




  • Hello @Fab,

    There are several things I forgot to mention before.

    1) First, the "None" correct response is intentional. The instruction of the test is actually like this: "Press the space bar for all letters EXCEPT X. Please respond as quickly as possible but also as accurately as possible." So there are two correct responses: "space" for every letter except X, and "None" for every X.

    2) The blocks are within-participants, so every participant will do every block. The order of the blocks is actually not specified in the original test, but I'd prefer to have them in sequential order. I think the order of the blocks is not a major thing, because the most important part is the change of pace within each block. The change of pace is intended to see whether the participant is able to adjust to the changing demand on attention or not.

    3) As for the details of the test, there will be 6 blocks with 3 sub-blocks within each block. Each sub-block contains 20 stimuli, so a block contains 60 stimuli. The interstimulus interval (ISI) differs between sub-blocks (sub-block 1 = 1000 ms, sub-block 2 = 2000 ms, sub-block 3 = 4000 ms). Each block then has a different order of sub-blocks (for example: Block A = sub-block 1, then sub-block 2, then sub-block 3 | Block B = sub-block 1, then sub-block 3, then sub-block 2 | Block C = sub-block 2, then sub-block 1, then sub-block 3, and so on).

    Thank you so much for always helping me.

  • Hello @arkasukmaa,

    I have now had a chance to spend some time on your task. I attach below a modified version that fixes the issues with the calculation of the feedback variables and implements the block and sub-block structure you described.

    Take your time to go through the task and the comments in the various Javascript inserts, so that you get a good grasp of how it all works. Hopefully you'll then be able to modify it as needed and finish it. I list below the highlights of what I have implemented and how I solved some of the challenges.

    The task is compatible with JATOS and will run fine in a browser.

    I'm providing this "as is". Make sure to test the task carefully and check the output before you run the actual experiment.

    Good luck!


    Computing the performance variables for feedback display

    The problem with the standard feedback variables is that they take in every response. Hence, in this task, they would take in R1 and, if no R1 is detected, R2. If a participant did not produce an R1, the task would count it as one missing response and an incorrect response, and then it would take R2 as a separate response. The feedback variables would then be calculated on that basis when, in this task, you don't want R1 and R2 to be counted separately. So I have implemented variables counting the responses, the number of correct responses, the omissions, and summing up the RTs.

    Javascript code in "Performance_calculation":
    // stores performance in block performance
    if (vars.correct==1){
        // ... increment the correct-response counter here
    }
    if (vars.collectedR1=="no" && vars.collectedR2=="no") {
        // ... increment the omissions counter here
    }
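    The inline snippet above only shows fragments of the counting logic. A self-contained sketch of the idea, with hypothetical counter names (block_n, block_correct, block_omissions and block_rt_sum are my placeholders, not necessarily the names used in the attached task), could look like this:

```javascript
// Hypothetical sketch of per-block performance counting. `vars` mimics the
// OSWeb variable store; the counter names are illustrative.
var vars = {
    correct: 1,          // set by OS: 1 if the trial's response was correct
    collectedR1: "no",   // whether a key was pressed while the target was up
    collectedR2: "no",   // whether a key was pressed during the transition
    response_time: 495,  // RT of whichever keyboard object fired (or timeout)
    block_n: 0, block_correct: 0, block_omissions: 0, block_rt_sum: 0
};

// Run once per trial (as in Performance_calculation):
vars.block_n = vars.block_n + 1;                       // every trial counts as one response
if (vars.correct == 1) {
    vars.block_correct = vars.block_correct + 1;       // count correct trials
}
if (vars.collectedR1 == "no" && vars.collectedR2 == "no") {
    vars.block_omissions = vars.block_omissions + 1;   // no key press at all
}
vars.block_rt_sum = vars.block_rt_sum + vars.response_time; // timeout value if no press

// This example trial is an 'X' trial: withholding the response is correct,
// so it counts as both an omission and a correct response.
console.log(vars.block_n, vars.block_correct, vars.block_omissions);  // 1 1 1
```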

    Then, after each block, code is used to calculate the % of accurate responses by rounding to two decimals the following expression: (100 x number of correct responses) / (number of responses). Note that the number of responses is equivalent to the number of trials in the sub-block, since the absence of a response counts as a response too. To round the % accuracy, I defined a function in Javascript (since Javascript has no built-in command for rounding to a given number of decimals). The mean RT is calculated by dividing the sum of RTs by the number of responses, and I use the same function to round it. For trials without a response, the RT is taken to be the duration of the trial. In other words, the mean RT is not the mean RT for correct responses only; it is the overall mean RT. I implemented it that way because that's how the feedback object is normally implemented in OS.

    Javascript code in "calculate_feedback":

    // calculates block-wise performance
    // Declares rounding function to be used for calculating block performance (for feedback display)
    // Rounding to 2 decimals
    function roundToTwo(num) {
        return +(Math.round(num + "e+2") + "e-2")
    }

    Note that I opted to define vars.acc and vars.avg_rt, which are the variables that OS would normally define too. Only here, I'm setting their values myself instead of OS calculating them.
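    To illustrate how the pieces fit together (my own sketch, not the exact code from the attached task: blockFeedback and the counter names are placeholders), the block-wise values could be computed like this:

```javascript
// The rounding helper from the task (exponent-string trick: shift by two
// decimals, round, shift back), reproduced so the sketch is self-contained.
function roundToTwo(num) {
    return +(Math.round(num + "e+2") + "e-2");
}

// Hypothetical block-wise feedback:
// acc = (100 x correct) / responses, avg_rt = sum of RTs / responses,
// both rounded to two decimals, mirroring what OS would put in [acc]/[avg_rt].
function blockFeedback(block_correct, block_n, block_rt_sum) {
    return {
        acc: roundToTwo((100 * block_correct) / block_n),
        avg_rt: roundToTwo(block_rt_sum / block_n)
    };
}

// e.g. 17 correct out of 20 trials, RTs summing to 9876 ms:
console.log(blockFeedback(17, 20, 9876));  // { acc: 85, avg_rt: 493.8 }
```

    The exponent-string trick avoids the floating-point drift you can get with `Math.round(num * 100) / 100` for values like 1.005.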

    The bespoke performance variables are reset before each sub-block begins (reset_feedbackvariables):

    // resets bespoke feedback variables

    Setting up blocks and sub-blocks

    I set up the task so that it runs one practice block, then blocks A to F, with 3 sub-blocks in each, each of these with a distinct ISI (organized according to a Latin-square method). All within-subject, and with the blocks and sub-blocks sampled in a sequential manner (as you indicated).

    One option was to create lots of different loops, but that quickly gets messy. It is always better to use nested blocks and sequences and just set up variables that tell the task what needs to be different from one block to another.

    I set up three loops for the test trials. The first contains information about the 6 blocks (A to F), including a block id and the ISI to use in sub-blocks 1, 2 and 3 (Javascript code in the trial sequence retrieves that ISI information before each block of trials: when the sub-block is 1, it retrieves the value in column SB1_ISI; when the sub-block is 2, the value in column SB2_ISI; and likewise for sub-block 3). The second loop is just there to tell the task to run three times (once per sub-block). Then we have the loop defining what happens on every trial. Here's an illustration of the structure:

    The setup_subblock_ISI Javascript object retrieves the appropriate ISI from the SB1_ISI, SB2_ISI or SB3_ISI variables in the blocks loop, depending on the current sub-block number (subblock variable in the subblocks loop).

    // sets up the appropriate ISI value for this subblock
    if (vars.subblock == "1") {
        vars.trial_duration = vars.SB1_ISI
    }
    if (vars.subblock == "2") {
        vars.trial_duration = vars.SB2_ISI
    }
    if (vars.subblock == "3") {
        vars.trial_duration = vars.SB3_ISI
    }
    vars.r2timeout = vars.trial_duration - vars.targeton

    Two variables are defined in that code: trial_duration and r2timeout, which correspond to the total duration of a trial in ms, and the duration of the transition phase (trial_duration minus the time the target is on the screen).
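    For reference, given the sub-block orders in your earlier message, the SB1_ISI/SB2_ISI/SB3_ISI columns of the blocks loop would contain something like this (a sketch; only the orders you spelled out for blocks A-C are shown, and the remaining blocks would follow the other permutations):

```
block   SB1_ISI   SB2_ISI   SB3_ISI
A       1000      2000      4000
B       1000      4000      2000
C       2000      1000      4000
...
```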

    If you later decided to run blocks A-F in a random order, you would just have to change the order parameter of the blocks loop from "sequential" to "random".

    I modified the target and transition screens to make visible during the task what is going on (current block, sub-block, ISI, etc.) so that it is easier to check the task. Just edit these screens to remove that information when the task is ready.


    • The trial duration for the practice trials is 2000 ms. This is set up in the "setup_timings" Javascript object at the beginning of the experiment. You can change it there.
    • In order to test the task quicker, I set the number of cycles in the practice and test trial loops to 0.25x instead of 1.00x or whatever values you'll use in the actual experiment. Just make sure to modify these loops to meet your requirements.
    • I output a lot of information to the console for monitoring and debugging during development. Once the task is finished and ready, my advice is to make a copy for backup and future reference, and to remove the console.log instructions to make the version you'll actually run a little faster.
    • The OS compatibility check complained about the high number of variables being output by the logger and demanded that it be set manually to a selection of variables, instead of logging all variables (which is normally the recommended default).
    • I therefore disabled "log all variables" and inserted a series of key variables which I think would be the relevant ones for your data analysis. However, make sure to check this and add (or indeed remove) anything you want. Not logging everything makes sense when running the task online (it helps keep things smooth and fast). The easiest way to set up the variables to be logged is to display the variables inspector and drag whatever variable you want into the logger.


  • @Fab ,

    Thank you so much, you've been a great help. I am sorry for the late response. I will check and learn from your modifications this weekend. I will contact you again to let you know whether your modifications work perfectly or whether there is something I'm confused about.

  • @Fab,

    Hi Fabrice,

    Sorry for my late reply. I've checked the experiment you modified. It runs well and I've not found any trouble so far.

    Thank you so much for your help.

