How can I make a Continuous Performance Test without coroutines?
Hello, I am new to OpenSesame and Python. I've already read the beginner tutorial and some other tutorials, but I have only started to learn Python. I intend to make a basic Conners Continuous Performance Test (CCPT), where a series of letters shows up on the screen. Each letter is shown for 500 ms, with a new letter every 2 seconds. Participants need to press the space bar for every letter except 'X'.
I've already made the experiment in OpenSesame using coroutines instead of sequences. I use coroutines because the keyboard response needs to run in parallel with the stimulus letters. I tried using sequences once, but I couldn't make the keyboard response measure reaction time accurately, and when the participant pressed the space bar the display immediately advanced to the next stimulus.
The problem I am facing right now is that I can't run the experiment online, because coroutines aren't compatible with OSWeb. Is there any solution to my problem? Can I make the experiment without coroutines, or is there any way to make my experiment compatible with OSWeb?
Note: I made the CPT using only the interface tools, without any Python inline script. Also, the instructions are in Indonesian.
I'm not very experienced with OS yet, but I think I might be able to help you. Other users or the moderators might be able to provide a better method, but what I describe below should do the job and will run on OSWeb.
I've had a look at your task and made some changes so that it runs following a method similar to that described above. Note first that I simplified it a little by creating a single trial sequence instead of two as you had, because it seemed redundant to have two identical copies of the same sequence (you already have a variable that identifies practice and test trials anyway). The main idea can be illustrated like this:
I've added lots of console.log calls so that you can track responses and RTs on the fly while running the task within OS.
Also, note that I moved your "reset feedback" objects; they were initially placed right before the feedback display, which would not work (you want to reset the feedback variables before a block starts, not just before displaying the feedback).
Finally, when trying the modified task in a browser, I got a "Cannot read property 'length' of null" error message. It took me a while to figure out where it came from: it appears to come from the keyboard objects you had in your program. I think they may have kept some properties inherited from the coroutine you initially set up, and these properties did not disappear when I took them out of the coroutines. So I simply deleted the original keyboard objects and replaced them with new ones. That fixed the problem.
Have a look at the modified task attached. I think it does what you were after, and it is fully compatible with OSWeb and would run as a JATOS experiment. Just edit / change it as needed. Double-check that the data are correctly recorded (I believe they are, but I'm not familiar with that task, and I noticed that your correct response is always the space bar, so I'm not quite sure what it's all about). Bottom line: make sure to check that the data are recorded as you want.
Thank you so much. I've checked the attachment you modified. It's brilliant; it now works well without using coroutines.
However, I've found that the feedback slide doesn't show [acc] properly. If I quick-run the test, it always shows 0 accuracy. Does this happen because there are 2 keyboard responses available, or because I quick-ran it in OpenSesame rather than on the web? I've also checked the log file: the correct response works properly, but [acc] still shows a value of either 0, 50, or 100.
I also want to vary the interval between stimuli (usually called the interstimulus interval, or ISI). I intend to use ISIs of 1000 ms, 2000 ms, and 4000 ms. Should I add more script along the lines of vars.trial_duration, like vars.trial_duration1, vars.trial_duration2, and vars.trial_duration3?
I intend to make the experiment loop over 6 blocks with 3 sub-blocks within each block. The 3 sub-blocks represent the ISI variation, and each block has a different ordering of the sub-blocks. To put it more clearly, I've made a picture of it here. I'm sorry if my explanation is confusing.
Thank you so much for your help.
I'm glad you found my modification of your task useful.
I think that the problem with the accuracy displayed on the feedback after each block is due to a few things I missed the first time...
(1) I suspect that the "reset feedback variables" on the feedback object right after R2 should be unchecked.
(2) I just realized that some of the correct responses in the practice and experimental loops were set to "None" instead of "space". Change these back to "space":
However, I think that because there are two responses, failure to produce a correct R1 response leads the task to count an error and include it in the calculation of the average statistics per block. In fact, I think that if you respond correctly on every trial while the target is on the screen (R1), you get 100% average accuracy (because if R1 is detected, the second keyboard never actually runs, which means that R2 can never be missing). In contrast, if no R1 is detected, the accuracy for R1 is 0 and the task then expects an R2. I've run a test and it confirms my suspicion. The way around that would be to use code to set these variables manually. I'll have a look at this issue soon and will get back to you.
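To make the arithmetic concrete, here is a toy sketch (my own simplified model with hypothetical names, not anything taken from the task itself) of how counting R1 and R2 as separate data points produces exactly the 0 / 50 / 100 values you saw in the log:

```javascript
// Toy model of block-level accuracy when each keyboard response is
// averaged as a separate data point by the standard feedback logic.
function blockAccuracy(trials) {
  // Each trial contributes its R1 score and, only when R1 was missed,
  // an R2 score as a second data point.
  var scores = [];
  trials.forEach(function (t) {
    scores.push(t.r1Correct ? 1 : 0);
    if (!t.r1Correct) scores.push(t.r2Correct ? 1 : 0);
  });
  var sum = scores.reduce(function (a, b) { return a + b; }, 0);
  return 100 * sum / scores.length;
}
```

Under this model, correct R1s on every trial give 100; a missed R1 followed by a correct R2 on every trial gives 50; missing both gives 0.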
In the meantime, I have some questions about your second query regarding the various blocks and ISIs: are blocks A-F within-subject or between-subjects? That is, is every subject going to do blocks A to F (and if so, in what order? random? sequential?)? Or are you going to have 6 groups of subjects, each doing only one of the blocks? If you can clarify your design, I'll have a go at solving it.
Let me know and I'll try to free some time to look into this and suggest an economical way to modify the task.
There are several things I forgot to mention before.
1) First, the "None" in the correct response is intentional. The instruction of the test is actually like this: "Press the space bar for all letters EXCEPT X. Please respond as quickly as possible but also as accurately as possible." So there are two correct responses: "space" for every letter except X, and "None" for every X.
2) The blocks are within-participants, so every participant will do every block. The order of the blocks is not actually specified in the original test, but I would prefer to have them in sequential order. I think the order of the blocks is not a major issue, because the most important part is the change of pace within each block. The change of pace is intended to test whether participants are able to adjust to the changing demand on attention.
3) As for the details of the test, there will be 6 blocks with 3 sub-blocks within each block. Each sub-block contains 20 stimuli, so each block contains 60 stimuli. The interstimulus interval (ISI) differs between sub-blocks (sub-block 1 = 1000 ms, sub-block 2 = 2000 ms, sub-block 3 = 4000 ms). Each block then uses a different order of sub-blocks (for example: Block A = sub-block 1, then sub-block 2, then sub-block 3 | Block B = sub-block 1, then sub-block 3, then sub-block 2 | Block C = sub-block 2, then sub-block 1, then sub-block 3, and so on).
Thank you so much for always helping me.
I have now had a chance to spend some time on your task. I attach below a modified version that fixes the issues with the calculation of the feedback variables and implements the block and sub-block structure you described.
The task is compatible with JATOS and will run fine in a browser.
I'm providing this "as is". Make sure to test the task carefully and check the output before you run the actual experiment.
Computing the performance variables for feedback display
The problem with the standard feedback variables is that they take in every response. Hence, in this task, they would take R1 and, if no R1 is detected, R2. If a participant did not produce an R1, the task would count it as one missing response and one incorrect response, and then take R2 as a separate response. The feedback variables would then be calculated on that basis, whereas in this task you don't want R1 and R2 to be counted separately. So I have implemented variables counting the responses, the number of correct responses, and the omissions, and summing up the RTs.
Note that I opted to define vars.acc and vars.avg_rt, which are the variables that OS would normally define too. Only here, I'm setting their values myself instead of OS calculating them by itself.
The bespoke performance variables are reset before each sub-block begins (reset_feedbackvariables):
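For illustration, here is a minimal sketch of what such bespoke counters could look like in OSWeb-style inline JavaScript. The variable names (n_trials, n_correct, n_omissions, rt_sum), the three-function split, and the mock vars object standing in for OpenSesame's variable store are all my own assumptions, not the exact code in the attached task:

```javascript
// Mock of OpenSesame's variable store; in a real inline_javascript item
// you would use the vars object that OSWeb provides.
var vars = {};

// reset_feedbackvariables: run once before each sub-block starts.
function resetFeedbackVariables() {
  vars.n_trials = 0;
  vars.n_correct = 0;
  vars.n_omissions = 0;
  vars.rt_sum = 0;
}

// Run once per trial, after both keyboard items have had their chance.
// trialCorrect and rt are assumed to be derived from R1/R2 beforehand;
// rt is null when the participant did not respond at all.
function updateFeedbackVariables(trialCorrect, rt) {
  vars.n_trials += 1;
  if (trialCorrect) vars.n_correct += 1;
  if (rt === null) {
    vars.n_omissions += 1;
  } else {
    vars.rt_sum += rt;
  }
}

// Run just before the feedback display: overwrite the variables the
// feedback item shows ([acc] and [avg_rt]) with our own values.
function computeFeedback() {
  vars.acc = 100 * vars.n_correct / vars.n_trials;
  var nResponses = vars.n_trials - vars.n_omissions;
  vars.avg_rt = nResponses > 0 ? vars.rt_sum / nResponses : null;
}
```

This way, each trial contributes exactly one data point, regardless of whether the response was caught by the first or the second keyboard item.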
Setting up blocks and sub-blocks
I set up the task so that it runs one practice block, then blocks A to F, with 3 sub-blocks in each, each of these with a distinct ISI (organized according to a Latin-square method). All within-subject, and with the blocks and sub-blocks sampled in a sequential manner (as you indicated).
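For reference, the six sub-block orderings could be generated like this. This is only a sketch with assumed names: with three ISI levels and six blocks, you can cover the full set of six permutations, which is what your example ordering suggests rather than a minimal 3x3 Latin square:

```javascript
// The three ISI levels in ms, as described in the design.
var ISI_LEVELS = [1000, 2000, 4000];

// Hypothetical helper: enumerate all orderings of the given levels.
function allOrders(levels) {
  if (levels.length <= 1) return [levels];
  var result = [];
  levels.forEach(function (head, i) {
    var rest = levels.slice(0, i).concat(levels.slice(i + 1));
    allOrders(rest).forEach(function (tail) {
      result.push([head].concat(tail));
    });
  });
  return result;
}

// One row per block (A-F): the sequence of sub-block ISIs for that block.
var blockOrders = allOrders(ISI_LEVELS);
```

In the actual task the same effect is achieved with a nested loop table rather than code, but the table has to contain the equivalent of these six rows.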
One option was to create lots of different loops, but that quickly gets messy. It is always better to use nested blocks and sequences and just set up variables that tell the task what needs to be different from one block to another.
Two variables are defined in that code: trial_duration and r2timeout, which correspond to the total duration of a trial in ms, and the duration of the transition phase (trial_duration minus the time the target is on the screen).
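As a concrete sketch of that relationship (assuming, as in your original design, a 500 ms target display, and assuming the ISI values are onset-to-onset intervals, i.e. the total trial length; the function name is hypothetical):

```javascript
var TARGET_DURATION = 500; // ms the letter stays on screen

// Hypothetical helper: derive both timing variables from one ISI value,
// treating the ISI as the onset-to-onset interval (= total trial length).
function subBlockTiming(isi) {
  var trial_duration = isi;                         // total trial length in ms
  var r2timeout = trial_duration - TARGET_DURATION; // transition phase
  return { trial_duration: trial_duration, r2timeout: r2timeout };
}
```

Under that assumption, the three ISI levels give transition phases of 500, 1500, and 3500 ms respectively.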
If you later decided to run blocks A-F in a random order, you would just have to change the order parameter of the blocks loop from "sequential" to "random".
I modified the target and transition screens to make visible during the task what is going on (current block, sub-block, ISI, etc.) so that it is easier to check the task. Just edit these screens to remove that information when the task is ready.
Thank you so much, you've been a great help. I am sorry for the late response. I will check and study your modifications this weekend. I will contact you again to let you know whether your modifications work perfectly or whether there is anything I'm confused about.
Sorry for my late reply. I've checked the experiment you modified. It runs well and I haven't found any problems so far.
Thank you so much for your help.