Sequence Run if based on accuracy
Hello,
I'm looking to train participants, then check their training with a test. If participants complete the test with 100% accuracy (every question correct), they progress to the next phase. If they score <100%, failing even one of the questions, they repeat the train-test cycle, up to a maximum of three times.
In my sequence, I have a training loop, a test loop, and a training_repeat loop. I've tried setting the training_repeat loop to Run if [acc] = 0, but this only reflects the accuracy of the most recent trial, not the overall accuracy for the test loop, i.e. it only indicates whether the final question was answered correctly.
Is there a way, in the sequence, to set the training_repeat loop to Run if the test loop's total accuracy is <100%? And if a participant still scores <100% after the third round of training, is there a way to stop repeating the training and instead move on to the next phase? Or perhaps there is a better way altogether?
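For what it's worth, the intended control flow can be sketched in plain Python. This is only an illustration of the logic, not OpenSesame/OSWeb code; run_training, run_test, and MAX_ROUNDS are made-up names:

```python
MAX_ROUNDS = 3  # maximum number of train-test cycles


def run_training():
    """Placeholder for the training loop."""
    pass


def run_test():
    """Placeholder for the test loop; returns accuracy as a percentage."""
    return 100.0  # stub value for illustration


for attempt in range(1, MAX_ROUNDS + 1):
    run_training()
    acc = run_test()
    if acc == 100:
        break  # every question correct: stop repeating

# Whether the participant passed or exhausted all three rounds,
# the experiment now moves on to the next phase.
```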
This is for an experiment to be run online via JATOS, so sadly I am constrained in terms of inline script.
Any help very much appreciated!
Comments
Hi @Steph76 ,
It sounds like the acc variable only reflects the correctness of the last response (i.e. it's either 0 or 100, but never, say, 75). If so, this most likely means that the feedback variables are reset on every trial. This can happen, for example, if there's a feedback item in the trial sequence with the 'Reset feedback variables' option enabled. Could that be it?

Once that's fixed, you can probably set the run-if statement for the training_repeat loop to [acc] < 100.

— Sebastiaan
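To illustrate the difference in plain Python (a sketch of the bookkeeping only, not OpenSesame's actual internals): acc behaves like a running average over all responses since the feedback variables were last reset, so resetting on every trial collapses it to either 0 or 100.

```python
def running_accuracy(correct_responses):
    """Average accuracy (in %) over all responses since the last reset.

    correct_responses is a list of 1 (correct) / 0 (incorrect) values.
    """
    return 100.0 * sum(correct_responses) / len(correct_responses)


# Accumulated over the whole test: 3 of 4 correct -> 75%
print(running_accuracy([1, 1, 0, 1]))  # 75.0

# Reset on every trial: only the last response counts -> 0 or 100
print(running_accuracy([1]))  # 100.0
print(running_accuracy([0]))  # 0.0
```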
Check out SigmundAI.eu for our OpenSesame AI assistant!
Hi @sebastiaan ,
Thanks so much for getting back to me. I've added in a feedback item and disabled reset, as you suggested, which has fixed the issue there.
However, now when the training_repeat loop runs and the participant is tested again, accuracy is computed across both the first and second tests: a participant who scores <100% on the first test repeats the training as intended, but even if they then score 100% on the second test, the combined accuracy is still <100%, so they are sent back to training instead of on to the next part of the experiment. Can I keep acc referring to the whole test (rather than resetting per trial), but reset it at the start of each new occurrence of the test, while staying JATOS-friendly?
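The problem can be sketched in plain Python (illustrative only; the counter names are made up): if the correct/total counters carry over between test rounds, a perfect second test still yields a combined accuracy below 100%, whereas resetting the counters before each test restores the intended behaviour.

```python
def accuracy(n_correct, n_total):
    """Accuracy as a percentage."""
    return 100.0 * n_correct / n_total


# Without a reset, responses accumulate across tests:
n_correct, n_total = 0, 0
n_correct += 3; n_total += 4   # first test: 3/4 correct -> repeat training
n_correct += 4; n_total += 4   # second test: 4/4 correct
print(accuracy(n_correct, n_total))  # 87.5 -> still sent back to training

# Resetting before the second test gives the intended result:
n_correct, n_total = 0, 0
n_correct += 4; n_total += 4
print(accuracy(n_correct, n_total))  # 100.0 -> move on
```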
I have discovered the reset feedback item, so all solved! Unless there is a neater way to repeat a cycle than hard-coding each loop (study-test x3) and using Run if in the sequence?
Thanks very much for your guidance! :)
Good to hear you figured it out!