[open] Collecting multiple responses after a succession of stimuli
Hi there, I've been browsing the documentation and the forum but couldn't find an answer to my problem; if I overlooked anything obvious, I apologize in advance.
I'm trying to set up a reading span test in which a participant is first presented with a number of phrases (one by one) and is then required to correctly input an answer to each of them. I'm just at a loss as to how to refer back to past correct responses.
Any help would be greatly appreciated.
Regards,
Roelof
Comments
Hi Roelof,
Would you mind elaborating a bit on what you mean by "referring back to past correct responses"? It is not entirely clear to me yet what you want to accomplish here.
Generally, it is quite handy to use dictionaries if you want to keep things that belong together in one place (e.g., questions with their corresponding answers). Once you give a little more detail, I will also explain better how to do that.
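For instance, a plain Python dictionary keeps each sentence and its correct final word in one place (the sentences below are only made-up placeholders):

```python
# Each sentence is stored together with its correct final word, so the
# two can never get out of sync. The items are made-up examples.
sentence_pool = {
    'The dog chased the red ball': 'ball',
    'She poured the tea into her cup': 'cup',
    'He parked the car in the street': 'street',
}

# Looking up the correct answer for a presented sentence:
correct_word = sentence_pool['The dog chased the red ball']
print(correct_word)  # -> ball
```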
Thanks,
Eduard
Hi Eduard,
I want to ask multiple questions in succession and then collect all the answers at once. In particular, subjects would get a number of sentences drawn at random from the block loop, and after X sentences they would have to enter the last word of each of those sentences all at once.
Does that make things clearer?
Regards,
Roelof
Hi,
And what would you measure? Only recall accuracy, or anything else? And is X always the same number?
It seems the easiest (conceptual) solution would be a loop in which you present all the sentences and that runs for a certain number of iterations (either by setting the number beforehand, or by breaking once a limit is exceeded). Once the loop is over, you could use a text_input_form to let participants type the words that they recall. Once you have the response, you can process it to find the corresponding answer for each question.
However, that can be implemented in many ways, and from what I have gathered so far it is hard to decide which would be best. If you really need clear advice, and some more guidance on how to structure your OpenSesame code, you will have to give us more detail with respect to the design, the variables, and so on.
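As a rough sketch of what that processing could look like, split over a few inline_scripts; this assumes OpenSesame 3's var object (older versions use exp.set() / self.get() instead), a loop column named last_word, and the form response being stored in recall_response, all of which are assumptions about your experiment:

```python
# 1) inline_script placed before the sentence loop:
#    start each block with an empty list of presented words.
presented_words = []

# 2) Run phase of an inline_script inside the sentence loop:
#    remember the correct last word of the sentence that was just shown.
presented_words.append(var.last_word)

# 3) inline_script after the text_input_form:
#    compare the typed words, position by position, with the presented
#    ones (assuming the participant separates the words with spaces).
typed_words = var.recall_response.split()
var.n_correct = sum(
    1 for typed, correct in zip(typed_words, presented_words)
    if typed.strip().lower() == correct.lower()
)
```

The number of words recalled in the correct position would then be available as [n_correct] for a logger or further processing.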
Does this help?
Eduard
Hi,
The measure would be the number of correct answers in the correct spot.
That setup sounds like what I would like to have, but it is exactly that processing of the answers that I don't have working. I would like OpenSesame to automatically compare the answers in the text_input_form to the correct variable from the block_loop.
If it'd help I could upload my experiment as it is now?
Regards,
Roelof
Hi Roelof,
Sorry for keeping you waiting for so long. Sure, just upload the experiment and I will have a look.
Best,
Eduard
Hi Eduard,
No problem, here is the link to the experiment: http://www.filedropper.com/ospan2
I think the comparison of the answers can be handled afterwards with an Excel script, so that is no longer a problem. I have run into a different problem, however. I want a few trials to be randomly drawn from a large pool of trials. To do this, I have set the blocks to end at [count_trial_sequence] = 7. I've defined this variable to be 3 at the trial level and to be randomized at the experiment level, or at least that is what I think is supposed to happen. Instead, the variable behaves as if it is 0 the first time (and runs 8 trials), and it doesn't get reset over the course of the experiment. If you could help me with that, that would be awesome!
Regards,
Roelof
Hi Roelof,
I am not super sure myself when it comes to using all of OpenSesame's count_XXX variables. When I want to do something similar, I usually create the variables myself. So in your case, you could say something like this:
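A minimal sketch, assuming OpenSesame 3's var object and a hypothetical variable name counter (older versions would use exp.set() / self.get() instead):

```python
# 1) inline_script placed just before the block loop:
#    reset the counter so that every block starts counting from zero.
var.counter = 0

# 2) Prepare phase of an inline_script at the start of the trial sequence:
#    count the trial that is about to run.
var.counter = var.counter + 1
```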
Put the incrementing in the prepare phase (not the run phase) of that inline_script, and use [counter] = 7 as the break-if condition. Does this help?
Eduard