Using visual AND auditory stimuli in the same loop/sequence for different trials?
First of all, I'm quite new here and to OpenSesame. I know how to build a basic experiment, but not much more...
Here is my problem:
- at the moment, my overall experiment is split into three separate OpenSesame tasks, i.e. three .osexp files: one for the phonological modality, one for the semantic modality, and one for the visual modality.
- the three tasks have almost the same design, except that the phonological and semantic ones both present auditory stimuli (using a `sampler`), whereas the visual one has a simple visual presentation (using a `sketchpad`)
- what we would like to try now is to present the three modalities all together, in random order
- and I'm struggling to build an experiment where, within the same block, participants either see a visual form on the screen (visual condition) or hear words (phonological and semantic conditions).
- I have differentiated the two types by adding a 'condition' column to my xlsx file and tried to use the 'run if' option, like this:
  run if (or show if?) `[condition] = 'visual'` for the sketchpad with my visual stimuli, and
  `[condition] = 'verbal'` for the sampler with my auditory stimuli.
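To make that concrete, the loop table looks roughly like this (the 'stimulus' column name here is just my example; only 'condition' is the actual column I added):

```
condition   stimulus
visual      cheval.png
verbal      cheval.wav
```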
- using the `if statement` doesn't prevent the items from being prepared, and I get different error messages (depending on my different attempts), but most of the time it seems that, because run_if/show_if still allows item preparation, the program looks for my visual stimuli under the audio extension (.wav here) or for my auditory stimuli under the visual extension (.png), for example:
  `"cheval.png" does not exist` (which is true, since the correct file is cheval.wav)
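Just to show the logic I'm trying to get, here is a plain-Python sketch (this is not OpenSesame code; the function name and the idea of one 'verbal' label covering both auditory conditions are my own shorthand):

```python
# Sketch of what each trial should do: one 'condition' variable decides
# which file type gets loaded, so a verbal trial never touches a .png
# and a visual trial never touches a .wav.

def stimulus_file(condition, base_name):
    """Return the file to load for this trial (names are examples)."""
    if condition == 'visual':
        return base_name + '.png'   # shown with a sketchpad
    else:                           # 'verbal' = phonological or semantic
        return base_name + '.wav'   # played with a sampler

print(stimulus_file('visual', 'cheval'))  # cheval.png
print(stimulus_file('verbal', 'cheval'))  # cheval.wav
```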
Here is an example of what I would like to get one day... Is that even possible? Should I create another loop or sequence? If so, I have tried different approaches, but I don't really know how to do it properly...
Well... I hope someone can give me some advice. Thank you all.
PS: sorry for my English!