How to design a sound-picture mapping perception test using OpenSesame
I'm an MA student in linguistics at the Chinese University of Hong Kong, and I want to design a sound-picture mapping test: when participants hear a sound, two pictures appear for them to choose between, one serving as the target and the other as the baseline. Both reaction time and accuracy (correct-response rate) need to be measured.
I was wondering how I can design this kind of experiment using OpenSesame. Are there any example scripts available?
Thanks!
Comments
Hi,
You can browse the forum for examples, or some other online resources (e.g. https://osf.io/d2ecb/).
However, I don't think the problem is very hard, and you should be able to implement it yourself. You just need a sequence with a sampler (to play the sound), followed by a sketchpad (to show the two images, ideally linked to the mousetrap plugin: https://osdoc.cogsci.nl/3.2/manual/mousetracking/) and a logger. There is a minimal scripted sketch below.
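If you'd rather script it than wire up GUI items, here is a minimal sketch for the run phase of an inline_script item in OpenSesame 3.x. The file names (sound.ogg, target.png, baseline.png) and the loop variable target_pos (taking the values 'left' or 'right') are assumptions for illustration; adapt them to your own file pool and loop table. It also assumes the default center-based coordinate system, where x = 0 is the middle of the screen.

```python
# Run phase of an inline_script item.
# Assumed file-pool names: sound.ogg, target.png, baseline.png
# Assumed loop variable: target_pos ('left' or 'right')

# Horizontal offsets for the two pictures (center-based coordinates)
x_target = -256 if var.target_pos == 'left' else 256
x_baseline = -x_target

# Draw the two pictures side by side
my_canvas = canvas()
my_canvas.image(pool['target.png'], x=x_target, y=0)
my_canvas.image(pool['baseline.png'], x=x_baseline, y=0)

# Play the sound, then show the pictures
my_sampler = sampler(pool['sound.ogg'])
my_sampler.play()
t0 = my_canvas.show()

# Wait for a mouse click on either side
my_mouse = mouse(visible=True)
button, (x, y), t1 = my_mouse.get_click()

# Reaction time and correctness; a downstream logger will pick these up
var.response_time = t1 - t0
clicked_side = 'left' if x < 0 else 'right'
var.correct = 1 if clicked_side == var.target_pos else 0
```

(For a cleaner design you would build the canvas in the prepare phase and only show it in the run phase.) If you don't need full cursor trajectories from mousetrap, a plain mouse_response item after the sketchpad should also work and logs response_time and correct for you.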
Does that make sense?
Eduard
Edit: the experiment in this discussion might provide you with a starting point: http://forum.cogsci.nl/index.php?p=/discussion/4872/no-more-than-two-items-per-side-i-cant-use-a-constrain-bc-every-item-is-randomly-selected#latest