Singleton Visual Search Task - Flickering/Moving and Non-Flickering/Moving Trials Intertwined
Hi,
I am encountering a problem with my OpenSesame experiment.
Here's a brief overview:
Experimental Design:
I'm working on an experiment in OpenSesame that consists of a singleton visual search task with 8 different shapes positioned on the canvas in a circular pattern. Among these shapes there is a target, which participants are required to look at and then indicate whether the line inside it is horizontal or vertical (by pressing the 'left' or 'up' arrow key). The target differs in shape from the other items (the distractors): in this case the target is an orange circle, whereas the distractors are orange diamonds. One of the distractors (the singleton) has a different colour, in this case green. The idea behind this paradigm is to see how people suppress distractors while conducting a visual search task. Our experiment has three conditions: one in which the singleton merely has a different colour (still condition), a second in which the singleton flickers (flicker condition), and a third in which the singleton moves sideways (motion condition). It is worth noting that the shapes change positions randomly from trial to trial.
Open Sesame Structure:
To achieve this I have structured the experiment both through the OpenSesame interface and with inline scripts. For each condition I have created a loop element that contains first a sketchpad with a mask, then a fixation point, and then a sketchpad with the shape images positioned on the canvas, named 'target', 'dis1', 'dis2', 'dis3', 'dis4', 'dis5', 'dis6' and 'dis7' (the singleton distractor among these shapes is 'dis1'). Following the sketchpad there is an inline_script with Python code that randomly shuffles the positions of the shapes and creates the flicker/motion effect for the shape 'dis1' on the canvas.
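For the position shuffling, a minimal stand-alone sketch of the general idea (not Anton's actual code) could look like the following. It assumes a radius of 300 px, a canvas centred on (0, 0), and the item names 'target' and 'dis1'..'dis7' from the sketchpad; all of these are illustrative choices:

```python
import math
import random

def circle_positions(n=8, radius=300):
    """Return n (x, y) points equally spaced on a circle around (0, 0)."""
    positions = []
    for i in range(n):
        angle = 2 * math.pi * i / n
        x = round(radius * math.cos(angle))
        y = round(radius * math.sin(angle))
        positions.append((x, y))
    return positions

# One random assignment per trial: shuffle the circle slots, then hand
# them out to 'target', 'dis1' ... 'dis7' (names as in the sketchpad).
names = ['target'] + ['dis%d' % i for i in range(1, 8)]
slots = circle_positions()
random.shuffle(slots)
assignment = dict(zip(names, slots))
```

Each trial would then move the sketchpad elements to their assigned coordinates.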
The Current Issue:
The code works well in all conditions (particularly the motion and flicker ones), presenting the stimuli in random positions on each trial and with the proper flicker and motion effect for the singleton 'dis1'. The only issue is that, in the flicker and motion conditions, a frame with 'dis1' flickering/moving is presented first and a response is allowed. However, immediately afterwards another frame is presented (with the same shape positions as the previous one) in which only 'dis1' is no longer flickering/moving, and a second response is required to move on to the next trial. This sequence repeats for the number of rows specified in the loop element (four in my case), so you end up with double the number of trials (eight in this case).
What we are asking for is guidance on how to ensure that trials in the flicker and motion conditions progress smoothly, without the unintended static frame and extra response prompt. Any suggestions or insights would be greatly appreciated.
Hereby I am attaching a brief version of this task.
Any help would be highly appreciated!
Best,
Anton
PS: Bear in mind I am not a Python expert, so I apologise in advance if the code is a bit messy.
Comments
Hi Anton,
I edited your experiment somewhat. There were multiple problems, all somewhat related to the prepare-run strategy: https://osdoc.cogsci.nl/4.0/manual/prepare-run/
If you dynamically change the appearance of stimuli (flicker, shift), these changes should take place in the run phase of the sequence. Furthermore, if you present your stimuli dynamically with inline_scripts, you need to poll the response in the inline_script as well. In that case, the keyboard_response item is redundant (and is the reason you have to press the response key 8 times instead of 4). Similarly, you could actually drop the sketchpads altogether and draw the canvases directly in inline_scripts (though the way you do it now is fine as well).
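To illustrate that general approach, here is a hedged sketch of a run-phase loop that toggles the singleton's visibility and polls the keyboard in the same loop. The flicker-phase helper is plain Python; the commented part uses the OpenSesame Canvas/Keyboard/clock objects, and the timeout, flicker period, names `x1`/`y1` and file name `dis1.png` are all illustrative assumptions, not anything from the attached experiment:

```python
def flicker_on(elapsed_ms, period_ms=200):
    """True while 'dis1' should be visible: on for the first half of
    each flicker period, off for the second half."""
    return (elapsed_ms % period_ms) < (period_ms / 2)

# Inside an OpenSesame inline_script (run phase) it could look like:
#
# my_keyboard = Keyboard(keylist=['left', 'up'], timeout=0)
# t0 = clock.time()
# response = None
# while response is None and clock.time() - t0 < 2000:
#     cnv = Canvas()
#     # ... draw target and dis2..dis7 at their shuffled positions ...
#     if flicker_on(clock.time() - t0):
#         cnv.image(pool['dis1.png'], x=x1, y=y1)  # hypothetical names
#     cnv.show()
#     response, timestamp = my_keyboard.get_key()
```

Because the response is collected inside the loop (timeout=0 makes `get_key()` non-blocking), no separate keyboard_response item is needed afterwards.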
My example just demonstrates the general approach; I did not try to "fix" your experiment completely. Hopefully it points you in the right direction. Let me know if not!
Good luck,
Eduard
Hi Eduard!
Thank you so much for your response and help with this, I really appreciate it.
What you are suggesting seems to deal with the problem I was having.
Thanks again,
Anton
Hi again,
I have followed Eduard's suggestions and everything is running well.
The only aspect I might need some help with is that, when running the task, in all trials there is an approximately 3-second delay between pressing the keyboard response and the start of the next trial.
I was wondering if there is a way to reduce that delay in each loop sequence.
Thanks in advance,
Anton
PS: Experiment attached in this message
Hi Anton,
One part of the delay is certainly the starting/stopping of the eye tracking. 3 seconds appears a bit excessive, though. If my memory doesn't trick me, I experienced about 1 s of delay due to starting/stopping the eye tracker. Perhaps there is some variability across operating systems? I suggest the following:
Hope this helps,
Eduard