[Solved] Timing glitch?
Good afternoon everyone,
I have a question about a possible timing glitch. I'm developing an OpenSesame experiment, but I have the feeling that some items appear sooner than they should. What can I do to fix this?
Experiment setup:
Participants first hear the first syllable of a Spanish word over their headphones (e.g. SÁ_). At the word's offset they see a word on a screen, which is either:
- a complete match (SÁBADO),
- a match but with stress on a different syllable (e.g. SALÓN),
- a mismatch (e.g. MOSQUITO), or
- a non-word (e.g. FÍTULA).
They need to judge whether the word exists in Spanish or not, yes or no (SÍ/NO). Presumably their response time is faster when seeing the matching prime (the target word would have been primed in their minds).
OpenSesame situation:
Four asterisks (****) are presented in the middle of the screen. This is a sketchpad item with a duration of 0 ms. The next item plays an audio file, with its duration set to 'sound'. Because the sketchpad's duration is 0 ms, the asterisks and the audio file are presented simultaneously, and together they stay on screen for as long as the audio file plays (that is what duration 'sound' means).
Then the target word is presented in the middle of the screen. This is a sketchpad item showing [target_word], but it is often presented before the audio file has stopped playing.
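The intended sequence can be sketched in plain Python. This is a timing simulation, not OpenSesame code: `show_sketchpad` and `play_sound` are hypothetical stand-ins for the items, with blocking behaviour standing in for duration 0 ms vs. duration 'sound'.

```python
import time

def show_sketchpad(duration_ms):
    """Draw a display, then wait duration_ms (0 ms -> continue at once)."""
    onset = time.monotonic()
    time.sleep(duration_ms / 1000.0)
    return onset

def play_sound(sound_ms):
    """Start playback; with duration 'sound', block until playback ends."""
    onset = time.monotonic()
    time.sleep(sound_ms / 1000.0)  # simulate waiting for the file length
    return onset

t_fix = show_sketchpad(0)   # asterisks, 0 ms: sequence continues immediately
t_snd = play_sound(300)     # audio; sequence waits here for the full 300 ms
t_tgt = show_sketchpad(0)   # target word: should start at the sound's offset

print(round((t_tgt - t_snd) * 1000))  # ~300 ms from sound onset to target
```

If the target word appears early, the sound item is evidently not blocking for the full file length, which is why the suggestions below focus on the sound files and the log timestamps.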
Why is this and what can I do to remedy it?
Many thanks in advance,
Daan
Comments
So to be clear: [target_word] should be presented the moment the audio item stops, not before it.
Hi @DPvanSoeren,
Thanks for your post and the description of the problem. It is difficult to ascertain where the problem lies without seeing how you're implementing the task, but I believe it originates in the way you set it up.
I built a basic task following the sequence of events you described and I get the target word to appear at the sound's offset, not before. This is how my sound event is set up:
There must be something specific to your task or your sound files:
- Do they play in full when you open them in a different application?
- Does the problem still occur if you use sounds in a different format?
- Have you tried varying what follows the sound object, to work out under which circumstances the problem occurs?

Have you also checked the time stamps of the sound and target-word objects in the log? That would tell you for sure when these objects start, so you could assess how often the problem occurs and whether it does so randomly or only under certain circumstances.
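One way to run that log check: compare each trial's target onset with the sound onset plus the sound's known duration. A sketch, assuming the logger writes onset columns named `time_sound` and `time_target_word` (the actual column names follow your item names) and that you know each file's duration in ms:

```python
# Hypothetical log rows: onset timestamps in ms, plus each file's duration.
rows = [
    {"time_sound": 1000, "sound_duration": 450, "time_target_word": 1452},
    {"time_sound": 5000, "sound_duration": 450, "time_target_word": 5301},
]

for i, r in enumerate(rows):
    # Negative gap = the target appeared before the sound had finished.
    gap = r["time_target_word"] - (r["time_sound"] + r["sound_duration"])
    status = "OK" if gap >= 0 else "TOO EARLY"
    print(f"trial {i}: gap = {gap} ms ({status})")
```

A small positive gap (a few ms) is normal display/refresh overhead; large negative gaps on some trials would tell you which trials, and therefore which sound files or conditions, are affected.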
I attach my basic example so that you can see how I implement it and compare it to yours. It works without any timing problem whichever back-end I select.
Incidentally, I also included code that counterbalances the response keys across subjects (thought it might be useful if you haven't implemented that yet).
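For reference, counterbalancing response keys across subjects is typically driven by the parity of the subject number (in OpenSesame you would read `var.subject_nr` in an inline_script; the function and key names here are illustrative):

```python
def assign_keys(subject_nr):
    """Counterbalance response keys across subjects:
    even subject numbers get SÍ='z' / NO='m'; odd get the reverse."""
    if subject_nr % 2 == 0:
        return {"si": "z", "no": "m"}
    return {"si": "m", "no": "z"}

print(assign_keys(3))  # {'si': 'm', 'no': 'z'}
```

Logging which mapping each subject received lets you check later that the counterbalancing actually came out balanced.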
Let me know if that helps.
Good luck!
Fabrice.
Example task:
Hello @Fabrice,
Thank you very much for your help (and sorry for the late reply). Things seem to be working fine now.
I've also had a look at the need/possibility to counterbalance the response keys but I think for my particular experiment it's not necessary -- I'll only be looking at the SÍ-responses anyway. NO-responses are only relevant because I need a NO-condition in my experiment.
It did inspire me to counterbalance something else though, so many thanks again.
I'm treating you to a coffee or two!
Kind regards,
Daan
@Fab : you're psyfab?
Hi @DPvanSoeren,
Glad I could help and that your task is now working! And thanks for the coffee! 😀 (Yes, psyfab is my name on buymeacoffee).
Good luck with your experiment!
Fabrice.