[open] multisensory timing
I'm making an audiovisual temporal order judgement task and I've run into some timing issues. I'm currently working on a 60 Hz LCD, and the experiment will eventually run on a 120 Hz LCD (to which I will adapt the current SOAs).
First, an array of dots is presented that all have the same color. Then one of them changes color, and a sound is played at different SOAs relative to the color change (-166, -116, -50, -33, -16, 0, 16, 33, 50, 116, 166 ms).
Each array is built up of 49 dots, or "stims". The piece of code below is the part where one of the stims is assigned a color change and is then presented, with a preceding, following, or simultaneous sound, depending on the SOA. I'm using time_audio to get a timestamp of the audio presentation and time_viz to get a timestamp of the visual presentation:
# Flip the color of the target stim ('target' is the index of the
# target dot in l_stim, a list of (stim, x, y, col) tuples)
stim, x, y, col = l_stim[target]
col = col_t
stim.setColor(col)
l_stim[target] = stim, x, y, col

# Draw all stimuli (doesn't show them yet)
for stim, x, y, col in l_stim:
    stim.draw()

# Play a sound and show the target color change
if SOA in (-166, -116, -50, -33, -16):
    # Audio first: play, wait |SOA|, then flip
    my_synth.play()
    time_audio = self.time()
    self.sleep(abs(SOA))
    win.flip()
    time_viz = self.time()
elif SOA in (166, 116, 50, 33, 16):
    # Visual first: flip, wait |SOA|, then play
    win.flip()
    time_viz = self.time()
    self.sleep(abs(SOA))
    my_synth.play()
    time_audio = self.time()
else:
    # SOA == 0: play and flip back to back
    my_synth.play()
    time_audio = self.time()
    win.flip()
    time_viz = self.time()
Using these timestamps, I've checked the real SOA (time_audio - time_viz) against the programmed SOA. In trials with a negative or 0 SOA (i.e. when the audio stimulus comes before the visual stimulus, or the two should be presented simultaneously) there is a relatively constant error of about 5 ms:
Instead of -166, -116, -50, -33, -16 and 0, the values I'm getting for time_audio - time_viz are (rounded to 0.1 ms) -171.5, -121.0, -55.2, -38.3, -21.6 and -4.6. It looks to me like the presentation of the visual stimulus is delayed.
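For what it's worth, the per-condition error can be computed directly from the logged values. A minimal sketch in plain Python, using the numbers from the post above (the variable names are illustrative):

```python
# Compare programmed vs. measured SOAs (all values in ms).
# The measured values are the ones reported in the post above.
programmed = [-166, -116, -50, -33, -16, 0]
measured = [-171.5, -121.0, -55.2, -38.3, -21.6, -4.6]

errors = [m - p for p, m in zip(programmed, measured)]
mean_error = sum(errors) / len(errors)

for p, m, e in zip(programmed, measured, errors):
    print(f"SOA {p:6.1f} ms -> measured {m:6.1f} ms (error {e:+.1f} ms)")
print(f"mean error: {mean_error:+.2f} ms")  # about -5.2 ms
```

A roughly constant negative error like this is consistent with the visual stimulus appearing a few milliseconds late relative to the audio timestamp.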
1. Am I getting timestamps that correspond to the actual presentation of the stimuli, or am I getting something else with this script?
2. Can I avoid this issue with better programming?
Many thanks in advance for any pointers!
In general, audio timing is a bit more difficult than visual timing! For your visual timing:
win.flip() blocks until the vertical refresh starts, so the timestamp time_viz gives you almost the exact moment of the start of the vertical refresh.
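Because win.flip() blocks until the refresh, timestamps taken right after consecutive flips should be spaced by roughly one frame (about 16.7 ms at 60 Hz). You can sanity-check this by logging flip timestamps and inspecting the intervals; the analysis part is plain Python (the timestamps below are made up for illustration):

```python
# Check flip-to-flip intervals against the nominal refresh duration.
REFRESH_HZ = 60
frame_ms = 1000.0 / REFRESH_HZ  # ~16.67 ms per frame at 60 Hz

# Illustrative timestamps in ms (e.g. from self.time() after each win.flip()).
flip_times = [0.0, 16.7, 33.3, 50.1, 83.4, 100.0]

intervals = [round(b - a, 1) for a, b in zip(flip_times, flip_times[1:])]
# Any interval much longer than one frame suggests a dropped frame.
dropped = [i for i in intervals if i > 1.5 * frame_ms]
print(intervals)      # the 33.3 ms gap is a dropped frame
print(len(dropped))
```

If the intervals cluster tightly around one frame duration, the visual timestamps themselves are trustworthy; the remaining error then comes from the audio side.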
With audio, that's a bit trickier! Depending on your sound card, there can be delays of up to tens of milliseconds between calling my_synth.play() and the actual sound playback. This issue has been discussed before, and a solution has been found in this setup.
To answer your questions directly: 1) most likely you are not, and 2) no, this is more of a hardware issue. The only way to address it in your code is to measure the delay between calling my_synth.play() and the sound onset. If this is a stable delay (with low variation), you could let the system sleep for a bit to compensate for it. But please refer to the forum thread and the report I linked to.
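If the measured delay does turn out to be stable, the compensation could look something like the sketch below. This is only an illustration: AUDIO_LATENCY_MS is a placeholder that must come from an external measurement, and the approach only works if the jitter is small relative to your SOAs.

```python
# Sketch of latency compensation, assuming a stable, externally
# measured delay between play() and the actual sound onset.
AUDIO_LATENCY_MS = 5  # placeholder: replace with your own measurement

def audio_first_sleep(soa_ms, latency_ms=AUDIO_LATENCY_MS):
    # play() is called first; the real onset is latency_ms later,
    # so wait that much longer before flipping to keep the gap at |SOA|.
    return abs(soa_ms) + latency_ms

def visual_first_sleep(soa_ms, latency_ms=AUDIO_LATENCY_MS):
    # flip happens first; call play() early so the real onset
    # lands |SOA| ms after the flip (never sleep a negative amount).
    return max(0, abs(soa_ms) - latency_ms)

print(audio_first_sleep(-50))   # 55
print(visual_first_sleep(50))   # 45
```

These values would replace the plain abs(SOA) passed to self.sleep() in the two branches of the original script.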
Thanks for your reply. It's very useful, but somewhat bad news, as I have my first subjects tomorrow... Do you know of any way I can measure the delay using code, or is it only possible with an external device?
Usually, these kinds of things are measured with an oscilloscope. You simply cannot do it in code alone; the oscilloscope is there to measure when the sound is actually produced. Some oscilloscopes can be interfaced with via the serial port on your computer, so the way to test the delay would be to compare the time of a trigger sent via the serial port with the onset of the sound according to the oscilloscope.
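Once you have pairs of trigger times and measured sound onsets, the delay estimate itself is straightforward: subtract each trigger timestamp from the corresponding onset and look at the mean and spread. A plain-Python sketch with made-up numbers:

```python
import statistics

# Made-up example data (ms): when the trigger was sent vs. when the
# oscilloscope registered the actual sound onset.
trigger_times = [100.0, 600.0, 1100.0, 1600.0, 2100.0]
onset_times = [112.0, 611.5, 1112.5, 1611.0, 2112.0]

delays = [o - t for t, o in zip(trigger_times, onset_times)]
mean_delay = statistics.mean(delays)
sd_delay = statistics.stdev(delays)

print(f"mean delay: {mean_delay:.1f} ms, sd: {sd_delay:.2f} ms")
# A small sd means the delay is stable enough to compensate in code.
```

If the standard deviation is well below your smallest SOA step, a fixed compensation is defensible; if it is large, the hardware delay will smear your SOAs no matter what the code does.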
Probably best to check with the department's technician, or via the link to the setup that I gave in my previous post.