[solved] Audio timing (PyAudio) & "parallel" item
Dear all,
I'm working on an experiment in which up to eight different sounds are simultaneously presented from eight individual speakers. The task requires participants to identify the speaker location of a designated target sound among several distractors.
As suggested in the Documentation, I decided to handle the sound presentation with the PyAudio module. I included the PyAudio code in my OpenSesame experiment via the inline_script item. The code in question can be found here.
As the sounds are fairly long (800 ms), response time measurement should start immediately at sound onset. For this purpose, I implemented the parallel item and connected it to the inline_script (containing the sound presentation code) as well as to the keyboard_response item. This roughly follows the rationale outlined here. If I understand the functionality of all three items correctly, response measurement should start together with the sound presentation.
At first glance, everything worked well but I still have two questions:
(1) Does the parallel item also work properly with respect to timing? In November 2012, Sebastiaan mentioned here that the item was in an experimental state. Does anyone have more recent experience with this item type?
(2) Does it also work with PyAudio code included in an inline_script item? I wondered about this because sound presentation in PyAudio employs a while loop:
[...]
while data != b'':               # wf.readframes() returns an empty bytes object at end of file
    stream.write(data)           # blocks until the buffer has been handed to the audio device
    data = wf.readframes(CHUNK)
[...]
and, as a Python/OpenSesame novice, I was not sure which other functions can be executed while this loop is running.
In any case, thanks in advance for your help. If the item-based solution does not work or is not recommended, I am inclined to try a pure script-based solution. Any suggestions/links/recommendations in this regard are highly appreciated.
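For concreteness, here is a self-contained sketch of the blocking playback pattern this snippet comes from (modeled on pyaudio's "play" example; the helper names `read_chunks` and `play_blocking` are my own, and the sound file path is a placeholder):

```python
import wave

CHUNK = 1024  # frames per buffer; smaller buffers lower latency but risk underruns

def read_chunks(wf, chunk=CHUNK):
    """Yield successive buffers from an open wave file until end of file."""
    data = wf.readframes(chunk)
    while data != b'':
        yield data
        data = wf.readframes(chunk)

def play_blocking(path):
    """Play a WAV file; returns only after the last buffer has been written."""
    import pyaudio  # imported here so the module loads even without pyaudio installed
    wf = wave.open(path, 'rb')
    pa = pyaudio.PyAudio()
    stream = pa.open(format=pa.get_format_from_width(wf.getsampwidth()),
                     channels=wf.getnchannels(),
                     rate=wf.getframerate(),
                     output=True)
    for data in read_chunks(wf):
        stream.write(data)  # blocks until the buffer is handed to the device
    stream.stop_stream()
    stream.close()
    pa.terminate()
    wf.close()
```

Because every `stream.write()` call blocks, nothing else in the script runs until playback ends, which is exactly why the question about parallel response collection arises.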
Thanks again!
Cheers,
Malte
Comments
For your purpose it should be fine. The timing of audio playback is not nearly as good as that of video, so this will probably be your main source of noise. I haven't benchmarked it, but I don't think the parallel plug-in will add much temporal jitter to that. (And in fact the timing of pyaudio is a bit better than that of pygame, which OpenSesame uses by default.)
Sure.
The parallel item is still a bit of a problem child, but I think it should be ok for your purpose. The main problems happen (on some systems) if you do display presentation in one item, and response collection in a parallel-running item. But pyaudio is separate from the libraries that OpenSesame uses internally, so I think it can safely run in parallel.
Cheers!
Sebastiaan
Check out SigmundAI.eu for our OpenSesame AI assistant!
Hi Sebastiaan,
Sorry for the delayed response, and thanks for the information about the parallel item. I will give it a try. I think this will include some initial benchmarking of the timing properties of the item, as well as of the sound events played by PyAudio. If I stumble upon anything noteworthy during testing, I will let the forum know.
One quick question regarding sound presentation (with PyAudio) and parallel keypress recording with an inline script: I wonder if and how any other events (visual, auditory, or keypress responses) can be presented or collected after a sound presentation has been initiated. Does the while loop in the PyAudio script pose a threading issue?
In other words: is it possible to write an inline_script in OpenSesame/Python that does the same thing as the parallel item attached to an inline_script controlling the sound playback (PyAudio) plus the keyboard_response item? Could this be done with a while loop (with a specific stop criterion) that streams the audio chunks and, in each iteration, checks whether a key has been pressed?
Is the while loop of PyAudio a suitable candidate?
Is that generally possible, or would it distort the sound presentation or affect the response collection in any way? Perhaps you could give me a hint on how to do that. This question might be a bit far-fetched :-).
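To make the question concrete, the loop could be factored so that playback and polling alternate. In the sketch below, `write_chunk` and `poll_key` are placeholders I introduce for illustration: in a recent OpenSesame inline_script, `write_chunk` could be the PyAudio `stream.write`, and `poll_key` a small wrapper around `Keyboard(timeout=0).get_key()` (which, if I recall the API correctly, returns a `(key, timestamp)` tuple with `key` set to `None` on timeout):

```python
def play_and_poll(wf, write_chunk, poll_key, chunk=1024):
    """Alternate between streaming audio buffers and polling for a key press.

    write_chunk(data): sends one buffer to the sound card (e.g. stream.write).
    poll_key(): returns a key name, or None if no key is pressed (non-blocking).
    Returns the first key seen, or None if playback ends without a response.
    """
    key = None
    data = wf.readframes(chunk)
    while data != b'':
        write_chunk(data)        # blocks for roughly chunk / sample_rate seconds
        data = wf.readframes(chunk)
        if key is None:
            key = poll_key()     # check for a response between buffers
    return key
```

The polling granularity equals one buffer (about 23 ms for 1024 frames at 44.1 kHz), which bounds the extra response-time jitter this alternation introduces.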
Thanks again for the quick and nice help! Keep up the good work with this amazing scientific tool!
Cheers,
Malte
Hi Malte,
If I understand correctly, you're wondering whether you could avoid using the parallel item by writing an inline_script that plays sounds and collects key presses in a continuously running while loop. Right?
I don't think this is possible with the example that you're working with, because it seems to block the script during playback (haven't tested this, but that's what it looks like).
But pyaudio also supports so-called callbacks, which allow you to play sounds in the background without needing to bother with threads yourself. You can see how this works under the 'Play (callback)' example on the pyaudio site.
I don't really understand what you mean, but just to avoid confusion: using a loop (a while loop or otherwise) is very different from threading. Threading means that two processes run simultaneously. With a loop you can do things in very rapid alternation, which gives more-or-less the same behavior as threading, but it's conceptually and technically very different. Does that make sense?
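For readers landing here, the callback pattern looks roughly like this (a sketch modeled on pyaudio's 'Play (callback)' example; `make_callback` and `play_in_background` are my own names, the file path is a placeholder, and cleanup is left to the caller):

```python
import wave

def make_callback(wf, continue_flag, complete_flag):
    """Build a pyaudio-style stream callback that feeds buffers from a wave file.

    In real use, pass pyaudio.paContinue and pyaudio.paComplete as the flags.
    """
    def callback(in_data, frame_count, time_info, status):
        data = wf.readframes(frame_count)
        # Keep streaming while data remains; signal completion at end of file.
        flag = continue_flag if data else complete_flag
        return (data, flag)
    return callback

def play_in_background(path):
    """Start playback in pyaudio's callback mode and return immediately."""
    import pyaudio  # imported here so the helper above is usable without pyaudio
    wf = wave.open(path, 'rb')
    pa = pyaudio.PyAudio()
    stream = pa.open(format=pa.get_format_from_width(wf.getsampwidth()),
                     channels=wf.getnchannels(),
                     rate=wf.getframerate(),
                     output=True,
                     stream_callback=make_callback(
                         wf, pyaudio.paContinue, pyaudio.paComplete))
    stream.start_stream()   # returns at once; audio runs in a background thread
    return pa, stream, wf   # caller can poll stream.is_active(), then clean up
```

While `stream.is_active()` is true, the script is free to do other things, e.g. collect keyboard responses, which is exactly the behavior the parallel item is meant to provide.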
Cheers!
Sebastiaan
Hi Sebastiaan,
Thanks again! You have helped a lot.
Cheers,
Malte
Hello,
I'm preparing an experiment with sequences of four audio stimuli of 200 ms each, so each sequence has a duration of 4 × 200 ms = 800 ms. At first I used a sampler item specifying the sound file as [sound1].wav, where "sound1" is the first column in my experiment loop.
The result was fine, but after reading in the documentation that it is preferable to use pyaudio ("if you require very accurate temporal precision when presenting auditory stimuli you may want to write an inline_script that plays back sound using a different module, such as PyAudio."), I decided to change the implementation. So I followed the example presented here: http://people.csail.mit.edu/hubert/pyaudio/. I used the "play" example and changed only this part to fit my experiment:
Once I had finished the implementation and ran the experiment, there was a definitely noticeable delay in the stimulus presentation.
I chose pyaudio to avoid temporal jitter, and yet the result was worse. Is there something wrong with my implementation?
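Since the modified code isn't shown, this is only an assumption, but one common cause of such an onset delay is doing the expensive setup (instantiating `pyaudio.PyAudio()`, opening the stream, and reading the file from disk) inside the trial. The usual remedy is to do the setup once at experiment start and only write pre-loaded buffers at trial onset; a sketch (helper names are mine):

```python
import wave

def load_frames(path, chunk=1024):
    """Read an entire WAV file into a list of buffers, ready to stream."""
    wf = wave.open(path, 'rb')
    params = (wf.getsampwidth(), wf.getnchannels(), wf.getframerate())
    frames = []
    data = wf.readframes(chunk)
    while data != b'':
        frames.append(data)
        data = wf.readframes(chunk)
    wf.close()
    return params, frames

def play_preloaded(stream, frames):
    """Write pre-loaded buffers to an already-open pyaudio output stream."""
    for data in frames:
        stream.write(data)
```

With the stream opened once in an initialization inline_script and the frames kept in memory, the first `stream.write()` is the only work left at sound onset, so the per-trial latency should drop considerably.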
Thank you very much!