Detecting voice onsets (voicekey)
Lately, there have been a few requests for a script to detect voice onsets (i.e. a voicekey script). The original example script is no longer available, so below is a new script, written for OpenSesame 3.0.
You can insert this script into the run phase of an `inline_script` item, and use it like a `keyboard_response`. It creates the following variables:

- `response`, which is either 'timeout' (when a timeout occurred) or 'detected' (when an onset was detected)
- `response_time`, which is the response time of the voice onset, and which is equal to the timeout when a timeout occurred
- `loudness`, which is the loudness of the voice onset that triggered the response, or `None` when a timeout occurred
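As a concrete illustration of how these variables might be used afterwards (a minimal sketch: the `var` object below is a hypothetical stand-in, since the real one is provided automatically by OpenSesame):

```python
from types import SimpleNamespace

# Hypothetical stand-in for OpenSesame's `var` object; in a real
# experiment, `var` is provided automatically to inline_script items.
var = SimpleNamespace(response='detected', response_time=523, loudness=0.8)

# A later inline_script could branch on the voicekey outcome like this:
if var.response == 'detected':
    feedback = 'Voice onset after %d ms' % var.response_time
else:
    feedback = 'No voice onset detected (timeout)'
```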
You need to tweak (at least) the following properties in the script:
- `sound_threshold`, which determines how sensitive the detection is. The appropriate setting varies a lot from system to system.
- `timeout`, which is the maximum response time, after which a timeout is triggered.
Important disclaimer: This code should work, but it is based on a very crude algorithm. In most cases, it's probably better to record the voices and detect the onsets afterwards, rather than doing this online with a voicekey. Also, the code uses `pyaudio`, which is sometimes a bit unstable.
```python
import pyaudio
import struct
import math

# A low threshold increases sensitivity, a high threshold reduces it.
sound_threshold = 0.5
# Maximum response time (ms)
timeout = 5000

FORMAT = pyaudio.paInt16
SHORT_NORMALIZE = 1.0 / 32768.0
CHANNELS = 2
RATE = 44100
INPUT_BLOCK_TIME = 0.01
INPUT_FRAMES_PER_BLOCK = int(RATE * INPUT_BLOCK_TIME)
chunk = 1024


def get_rms(block):

    """Get root mean square as a measure of loudness"""

    count = len(block) // 2
    format = '%dh' % count
    shorts = struct.unpack(format, block)
    sum_squares = 0.0
    for sample in shorts:
        n = sample * SHORT_NORMALIZE
        sum_squares += n * n
    return math.sqrt(sum_squares / count)


# Open the mic
stream = pyaudio.PyAudio().open(
    format=FORMAT,
    channels=CHANNELS,
    rate=RATE,
    input=True,
    input_device_index=0,
    frames_per_buffer=INPUT_FRAMES_PER_BLOCK
)
# Listen for sounds until a sound is detected or a timeout occurs.
start_time = clock.time()
while True:
    if clock.time() - start_time >= timeout:
        response_time = timeout
        response = u'timeout'
        var.loudness = None
        break
    try:
        block = stream.read(chunk)
    except IOError as e:
        # Skip this block if the read failed, rather than processing
        # a stale or undefined buffer.
        print(e)
        continue
    loudness = get_rms(block)
    if loudness > sound_threshold:
        response_time = clock.time() - start_time
        response = u'detected'
        var.loudness = loudness
        break
# Process the response
set_response(response=response, response_time=response_time)
# Close the audio stream
stream.close()
pyaudio.PyAudio().terminate()
```
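As a quick sanity check of the RMS calculation, independent of `pyaudio` and the microphone, you can feed `get_rms()` (reproduced here from the script) a synthetic block of 16-bit samples:

```python
import struct
import math

SHORT_NORMALIZE = 1.0 / 32768.0


def get_rms(block):
    """Root mean square of a block of raw 16-bit samples,
    normalized to the 0..1 range (same function as in the script)."""
    count = len(block) // 2
    shorts = struct.unpack('%dh' % count, block)
    sum_squares = 0.0
    for sample in shorts:
        n = sample * SHORT_NORMALIZE
        sum_squares += n * n
    return math.sqrt(sum_squares / count)


# A block of silence gives an RMS of 0.
silence = struct.pack('4h', 0, 0, 0, 0)
# A block at (near) full amplitude gives an RMS close to 1.
loud = struct.pack('4h', 32767, -32767, 32767, -32767)
```

Printing these values for real microphone input is the easiest way to find a sensible `sound_threshold` for your system.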
Hello, I was excited to find this, as what you describe as its functions are exactly what I am trying to do. I am looking to find out how long it takes someone to name an image on screen, and so I need those vocal reaction times.
My problem is that whenever I run the experiment, it seems like it is not detecting any audio input. The response always times out and the loudness variable never reports back any value. I am using a standard headset which is set as the default recording device in Windows 10. I'm hoping that I'm just missing something obvious, but any help would be greatly appreciated. I'm very new to OpenSesame and Python in general.
You can print out the loudness values to the debug window by adding a simple `print()` statement below the call to `get_rms()`. If you see that they fluctuate, but simply have really low values, then you can reduce `sound_threshold` accordingly. If you see the values are always 0, then no sound is being recorded at all. In that case, you probably have to specify a different audio device through the `input_device_index` keyword.
Is pyaudio not included in the standalone OpenSesame distribution? I get an Import Error when running the above script.
Can you tell us which OpenSesame version you are using and on which platform?
PyAudio should be included in the default package, but it could be that we forgot it this time (we are revamping the packaging process, so this might be a result of the transition). The good thing about the new packaging process is that it is really easy to install extra modules yourself at any time; see http://osdoc.cogsci.nl/3.1/manual/environment/.
Typing `pip install pyaudio` once in the debug window should do the trick (although maybe not for pyaudio, as it depends on C libraries and is thus kind of a 'difficult' package).
I was using 3.1.2 for Windows.
I just tried using pip and it worked! I didn't realise the installed Win version supported pip; thank you for adding this!
@sebastiaan? Shouldn't this post be an announcement so it stays on top? Or are you planning on integrating it in the general documentation at some point?
Good point. Once I'm back I'll add this snippet to the documentation.
My experiment is meant to
I only need the voice onset time. And I copied the script above to the inline script.
I got a numeric 'loudness' result.
However, I didn't get the 'response' right; the response represents the keyboard response that I pressed.
I don't understand what Sebastiaan wrote.
What do you mean by 'use it like a keyboard_response'?
My loop contains the following.
I'm totally new and don't know how to write a script on my own.
Please please help me.
Thanks in advance.
cf. I don't know what to write in the blanks in the keyboard response..
Both the `keyboard_response` and the script create a variable called `response`. Because the `keyboard_response` comes last, that's the one that you see in the log file.
What you can do is change this line in the script:
`set_response(response=response, response_time=response_time, item=u'voice_key')`

By specifying an `item` keyword, you tell OpenSesame that it should also log all variables with `_[item_name]` appended. So you'll get `response_time` (which is not unique and is overwritten by the `keyboard_response`), but also `response_time_voice_key` (which is probably unique and therefore not overwritten). Do you see the logic?
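To illustrate the suffixing logic with a stand-in (the real `set_response()` is provided by OpenSesame; the stub below only mimics the naming scheme described above):

```python
# Hypothetical stub mimicking how OpenSesame logs response variables;
# the real set_response() is built into the inline_script environment.
logged = {}


def set_response(response=None, response_time=None, item=None):
    logged['response'] = response
    logged['response_time'] = response_time
    if item is not None:
        # With an item keyword, suffixed copies are logged as well, so a
        # later keyboard_response cannot overwrite them.
        logged['response_%s' % item] = response
        logged['response_time_%s' % item] = response_time


set_response(response=u'detected', response_time=523, item=u'voice_key')
```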
The beginner and intermediate tutorials are good places to get started:
Thanks a lot. I did what you said, and it worked!
However, the value of "response_voice_time" is the same for every item.
In other words, I present Audio (question) + Picture at the same time, and my participants should answer the questions while watching the picture. I have about 40 items in total.
But the values of 'response_voice_time' and 'loudness' are the same for every item.
cf) of course I studied all the manuals and tutorials.. the problem is.. I don't get it.. idiot.. ..
Sorry, and Thanx, .
`response_voice_time` should be 'detected' when a voice onset is detected, and 'timeout' otherwise. There's no voice recognition that will tell you what the participant said, if that's what you were expecting. That `loudness` is also the same is a bit odd, though. What about `response_time_voice_time`? Does that vary?
It would be helpful if you upload an example datafile here, so that I can see whether the response values make sense.
I have OpenSesame 3.2 and I've tried the inline script that Sebastiaan uploaded on this page, but the program doesn't work. I am new to OpenSesame and I have to build a simple Stroop test where I need to calculate reaction times from the stimulus presentation to the voice onset.
Each session I would have :
-sketchpad (fixation dot)
-tone (corresponding to the stimulus presentation)
Now I don't know what to do to make the inline script work.
Do you have some idea of what I did wrong?
Thank you so much guys.
If you provide more detail on what exactly doesn't work, we might be able to help. Could you post your error message? And did you follow all steps described in this discussion, e.g. installing pyaudio?
I need the exact same type of experiment as eleonoragalletti two comments up - a simple stroop test which relies on voice response, measures the time from the stimulus presentation to the voice onset, and records the participants' responses. I have absolutely no programming background, so I wondered if there are any new tutorials on the topic? I tried to follow the instructions in this thread but to no avail ("Failed to compile inline script") - I am sure that what I did wrong was absolutely basic, but since I am so inexperienced an example could be way more efficient than trying to correct my mistakes.
For the people who are using psychopy for the backend: they also have a dedicated module for this task: https://psychopy.org/api/voicekey.html
Thanks, although it seems as if this solution still requires certain programming skills.
A different approach - I suppose I can use the "sound recording" option and just calculate the RTs manually. My only concern now is that I need to run this experiment online, so if I do try to do that, how do I get the sound files?
Hi @RoyT. Currently it is not possible to perform sound recordings online with OSWeb, and this is very difficult in general because of many browsers' security models. If you need to perform these kinds of sound recordings for your experiment, I think collecting data online is currently not an option.
Thanks for the quick response. Assuming I give up on the sound recordings and stick with just RT measurement, should the voicekey script Sebastiaan uploaded work online? Would it require the participants to download anything? (I haven't yet used OpenSesame online.)
Currently, sound recording is only possible offline! This is really hard to implement online because of all the security measures that browsers impose (which also differ per browser). I don't expect this to be available soon.
thank you for the effort you put into Opensesame!
I'm going to conduct an experiment in which I want to measure reaction times from stimulus onset (jpg files picturing non-words) to voice onset. The task is to read the non-words out aloud. A session for one subject will take about 35 minutes. I don't need to record the utterances, since I plan to do that externally via Audacity.
I wonder if the voice key script is suitable for a session of that length.
Has anyone conducted a study comparable to mine yet and can share their experiences regarding the precision of the measurements?
Thanks in advance!
Hi @plex84 ,
Are you referring to the PsychoPy voicekey class (which you can use in OpenSesame as well)? I don't have experience with this, but when it comes to precision there is no general yes or no answer. It depends on:
If you're going to record the utterances with Audacity anyway, you could also consider playing some sound that serves as an auditory trigger (e.g. a click) whenever a stimulus is presented, and then mix that with the microphone input during recording. This will allow you to determine the voice onsets offline and (semi)manually in Audacity, which will almost certainly be more accurate than doing it online with an automated algorithm.
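The offline approach suggested above can be sketched as follows (a minimal illustration, not Audacity-specific: scan the recorded samples block by block and take the first block whose RMS exceeds a threshold as the voice onset; the threshold and block size are arbitrary example values):

```python
import math

RATE = 44100  # samples per second
BLOCK = 441   # 10 ms blocks


def rms(samples):
    """Root mean square of a block of normalized (-1..1) samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))


def onset_ms(samples, threshold=0.1):
    """Return the onset time in ms of the first block whose RMS
    exceeds the threshold, or None if no block does."""
    for start in range(0, len(samples) - BLOCK + 1, BLOCK):
        if rms(samples[start:start + BLOCK]) > threshold:
            return 1000.0 * start / RATE
    return None


# Synthetic recording: 500 ms of silence followed by 500 ms of 'speech'.
recording = [0.0] * (RATE // 2) + [0.5] * (RATE // 2)
```

Doing this on the recorded file, possibly with manual inspection of borderline trials, avoids the timing jitter of online detection.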
(And yes, it should not be a problem to use this for more than half an hour!)
thank you for your reply!
Concerning recording via Audacity and measuring voice onsets offline:
As far as I know, Audacity is only able to record sound from one source at a time, which means I can record either the sound within my OpenSesame experiment or the microphone input.
Concerning source code for psychopy.voicekey:
As I mentioned above, I want to measure reaction times from stimulus onset (jpg-files picturing words) to voice onset. Accordingly, in a sequence I included a sketchpad for the jpgs and, for the voicekey, an inline script. I wonder where (prepare or run phase?) to put which parts of the source code. I got error messages regarding modules 'pyo' and 'pyo64':
Looking forward to your feedback!
I haven't followed the entire discussion, but to chip in on your last set of questions:
The error message suggests you haven't installed the Python package pyo.
Here it is explained how you can do it in OpenSesame.
Not sure whether this is the main problem, but it certainly is a problem.
With respect to the prepare/run phase: generally, you put the parts of your code that set things up in the prepare phase, and the parts that run things in the run phase (sorry if this is too obvious). A sketchpad is executed in the run phase (and prepared in the prepare phase), so your participants will only see the stimulus in the run phase. Therefore, the voice recording has to start in the run phase as well.
The source code that you linked above is really just the source code of the module defined in PsychoPy. It doesn't actually do anything by itself. In that sense, you would want to put it in the prepare phase at the beginning of the experiment (not even in the loop). However, you don't actually need to copy it into OpenSesame. You should be able to simply import it (`from psychopy import voicekey`) and then use its methods, as listed here: https://www.psychopy.org/api/voicekey.html
Hope this clears things up a bit.
thank you again for the explanation!
As you proposed, I put `from psychopy import voicekey` in an inline script at the very beginning of the experiment. I got the following error message:
("Das angegebene Modul wurde nicht gefunden." - "The specified module was not found.")
Please note that we definitely installed pyo with pip before I got this error message. All folders and files mentioned exist.
Thanks for your effort!
Yes, reading through the stack trace of the error message, I can see that pyo seems to have been installed. However, the installation seems not to have worked properly, as the installed version of pyo has issues loading its DLLs. Honestly, I don't know what is causing this or what the precise problem is. The DLL thing is specific to Windows, which I don't have. My guess is that there is a conflict between versions of PsychoPy, pyo, and maybe more packages.
Unless @sebastiaan has an idea, I would recommend googling the problem. There seem to be others who had issues with that. Maybe those ideas/solutions will help?
Hi @plex84 and @eduard ,
My guess is that `pyo` installs its `dll` files (Windows libraries) into a folder that is not included in the system path, and therefore Windows cannot find them. On GitHub you can find the following `dll` files, which are presumably installed somewhere on your system.
Maybe you can see where exactly those files are, and then add their folder to the system path, for example by editing `opensesame.bat`? (Not the Python path, because that's only for Python scripts, not `dll` files.) And let us know how that works out!
Hi, @eduard & @sebastiaan,
thank you for your replies. I found the files here:
I also found
opensesame.bat. I opened it with the editor; content is the following:
What exactly do you mean by "editing"?
(Am I right when I assume that I also can copy the files to another distinct position/folder, so Opensesame is able to work with it...?)
Thank you in advance,
Hi @plex84 ,
`opensesame.bat` is a 'batch file', kind of like a bash script but for Windows (and previously for MS-DOS). In it, you can change the path that Windows scans for (among other things) `dll` files. I think the following might work:
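Something along these lines (a sketch: the folder path below is a placeholder and must be replaced with the actual location of the `dll` files on your system; the line goes near the top of `opensesame.bat`, before OpenSesame is launched):

```shell
rem Placeholder path: point this at the folder that actually contains
rem pyo's dll files, so Windows can find them when pyo is imported.
set PATH=C:\path\to\pyo-dlls;%PATH%
```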
Am I right when I assume that I also can copy the files to another distinct position/folder, so Opensesame is able to work with it...?
Yes, that may also work. The easiest place would be the OpenSesame program folder itself.
In general, though, it's difficult to predict what will and won't work, because it depends on how `pyo` itself does things. And it looks like its behavior is a bit nonstandard in this regard.
I put the path into the script of the batch file as you proposed, but the error message stays the same as posted above.
Hi @plex84 ,
I suspect that you pip-installed `pyo` with the `--user` flag. Is that correct? If so, does the problem go away if you start OpenSesame as administrator and reinstall it regularly? You'd have to pass the `--force-reinstall` flag (i.e. `pip install --force-reinstall pyo`) to trigger a proper reinstallation of the package.