
Detecting voice onsets (voicekey)

edited December 2015 in OpenSesame

Lately, there have been a few requests for a script to detect voice onsets (i.e. a voicekey script). The original example script is no longer available, so below is a new script, written for OpenSesame 3.0.

You can insert this script into the run phase of an inline_script, and use it like a keyboard_response. It creates the following variables:

  • response, which is either 'timeout' (when a timeout occurred) or 'detected' (when an onset was detected)
  • response_time, the response time of the voice onset, which equals the timeout when a timeout occurred
  • loudness, the loudness of the voice onset that triggered the response, or None when a timeout occurred.

You need to tweak (at least) the following properties in the script:

  • sound_threshold, which determines how sensitive the detection is. The appropriate setting varies a lot from system to system.
  • timeout, the maximum response time in milliseconds, after which a timeout is triggered.

Important disclaimer: This code should work, but is based on a very crude algorithm. In most cases, it's probably better to record the voices, and detect the onsets afterwards, rather than doing this online with a voicekey. Also, the code uses pyaudio, which is sometimes a bit unstable.

import pyaudio
import struct
import math

# A low threshold increases sensitivity, a high threshold
# reduces it. The appropriate value varies from system to system.
sound_threshold = 0.5
# Maximum response time (ms)
timeout = 5000

# Stream parameters. CHANNELS and the block size are typical values;
# tweak them for your hardware if necessary.
FORMAT = pyaudio.paInt16
CHANNELS = 2
RATE = 44100
SHORT_NORMALIZE = 1.0 / 32768.0
INPUT_BLOCK_TIME = 0.05  # seconds of audio per block
INPUT_FRAMES_PER_BLOCK = int(RATE * INPUT_BLOCK_TIME)


def get_rms(block):

    """Get root mean square as a measure of loudness"""

    count = len(block) // 2
    format = "%dh" % count
    shorts = struct.unpack(format, block)
    sum_squares = 0.0
    for sample in shorts:
        n = sample * SHORT_NORMALIZE
        sum_squares += n * n
    return math.sqrt(sum_squares / count)


# Open the mic
stream = pyaudio.PyAudio().open(
    format=FORMAT,
    channels=CHANNELS,
    rate=RATE,
    input=True,
    frames_per_buffer=INPUT_FRAMES_PER_BLOCK)

# Listen for sounds until a sound is detected or a timeout occurs.
start_time = clock.time()
while True:
    if clock.time() - start_time >= timeout:
        response_time = timeout
        response = u'timeout'
        var.loudness = None
        break
    try:
        block = stream.read(INPUT_FRAMES_PER_BLOCK)
    except IOError as e:
        print(e)
        continue
    loudness = get_rms(block)
    if loudness > sound_threshold:
        response_time = clock.time() - start_time
        response = u'detected'
        var.loudness = loudness
        break

# Process the response
set_response(response=response, response_time=response_time)

# Close the audio stream
stream.close()
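As the disclaimer above notes, it is often better to record the audio and detect onsets offline. To get a feel for both that approach and the values get_rms() returns, here is a self-contained sketch (no microphone or pyaudio required) that runs the same RMS computation over a synthetic recording; find_onset_block() is just a name chosen here, not part of OpenSesame:

```python
import struct
import math

SHORT_NORMALIZE = 1.0 / 32768.0


def get_rms(block):
    """Root mean square of a block of packed 16-bit samples."""
    count = len(block) // 2
    shorts = struct.unpack('%dh' % count, block)
    return math.sqrt(sum((s * SHORT_NORMALIZE) ** 2 for s in shorts) / count)


def find_onset_block(blocks, sound_threshold):
    """Index of the first block whose loudness exceeds the threshold,
    or None if no block does (hypothetical offline helper)."""
    for i, block in enumerate(blocks):
        if get_rms(block) > sound_threshold:
            return i
    return None


# A synthetic 'recording': two silent blocks followed by a loud one.
silence = struct.pack('4h', 0, 0, 0, 0)
loud = struct.pack('4h', 30000, -30000, 30000, -30000)
blocks = [silence, silence, loud]
print(get_rms(silence))               # 0.0
print(round(get_rms(loud), 3))        # 0.916
print(find_onset_block(blocks, 0.5))  # 2
```

Silence gives an RMS of 0 and a full-scale signal an RMS close to 1, so sound_threshold must lie somewhere in between; printing the values for your own setup (as suggested further down this thread) is the easiest way to pick it.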



  •

    Hello, I was excited to find this, as the functions you describe are exactly what I am trying to do. I am looking to find out how long it takes someone to name an image on screen, and so I need those vocal reaction times.

    My problem is that whenever I run the experiment, it seems like it is not detecting any audio input. The response always times out and the loudness variable never reports back any value. I am using a standard headset which is set as the default recording device in Windows 10. I'm hoping that I'm just missing something obvious, but any help would be greatly appreciated. I'm very new to OpenSesame and Python in general.

  •


    You can print out the loudness values to the debug window by adding a simple print() statement below the call to get_rms():

    # ...
    loudness = get_rms(block)
    print(loudness)
    # ...

    If you see that they fluctuate, but simply have really low values, then you can reduce sound_threshold accordingly. If you see the values are always 0, then no sound is being recorded at all. In that case, you probably have to specify a different audio device through the input_device_index keyword.
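    To find the right value for input_device_index, you can enumerate the devices that pyaudio knows about. This is a sketch; list_input_devices() is just a helper name chosen here, not part of the pyaudio API:

```python
def list_input_devices(pa):
    """Return (index, name) pairs for every device with input channels.
    `pa` is expected to behave like a pyaudio.PyAudio instance."""
    devices = []
    for i in range(pa.get_device_count()):
        info = pa.get_device_info_by_index(i)
        if info['maxInputChannels'] > 0:
            devices.append((i, info['name']))
    return devices


# Usage (requires pyaudio):
#   import pyaudio
#   pa = pyaudio.PyAudio()
#   for index, name in list_input_devices(pa):
#       print(index, name)
#   pa.terminate()
# Then pass the chosen index to open(), e.g.:
#   stream = pyaudio.PyAudio().open(..., input_device_index=index)
```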



  • Hi,
    Is pyaudio not included in the standalone OpenSesame distribution? I get an Import Error when running the above script.

  • Hi jpg446,
    Can you tell us which OpenSesame version you are using and on which platform?
    PyAudio should be included in the default package, but it could be that we forgot it this time (we are revamping the packaging process, so this might be a result of the transition). The good thing about this new packaging process is that it is really easy to install extra modules yourself at any time; see

    Simply running

    import pip
    pip.main(['install', 'pyaudio'])

    once in the debug window should do the trick (although maybe not for pyaudio as it depends on C libraries and is thus kind of a 'difficult' package).
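    One caveat: pip.main() was removed in pip 10, so the snippet above fails on newer installations. A more future-proof sketch is to run pip as a subprocess of the current interpreter (pip_install() is just a helper name chosen here):

```python
import subprocess
import sys


def pip_install_command(package):
    """Build the command that installs `package` with the pip belonging
    to the currently running Python interpreter."""
    return [sys.executable, '-m', 'pip', 'install', package]


def pip_install(package):
    """Run the install command; raises CalledProcessError on failure."""
    subprocess.check_call(pip_install_command(package))


# e.g. pip_install('pyaudio')
```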


  • I was using 3.1.2 for Windows.

    I just tried using pip and it worked! I didn't realise the installed Win version supported pip; thank you for adding this! :)

  • @sebastiaan? Shouldn't this post be an announcement so it stays on top? Or are you planning on integrating it in the general documentation at some point?


  • Good point. Once I'm back I'll add this snippet to the documentation.


  • edited January 2017

    My experiment is meant to

    1. Play audio.wav & Show a picture at the same time [solved]
    2. Detect participant's voice onset time [no timeout]

    I only need the voice onset time, and I copied the script above into an inline script.
    I got the 'loudness' result in numbers.

    However, I didn't get 'response' right. The response represents the keyboard response that I pressed.
    I don't understand what Sebastiaan wrote:

    You can insert this script into the run phase of an inline_script, and use it like a keyboard_response. It creates the following variables:

    What do you mean by 'use it like a keyboard_response' ?

    My loop contains below.

    • sequence
    • audio
    • picture
    • keyboard response
    • logger.

    I'm totally new and don't know how to make script alone..

    Please please help me.

    Thanks in advance.

    cf. I don't know what to write in the blanks in the keyboard response..

  • Hi Lauren,

    Both the keyboard_response and the script create a variable called response. Because the keyboard_response comes last, that's the one that you see in the log file.

    What you can do is change this line in the script:

    set_response(response=response, response_time=response_time)

    to this:

    responses.add(response=response, response_time=response_time, item='voice_key')

    By specifying an item keyword, you tell OpenSesame that it should also log all variables with _[item_name] appended. So you'll get response_time (which is not unique and overwritten by the keyboard_response) but also response_time_voice_key (which is probably unique and therefore not overwritten). Do you see the logic?
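    To make the naming convention concrete, a single trial row in the log file would then contain something like this (all values made up for illustration):

```python
# Hypothetical log row for one trial, after both the inline_script
# (with item='voice_key') and the keyboard_response have run.
trial_row = {
    'response': 'space',               # set last, by the keyboard_response
    'response_time': 512,              # likewise overwritten
    'response_voice_key': 'detected',  # suffixed copy from the inline_script
    'response_time_voice_key': 734,    # the voice onset RT you want
}
# The suffixed variables survive because their names are unique.
print(trial_row['response_time_voice_key'])  # 734
```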

    See also:

    I'm totally new and don't know how to make script alone..


    cf. I don't know what to write in the blanks in the keyboard response..

    The beginner and intermediate tutorials are good places to get started:



  • Thanks a lot. I did what you said, and it worked!

    However, the value of "response_voice_time" is the same for every item.

    In other words, I present Audio (question) + Picture at the same time, and my participants should answer the questions while watching the picture. I have about 40 items in total.

    But the values of 'response_voice_time' and 'loudness' are the same for every item..

    Please help..

    cf) of course I studied all the manuals and tutorials.. the problem is.. I don't get it.. idiot.. .. :(

    Sorry, and Thanx, .

    Best regards.


  • edited January 2017

    But the values of 'response_voice_time' and 'loudness' are the same for every item..

    The variable response_voice_key should be 'detected' when a voice onset is detected, and 'timeout' otherwise. There's no voice recognition that will tell you what the participant said, if that's what you were expecting! That loudness is also always the same is a bit odd, though. What about response_time_voice_key? Does that vary?

    It would be helpful if you uploaded an example data file here, so that I can see whether the response values make sense.


  • edited February 2018

    Hi guys,

    I have OpenSesame 3.2 and I've tried the inline script that Sebastiaan posted on this page, but it doesn't work. I am new to OpenSesame and I have to build a simple Stroop test where I need to calculate reaction times from the stimulus presentation to the voice onset.
    Each session would have:
    -sketchpad (fixation dot)
    -tone (corresponding to the stimulus presentation)
    -inline script
    Now I don't know what to do to make the inline script work.

    Do you have some idea of what I did wrong?

    Thank you so much guys.

  • Hi,

    If you provide more detail on what exactly doesn't work, we might be able to help. Could you post your error message? And did you follow all steps described in this discussion, e.g. installing pyaudio?



  • Hi there,

    I need the exact same type of experiment as eleonoragalletti two comments up - a simple Stroop test which relies on voice response, measures the time from stimulus presentation to voice onset, and records the participants' responses. I have absolutely no programming background, so I wondered if there are any new tutorials on the topic? I tried to follow the instructions in this thread but to no avail ("Failed to compile inline script") - I am sure that what I did wrong was absolutely basic, but since I am so inexperienced, an example would be far more efficient than trying to correct my mistakes.



  • For the people who are using PsychoPy as the backend: it also has a dedicated voicekey module for this task:


  • Thanks, although it seems as if this solution still requires certain programming skills.

    A different approach - I suppose I can use the "sound recording" option and just calculate the RTs manually. My only concern now is that I need to run this experiment online, so if I do try to do that, how do I get the sound files?


  • Hi @RoyT. Currently it is not possible to perform sound recordings online with osweb, and this is very difficult in general because of many browsers' security models. If you need to perform these kinds of sound recordings for your experiment, I think collecting data online is currently not an option.


  • Thanks for the quick response. Assuming I can give up on the sound recordings and stick with just the RT measurement, should the voicekey script Sebastiaan uploaded work online? Would it require the participants to download anything? (I haven't yet used OpenSesame online)

  • Hi @RoyT,

    Currently, sound recording is only possible offline! This is really hard to implement online because of all the security measures that browsers impose (which also differ per browser). I don't expect this to be available soon.


  • Hello Forum,

    thank you for the effort you put into OpenSesame!

    I'm going to conduct an experiment in which I want to measure reaction times from stimulus onset (jpg files picturing non-words) to voice onset. The task is to read the non-words out loud. A session for one subject will take about 35 minutes. I don't need to record the utterances, since I plan to do that externally via Audacity.

    I wonder if the voice key

    • can precisely detect voice onsets,
    • and can do that for more than half an hour for about 700 non-words/utterances.

    Has anyone conducted a study comparable to mine yet and can share their experience regarding the precision of the measurements?

    Thanks in advance!


  • Hi @plex84 ,

    Are you referring to the PsychoPy voicekey class (which you can use in OpenSesame as well)? I don't have experience with this, but when it comes to precision there is no general yes or no answer. It depends on:

    • what you consider precise;
    • whether you have configured sound playback for low latency, which you can specify under backend-settings in OpenSesame (see also here);
    • and (perhaps most important of all) the quality of the recording, i.e. participants should be far enough from the microphone to avoid breathing artifacts, yet close enough to have a clear signal.

    If you're going to record the utterances with Audacity anyway, you could also consider playing some sound that serves as an auditory trigger (e.g. a click) whenever a stimulus is presented, and then mix that with the microphone input during recording. This will allow you to determine the voice onsets offline and (semi)manually in Audacity, which will almost certainly be more accurate than doing it online with an automated algorithm.

    (And yes, it should not be a problem to use this for more than half an hour!)

    — Sebastiaan


  • Hello Sebastiaan,

    thank you for your reply!

    Concerning recording via Audacity and measuring voice onsets offline:

    As far as I know, Audacity can only record sound from one source at a time, which means I can record either the sound within my OpenSesame experiment or the microphone input.

    Concerning source code for psychopy.voicekey:

    As I mentioned above, I want to measure reaction times from stimulus onset (jpg-files picturing words) to voice onset. Accordingly, in a sequence I included a sketchpad for the jpgs and, for the voicekey, an inline script. I wonder where (prepare or run phase?) to put which parts of the source code. I got error messages regarding modules 'pyo' and 'pyo64':

    Error while executing inline script
    item: new_inline_script
    phase: prepare
    item-stack: experiment[run].loop_ohne_Markierung_u_Filler[run].new_sequence_1[prepare].new_inline_script[prepare]
    time: Wed Jun  9 14:32:58 2021
    exception type: ModuleNotFoundError
    exception message: No module named 'pyo'
      Inline script, line 13, in <module>
    ModuleNotFoundError: No module named 'pyo64'
    During handling of the above exception, another exception occurred:
    Traceback (most recent call last):
      File "C:\Program Files (x86)\OpenSesame\Lib\site-packages\libopensesame\", line 92, in prepare
      File "C:\Program Files (x86)\OpenSesame\Lib\site-packages\libopensesame\", line 124, in _exec
        exec(bytecode, self._globals)
      Inline script, line 16, in <module>
    ModuleNotFoundError: No module named 'pyo'

    Looking forward to your feedback!

    Best wishes,


  • Hi Plex,

    I haven't followed the entire discussion, but to chip in on your last set of questions:

    The error message suggests you haven't installed the Python package pyo.

    Here it is explained how you can do it in OpenSesame.

    Not sure whether this is the main problem, but it certainly is a problem.

    With respect to the prepare/run phase: generally, you put the parts of your code that set things up in the prepare phase, and the parts that run things in the run phase (sorry if this is too obvious). A sketchpad is executed in the run phase (and prepared in the prepare phase), so your participants will only see the stimulus in the run phase. Therefore, the voice recording has to start in the run phase as well.

    The source code that you linked above is really just the source code of the module defined in PsychoPy; it doesn't actually do anything by itself. As such, you would want to put it in the prepare phase at the beginning of the experiment (not even in the loop). However, you don't actually need to copy it into OpenSesame. You should be able to simply import it (from psychopy import voicekey) and then use its methods, as listed here:

    Hope this clears things up a bit.

