Comments
Hello @sebastiaan,
(Sorry for my delayed reply.)
Thank you for your latest hint. As you mentioned earlier, I first copied all the DLL files to the OS folder, and fortunately I didn't get an error message anymore! So I didn't need to try running OS as admin.
To sum up:

I put from psychopy import voicekey at the very beginning of my experiment (in the prepare phase of an inline script) to activate the voice key. There I also added class _BaseVoiceKey(object) [...] to activate the abstract base class for virtual voice keys.

In the sequence in which I want to measure voice onsets, I put the following code for the speech onset detection class as the very first item (inline script, run phase):

class OnsetVoiceKey(_BaseVoiceKey):
    """Class for speech onset detection.

    Uses a bandpass-filtered signal (100-3000 Hz). When the voice key
    trips, the best voice-onset RT estimate is saved as `self.event_onset`,
    in seconds.
    """
    def detect(self):
        """Trip if recent audio power is greater than the baseline."""
        if self.event_detected or not self.baseline:
            return
        window = 5  # recent hold-duration window, in chunks
        threshold = 10 * self.baseline
        conditions = all(x > threshold for x in self.power_bp[-window:])
        if conditions:
            self.event_lag = window * self.msPerChunk / 1000.
            self.event_onset = self.elapsed - self.event_lag
            self.trip()
            self.event_time = self.event_onset

What exactly does "... is saved as `self.event_onset`" mean here?
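[Editor's note on the question above: `self.event_onset` is the voice key's best estimate of when speech actually began, in seconds. Because the key only trips after the power has stayed above threshold for `window` consecutive chunks, the trip moment is late by exactly that many chunks, and `detect()` subtracts the lag. A small sketch with made-up numbers; only the names `window`, `msPerChunk`, and `elapsed` come from the snippet:]

```python
# Editor's sketch: the onset arithmetic from the detect() method quoted
# above, traced with hypothetical numbers.
window = 5           # chunks the power must stay above threshold
msPerChunk = 2.0     # hypothetical duration of one audio chunk, in ms
elapsed = 0.850      # elapsed time (s) when the 5th loud chunk arrived

# The trip necessarily happens `window` chunks after the voice started,
# so detect() subtracts that lag to estimate the true onset:
event_lag = window * msPerChunk / 1000.  # 5 chunks * 2 ms = 0.010 s
event_onset = elapsed - event_lag        # 0.850 - 0.010 = 0.840 s
```

So "saved as `self.event_onset`" means: once the key trips, this attribute holds the lag-corrected onset time, ready to be read out after the response interval.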
How can it be written into the logfile?
Thanks in advance,
plex84
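[Editor's note on the logfile question: in an OpenSesame inline script, a common pattern is to copy a result into an experimental variable on the `var` object so that a standard logger item writes it to the logfile. This is a hypothetical sketch, not tested inside OpenSesame; the name `voice_onset` is invented, and plain objects stand in for `var` and the voice key so the helper can be exercised anywhere:]

```python
# Editor's sketch (untested in OpenSesame): copy the voice key's onset
# estimate into an experimental variable so a logger item records it.
# `var` is the variable store OpenSesame exposes in inline scripts;
# `voice_onset` is our own invented variable name.
def store_onset(var, vk, default='NA'):
    """Store vk.event_onset (seconds) on var.voice_onset, in ms."""
    onset = getattr(vk, 'event_onset', None)
    var.voice_onset = round(onset * 1000.0, 1) if onset is not None else default
    return var.voice_onset
```

In the experiment this would be called after the response interval, with a logger item (set to include `voice_onset`) placed after the inline script.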
Hi @plex84 ,
It seems that you're trying to implement your own OnsetVoiceKey class by inheriting from _BaseVoiceKey. Unless you have a good reason to do that (?), this isn't necessary, because this class already exists. So in theory it should be as simple as:

from psychopy.voicekey import OnsetVoiceKey
vk = OnsetVoiceKey(sec=3)
vk.detect()

And then the onset time should be available as:

vk.event_onset
That being said, for me this is only theory, because I haven't gotten this to work on either Ubuntu or Windows. It seems that PsychoPy's voicekey relies on libraries that are pretty fragile. Given the proper configuration I'm sure it works, but I just haven't managed it so far.
— Sebastiaan
Check out SigmundAI.eu for our OpenSesame AI assistant!
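[Editor's note: since the pyo backend is hard to get running, the detection rule itself can still be exercised offline. The sketch below re-implements just the thresholding logic from the OnsetVoiceKey.detect() method quoted earlier, fed with synthetic per-chunk power values instead of a live microphone. Attribute names mirror the quoted code; the chunk duration and power values are made up, a length guard is added so the window cannot trip on too few chunks, and a flag stands in for self.trip():]

```python
# Editor's sketch: offline stand-in for the detect() logic quoted above.
# Synthetic power values replace live audio input.
class FakeVoiceKey:
    def __init__(self, baseline, msPerChunk=2.0):
        self.baseline = baseline        # pre-speech power level
        self.msPerChunk = msPerChunk    # hypothetical chunk duration, ms
        self.power_bp = []              # bandpass power, one value per chunk
        self.elapsed = 0.0              # seconds since recording started
        self.event_detected = False
        self.event_onset = None

    def feed(self, power):
        """Simulate one incoming audio chunk."""
        self.power_bp.append(power)
        self.elapsed += self.msPerChunk / 1000.
        self.detect()

    def detect(self):
        # Same rule as the quoted code: trip once the last `window`
        # chunks all exceed 10x the baseline power.
        if self.event_detected or not self.baseline:
            return
        window = 5
        threshold = 10 * self.baseline
        if len(self.power_bp) >= window and \
                all(x > threshold for x in self.power_bp[-window:]):
            self.event_lag = window * self.msPerChunk / 1000.
            self.event_onset = self.elapsed - self.event_lag
            self.event_detected = True  # stands in for self.trip()

vk = FakeVoiceKey(baseline=1.0)
for p in [1, 1, 1, 50, 50, 50, 50, 50]:  # "speech" starts at chunk 4
    vk.feed(p)
# Trips at chunk 8 (elapsed 0.016 s); minus 5 chunks of lag, the onset
# estimate lands at 0.006 s, the start of the first loud chunk.
```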
Hi @sebastiaan,
thanks for your answer. I put your latest script in the inline script, and the following error message occurred:

File "C:\Program Files (x86)\OpenSesame\Lib\site-packages\libopensesame\inline_script.py", line 116, in run
  self.workspace._exec(self.crun)
File "C:\Program Files (x86)\OpenSesame\Lib\site-packages\libopensesame\base_python_workspace.py", line 124, in _exec
  exec(bytecode, self._globals)
Inline script, line 3, in <module>
File "C:\Program Files (x86)\OpenSesame\Lib\site-packages\psychopy\voicekey\__init__.py", line 112, in __init__
  raise VoiceKeyException(msg)
psychopy.voicekey.VoiceKeyException: Need a running pyo server: call voicekey.pyo_init()

(Sidenote: pyo was installed as admin a few weeks ago.)
plex84
Hi @plex84 ,
I got the same error. A proposed solution is already in the error message, namely to call the following somewhere at the start of the experiment:

from psychopy import voicekey
voicekey.pyo_init()

You can try that. However, this didn't work for me either.
— Sebastiaan
Check out SigmundAI.eu for our OpenSesame AI assistant!
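[Editor's note: when an init call like pyo_init() fails, it can help to isolate it so the experiment aborts with one readable message instead of a traceback from deep inside an inline script. The helper below is a generic, hypothetical sketch, not from the thread; it takes the init function as an argument (e.g. voicekey.pyo_init), which also makes it testable without audio hardware:]

```python
# Editor's sketch (hypothetical helper): run an audio-backend init
# function, retrying a few times, and report the failure cleanly.
def init_backend(init_fn, attempts=3):
    """Call init_fn() up to `attempts` times; return None on success,
    or the last exception raised."""
    last_err = None
    for _ in range(attempts):
        try:
            init_fn()
            return None  # success
        except Exception as err:
            last_err = err
    return last_err

# Usage in an inline script might look like (names assumed):
#     err = init_backend(voicekey.pyo_init)
#     if err is not None:
#         raise RuntimeError('Voice key backend failed to start: %s' % err)
```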
Hi All,
I will soon implement a picture naming task: participants will see a fixation point, then an image depicting a concrete noun, and they will be asked to name it as quickly and as accurately as possible.

We need to calculate the speech onset latency from when the image appears to when the participant starts naming it. From what I have read above, this can be implemented with an inline script in OpenSesame, but not in OSWeb (as of 2021), which is fine because we plan to collect the data in person.

Since the last posts in this thread are a few years old, I just wanted to check whether substantial changes have been made and whether the scripts reported above still work in newer versions of OpenSesame. Or is there a simpler solution now?
Thank you All!
Marta
Hi Marta,
Maybe check out this script here (or talk to @cltsson); his voice key detection seems to work.
Good luck,
Eduard
Dear all,
I have been using Sebastiaan's script for the voice key (see the first message in this post) for several years now. It has worked very well in versions 2.x and 3.x of OpenSesame. However, the script does not seem to work in OpenSesame 4.0.

Several people on my team have tried it, and they also get different error messages.

Has anybody managed to use Sebastiaan's voice key script in OpenSesame 4.0 by any chance? Any tips/help will be welcome!
Thanks.