Issue with response times and loudness in a reading aloud task with voice key
Hi,
first of all, thanks for the amazing resources that OpenSesame and this forum are.
It's my first experience with a reading aloud task with a voice key in OpenSesame. I am using code that I received from someone else, which is largely based on the code posted by Sebastiaan in these two threads (http://forum.cogsci.nl/index.php?p=/discussion/1772/ and http://forum.cogsci.nl/index.php?p=/discussion/107).
The task is as follows: a word is presented until the participant starts reading it aloud (that is, until voice onset), or for a maximum of 2000 ms (our timeout).
After adjusting the microphone parameters in the script and using a professional microphone, we seem to get good quality recordings.
We still seem to have the following two issues, and I'd be extremely grateful for any help with them.
1) The value for 'loudness' in our output file is always 'None', even though we do get correct recordings and voice onset does trigger the disappearance of the stimulus (as desired). I thought this problem might be related to something in the following code snippet, but I cannot see anything obviously wrong:
    # variables
    all = []
    var.A_Response_time = timeout
    a = 0
    start_time = clock.time()
    # Listen for sounds until a sound is detected or a timeout occurs.
    while True:
        if clock.time() - start_time >= timeout:
            var.loudness = None
            break
        try:
            block = stream.read(chunk)
            all.append(block)
        except IOError as e:
            print e
        loudness = get_rms(block)
        if loudness > sound_threshold:
            if a == 0:
                var.A_Response_time = clock.time() - start_time
                var.loudness = loudness
                a = 1
2) I am not sure whether this is related to the first issue, i.e. whether something might be wrong in the last if statement of the snippet above. I am attaching a png of a trial recording that I made, in case it helps.

The issue is the following: the RTs that we get look suspiciously similar to one another. Not being sure whether these RTs are reliable, I tried importing the .wav recordings into Audacity to see whether the actual RTs could be recovered that way. Ideally, each recording should start at time 0 (that is, when the word appears on the screen), and we expect response times of at least 550-600 ms (not too far from what we see in the attached results, although these all look suspiciously similar to one another). However, when I opened the audio files in Audacity, voice onset was always around 300-400 ms, which is very different from what we see in the results and much earlier than expected, as if the recording is not really starting at 0. Any idea whether this might be related to something in the code?
Thanks a lot in advance,
Valentina
Comments
Apologies for the possibly not-so-useful thread, but I seem to have solved the main problems by making some changes to the code. After removing var.loudness = None and all the apparently useless references to the variable a, we always get loudness values and (hopefully) the correct RTs. I think what happened is that the loop kept running after voice onset, so the var.loudness = None in the timeout branch always overwrote the value recorded at onset.
Cheers,
Valentina
Once again, apologies, but I may need some input.
It looked as if the RT-related problem was solved, but there still seems to be a weird pattern in the results: sometimes streams of 10 stimuli have (impossibly) identical RTs. Could this be due to some delay in performing canvas.show()? A message with the ms taken by canvas.show() appears in the debug window on every iteration. Any idea whether this is okay, and whether the fact that these times can differ quite a lot (I don't have a log, but I remember them ranging between 17 ms and 63 ms) could introduce some delay? Is it possible to trace this information back from the logger, which contains all the variables?
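What I have in mind is something like this (just a sketch; my_canvas and show_duration are placeholder names I made up):

    before = clock.time()
    my_canvas.show()
    # The logger item writes out all experimental variables, so storing
    # the duration in a var should make it traceable per trial.
    var.show_duration = clock.time() - before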
This is the relevant bit of code, slightly modified with respect to the one I posted earlier and a bit more comprehensive.
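In outline, the detection part now looks like this (a simplified sketch rather than the full script; stream, chunk, get_rms and sound_threshold are set up as in my first post):

    start_time = clock.time()  # taken right after the canvas is shown

    all = []
    var.A_Response_time = timeout  # stays at timeout if no voice onset is detected
    # var.loudness is now set only at voice onset; the old
    # var.loudness = None in the timeout branch used to overwrite it.
    while clock.time() - start_time < timeout:
        try:
            block = stream.read(chunk)
            all.append(block)
        except IOError as e:
            print e
            continue
        loudness = get_rms(block)
        if loudness > sound_threshold:
            var.A_Response_time = clock.time() - start_time
            var.loudness = loudness
            break  # stop listening at voice onset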
Any help would be much appreciated!
Thanks,
Valentina
Hi Valentina,
Identical response times would only make sense if the loudness threshold was exceeded instantly, or only after the timeout. I suppose this is not what happens? Anyway, the messages in the debug window should not be the reason that you get weird effects on the RTs. One thing you could do to reduce the possibilities for things to go wrong is to add another condition to the if statement in which you set the RT, to make sure you enter that block of code only once. I don't think this causes the issue, but better safe than sorry.
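For example, something like this (just a sketch based on your snippet; responded is simply a name I picked for the flag):

    responded = False  # True once an RT has been recorded
    start_time = clock.time()
    while clock.time() - start_time < timeout:
        try:
            block = stream.read(chunk)
        except IOError as e:
            print e
            continue
        loudness = get_rms(block)
        if loudness > sound_threshold and not responded:
            var.A_Response_time = clock.time() - start_time
            var.loudness = loudness
            responded = True  # never enter this block again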
Does that do anything?
Eduard
Thanks so much for your reply, Eduard.
In a previous version of this experiment, there was a variable (a) that was set to 0 at the beginning of each iteration and changed to 1 whenever a response onset was recorded, that is, when loudness was above the sound threshold (see the snippet in my first post). Unfortunately, this did not seem to change anything. Were you referring to something similar, or did I misunderstand?
Regarding what you suggested:
Identical response times would only make sense if the loudness threshold was exceeded instantly, or only after the timeout. I suppose this is not what happens?
Yeah, this doesn't seem to be the case, although I'm not 100% sure. But it looks like loudness is detected correctly: the disappearance of the word seems to be correctly triggered by voice onset, and I get different values for loudness in the output file.
Thanks so much for your help
Valentina
Were you referring to something similar, or did I misunderstand?
Yes, this is what I meant.
Yeah, this doesn't seem to be the case, although I'm not 100% sure.
Can you check the response times? Are they all as large as the timeout, or unreasonably low?
Another, though unlikely, possibility is that there happened to be some rhythmic background noise driving the voice detection. However, identical response times would still be rather unlikely. If you check the recordings against the measured responses, do they match up?
Eduard
Thanks once again for your response, Eduard.
Can you check the response times? Are they all as large as the timeout, or unreasonably low?
The response times in the output file look to be in a roughly plausible range, but here is an example of an output with some (practically impossible) identical values.
All the other relevant variables look okay: the loudness values differ across responses (I can't tell whether they are acceptable, but the recordings are clear), and so do start_time and clock_time. But it seems quite unlikely for so many responses in a row to have exactly the same value. This is not the case in all the outputs, but on all the other occasions I would still get a few identical values (not always in a row) in the output file. I'm really struggling to understand what is going wrong...
Another, though unlikely, possibility is that there happened to be some rhythmic background noise driving the voice detection. However, identical response times would still be rather unlikely. If you check the recordings against the measured responses, do they match up?
As mentioned, these RTs look at least to be in a plausible range; however, when I import the .wav files into Audacity to check whether the voice onset time matches the RT reported in the output file, another problem arises. It might have to do with Audacity somehow, but all the recordings seem to have voice onset within 300-400 ms of time 0, and such short RTs are impossible in this task. I might just need to use a different software or method for this.
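One thing I might try instead is estimating voice onset directly from the .wav files in Python, along these lines (a sketch: it assumes standard .wav recordings, and the chunk size of 1024 and the threshold of 500 are placeholders I would still need to calibrate against our files):

    # Scan a .wav file for the first chunk whose RMS exceeds a threshold
    # and return the corresponding time in ms from the recording start.
    import wave
    import audioop

    def voice_onset_ms(path, chunk=1024, threshold=500):
        w = wave.open(path, 'rb')
        rate = w.getframerate()    # frames per second
        width = w.getsampwidth()   # bytes per sample (2 for 16-bit)
        nchannels = w.getnchannels()
        frames_read = 0
        while True:
            block = w.readframes(chunk)
            if not block:
                w.close()
                return None        # threshold never exceeded
            if audioop.rms(block, width) > threshold:
                w.close()
                return 1000.0 * frames_read / rate
            frames_read += len(block) // (width * nchannels)

Calling voice_onset_ms('trial_001.wav') (with whatever the actual file name is) would then give an onset relative to the start of the recording that I could compare directly with the RT logged for the same trial, instead of eyeballing the waveform in Audacity.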
Thanks!
Valentina
Can you share the experiment? Please simplify it as much as possible without removing the issue.
Eduard