Synchronization issue between the sound recorder plugin and the time fixed for displaying stimuli

edited March 2016 in OpenSesame

Hello,

First, I just want to thank your whole team for OpenSesame :)

I want to set up an experiment to measure reaction times between visual stimuli and vocal responses. I noticed that many topics on the forum deal with this type of experiment, but I didn't find any with the same issue.

I created a loop with 2 sketchpads. The first one is a cross that the participant has to stare at during the whole experiment, and the second one is the same cross with a circle added around it. I want the participant to say "Ah" out loud when the circle appears around the cross.
The loop runs 12 times, so I had to set 24 durations (because I have two sketchpads per iteration). I set those durations so that the whole loop would last exactly 30000 ms.

Then, I used the sound recorder plugin to add a "new sound start recording" before the loop, and a "new sound stop recording" after the loop.

The experiment runs perfectly. But when I open the WAV file in Audacity, I see that it lasts 32200 ms (exactly the same for each trial). I tried on another computer, and there the recording lasts exactly 30159 ms for each trial. The duration only changes when I set a different sample rate (11025, 22050, or 44100 Hz).
I also tried without a loop, and the duration I got in Audacity was the same. Then I tried a simple sequence with "new sound start recording", a single sketchpad set to 1000 ms, and "new sound stop recording", and I got a 907 ms WAV file.

To conclude, I can't get the sound recorder plugin to be exactly synchronized with the times I set for displaying the stimuli. Consequently, I have no benchmark to compare the WAV file against, and I can't compute the reaction times I need.

Would you have an idea to help me get a 30000 ms recording that is perfectly synchronized with the beginning and the end of the loop?

Thank you for reading this. Explaining things in English can be a little complicated for me, so please don't hesitate to ask if anything is unclear :)

Here is a screenshot, if you want to picture the sequence and the settings I'm trying to explain :)

Thank you for your help,

Elodie

image


Comments

  • edited 1:30AM

    Hi Elodie,

    As far as I know, it is quite hard to get very accurate timing with the sound_recorder plugin. If the issue is related to that, I won't be able to help you.

    But as a first check, to make sure that at least your trial_loop is doing what it is supposed to, you could see whether the time difference between starting and stopping the sound recorder is exactly 30 seconds. To do that, you can have a look in the OpenSesame output csv file and take the difference between the variables time_new_sound_start_recording and time_new_sound_stop_recording. Once you have that, we know a little better where the delay is coming from.
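
    If it helps, here is just a sketch of how you could compute that difference in Python ('subject-1.csv' is only a placeholder for whatever your logfile is called, and I'm assuming the default column names mentioned above):

    # read the OpenSesame csv logfile and print, for every row, the difference
    # between the stop- and start-recording timestamps (should be close to 30000 ms)
    import csv

    with open('subject-1.csv') as f:
        for row in csv.DictReader(f):
            start = float(row['time_new_sound_start_recording'])
            stop = float(row['time_new_sound_stop_recording'])
            print(stop - start)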

    Sorry that I can't help more.

    Eduard


  • edited 1:30AM

    Hi Eduard,

    Thank you for your response.
    I tried to get this information from the csv file, but it only shows "NA" for those variables instead of numbers. Here is another screenshot showing the data I get from the csv file (I removed the irrelevant columns).

    image

    Do you think that any clue could come from these data ?

    Also, what could I do to get a time under "time_new_sound_start_recording" and "time_new_sound_stop_recording" instead of "NA"?

    Thank you again,

    Elodie

  • edited March 2016

    Hi Elodie,

    I didn't expect that. Are there only NAs in the entire columns for both start and stop recording? If the items are outside the sequence, it can happen that the start and stop times are shifted down in the logfile (the current value is only written once the item has been executed, so the stop-recording value should appear later than the start-recording value). Would you mind uploading one of the logfiles? You can use FileDropper for that.

    Once we know whether the duration of the sound file is constant relative to the time between the execution of start and stop recording, we might be able to find a workaround.
    However, I don't think it is possible to record your sound files with as high an accuracy as you wish. Maybe @dschreij knows more. He is the guy who created the sound-recording plugins.

    Does this make sense?

    Eduard


  • edited 1:30AM

    Hi Eduard,

    I couldn't use FileDropper but here is a WeTransfer link with the logfile : http://we.tl/6ERQV1tsqg

    About the start_recording and stop_recording times, I'm not sure I understood what you meant by "shifted down". Do you mean that the values could be in the file but in a different place than their column?

    Even if that high accuracy isn't possible, I think it would be OK as long as we can get a start and end time for the audio recording.
    Indeed, I didn't notice at first, but I now know that my sequence does not last exactly 30000 ms, thanks to the data from the logfile, which give me the exact time of each loop. Using these data instead of the values I set in the loop settings, I now have only a 30 ms difference between the WAV file and the loop duration.

    Thank you again for your time. I'm supposed to start my experiment next week, so I'll start looking for another plan in case this doesn't work out. But I hope it will :)

    Elodie

  • edited March 2016

    Hi,

    Do you mean that the values could be in the file but in a different place than their column

    Not in another column, but in another row. So, if you repeated your 12 trials again, the proper times would appear in the following row (which is number 14). However, I don't understand why the time for the start-recording item does not appear. Regardless of what I described earlier, that value should already be in the list.

    Unfortunately, I don't have the plugins installed on my computer, so I can't try it myself.

    But let's think this through again. Ideally, we would get a WAV file whose duration and onset are exactly aligned with the duration of the entire trial sequence and the onset of the first stimulus. If we can't accomplish this, it would be sufficient to have a WAV file of some duration and a trial sequence of some duration, which don't have to be the same, as long as we know when in the WAV file the first stimulus occurred and that the time between two trials is the same in the recording as in the output of OpenSesame. Do I understand this correctly?

    If so, and based on some recommendations on how to accomplish ideal timing with OpenSesame, I would propose some changes to the structure of your experiment.

    Firstly, to minimize the delay between two trials, it is important to make sure that trials are run within one phase (preferably the run phase) and not across the prepare phase and the run phase. Your current version is implemented in a way that, during one iteration of the loop, there is no delay between Fixation, Stimulus, and new_logger. However, between two iterations (so basically between the previous logger and the current Fixation) there is potentially a delay, because before the Fixation is shown, the prepare phase of the sequence is executed first. Normally this doesn't matter so much, because the timing within a trial is what is important, not so much the timing between trials. If you need a more detailed explanation, please have a look at the link I mentioned above.

    Honestly, I don't know whether this structure adds much delay in your experiment (your stimuli are very simple), but it is certainly worth a try. So, to overcome this issue, I would present your stimuli in an inline_script. If you do that, you have a little more control over the timing of your trials. In the inline_script, you can add the following code:


    # this part comes in the prepare phase
    fixation_list = [800,1000,1200,1250,1400,1500,1750,1900,2100,2250,2500,2750]
    stimDur = 800

    fix_cv = canvas()
    stim_cv = canvas()

    fix_cv.fixdot(0,0,style = 'medium-cross')
    stim_cv.fixdot(0,0,style = 'medium-cross')
    stim_cv.circle(0,0,50)

    # and this in the run phase
    t0 = clock.time()
    for i in range(12):
        fix_cv.show()
        clock.sleep(fixation_list[i])
        stim_cv.show()
        clock.sleep(stimDur)
    var.seqDur = clock.time() - t0
    log.write_vars()

    If I am not mistaken, the delay should be reduced now. Can you check that?

    In a last step, we should probably add something like a trigger: a signal that serves as a marker in both the sound file and the log file, so that we have a common reference for both files. One way to do that is, while recording, to play a sound (e.g. a short beep) before the first trial. With all the timings logged, you should (in theory) have all the information you need to deduce the reaction time for each trial. At work, I have an example script for playing such a sound; tomorrow I will post it here. However, it is based on pygame, which also has some inevitable delay (~30 ms according to this). So, try it out and let me know how far we got and what we still need to do.

    I hope this all makes sense to you. If something is unclear, please let me know.

    Good luck,

    Eduard


  • edited 1:30AM

    Hi,

    As promised, here is a piece of code to play a sound (example sound) from within an inline_script.

    import pygame
    # feedback sound initialization
    pygame.mixer.init()
    sound_path = 'sound.wav'
    soundF = pool[sound_path]
    sound = pygame.mixer.Sound(soundF)
    sound.play()
    

    So, together with the rest of the code, your script should look something like this:

    #  this part comes in the prepare phase
    import pygame
    
    fixation_list = [800,1000,1200,1250,1400,1500,1750,1900,2100,2250,2500,2750]
    stimDur = 800
    
    fix_cv = canvas()
    stim_cv = canvas()
    
    fix_cv.fixdot(0,0,style = 'medium-cross')
    stim_cv.fixdot(0,0,style = 'medium-cross')
    stim_cv.circle(0,0,50)
    
    # feedback sound initialization
    pygame.mixer.init()
    sound_path = 'sound.wav'
    soundF = pool[sound_path]
    sound = pygame.mixer.Sound(soundF)
    
    # and this in the run phase
    sound.play()
    t0  = clock.time()
    for i in range(12):
        fix_cv.show()
        clock.sleep(fixation_list[i])
        stim_cv.show()
        clock.sleep(stimDur)
    var.seqDur = clock.time() - t0
    log.write_vars()
    

    Can you try how accurate the timing is with these settings?

    Thanks,

    Eduard


  • edited 1:30AM

    Hi Eduard,

    Thank you so much for your help. About the inline_script, I think I understand and I want to try it, but I'm not sure where to put it in the sequence. Where should I insert it?
    Should I remove the loop and put this inline_script in its place?
    Or should I leave everything as it was and add the inline_script between two steps of the sequence?

    I can't answer your question as long as I don't know where to insert the inline_script, but in the meantime I am trying the trigger idea.
    I tried adding sounds to the sequence to get a marker in the WAV file. To do that, I used:

    • A "synth item" before the loop, called "Sound1_2000ms" and set to last 2000ms,
    • Another "synth item" after the loop, called "sound2_2000ms" and set to last 2000ms.

    Here is a picture of this new sequence : image

    As you can see,

    • The recording starts at the very beginning of the sequence.
    • Then, a sketchpad says "hello" and waits for me to press the keyboard (I had to add this sketchpad because, if I set the sound to play right after "RecordStarts", I didn't get the beginning of the sound in the WAV file due to the recording delay. Thanks to the sketchpad, I can wait long enough for the recording to have started before the sound plays).
    • Then, Sound1 is played and is supposed to last 2000 ms. In the log file, I get a "Time_Sound1_2000ms" which seems to give the exact time at which Sound1 started.
    • Then, the loop runs. In the log file, I still have the exact time of every stimulus.
    • Then, I added another sound, called Sound2, to check whether the gap between the two sounds is the same in the log file and in the WAV file.

    With this sequence, I get the following data, which seem to be almost enough for what I want:
    --- In the WAV file:
    - the time where the first sound begins,
    - the time where it ends,
    - the times where the 12 vocal responses begin,
    - the time where the second sound begins.
    --- In the log file:
    - the time where the first sound begins,
    - the times where the 12 stimuli are displayed,
    - the time where the second sound stops.
    Using these data, I could calculate everything.
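
    For example (just a sketch with made-up numbers, not my real data), the calculation I have in mind would look like this:

    # align the wave file to the log file via Sound1, then compute one reaction time
    wav_sound1 = 2350.0     # onset of Sound1 measured in the wave file (ms)
    log_sound1 = 2310.0     # onset of Sound1 taken from the log file (ms)
    offset = wav_sound1 - log_sound1          # here: the recording is 40 ms "late"

    log_stim = 9500.0       # onset of one stimulus in the log file (ms)
    wav_voice = 9980.0      # onset of the vocal response in the wave file (ms)

    reaction_time = wav_voice - (log_stim + offset)
    print(reaction_time)    # 440.0 ms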

    However, to check the accuracy, I compared both gaps :

    • In the Wave File, [beginning_of_Sound2] - [beginning_of_Sound1] = 35958.71068
    • In the Log File, [beginning_of_Sound2] - [beginning_of_Sound1] = 35946.358.

    So now, I have only a 12 ms difference between the log file and the audio file. I don't know how much that is worth, but it is much better than before!

    I don't know if I was clear enough. Please tell me if I was not.
    If you did understand what I meant, what do you think of the accuracy of this procedure? Does the 30 ms delay you were telling me about apply here too, or is it different because I used a "synth item" instead of an inline script? I looked at the link you gave, and I saw that they used a "sampler item" in their example; I don't know if this difference matters.
    Furthermore, is this delay constant, or does it vary for every sound that is played? If it's constant, I could just add 30 ms to the data from the WAV file (but that does seem a bit too simple...)

    Thank you again for reading this. I try to give a lot of details to make sure that I make my point. I'll try the inline script as soon as I have your answer about where to insert it in the sequence.

    Elodie

    PS: As you may notice in the picture of my current sequence, I had to add a new sketchpad called "empty". I'm sorry about that (I realize the problem is complicated enough and does not need any more changes), but I learnt yesterday, by showing the experiment to a professor, that an empty image has to be shown between the fixation and the stimulus.

  • edited 1:30AM

    Hi Elodie,

    the inline_script replaces the entire loop. You can delete the loop and just put the inline_script between start and stop recording.

    OK. The empty canvas is no problem. You just add it to the inline_script (see below).
    I'm not sure whether the delay is already good enough. Let's try the performance with the inline_script first; then we will know more. Btw, the test sound is now played in the inline_script, so you don't have to use the synth item anymore. And for most of your other questions, I just don't know. I guess you just have to experiment with some settings and run a couple of tests until you're happy.

    I hope this helped.

    Eduard

    #  this part comes in the prepare phase
    import pygame
    
    fixation_list = [800,1000,1200,1250,1400,1500,1750,1900,2100,2250,2500,2750]
    stimDur = 800
    
    fix_cv = canvas()
    stim_cv = canvas()
    
    fix_cv.fixdot(0,0,style = 'medium-cross')
    stim_cv.fixdot(0,0,style = 'medium-cross')
    stim_cv.circle(0,0,50)
    
    # feedback sound initialization
    pygame.mixer.init()
    sound = exp.pool[ 'sound.wav']
    my_samp = sampler(sound)
    
    # and this in the run phase
    my_samp.play()
    t0  = clock.time()
    for i in range(12):
        fix_cv.show()
        clock.sleep(fixation_list[i])
        stim_cv.show()
        clock.sleep(stimDur)
    var.seqDur = clock.time() - t0
    log.write_vars()
    


  • edited 1:30AM

    Hi Eduard,

    I tried to run the inline_script but I get the following error message :
    "The experiment did not finish normally for the following reason:
    openexp._sampler.legacy.init() the file 'sound.wav' does not exist"

    Should I add a WAV file myself somewhere for it to be played? If so, in which folder? I was thinking maybe in: C/ProgramFile/OpenSesame/Pygame?

    Thank you again,

    elodie

  • edited 1:30AM

    You can do two things. First, you can put the file in the same folder as your experiment. Second, you can add the file to the file pool (see the OpenSesame documentation online if you don't know how). I think the first option will be enough, but the second will work for sure.

    Eduard


  • edited March 2016

    Hi Eduard,

    I made many tests with the code you gave me. I'm sorry it took me so long to answer, but I had a hard time discovering and understanding Python. I'm glad I can use it a little now. I tried to adapt the code so it would give the same experiment as the one I had. Here is how it looks now:


     
    #  this part comes in the prepare phase
    import pygame
    from random import randint
    
    empty_list = [500, 550, 600, 650, 700, 750, 800, 850, 900, 950, 1000, 1050]
    
    # var.randomization contains a random permutation of range(12), used to pick durations from empty_list in random order
    list1 = [i for i in range(12)]
    var.randomization = []
    for i in range(12) :
        n = list1[randint(0, 11-i)]
        list1.remove(n)
        var.randomization.append(n)
    
    stimDur = 600
    fixDur = 1125
    
    fix_cv = canvas()
    stim_cv = canvas()
    empty_cv = canvas()
    
    fix_cv.fixdot(0,0,style = 'large-cross', penwidth = 4)
    stim_cv.circle(0,0,30,fill=True, penwidth = 4, color = 'green')
    
    # feedback sound initialization
    pygame.mixer.init()
    sound = exp.pool[ 'sound.wav']
    my_samp = sampler(sound)
    
    # and this in the run phase
    
    t0  = clock.time()
    var.T0 = t0
    
    clock.sleep(1000)
    var.beg_sound_play_1 = clock.time() -t0
    my_samp.play()
    clock.sleep(4000)
    var.end_sound_play_1 = clock.time() -t0
    
    var.stim = []
    
    for i in var.randomization:
        fix_cv.show()
        clock.sleep(fixDur)
        empty_cv.show()
        clock.sleep(empty_list[i])
        var.stim.append(clock.time() - t0)
        stim_cv.show()
        clock.sleep(stimDur)
    var.seqDur = clock.time() - t0
    
    var.beg_sound_play_2 = clock.time() -t0
    
    my_samp.play()
    clock.sleep(2004)
    
    var.end_sound_play_2 = clock.time() -t0
    
    log.write_vars()
    
    

    As you can see, I added a sound at the end of the experiment in order to have two landmarks (the time where Sound1 begins, and the time where Sound2 begins). I compared the two gaps (the gap between the two times in the WAV file, and the gap between the two times in the log file), and there is a difference between these two gaps. It varies between 1 ms and 15 ms. I think it corresponds to a variable delay that my computer takes between the moment it sends the sound and the moment the sound is actually played.

    Anyway, I don't have any other idea to improve it. If you do, please tell me :) Would you have a last clue ?

    Again, thank you so much for your help.

    Elodie

  • edited 1:30AM

    Hi Elodie,

    Sounds great. I don't think a much better delay than that is possible. Did you also check the delay in between each trial? Those are the most important timings, right?
    To be able to do that, you probably have to change the code to some extent.
    So here are some more changes that you could try:

    # even though your randomization is ok, it can be more efficient
    import random
    var.randomization = empty_list
    random.shuffle(var.randomization)
    
    # and to test the timing of the stimuli (if it is what you want to do), you could do this
    var.time_fix = fix_cv.show()
    # for each canvas presentation. 
    
    # Next, appending to a list is not very efficient compared to assigning 
    # values to a certain position in that list. I am not sure whether it matters
    # much in this case, but you can surely try
    
    var.stim = list(range(len(var.randomization)))
    # and in the loop:
    var.stim[i] = time
    
    # and here the entire loop in one piece
    # note that I added enumerate to it. If you google, you will understand why
    for i,j in enumerate(var.randomization):
        var.time_fix = fix_cv.show()
        clock.sleep(fixDur)
        var.time_empty =empty_cv.show()
        clock.sleep(j)
        var.stim[i]=clock.time() - t0
        var.time_stim= stim_cv.show()
        clock.sleep(stimDur)
        print var.stim-var.time_fix
    

    I hope this helps (and improves the timing even more).

    Let me know how it worked.

    Thanks,

    Eduard


  • edited 1:30AM

    Hi Eduard,

    Actually, I was talking about the delay between trials (when I said that the difference between the delay seen in the log file and the one seen in the WAV file varied between 1 and 15 ms). So indeed, I can't count on a constant error that doesn't change between trials.

    I'll try your improvements right now and let you know. Thank you again !

  • edited 1:30AM

    Actually, I can't get the code to work. I get the following error message :
    item-stack: experiment[run].new_inline_script[run]
    exception type: IndexError
    exception message: list assignment index out of range
    item: new_inline_script
    time: Tue Mar 15 14:13:33 2016
    phase: run
    File "", line 25, in

    Maybe I removed something I shouldn't have removed ?

    Here is how the code that includes your improvements looks:

    
    #  this part comes in the prepare phase
    import pygame
    import random
    
    
    empty_list = [500, 550, 600, 650, 700, 750, 800, 850, 900, 950, 1000, 1050]
    
    # Randomization of empty_list 
    var.randomization = empty_list
    random.shuffle(var.randomization)
    
    stimDur = 600
    fixDur = 1125
    
    fix_cv = canvas()
    stim_cv = canvas()
    empty_cv = canvas()
    
    fix_cv.fixdot(0,0,style = 'large-cross', penwidth = 4)
    stim_cv.circle(0,0,30,fill=True, penwidth = 4, color = 'green')
    
    # feedback sound initialization
    pygame.mixer.init()
    sound = exp.pool[ 'sound.wav']
    my_samp = sampler(sound)
    
    # and this in the run phase
    
    t0  = clock.time()
    var.T0 = t0
    
    clock.sleep(1000)
    var.beg_sound_play_1 = clock.time() -t0
    my_samp.play()
    clock.sleep(4000)
    var.end_sound_play_1 = clock.time() -t0
    
    var.stim = []
    
    for i,j in enumerate(var.randomization):
        var.time_fix = fix_cv.show()
        clock.sleep(fixDur)
        var.time_empty =empty_cv.show()
        clock.sleep(j)
        var.stim[i]=clock.time() - t0
        var.time_stim= stim_cv.show() # this is the 25th line in the run phase, the one given by the error message
        clock.sleep(stimDur)
        print var.stim-var.time_fix
    
    var.beg_sound_play_2 = clock.time() -t0
    
    my_samp.play()
    clock.sleep(2004)
    
    var.end_sound_play_2 = clock.time() -t0
    
    log.write_vars()
    
  • edited 1:30AM

    Instead of var.stim =[] it should be list(range(len(var.randomization))).

    I think we are not on the same page with respect to the trial definition. I thought a trial is one repetition of the sequence "Fixation-Blank-Circle", so in the code, every iteration of the for-loop. Do you mean the same, or do you mean by a trial the entire sequence, so everything that occurs between the first and the second sound?

    Eduard


  • edited 1:30AM

    Hi Eduard,

    Indeed, I meant the entire sequence by "trial". Sorry I didn't understand. And I would indeed very much like to test the code to get an idea of the per-trial accuracy you're talking about.

    However, it still doesn't work: I put list(range(len(var.randomization))) instead of var.stim=[] as you said, but now I get an error message saying "the variable 'stim' does not exist" (because the loop that comes after contains var.stim[i]=clock.time() - t0).

    I tried to replace list(range(len(var.randomization))) with stim = [list(range(len(var.randomization)))], and I still get the same error message. How should I define the variable 'stim'? Or should I change the line var.stim[i]=clock.time() - t0?

  • edited 1:30AM

    Ah, my bad. I meant to say:

    Instead of var.stim =[] it should be var.stim = list(range(len(var.randomization)))

    Sorry


  • edited 1:30AM

    Thanks. Again, a new error message : "TypeError: unsupported operand type(s) for -: 'list' and 'float'".
    Again, I can't understand why.
    Sorry, my knowledge of Python is so limited...

  • edited 1:30AM

    The program stopped after the first loop, and the line that corresponds to the error is line 28, the line just after the loop. So I think it can't return to the beginning of the loop.

  • edited 1:30AM

    You are right. The last line of the loop is not working. This time we need to replace var.stim with var.stim[i].

    I hope this was the last bug....


  • edited March 2016

    It still doesn't work. It's back to the previous error: if I replace var.stim with var.stim[i], I again get "the variable 'stim' does not exist".

    I replaced it here (first line); is that what you meant?

    var.stim[i] = list(range(len(var.randomization)))
    
    for i,j in enumerate(var.randomization):
        var.time_fix = fix_cv.show()
        clock.sleep(fixDur)
        var.time_empty =empty_cv.show()
        clock.sleep(j)
        var.stim[i]=clock.time() - t0
        var.time_stim= stim_cv.show()
        clock.sleep(stimDur)
        print var.stim[i]-var.time_fix 
  • edited 1:30AM

    almost:

    var.stim=list(...) not var.stim[i]=list(...)


  • edited 1:30AM

    OK, but then what did you mean by "we need to replace var.stim with var.stim[i]"? Because if I remove this [i], I have the same code as one comment earlier (with the impossibility of returning to the beginning of the loop).

  • edited 1:30AM

    I lost track of what the latest code looks like. To be clear, I expect everything to work if you have this in the run phase:

    # and this in the run phase
    
    t0  = clock.time()
    var.T0 = t0
    
    clock.sleep(1000)
    var.beg_sound_play_1 = clock.time() -t0
    my_samp.play()
    clock.sleep(4000)
    var.end_sound_play_1 = clock.time() -t0
    
    var.stim = list(range(len(var.randomization)))
    
    for i,j in enumerate(var.randomization):
        var.time_fix = fix_cv.show()
        clock.sleep(fixDur)
        var.time_empty =empty_cv.show()
        clock.sleep(j)
        var.stim[i]=clock.time() - t0
        var.time_stim= stim_cv.show() # This is the 25th line in the run phase
        clock.sleep(stimDur)
        print var.stim[i]-var.time_fix
    
    var.beg_sound_play_2 = clock.time() -t0
    
    my_samp.play()
    clock.sleep(2004)
    
    var.end_sound_play_2 = clock.time() -t0
    
    log.write_vars()
    

    Do you have the same?


  • edited March 2016

    OK, I do have the exact same one, and I don't know why it didn't work before, but it now runs perfectly :)

    However, I'm not sure I understand how I should look at each trial's delay. I got the same log file as with the earlier code, and I got 12 numbers from print var.stim[i]-var.time_fix; but these 12 numbers are:
    -3807.3105223
    -4008.78252543
    -4058.99745913
    -4209.83127423
    -4158.14177739
    -3659.20225962
    -3758.90337909
    -3709.09292612
    -3958.37303111
    -3908.47937809
    -3860.16524828
    -4110.41175551

    But the 12 iterations together are supposed to last 30 seconds, so I don't understand how they could each contain 4 seconds...

    I'll get back to this first thing tomorrow morning.

    Thank you for everything.

    Elodie

  • edited 1:30AM

    Yup, the variables don't represent what they're supposed to.

    Well, in addition to the time between the first and the second sound play, you probably want to check the time between the beginning of a trial (fixation) and the onset of the stimulus (circle). So, to get this, we need a time stamp of when each of the canvases appeared on screen. Luckily, canvas.show() returns such a timestamp. When we call e.g. var.time_fix = fix_cv.show(), the time when the canvas was drawn to the screen will be saved in var.time_fix. If we do the same for the stimulus onset, we have all the information we need.

    for i,j in enumerate(var.randomization):
        var.time_fix = fix_cv.show()
        clock.sleep(fixDur)
        var.time_empty =empty_cv.show()
        clock.sleep(j)
        var.time_stim= stim_cv.show()
        clock.sleep(stimDur)
        print 'Interval Fix-Stim ', str(var.time_stim-var.time_fix)
        print 'Interval first sound-Stim ', str(var.time_stim-t0)
    

    Btw. With this code, the entire mess with var.stim = list() is not needed anymore...


  • edited March 2016

    I tried this code and it indeed gave me an interesting piece of information.
    There is a delay of about 3 seconds between the variable time_stim (which, if I understood correctly, is the time when the stimulus is actually drawn on the screen) and the variable stim (which, if I understood correctly, is the time when the order to draw the stimulus is given).

    time_stim is about 3 seconds later than stim.

    Did I understand correctly? If so, I guess it would be better to use time_stim: if it is the real time when the stimulus is drawn on the screen, it seems more accurate. What do you think?

    I also used the previous code we had and added the line var.time_stim[i]= stim_cv.show() - t0 to check this. Here are the differences between the two variables:
    image

    About the line print 'Interval Fix-Stim ', str(var.time_stim-var.time_fix), I also noticed a delay that varies between 4 and 6 ms. Here it is :
    image

    As canvas.show() was also used in this line, I think the same type of delay could be involved. Following what I was saying earlier, it would mean I'd be better off using var.any_time_I_want_to_get = canvas.show() rather than var.any_time_I_want_to_get = clock.time().
    Again, I don't know if I'm right here. I hope you can give me your opinion.

    Thank you so much for this idea, I'm glad we noticed this delay !

  • edited 1:30AM

    Hi Elodie,

    Do you mean seconds or milliseconds? In the table you posted it seems to be milliseconds. Aside from that, I don't really know what the variable stim is supposed to represent. I can't find it anywhere in the code above.

    The reason why I suggested these latest checks is to see whether the duration between two trials (fix onset - stim onset) is the same in the WAV file as in the OpenSesame file (or, if not the same, at least off by only a constant delay). However, now I realize that no sound is played during a trial, only the visual stimuli, so it doesn't make any sense to try to check this. Sorry for the confusion.

    What you could check (if you want to) is the time between the first fixation canvas and the last stimulus canvas (plus the time you present your circle). If the timing is good, this should add up to 30 seconds. However, as you already mentioned with the small delays of 4-6 ms between fixation and stimulus, it will probably be somewhat longer.
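
    For example (a rough sketch that reuses the canvas and duration names from your inline_script; var.total_dur is just a new name I made up for this check), you could keep the very first fixation onset and compare the total against the intended 30000 ms:

    # keep the onset of the very first fixation and of the last stimulus
    first_fix = None
    for i, j in enumerate(var.randomization):
        t_fix = fix_cv.show()
        if first_fix is None:
            first_fix = t_fix            # onset of the very first fixation
        clock.sleep(fixDur)
        empty_cv.show()
        clock.sleep(j)
        t_stim = stim_cv.show()
        clock.sleep(stimDur)
    # total duration from first fixation onset to the end of the last stimulus
    var.total_dur = (t_stim + stimDur) - first_fix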

    In general, I believe that you are good to go. There is not much room for improvement and the timing is already pretty good, right? The only thing that could make it a bit more reliable is to play a brief sound on every trial (preferably together with the onset of the fixation) to have a landmark for each trial.

    Does this all make sense to you? I hope, the last tests didn't waste too much of your time.

    Eduard


  • edited 1:30AM

    Hi Eduard

    I was indeed talking about ms. Sorry about that.

    In the last tests I made, the variable stim was related to the code var.stim[i]=clock.time() - t0, and the variable time_stim to the code var.time_stim[i]= stim_cv.show() - t0. About what I said: is OpenSesame able to distinguish between the time at which it gives the order to display the stimulus (which would be "stim" here) and the time at which the stimulus is actually drawn on the screen (which would be "time_stim" here), taking into account the refresh rate of the screen? If so, it would be right to conclude that it seems better to use the "show" function than the "clock" function; but again, I don't know if I understood correctly.

    Yes, everything makes sense to me. Don't worry about the last test, it wasn't a complete waste of time since it made me notice a difference between the "clock" function and the "show" function for these variables. Also, you saved me a lot of time by helping me this much.

    About the landmark for each trial, I thought about it, but it would create a bias because the participant wouldn't get just a visual stimulus but a visual one and an auditory one. Then I thought about sending the sound as ultrasound, so that it could be detected by the microphone but not heard by the participant, but then I realized that, anyway, adding all these landmarks would add other delays to the loop.
    Nevertheless, I will test this type of sequence later today to see what information I can get from it, even if I don't actually use it with subjects.
    What do you think?

    Anyway, I agree: in general, I do feel good to go! I'm just not sure about the total delay I should take into account. As I already said, I know that a 10 ms delay exists between my two sounds in the log file and my two sounds in the WAV file. I just don't know if I can claim a 10 ms accuracy in my paper, or if I should take other delays into account and report a lower accuracy. However, I do need to specify a delay in order to be rigorous. Moreover, if I know, for example, that I have a 25 ms accuracy, I won't take a 25 ms difference between simple task and dual task as a significant difference.
    Anyway, if you have an opinion about other delays I should take into account, please tell me. Just note that any constant delay shouldn't be an issue: for example, I don't have to take into account the 3 ms that the sound wave takes to reach the microphone, because the goal of this part of the experiment is to compare the reaction time in the dual task with the reaction time in the simple task, so this type of delay is constant in both conditions.
    I don't know if I'm clear about this. Please don't hesitate to tell me if I'm not.

  • edited 1:30AM

    Hi,

    If so, it would be right to conclude that it seems better to use the "show" function than the "clock" function

    Well, it doesn't really matter. But of course, even a very simple command like clock.time() takes some time, so returning the time stamp of an action (like showing a canvas) is always more accurate than taking a time stamp slightly earlier (by calling clock.time()). Effectively, you're right; use show() instead of clock.time().
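
    If you want to see the size of that difference for yourself, here is a small sketch (assuming an inline_script, where canvas() and clock are available):

    # compare the time taken just before the call with the timestamp returned by show()
    cv = canvas()
    cv.fixdot(0, 0)
    before = clock.time()
    onset = cv.show()        # time at which the canvas actually appeared on screen
    print(onset - before)    # includes the wait for the next screen refresh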

    I thought about sending such a sound in ultrasound

    This sounds quite cool. If you ask me, this is worth a try.

    adding all these landmarks would add others delays into the loop

    You are right, adding all these things will slow everything down. However, if you have the landmarks, you don't need your entire recording to match the logfile, because you can rely on the landmark within a trial to measure your response times (e.g. if the landmark comes 20 ms later in the WAV file than in OpenSesame's log file, you can simply subtract those 20 ms from your response time and should have a pretty good estimate). Or is this not what you intend to do?

    I just don't know if I can talk about a 10 ms accuracy in my paper,

    Me neither. You should ask your supervisor. As long as the delay is constant (and rather small), you should be able to retrieve a rather clean estimate of your dependent measure. At least, I think so.

    I won't take 25ms of difference between simple task and dual task as a significant difference

    Isn't that the job of a statistical test? Whatever the difference is, a t-test (for example) will tell you whether the difference is significant or not

    I don't know if I'm clear about this.

    I think you are :)

    I hope I was as well.

    Eduard

