
[open] Timing issue

edited January 2013 in OpenSesame

Hi,
I'm running an fMRI experiment and therefore timing is very important. While analyzing the data I realized that I was accumulating millisecond delays during the experiment, which by the end of the experiment amounted to over 2 seconds (which of course means that my analyses were measuring the wrong stimulus). I was able to correct the issue afterwards using all the different timestamps, but it would be better to be able to use the planned timed sequence. This means using absolute rather than relative time during the presentation of stimuli.
Is it possible to code the experiment so that stimuli appear as a number of ms from the beginning of the experiment instead of as a number of ms from the previous stimulus?

Thanks!

Comments

  • edited October 2013

    Yes, there's always some slippage between trials due to the preparation time of the stimuli, etc. The strategy used by OpenSesame is to keep the timing within a sequence as controlled as possible, but allow some unpredictable variation between sequences. In general, this is a good strategy, I think. But of course not if you assume an exact duration for each trial.

    If you want non-slip timing, you can add a simple script to the start of the trial. This will essentially pad the duration of each trial, so that every trial is the same length.

    Hope this helps!

    # Specify how long you want the trial to last. Make sure that trials do not
    # last longer!
    trial_duration = 3000

    # Get a trial id
    trial_id = self.get_check('trial_id', default=0)

    # For the first trial, just note the time
    if trial_id == 0:
        exp.set('first_trial_timestamp', self.time())
        time_to_pad = 0

    # For the other trials, wait until the trial should start, based on the time
    # of the first trial and the trial duration
    else:
        time_to_pad = self.get('first_trial_timestamp') + trial_id * trial_duration - self.time()
        if time_to_pad > 0:
            self.sleep(time_to_pad)

    # Remember the trial id and padding time
    exp.set('trial_id', trial_id + 1)
    exp.set('time_to_pad', time_to_pad)
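    The same non-slip idea can be sketched in plain Python, outside OpenSesame, using `time.monotonic()`. The helper name `run_trials` is made up for illustration; the point is that each trial's onset is computed from the experiment start rather than from the previous trial, so delays cannot accumulate:

```python
import time

def run_trials(n_trials, trial_duration, do_trial):
    """Run trials on an absolute schedule: trial i starts at
    start + i * trial_duration, so per-trial delays do not accumulate."""
    start = time.monotonic()
    for i in range(n_trials):
        target = start + i * trial_duration
        pad = target - time.monotonic()
        if pad > 0:
            time.sleep(pad)  # wait until the scheduled onset
        do_trial(i)

# Each trial's work takes a variable amount of time, but the recorded
# onsets stay locked to the 50 ms grid.
onsets = []
t0 = time.monotonic()
run_trials(4, 0.05, lambda i: (onsets.append(time.monotonic() - t0),
                               time.sleep(0.01)))
```

    Trials that overrun their slot simply get no padding, which is why the planned duration has to be longer than the slowest trial.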
    
  • edited January 2013

    Thanks I think this is exactly what I wanted!
    However, in my case there is usually a delay so time_to_pad would probably be a negative number most of the time.

  • edited 12:52PM

    Thanks, I think this is exactly what I wanted!

    Great!

    However, in my case there is usually a delay, so time_to_pad would probably be a negative number most of the time.

    Just to clarify: This script only pauses the experiment for a certain amount of time to make sure that all trials have the same length. So if you find that some trials are 4010ms, some 4050ms, etc., you can set trial_duration to say 4100. Setting it to 4000, thus ending up with negative time_to_pad values, will not do anything!
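    If you have logged trial durations from a pilot run, one way to pick a safe trial_duration is to take the longest observed trial plus a small margin, rounded up to a convenient grid. A quick sketch (the numbers are the example values from above; the variable names are made up):

```python
# Observed trial durations in ms, e.g. logged from a pilot run
observed = [4010, 4050, 4032, 4047]

# Longest observed duration plus a safety margin, rounded up to the
# nearest 100 ms, so that time_to_pad stays positive on every trial
margin = 50
trial_duration = (max(observed) + margin + 99) // 100 * 100
print(trial_duration)  # -> 4100
```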

  • edited 12:52PM

    A couple of questions...

    What exactly is happening during sleep? Is there a sketchpad being shown (with a fixation, for example), or just a blank screen?

    At the end of my trial I show feedback that lasts for (duration - RT). Is there a way to include the padding time at this point, in something like (duration - 100 ms - RT + time_to_pad)?

  • edited 12:52PM

    What exactly is happening during sleep? Is there a sketchpad being shown (with a fixation, for example), or just a blank screen?

    The sleep() command just freezes the computer and doesn't change what's currently on the display. So if you want to present something during the sleep() time, you have to do that explicitly before the command is called.

    At the end of my trial I show feedback that lasts for (duration - RT). Is there a way to include the padding time at this point, in something like (duration - 100 ms - RT + time_to_pad)?

    Yes. If you do not add a blank display to the end of the trial, the feedback from the previous trial (assuming that you present feedback, as in your case) will still be shown during the 'padding time' at the beginning of the trial (because, as mentioned above, sleeping does not by itself change the display).

    So probably you could even set the feedback duration to 0 and just increase trial_duration a bit in the script. Does that make sense?

  • edited 12:52PM

    Hi Sebastiaan,
    I don't understand what is wrong with the setup that I have. I tried to do what we were discussing, and it seems to work for a while, but then suddenly a stimulus stays on the screen for about 6 seconds, when they should all stay on the screen for around 2 seconds. Can you check this file and let me know what you think?

    https://docs.google.com/file/d/0BypwdDwK7MYWaWJ2MVpubXJjcms/edit

  • edited 12:52PM

    Well, that's quite a complicated experiment you have there! But, as far as I understand it, you have the trial_duration specified as:

    trial_duration = Prev_Jitter + 1000 + 2000

    This Prev_Jitter is read out from the various run[X].csv files in the file pool, and has an (apparently) random value that ranges up to 10 seconds. So some trials will be very long by design. Since the Hatch_Continued item is the last display item in the trial sequence, it may stay on the screen for a long time, until the first display item of the next trial is presented. If you want this last item to be presented for exactly 2 seconds, you should set its duration to 2 seconds and have it followed by an empty sketchpad (or whatever you want to be shown during the intertrial interval).
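    With per-trial jitter, each trial's scheduled onset is simply the cumulative sum of the planned durations of the trials before it. A minimal sketch of that bookkeeping (the jitter values are invented, not taken from the actual run[x].csv files):

```python
from itertools import accumulate

# Planned duration of each trial in ms: jitter + 1000 + 2000,
# following the trial_duration formula above
jitters = [2500, 10000, 4000]  # invented example values
durations = [j + 1000 + 2000 for j in jitters]

# Scheduled onset of each trial, relative to the start of the first
onsets = [0] + list(accumulate(durations))[:-1]
print(onsets)  # -> [0, 5500, 18500]
```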

    Does this answer your question?

    Cheers!

  • edited 12:52PM

    Well, the main reason why I am interested in the whole padding time is to make sure that the trial lasts the amount of time that I wanted it to last. As I said previously, the problem I have is that small delays accumulate over trials, and by the end I am 2 seconds over the time I am supposed to be at. So what I was trying to do is show the last Hatch_Continued item for 1900 ms and have the sleep time wait (with the same item on screen) for the remaining 100 ms. The logic is that the small delays would make the sleep time smaller than 100 ms, and that way I could make sure that the predicted trial time is consistent with what I designed in the run[x].csv file.
    What I tried to accomplish with trial_duration is the ideal measure of how long the previous trial was, so that at the start of the next trial the sleep function pads the time to a point that is equal to the variable jittered time plus the time of the other stimuli of the task. However, this didn't work as I expected, and I am not sure why...
    Does this make sense?

  • edited 12:52PM

    What I tried to accomplish with trial_duration is the ideal measure of how long the previous trial was, so that at the start of the next trial the sleep function pads the time to a point that is equal to the variable jittered time plus the time of the other stimuli of the task.

    Hmm, I'm not sure I understand, because that's exactly what you did! Isn't the problem simply that the values for the jitter are set too high? If I look at run[x].csv, some jitter values are more than 10 seconds. If you really want the jitter to be that long, in what sense does the experiment not work as expected?

  • edited 12:52PM

    Well, trial_duration IS taking the variable jitter time into account, isn't it? Therefore it should not matter that some trials are really long; the calculation of trial_duration should account for that.
    The sense in which the experiment is not working as expected is that the final Hatch_Continued should still be 2 seconds long every time, but when I run it, it sometimes stays on screen for something like 6-8 seconds. That means the sleep skips a trial or something like that...
    Maybe I have to allow more than 100 ms, because I am getting negative time_to_pad values?

  • edited February 2013

    Ah, I think I'm starting to see the issue now. I'm not 100% sure, because the flow of the experiment is quite difficult to follow!

    I suspect that there is a mismatch between Jitter, which is used as the duration for the Fixation item, and JitterPrev, which is used to add padding time to the start of a trial. In principle your approach should work, I agree, but there must be something wrong with the logic here (I haven't debugged it, but that's my guess).

    The easiest fix is probably to not work with two Jitter variables at all. That's not necessary, and makes it much more complex. I would simply end each trial with:

    • Hatch_Continued feedback ([timeleft] ms)
    • Fixation sketchpad (0 ms)

    And then you remove the Fixation display at the start of the trial. Because the Prepare_Trial script will pause the experiment to make sure that each trial is equally long, the Fixation sketchpad from the previous trial will remain visible. Do you see what I mean? So you don't need to explicitly add a sketchpad with a [Jitter] duration, this is already implicit in the Prepare_Trial script.

    Hope this helps!

  • edited October 2013

    "If you want non-slip timing, you can add a simple script to the start of the trial. This will essentially pad the duration of each trial, so that every trial is the same length."

    What is this simple script? I can't see it in this topic. Can you send it again, Sebastiaan?

    Best,

    Cumhur

  • edited 12:52PM

    It is simply the script pasted below my first comment in this discussion. You can also get it from pastebin: http://pastebin.com/iMCvFJuX
