[solved] No-go trials and feedback

edited October 2012 in OpenSesame

Hi,

I have a problem concerning the feedback in my experiment. One of the conditions is a no-go condition; that is, participants are asked not to respond if a target does not appear.

As I always want to present an error notification to the participant, I change the value of the correct variable in the run phase:

if location_target == "none":   
    resp = exp.get("response_keyboard_response")
    if resp == "timeout": # response timed out = no response
        exp.set("correct",1)
    else:
        exp.set("correct",0)
        exp.error_notification.text("WRONG",True,cx,cy)
        exp.error_notification.show()
        exp.sleep(1500)



Note that the actual stimuli are presented via another inline script which is run before this inline script. In the first inline script I do not define a correct_response for a no-go trial (location_target == "none").


The problem that now occurs is that the average response time in the feedback rises to a very high value, because OpenSesame also averages over trials in which the participant was asked not to respond. How can I leave those trials out of the averaging procedure?

Another issue is the percentage correct: it now falsely counts the no-go trials as wrong trials and therefore calculates a wrong percentage. I guess this occurs because correct is initially defined as wrong in the first inline script. How can I work around this problem?
Is there some way to define the correct_response in the first inline script as "no response"?

Comments

  • edited October 2012

    Hi!

    You could use an inline_script instead of a keyboard_response:

    from openexp.keyboard import keyboard
    
    # timeout in milliseconds
    maxtime = 3000
    
    # list of allowed keys
    allowedkeys = ['return', 'space', 'k', ';', '9', 'right']
    
    # create a keyboard object; timeout=0 makes get_key() non-blocking,
    # so it can be polled in the loop below
    my_keyboard = keyboard(self.experiment, keylist=allowedkeys, timeout=0)
    
    # poll for a keyboard response until one is given or maxtime has passed
    no_response = True
    starttime = self.experiment.time()
    while no_response:
        pressed, presstime = my_keyboard.get_key()
        if pressed:
            self.experiment.set("response", my_keyboard.to_chr(pressed))
            self.experiment.set("response_time", presstime - starttime)
            no_response = False
        elif self.experiment.time() - starttime >= maxtime:
            self.experiment.set("response", "timeout")
            self.experiment.set("response_time", maxtime)
            no_response = False
    
    # determine correctness; note that location_target is the string
    # "none" on no-go trials, not the Python value None
    if self.get("location_target") == "none":
        if self.get("response") == "timeout":
            correct = 1
        else:
            correct = 0
    # HANDLE REST OF LOCATIONS (I don't know which response keys you use)
    # elif self.get("location_target") == ... and self.get("response") == ...
    # rest of code
    
    self.experiment.set("correct", correct)
    
    # show message
    if not correct:
        exp.error_notification.text("WRONG", True, cx, cy)
        exp.error_notification.show()
        exp.sleep(1500)
    

    Hope this helps!

  • edited 12:32AM

    Hey,

    Thanks for the help. I'm using your script now, and it works the way I want with regard to the error messages. However, I can't use the standard variables of the feedback item anymore (avg_rt, acc), as they are now undefined. Do I have to calculate them myself now, or is there a way to still gather these data? (Also, I don't actually know what other information the keyboard_response item normally saves by default.)

  • edited October 2012

    Hi Michel,

    If I understand your question correctly, you want to

    • calculate a new accuracy score from the changed 'correct' variable
    • calculate a new average RT from the RTs on go trials only

    This can be achieved by storing the appropriate variables in a list and, at the end of a block, displaying the averages of those lists in a feedback item. More specifically:

    • append an inline_script item to your block sequence. Here, you define the (still empty) lists and make them global. (Note that by doing this you make sure the lists are reset at the beginning of a new block.)
      [gist:3948487]

    • next, you can adapt the following code to your logic in an inline_script item in your trial sequence:
      [gist:3948601]

    • finally, you can display the new block averages in a feedback item by using square brackets, like so: [my_average_rt]
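    The gists aren't inlined here, but the approach can be sketched in plain Python (the variable names block_rts, block_correct, my_average_rt, and my_acc are my own, and the trial loop is simulated; in the actual inline_script items you would append the experimental variables on each trial and expose the final values with self.experiment.set() so the feedback item can read them):

```python
# Block-level inline_script (start of block sequence): reset the lists,
# so every block starts with a clean slate.
block_rts = []        # RTs on go trials only
block_correct = []    # 0/1 correctness of every trial

# Trial-level inline_script (end of trial sequence): append the current
# trial's values. Simulated here with three example trials.
trials = [
    {"location_target": "left",  "response_time": 400,  "correct": 1},
    {"location_target": "none",  "response_time": 3000, "correct": 1},  # no-go
    {"location_target": "right", "response_time": 520,  "correct": 0},
]
for trial in trials:
    block_correct.append(trial["correct"])
    if trial["location_target"] != "none":  # exclude no-go trials from the RT average
        block_rts.append(trial["response_time"])

# Values for the feedback item; guard against a block without any go trials
my_average_rt = float(sum(block_rts)) / len(block_rts) if block_rts else 0
my_acc = 100.0 * sum(block_correct) / len(block_correct)
```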

    I hope this helps. Please let us know if you have any further questions!

    Best wishes,

    Lotje

  • edited October 2012

    Thanks Ivanderlinden!

    However, I worked out a work-around yesterday (see below) by using counters in an inline script in the block sequence. I guess both solutions do the trick.

    Again thanks for your solution!

    # Set average response time for target trials, excl. no-go trials
    if location_target != "none":
        self.experiment.set("resp_counter_none", exp.get("resp_counter_none")+1)
        self.experiment.set("resp_time_total", exp.get("resp_time_total")+exp.get("response_time"))
        self.experiment.set("avg_rt", exp.get("resp_time_total")/exp.get("resp_counter_none"))
    # If the first trial of the experiment is a no-go trial, avg_rt can be
    # undefined, as it would be divided by 0 (resp_counter_none will be 0).
    # These lines prevent this from happening.
    elif location_target == "none" and exp.get("resp_counter_none") == 0:
        self.experiment.set("avg_rt", 0)
    
    # Set accuracy of answers. Note that accuracy needs its own counter,
    # because no other variable resets the stimulus count after the
    # practice trials.
    self.experiment.set("resp_counter_all", exp.get("resp_counter_all")+1)
    self.experiment.set("correct_total", exp.get("correct_total")+exp.get("correct"))
    # float it, otherwise acc stays 0 after a mistake
    self.experiment.set("acc", (float(exp.get("correct_total"))/exp.get("resp_counter_all"))*100)
  • edited 12:32AM

    Regarding the division by 0: I do not think you have to worry about that with the code you have now:

    First, you up the counter by one.

    self.experiment.set("resp_counter_none",exp.get("resp_counter_none")+1)
    

    So now (if it started as 0) [resp_counter_none] will be 1.

    Then you do the following (for which I suspect you have already set [resp_time_total] somewhere before this is run?):

    self.experiment.set("resp_time_total",exp.get("resp_time_total")+exp.get("response_time"))
    

    Then you divide by [resp_counter_none], which was 1.

    self.experiment.set("avg_rt",(exp.get("resp_time_total")/exp.get("resp_counter_none")))
    

    So you won't ever divide by zero, as long as you start [resp_counter_none] as 0 or higher! :)

  • edited October 2012

    Hi Edwin,

    Yes, I set resp_time_total to 0 in an inline script before this one.

    At first I followed the same logic as you. But then I encountered a particular situation where it nevertheless happened, and it took me quite some time to figure it out. Actually, it is fairly simple:

    The problem occurs when, after the practice trials, the first trial of the experimental block starts with location_target == "none" (i.e. a condition in which no target is shown). Since the very first line you quoted is only called on trials that are NOT no-target trials (location_target != "none"), resp_counter_none does not receive +1 but remains zero, cascading into an undefined avg_rt!

    I might figure out a smoother way to work around this problem but with this line it works properly as well.
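    For what it's worth, one smoother variant of the same guard (a plain-Python sketch, not tested in OpenSesame; the helper name safe_avg_rt is mine) is to check the counter right before dividing, so the order of trial types no longer matters:

```python
def safe_avg_rt(resp_time_total, resp_counter_none):
    """Running average RT, or 0 if no go trial has been counted yet."""
    if resp_counter_none == 0:
        return 0
    return float(resp_time_total) / resp_counter_none

# First trial is a no-go: counter still 0, but no ZeroDivisionError
first = safe_avg_rt(0, 0)
# Two go trials with RTs summing to 920 ms
later = safe_avg_rt(920, 2)
```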

  • edited 12:32AM

    Sorry, my bad, should've read a bit better. I was confused by the name [resp_counter_none] :p

    Anyhow, it's great that it works!
