[solved] keyboard response

Anna Posts: 6
edited January 2014 in OpenSesame

Hi Sebastiaan,
It's me again. I have two further questions. First: the program doesn't accept my correct_response variables. My experiment consists of three parts. In every part there are different response variables (keyboard responses are numbers), all defined as correct_response in the block_loops. I want the target to be presented for 300 ms and afterwards the fixation dot for 1500 ms. The keyboard response should be possible during this whole time span (1800 ms). But after I defined it, the target is presented for more than 300 ms and the feedback is 0% accuracy. In the trial sequence the fixation dot sketchpad comes first and then the target sketchpad, because I don't want the sequence to start with the target. Maybe that's the problem?
My second question concerns randomization. The same target should be presented no more than two times in a row. Is there a possibility to restrict the random function in this way?

Thanks in advance!
Kind regards, Anna

Comments

  • sebastiaan Posts: 2,737
    edited 4:59AM

    Hi Anna,

    I'm not sure I completely understand your question. Could you perhaps provide some more details about what exactly you want to do, and in what respect it doesn't work? If you find it difficult to describe your experiment, you could also attach some screenshots and/or paste some code (to preserve formatting use <pre> [code] </pre> tags).

    Regarding your question about the randomization. This is possible, but only with some inline code. You may want to take a look at this topic: http://forum.cogsci.nl/index.php?p=/discussion/80/open-pseudorandom-order
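    For reference, the core idea from that topic can be sketched in plain Python: reshuffle the trial list until no target occurs more than twice in a row. (The helper name is made up; this is not an OpenSesame API, just an illustration of the rejection-sampling approach.)

```python
import random

def shuffle_max_two_repeats(items, max_tries=1000):
    """Shuffle a list, rejecting any order in which some value
    occurs more than twice in a row (hypothetical helper)."""
    items = list(items)
    for _ in range(max_tries):
        random.shuffle(items)
        # Check every window of three consecutive items
        if all(not (items[i] == items[i + 1] == items[i + 2])
               for i in range(len(items) - 2)):
            return items
    raise RuntimeError("no valid order found in %d tries" % max_tries)
```

    With three targets repeated seven times, a valid order is found almost immediately, so the retry limit is rarely hit in practice.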

    Cheers,
    Sebastiaan

    There's much bigger issues in the world, I know. But I first have to take care of the world I know.
    cogsci.nl/smathot

  • Anna Posts: 6
    edited 4:59AM

    The experiment consists of three parts. I've created every part (the trial sequence of every part) like this:
    trial_sequence: fixation_dot, target, keyboard_response, sampler, logger

    I defined the correct_response variable in the block loop and set the timeout to 1800 ms (target 300 ms + fixation dot 1500 ms):

    set repeat "7"
    set description "Repeatedly runs another item"
    set item "trial_sequence_right"
    set column_order "target_colour;target_number;correct_response"
    set cycles "3"
    set order "random"
    setcycle 0 target_number "8"
    setcycle 0 correct_response "8"
    setcycle 0 target_colour "#00ff00"
    setcycle 1 target_number "9"
    setcycle 1 correct_response "9"
    setcycle 1 target_colour "#ff0000"
    setcycle 2 target_number "0"
    setcycle 2 correct_response "0"
    setcycle 2 target_colour "#0000ff"
    run trial_sequence_right

    set allowed_responses "8;9;0"
    set description "Collects keyboard responses"
    set timeout "1800"
    set flush "yes"

    I also set up a sampler, which plays a sound if the response was incorrect. When I run the experiment, the (incorrect) sound occurs after every keypress and the feedback item (including accuracy and avg_rt) says 0% accuracy. So all my keypresses were counted as wrong, although I defined the correct responses. How can I solve this?

    Thank you for the link, I will try it :)

  • Wouter Posts: 48
    edited March 2012

    I think my first OpenSesame question is something related to Anna's: I'm curious to know what's being used to compute the [acc] variable on the feedback screen as well.

    My correct_response is set at runtime in a script. I have an inline script after a "text_input" item that reads [response] to determine (after some string formatting) whether the response was correct. After this check, I've tried setting both [correct] and [correct_text_input] accordingly, but still, the feedback screen after blocks says "0% accuracy".

    
    self.experiment.set("correct_response", self.experiment.get("T2") )
    
    if self.experiment.get("response")[0].upper() == self.experiment.get("correct_response"):
        self.experiment.set("correct", 1)
        self.experiment.set("correct_text_input", 1)
    else:
        self.experiment.set("correct", 0)
        self.experiment.set("correct_text_input", 0)
    
  • sebastiaan Posts: 2,737
    edited January 2014

    Hi Anna and Wouter,

    First of all, Wouter: good to see you on the forum!

    There are three different questions here:

    Anna's question regarding the timing

    Right now your structure is as follows:

    target (300ms)
    fixation dot (1500ms)
    response (max. 1800ms)

    This means that the experiment will show the target for 300ms (you say that it's more, but what exactly do you mean by this?), followed by a fixation dot for 1500ms. And only after this will the experiment start collecting keyboard responses, with a timeout of 1800ms. So people will not be able to respond during the target presentation and the first 1500ms of the fixation dot. Does this make sense? OpenSesame works purely sequentially. The timeout setting in the keyboard_response refers to the timeout after the onset of the keyboard_response item, not relative to the start of the trial or some other point in time.
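    Concretely, with the durations you mention, the arithmetic works out like this (plain Python, just to illustrate the sequential timing; all times in ms relative to trial start):

```python
# Durations from the trial sequence above (ms)
target_duration = 300
fixation_duration = 1500
keyboard_timeout = 1800

# The keyboard_response item only starts after both sketchpads have finished
keyboard_onset = target_duration + fixation_duration

# So responses are accepted in this window, relative to trial start,
# not during the target or fixation presentation
response_window = (keyboard_onset, keyboard_onset + keyboard_timeout)
```

    In other words, with this structure the earliest possible response is at 1800 ms, which is probably not what you intended.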

    If you want to have a more complicated structure, you will need a bit of inline code, but nothing too overwhelmingly complex (we can get to that later, if you wish). This may be needed in your case, if you want to collect responses while presenting sketchpads, rather than present a number of sketchpads followed by response collection.

    Anna's question about the feedback

    I tested it, and it's a bug! I filed a bug report: https://github.com/smathot/OpenSesame/issues/77
    Numeric responses are not processed correctly, in the sense that they are always counted as incorrect. Letters and other characters should work fine, so if it's all the same you might want to use letters for responses. Otherwise you'll have to use a bit of inline code (see next point) to keep track of the feedback manually.
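    As a workaround until the bug is fixed, an inline_script can do the correctness check itself. The sketch below shows the core idea in plain Python: normalize both the response and correct_response to strings before comparing, so that a numeric 8 and the character '8' count as equal. (The helper name is made up, and the type mismatch is one plausible reading of the bug, not a confirmed diagnosis.)

```python
def is_correct(response, correct_response):
    # Cast both sides to str so that numeric and character
    # representations of the same key compare equal
    return str(response) == str(correct_response)
```

    In an inline_script you would then set the correct variable from this comparison yourself, instead of relying on the keyboard_response item.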

    Wouter's question about keeping track of feedback using inline code

    If you want to keep track of feedback variables using inline code, there are a number of variables that have to be explicitly set. Just setting the correct variable is not sufficient. The code below shows how you can keep track of accuracy and average response time in the same way that response items do automatically. It assumes that you know the response time (my_response_time) and whether the response was correct (is_correct).

    # ***
    # * Keep track of averages, for example to be used in a feedback item
    # ***
    
    my_response_time = 1000 # Indicates response time
    is_correct = True # Indicates whether the response is correct
    
    self.experiment.total_responses += 1
    if is_correct:
        self.experiment.total_correct += 1
    self.experiment.total_response_time += my_response_time
    self.experiment.acc = 100. * self.experiment.total_correct / self.experiment.total_responses
    self.experiment.avg_rt = self.experiment.total_response_time / self.experiment.total_responses
    self.experiment.accuracy = self.experiment.acc
    self.experiment.average_response_time = self.experiment.avg_rt
    

    For more info, see: http://osdoc.cogsci.nl/usage/giving-feedback-to-participants

    Hope this gets you started!

    Cheers,
    Sebastiaan


  • Wouter Posts: 48
    edited 4:59AM

    Hi Sebastiaan,

    Thanks for the welcome. I think you've done a great job, especially when it comes to accessibility. My only experiment-building experience thus far was one experiment in E-Prime, and I'm surprised at how easily I can already start working on simple experiments here.

    More importantly, thanks for the response; that partially solves my problem. However, I'm still curious/confused about a few things:

    1) If I put this in my inline script, which runs at the end of every trial, just before the logger, does this replace 'what is being done automatically'? For example, the line

    self.experiment.total_responses += 1

    shouldn't be run twice.
    I briefly tested it, and I got 21% accuracy with 41 out of 64 correct; that doesn't seem right.

    2) My initial question was mainly targeted at 'what is being done automatically'. Where does that happen? Is it the logger item, or a response collection item...?

    3) On a very related note: what variables does 'what is being done automatically' use at that point? The reason I figured just setting 'correct' would work was that I assumed that variable would be used.

    I understand that I could always work around this and keep track of my own defined variables, but I'd rather learn to use the built in tools for these purposes.

    Thanks,
    Wouter

    P.S. I'm sorry Anna for unintentionally hijacking your question thread!

  • sebastiaan Posts: 2,737
    edited January 2014

    Hi Wouter,

    I'll try to clear things up a bit.

    The response items take care of themselves. This means that when a keyboard_response is finished (i.e. you've pressed a key or a timeout occurred) all the relevant variables are automatically updated. Basically, the code that I pasted above comes straight from the response items. They do exactly the same thing. Therefore, if you use a response item and execute the script above to process the same response, things will get messed up.

    So, the moment at which all the "response bookkeeping" is performed is right after the collection of a response, at the end of a response item. The feedback item doesn't collect a list and calculate averages, or anything like that. The averages are calculated along the way, updated with each response.
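    Stripped of the experiment object, that running-average bookkeeping amounts to the following (a plain-Python sketch using the same variable names; the class itself is made up for illustration):

```python
class ResponseBookkeeper:
    """Keeps a running accuracy and mean response time,
    updated once per response, mirroring the variables
    that OpenSesame's response items maintain."""

    def __init__(self):
        self.total_responses = 0
        self.total_correct = 0
        self.total_response_time = 0

    def update(self, response_time, is_correct):
        # One call per collected response
        self.total_responses += 1
        if is_correct:
            self.total_correct += 1
        self.total_response_time += response_time
        # Averages are recomputed incrementally, not from a stored list
        self.acc = 100. * self.total_correct / self.total_responses
        self.avg_rt = self.total_response_time / self.total_responses
```

    This is why calling the bookkeeping twice for one response inflates the counts: each update assumes it sees every response exactly once.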

    So the take home message is that you should do the response bookkeeping right after you collect a response. Let's say that I've created a custom response collection script. If I want to play nice with the rest of the items, it should look something like this:

    Update 9/4: Corrected a mistake in the script below

    from openexp.keyboard import keyboard
    
    timeout = 1500
    
    # This example collects a response while waiting for 1500ms. Unlike
    # a normal keyboard_response, this example completes waiting for
    # the specified time after the participant has responded.
    t = self.time()
    my_keyboard = keyboard(self.experiment, timeout=timeout)
    key, key_timestamp = my_keyboard.get_key()
    
    # Sleep for the remaining time
    rest_time = timeout - self.time() + t
    if rest_time > 0:
        self.sleep(rest_time)
    
    # Determine the response (convert to normal character) and response time
    self.experiment.response = my_keyboard.to_chr(key)
    self.experiment.response_time = key_timestamp - t
    
    # Determine if the response was correct
    if not self.has('correct_response'):
        self.experiment.correct = 'undefined'
    elif self.experiment.response == str(self.get('correct_response')):
        self.experiment.correct = 1
    else:
        self.experiment.correct = 0
    
    # Do response bookkeeping
    self.experiment.total_responses += 1
    if self.experiment.correct == 1:
        self.experiment.total_correct += 1
    self.experiment.total_response_time += self.experiment.response_time
    self.experiment.acc = 100. * self.experiment.total_correct / self.experiment.total_responses
    self.experiment.avg_rt = self.experiment.total_response_time / self.experiment.total_responses
    self.experiment.accuracy = self.experiment.acc
    self.experiment.average_response_time = self.experiment.avg_rt
    

    The example above is probably also helpful for Anna, since it fixes the numeric responses bug.

    Hope this helps!

    Cheers,
    Sebastiaan


  • Anna Posts: 6
    edited January 2014

    Hi,

    My question regarding the timing:

    You wrote: "The timeout setting in the keyboard_response refers to the timeout after the onset of the keyboard_response item, not relative to the start of the trial or some other point in time." I thought that it would work because of the "start response interval" button?

    I improved the structure of the trial_sequence (so that the timing is correct now) and now it looks like this:

    set flush_keyboard "yes"
    set description "Runs a number of items in sequence"
    run neutral "always"
    run target "always"
    run fixation_dot "always"
    run keyboard_response "always"
    run wrong_answer "[correct] = 0"
    run too_slow "[response_time] > 1800"
    run logger "always"
    

    My question about the feedback:

    Furthermore, I changed the keyboard responses to letters. But still most of the answers are counted as incorrect. Is it possible that the problem arises because I've defined three variables called correct_response (one in each block_loop for each part of the experiment)? In the first part the correct responses (and the allowed keyboard responses) are j, k, l. In the second part a, s, d, and in the last one j, k, l, a, s, d.

    Thanks for your help!

    P.S. Wouter, that's ok :)

  • sebastiaan Posts: 2,737
    edited 4:59AM
    I thought that it would work because of the "start response interval-button"?

    Right, I see how that can be confusing. I'll think about the best way to make this a bit more transparent.

    Furthermore, I changed the keyboard responses to letters. But still most of the answers are counted as incorrect. Is it possible that the problem arises because I've defined three variables called correct_response (one in each block_loop for each part of the experiment)? In the first part the correct responses (and the allowed keyboard responses) are j, k, l. In the second part a, s, d, and in the last one j, k, l, a, s, d.

    No, in most cases that should be fine. Again, OpenSesame works sequentially. So the fact that some variable (say correct_response) may be changed at some later point in the experiment doesn't make a difference 'right now' (if that makes any sense?)

    You say that most of the answers are counted as incorrect. This suggests that the experiment basically works, but that there's a minor glitch somewhere. Are you sure the loop table is defined correctly? If you look at the log-file, are there trials that are erroneously marked as incorrect (correct = 0) or is it only a problem with the feedback? If the latter appears to be the case, do you reset the feedback variables at the start of the block loop (using the reset_feedback plug-in)?

    If you can't figure it out, perhaps you could attach a version of the experiment, so that I can see what causes the problem.


  • Wouter Posts: 48
    edited 4:59AM

    Hey Sebastiaan,

    Thanks! I understand I should just code 'my' response collection (in a manner similar to how the text_input item would) as an inline script as well (for my purposes, i.e. to do some string formatting before comparing the response to correct_response).

    I thought (hoped) the design of the items in the GUI (for example, text_input) would be modular to the extent that I could use them as a starting point (e.g. setting up a canvas), and then do some self-defined processing of the response instead of the predefined form of logging.

    Would there be a way for another workaround? For example to define a text_input item (either using the GUI or .opensesame scripting), then look at what the python source code snippet is that actually defines the 'prepare' and 'run' stages for this item (the function calls that draw the display, collect the response etc.) and then use that as a starting point for e.g. a my_text_input script?

    If not, I might be gradually changing this general question thread into a "requested features" - one, so I'll stop here and get coding :)

    Thanks again!
    Wouter

  • sebastiaan Posts: 2,737
    edited January 2014

    Hi Wouter,

    Ah right, I think I see what you're trying to achieve now. You can handle your own correctness checking using inline_script, without having to re-implement the response item. Basically, you just make sure that the response_item always sets 'correct' to '0' or 'undefined'. And then you adjust the feedback variables afterwards, for example like so:

    # This example assumes that the response item automatically 
    # sets the response to incorrect (0) or 'undefined'. We can
    # override this using inline code, to do more advanced checking
    # of the response before judging its correctness.
    #
    # Count responses that start with an 'a' as correct
    r = self.get('response_text_input')
    if r[0] == 'a':
        self.experiment.correct = 1
        self.experiment.total_correct += 1
    else:
        self.experiment.correct = 0
    
    # Do response bookkeeping!
    self.experiment.acc = 100. * self.experiment.total_correct / self.experiment.total_responses
    self.experiment.accuracy = self.experiment.acc
    
    

    Hope this helps!

    (Btw, if you're interested, pre-release packages of 0.26 are available here: http://files.cogsci.nl/software/opensesame/pre-releases/)

    Cheers,
    Sebastiaan

