[solved] keyboard response
Hi Sebastiaan,
it's me again. I have two further questions. First: the program doesn't accept my correct_response variables. My experiment consists of three parts. In every part there are different response variables (the keyboard responses are numbers), all defined as correct_response in the block loops. I want the target to be presented for 300 ms and then the fixation dot for 1500 ms, and the keyboard response should be possible during this whole time span (1800 ms). But after I set this up, the target is presented for more than 300 ms and the feedback reports 0% accuracy. In the trial sequence the fixation-dot sketchpad comes first and then the target sketchpad, because I don't want the sequence to start with the target. Maybe that's the problem?
My second question concerns randomization. The same target should be presented no more than twice in a row. Is there a possibility to restrict the random function in this way?
Thanks in advance!
Kind regards, Anna
Comments
Hi Anna,
I'm not sure I completely understand your question. Could you perhaps provide some more details about what exactly you want to do, and in what respect it doesn't work? If you find it difficult to describe your experiment, you could also attach some screenshots and/or paste some code (to preserve formatting, use <pre> [code] </pre> tags).
Regarding your question about randomization: this is possible, but only with some inline code. You may want to take a look at this topic: http://forum.cogsci.nl/index.php?p=/discussion/80/open-pseudorandom-order
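The general idea is to shuffle the trial list and reshuffle until no target occurs more than twice in a row. A rough sketch in plain Python (the target list here is just a placeholder; in practice you would build it from your loop table):

import random

def max_run_length(seq):
    # Length of the longest run of identical consecutive items.
    longest, run = 1, 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

# Placeholder list of targets; replace with your own trial definitions.
targets = ['left', 'right', 'left', 'right', 'left', 'right', 'left', 'right']
random.shuffle(targets)
while max_run_length(targets) > 2:
    # Keep reshuffling until the constraint is satisfied. (For very
    # restrictive constraints you'd want a smarter approach.)
    random.shuffle(targets)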
Cheers,
Sebastiaan
The experiment consists of three parts. I've created every part (the trial sequence of each part) like this:
trial_sequence: fixation_dot, target, keyboard_response, sampler, logger
I defined the correct_response variable in the block loop and set the keyboard_response timeout to 1800 ms (target 300 ms + fixation dot 1500 ms).
I also set up a sampler that plays a sound if the response was incorrect. When I run the experiment, the (incorrect) sound plays after every keypress, and the feedback item (showing accuracy and avg_rt) reports 0% accuracy. So all my keypresses are counted as wrong, even though I defined the correct responses. How can I solve this?
Thank you for the link; I will try it.
I think my first OpenSesame question is related to Anna's: I'm curious to know what is being used to compute the [acc] variable on the feedback screen as well.
My correct_response is set at runtime in a script. I have an inline script after a text_input item that reads [response] to determine (after some string formatting) whether the response was correct. After this check, I've tried setting both [correct] and [correct_text_input] accordingly, but the feedback screen after the blocks still says 0% accuracy.
Hi Anna and Wouter,
First of all, Wouter: good to see you on the forum!
There are three different questions here:
Anna's question regarding the timing
Right now, with your structure, the experiment shows the target for 300 ms (you say that it's shown for longer, but what exactly do you mean by that?), followed by a fixation dot for 1500 ms. Only after this does the experiment start collecting keyboard responses, with a timeout of 1800 ms. So participants will not be able to respond during the target presentation or during the first 1500 ms of the fixation dot. Does this make sense? OpenSesame works purely sequentially. The timeout setting in the keyboard_response refers to the timeout after the onset of the keyboard_response item, not relative to the start of the trial or some other point in time.
If you want to have a more complicated structure, you will need a bit of inline code, but nothing too overwhelmingly complex (we can get to that later, if you wish). This may be needed in your case, if you want to collect responses while presenting sketchpads, rather than present a number of sketchpads followed by response collection.
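To give a rough impression (this is only an untested sketch; the key names and placeholder canvases are made up, and the exact behaviour of get_key() on a timeout can differ between versions), an inline_script along these lines would show the target, accept responses during its 300 ms, and then keep listening during the fixation dot until 1800 ms after target onset:

from openexp.canvas import canvas
from openexp.keyboard import keyboard

# Placeholder displays; draw your real stimuli here.
target_canvas = canvas(exp)
target_canvas.text('TARGET')
fixation_canvas = canvas(exp)
fixation_canvas.fixdot()

# Accept responses while the target is on screen (300 ms).
my_keyboard = keyboard(exp, keylist=['j', 'k', 'l'], timeout=300)
target_canvas.show()
target_onset = self.time()
key, timestamp = my_keyboard.get_key()

# If there was no response yet, show the fixation dot and keep listening
# until 1800 ms after target onset. (Check how your version signals a
# timeout; here we assume get_key() returns None for the key.)
if key is None:
    fixation_canvas.show()
    remaining = 1800 - (self.time() - target_onset)
    my_keyboard = keyboard(exp, keylist=['j', 'k', 'l'], timeout=remaining)
    key, timestamp = my_keyboard.get_key()

exp.set('response', key)
exp.set('response_time', timestamp - target_onset)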
Anna's question about the feedback
I tested it, and it's a bug! I filed a bug report: https://github.com/smathot/OpenSesame/issues/77
Numeric responses are not processed correctly, in the sense that they are always counted as incorrect. Letters and other characters should work fine, so if it's all the same to you, you might want to use letters for the responses. Otherwise, you'll have to use a bit of inline code (see the next point) to keep track of the feedback manually.
Wouter's question about keeping track of feedback using inline code
If you want to keep track of feedback variables using inline code, there are a number of variables that have to be explicitly set. Just setting the correct variable is not sufficient. The code below shows how you can keep track of accuracy and average response time in the same way that response items do automatically. It assumes that you know the response time (my_response_time) and whether the response was correct (is_correct).
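Something along these lines (treat this as a sketch and double-check the variable names against your version of OpenSesame):

# Sketch: update the same variables that the response items maintain.
# Assumes my_response_time and is_correct have already been determined
# earlier in the inline_script.
total_response_time = self.get('total_response_time') + my_response_time
total_responses = self.get('total_responses') + 1
total_correct = self.get('total_correct') + (1 if is_correct else 0)

exp.set('response_time', my_response_time)
exp.set('correct', 1 if is_correct else 0)
exp.set('total_response_time', total_response_time)
exp.set('total_responses', total_responses)
exp.set('total_correct', total_correct)
exp.set('acc', 100.0 * total_correct / total_responses)
exp.set('accuracy', 100.0 * total_correct / total_responses)
exp.set('avg_rt', float(total_response_time) / total_responses)
exp.set('average_response_time', float(total_response_time) / total_responses)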
For more info, see: http://osdoc.cogsci.nl/usage/giving-feedback-to-participants
Hope this gets you started!
Cheers,
Sebastiaan
Hi Sebastiaan,
Thanks for the welcome. I think you've done a great job, especially when it comes to accessibility. My only experiment-building experience so far was one experiment in E-Prime, and I'm surprised at how easily I can already start working on simple experiments here.
More importantly, thanks for the response; that partially solves my problem. However, I'm still curious/confused about a few things:
1) If I put this in my inline script, which runs at the end of every trial, just before the logger, does this replace 'what is being done automatically'? For example, the lines that update the totals shouldn't be run twice.
I briefly tested it, and I got 21% accuracy with 41 out of 64 correct; that doesn't seem right.
2) My initial question was mainly targeted at 'what is being done automatically'. Where is that done? Is it the logger item, or a response collection item...?
3) On a related note: what variables does 'what is being done automatically' use at that point? The reason I figured that just setting 'correct' would work was that I assumed that variable would be used.
I understand that I could always work around this and keep track of my own defined variables, but I'd rather learn to use the built-in tools for these purposes.
Thanks,
Wouter
P.S. I'm sorry Anna for unintentionally hijacking your question thread!
Hi Wouter,
I'll try to clear things up a bit.
The response items take care of themselves. This means that when a keyboard_response is finished (i.e. you've pressed a key or a timeout occurred), all the relevant variables are automatically updated. Basically, the code that I pasted above comes straight from the response items; they do exactly the same thing. Therefore, if you use a response item and then execute the script above to process the same response, things will get messed up.
So, the moment at which all the "response bookkeeping" is performed is right after the collection of a response, at the end of a response item. The feedback item doesn't collect a list and calculate averages, or anything like that. The averages are calculated along the way, updated with each response.
So the take-home message is that you should do the response bookkeeping right after you collect a response. Let's say that I've created a custom response collection script. If I want it to play nicely with the rest of the items, it should look something like this:
Update 9/4: Corrected a mistake in the script below
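A sketch of what such a script could look like (the key list and the 1800 ms timeout are just examples; adjust them to your experiment):

from openexp.keyboard import keyboard

# Collect the response.
my_keyboard = keyboard(exp, keylist=['j', 'k', 'l'], timeout=1800)
start_time = self.time()
key, end_time = my_keyboard.get_key()
response_time = end_time - start_time

# Determine correctness. Comparing as strings also works when the
# responses are numbers, which sidesteps the bug mentioned above.
if str(key) == str(self.get('correct_response')):
    correct = 1
else:
    correct = 0

# Response bookkeeping, in the same spirit as the response items.
exp.set('response', key)
exp.set('response_time', response_time)
exp.set('correct', correct)
exp.set('total_response_time', self.get('total_response_time') + response_time)
exp.set('total_responses', self.get('total_responses') + 1)
exp.set('total_correct', self.get('total_correct') + correct)
exp.set('acc', 100.0 * self.get('total_correct') / self.get('total_responses'))
exp.set('accuracy', self.get('acc'))
exp.set('avg_rt', float(self.get('total_response_time')) / self.get('total_responses'))
exp.set('average_response_time', self.get('avg_rt'))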
The example above is probably also helpful for Anna, since it fixes the numeric responses bug.
Hope this helps!
Cheers,
Sebastiaan
Hi,
My question regarding the timing:
You wrote: "The timeout setting in the keyboard_response refers to the timeout after the onset of the keyboard_response item, not relative to the start of the trial or some other point in time." I thought that it would work because of the "start response interval" option?
I improved the structure of the trial_sequence, so the timing is correct now.
My question about the feedback:
Furthermore, I changed the keyboard responses to letters, but most of the answers are still counted as incorrect. Could the problem be that I have defined three variables called correct_response (one in each block_loop, for each part of the experiment)? In the first part the correct responses (and the allowed keyboard responses) are j, k, l; in the second a, s, d; and in the last j, k, l, a, s, d.
Thanks for your help!
P.S. Wouter, that's OK
Right, I see how that can be confusing. I'll think about the best way to make this a bit more transparent.
No, in most cases that should be fine. Again, OpenSesame works sequentially. So the fact that some variable (say correct_response) may be changed at some later point in the experiment doesn't make a difference 'right now' (if that makes any sense?)
You say that most of the answers are counted as incorrect. This suggests that the experiment basically works, but that there's a minor glitch somewhere. Are you sure the loop table is defined correctly? If you look at the log file, are there trials that are erroneously marked as incorrect (correct = 0), or is it only a problem with the feedback? If the latter appears to be the case, do you reset the feedback variables at the start of the block loop (using the reset_feedback plug-in)?
If you can't figure it out, perhaps you could attach a version of the experiment, so that I can see what causes the problem.
Hey Sebastiaan,
Thanks! I understand I should just code 'my' response collection (in a manner similar to how the text_input item would) as an inline script as well (for my purposes, i.e. to do some string formatting before comparing the response to correct_response).
I thought (hoped) the design of the items in the GUI, for example text_input, would be modular enough that I could use them as a starting point (e.g. for setting up a canvas) and then do some self-defined processing of the response instead of the predefined form of logging.
Would there be a way for another workaround? For example, to define a text_input item (either in the GUI or via .opensesame scripting), then look at the Python source code that defines the 'prepare' and 'run' phases for this item (the function calls that draw the display, collect the response, etc.), and use that as a starting point for, e.g., a my_text_input script?
If not, I might be gradually turning this general question thread into a feature-request one, so I'll stop here and get coding.
Thanks again!
Wouter
Hi Wouter,
Ah right, I think I see what you're trying to achieve now. You can handle your own correctness checking using inline_script, without having to re-implement the response item. Basically, you just make sure that the response_item always sets 'correct' to '0' or 'undefined'. And then you adjust the feedback variables afterwards, for example like so:
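A rough sketch (the string formatting here is just a placeholder for whatever comparison you need; it assumes the response item left 'correct' at 0, so this response has not yet been counted as correct in the totals):

# Runs in an inline_script right after the text_input item.
response = str(self.get('response'))
correct_response = str(self.get('correct_response'))

# Placeholder for your own string formatting / comparison.
is_correct = response.strip().lower() == correct_response.strip().lower()

if is_correct:
    exp.set('correct', 1)
    exp.set('total_correct', self.get('total_correct') + 1)
else:
    exp.set('correct', 0)

# Recompute accuracy; avg_rt is unaffected because the response time was
# already counted by the response item itself.
exp.set('acc', 100.0 * self.get('total_correct') / self.get('total_responses'))
exp.set('accuracy', self.get('acc'))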
Hope this helps!
(Btw, if you're interested, pre-release packages of 0.26 are available here: http://files.cogsci.nl/software/opensesame/pre-releases/)
Cheers,
Sebastiaan