
How to implement feedback per response?

Hello!

For one condition of a simple reaction time experiment, I'd like to implement feedback per response. The participants are supposed to press the Space bar in one second intervals as accurately as possible. After every keypress, a sketchpad should show how many ms the response deviates from the desired second.

I started using "Your response time was [response_time] ms." on a sketchpad (although this is not the deviation from the one second interval). I noticed that the shown response time does not fit the actual response time. For example, when I wait 5 seconds to press the space bar, the feedback sometimes shows numbers like 365 ms. When I respond extremely fast, the feedback sometimes shows numbers like 2514 ms. Can someone explain to me what I am doing wrong or even better, how to implement feedback per response that shows the deviation from a one-second interval?

Comments

  • Hi @SQuestion,

    To provide feedback on single trials, you should use a feedback object instead of a sketchpad. Sketchpads are prepared before the trial runs, so their content cannot be updated as the trial runs; feedback objects can. You can then display the score and the response time of that single trial by accessing the correct and response_time variables (you do that by inserting the text "[correct]" and "[response_time]" respectively on the feedback object).

    You can find more information and a working example with trial-based and block-based feedback in this previous post: https://forum.cogsci.nl/discussion/comment/23985#Comment_23985

    Hope this helps!

    Best,

    Fabrice.

    If you found my reply helpful and wish to invite me to a coffee, you can do so here πŸ˜‰ Buy Me A Coffee

  • PS: you may also be interested in the description of the response vs feedback variables here: https://osdoc.cogsci.nl/3.3/manual/variables/, and the difference between feedback and sketchpad objects here: https://osdoc.cogsci.nl/3.3/manual/stimuli/visual/


  • Thank you!

  • Hi @SQuestion,

    You're welcome. Glad it helped.

    Good luck with your experiment!

    Fabrice.

    PS: just FYI, when you reply in the forum, if you type @ followed by the first letters of your correspondent's username, you'll see a menu appear that lets you select it. That way, they get notified of your message.

    Example: if you type @fa, you'll see this menu appear:


  • edited September 8

    Hi @Fab,


    There still is a problem with my feedback per response. I tried to implement the feedback item, but when I run the experiment, the feedback does not match the actual response time. I tried to synchronise my key presses to one-second intervals using a timer, but the feedback shown is still around 400-500 ms. I suspect the problem may be that the feedback is shown for 350 ms and this distorts the measurement, but I am not sure.

    I attached the file, maybe you know how to change that?

    The response_time variable lags about 590 ms behind the actual interval.

    Kind regards!

  • Hi!

    After checking your experiment, I do believe that the response time you get is accurate. If you check the data file, and subtract "time_flicker" from "time_stimuli" you can check that the value is the same as (or very very close to) the one recorded in "response_time".

    You could do another check: in the keyboard response item, set "timeout" to a specific value, for example, 500. Then don't hit any key after the stimuli, just wait it out. Your response time shown in feedback should then be very very close to 500 ms (for my system, it is actually 502 ms, it depends a bit on the screen you have).

    If you want to add conditional feedback, you need to add the condition to the feedback, which could be something like,

    show_if="[correct] = 1 and [response_time] > 2000"
    

    as the example below shows:

    draw textline center=1 color="#00ff00" font_bold=no font_family=arial font_italic=no font_size=75 html=yes show_if="[correct] = 1 and [response_time] >2000" text="Good" x=0 y=0 z_index=0

    Hope this helps :)

  • Hi @SQuestion,

    I'm not sure I understand your description of the problem (I think there is some confusion about what you are measuring versus what you actually want to measure; see below). I couldn't run the whole task because the .ogg file was not included in the file pool, so I stripped your task down to just the _experiment loop to try it out.

    I first added a line of JavaScript just to output the response time to the console and compare it to what is displayed on the feedback screen; they matched. I then looked at the data and, as an approximation of the response time, subtracted the time at which the flicker sketchpad appears from the time at which the stimuli sketchpad appears. Here is my calculation, where you can see they match (roughly; there is a tiny difference because I'm not actually comparing to the time when the subject presses the space bar but to when the flicker comes on).

    In other words, the response time displayed on the feedback is correct: it shows the time elapsed between the presentation of the stimuli sketchpad and the subject's response (or more precisely: between the moment the keyboard input object starts and the moment the response is registered, but since your sketchpad has a duration of 0 ms, it amounts to the same thing). That is what response_time corresponds to, so it is doing what it is designed to do. There is no distortion due to the feedback object; the feedback simply does not affect the computation of the response time in the keyboard object.

    I think you might be confused about what you are measuring versus what you are actually aiming to measure. What you are measuring is the time the subject takes to respond once the keyboard object is executed. I have a feeling that what you're actually after is something different: the time between one response and the next. There are probably several ways of achieving this, but given the current architecture of your task, the way to go would be to use code to manually create timestamps.

    Here's the logic: you note the time at which a response is produced and compare it to the time at which the previous response was produced. Practically, this means taking a timestamp1 when the response is produced and, if timestamp2 exists (on the first trial, it won't), computing the difference timestamp1 - timestamp2 and storing it somewhere, for example in a variable called inter_response_interval. You then set timestamp2 to the same value as timestamp1; that way, the time of the latest response is kept somewhere other than timestamp1, which will take a new value when the next response is produced.

    So far I've played around with JavaScript more than with Python, and I don't know how to access the time elapsed since the experiment's onset in JavaScript, so I looked up how to do it in Python and had a go at implementing the solution described above:

    # Take a timestamp for the current response
    timestamp1 = self.time()
    
    # On the first trial, timestamp2 does not exist yet
    try:
        timestamp2
    except NameError:
        timestampdif = ""
    else:
        timestampdif = timestamp1 - timestamp2
    
    # Store the interval as an experimental variable and log it
    self.experiment.set('inter_response_interval', timestampdif)
    print(timestampdif)
    
    # Remember this response's time for the next trial
    timestamp2 = timestamp1
    

    This appears to work. You now get a variable called inter_response_interval that stores the interval between responses (it takes an empty value on the first trial of a loop). The code checks whether timestamp2 is defined: if it is not, it sets the timestamp difference to ""; otherwise, it calculates the difference between timestamp1 and timestamp2.
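    To make the bookkeeping easier to see, the same logic can be sketched in plain Python outside OpenSesame. The IntervalTracker class and its register() method below are hypothetical illustrations (in OpenSesame you would call self.time() instead of passing timestamps in):

```python
class IntervalTracker:
    """Tracks the interval between successive responses."""

    def __init__(self):
        self.previous = None  # timestamp of the previous response, if any

    def register(self, now):
        """Record a response at time `now` (in ms); return the interval
        since the previous response, or "" on the first response."""
        if self.previous is None:
            interval = ""  # no previous response yet
        else:
            interval = now - self.previous
        self.previous = now
        return interval


tracker = IntervalTracker()
print(tracker.register(1000))  # "" (first response: no interval yet)
print(tracker.register(2010))  # 1010
print(tracker.register(3005))  # 995
```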

    You can download my version here (you'll need to update your version of OpenSesame to be able to read .osexp files):

    Hope this helps!

    Fabrice


  • Hi @Fab,


    Thank you so much, this worked perfectly so far. There is only one last problem that remains open.

    Obviously, the feedback for the first keyboard response is empty, and the first feedback of the experimental block is extremely large (since it measures the interval from the last response of the practice block to the beginning of the experimental block - got it! :)). I was wondering whether it is possible to "hide" these two feedback displays, since they might confuse the participants. So, is there a way to not display the first feedback of each practice and experimental block? It would probably be enough if I could change the colour of those two specific feedback displays to white (so that they are hidden). I googled my question already but did not figure out how to achieve it.


    Best wishes!

  • Hi @SQuestion,

    Glad I could be of help! And many thanks for the β˜•οΈ, that's really nice!

    You're right, there is the problem of the feedback on the first trial of a block. As you correctly pointed out, one problem is that the feedback is either empty or that it takes into account the time stamp from the last trial of the previous block.

    There are different possible ways of going about solving that. I created a version of your task with a practice and a test phase to try it out, and I demonstrate one solution below that I think is simple and does the job.

    Before I do, let me share something I discovered while in the process of researching this. Declaring a variable in OS using Python can be done as I used in my earlier example:

    self.experiment.set('inter_response_interval', timestampdif)
    

    Or in an easier way:

    var.inter_response_interval = timestampdif
    

    Much simpler. I didn't know that - I'm only just starting to play around with Python. I updated the code from my previous example to use this simpler method.

    Now, on to your task and the challenge at hand...

    Basically, what you want is for the task to know when a trial is the first of a block and to then condition the feedback presentation as a function of it.

    Tracking what trial we're at

    This could be done by setting up the content of a variable to a certain value (e.g., "first_trial") before the block begins, and then, at the end of a trial, change it to something else (e.g., "first_passed"). That would work, but I opted for something slightly different and equally simple that could prove more informative later on when you analyze the data: a trial counter that is reset to 0 before each block.

    The data log already contains something similar, but not quite what we need. For example, it automatically contains a counter for every object in your task (e.g., the stimuli sketchpad). The problem is that this counter starts at 0 at the onset of the experiment and keeps incrementing across the whole experiment (it doesn't get reset before each block).

    What we want is a counter that gets reset before each block. We can do that easily with code creating a blocktrial_counter and setting it to 0:

    var.blocktrial_counter = 0
    

    ... and placing it before a block starts:

    We copy the same Python object and paste it before the next block (using a linked copy, which means that a modification to any instance of this object is automatically applied to the other copies - this can be very useful when you want the same object to be used in several places and want to be able to update all of them by editing just one):

    Next, we make sure to increment the blocktrial_counter at the end of every trial:
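    The increment itself is a single line in an inline_script at the end of the trial sequence: var.blocktrial_counter = var.blocktrial_counter + 1. Outside OpenSesame, the whole counter lifecycle can be sketched in plain Python, with SimpleNamespace standing in for OpenSesame's var object (the block and trial counts are arbitrary for illustration):

```python
from types import SimpleNamespace

var = SimpleNamespace()  # stand-in for OpenSesame's var object

log = []
for block in range(2):                  # e.g. a practice and a test block
    var.blocktrial_counter = 0          # reset before each block starts
    for trial in range(3):              # three trials per block
        log.append(var.blocktrial_counter)
        var.blocktrial_counter += 1     # increment at the end of each trial

print(log)  # [0, 1, 2, 0, 1, 2]
```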

    To check that the counter works, I output its content to the console at the onset of the timestamp_response Python object:

    print(var.blocktrial_counter)
    

    It works. We now have a trial counter that is 0 for the first trial of a block, increases in steps of 1 unit every trial of that block, and gets reset to 0 before the next block.

    This is what the counter will look like in the raw data:

    Now on to the issue of the feedback display...

    Conditioning the feedback appearance to the value of the trial counter

    Here again, there are different ways to achieve this. One could have been to set a variable to contain a message that would be different for the first trial (an empty message) and the remaining trials (e.g., "You have taken ... ms since your previous response"), and to display that message on every trial.

    I chose a simpler method you might not have used yet and that may be useful to you in the future: to condition the display of a sequence object.

    Sequences have a "run if" property that allows you to decide under what condition each object in a sequence is run. By default, this property is set to "Always" for every object, but you can change it and use conditions based on variable tests. In this case, I tell OS to run the feedback object only if the blocktrial_counter is greater than zero. That is, it will not be run for the first trial of a block:
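    For reference, and assuming the counter is named blocktrial_counter as above, the "Run if" field of the feedback object would contain an expression along these lines:

```
[blocktrial_counter] > 0
```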

    Et voilà! You have a task that differentiates the first trial of a block and only displays feedback for trials other than the first.

    If you wanted to display a different feedback object for the first trial, all you'd have to do is create another feedback object, place it just before or after the current one, and set its "Run if" to [blocktrial_counter]=0. This method is very handy for showing different feedback for trials with different characteristics. It works for all objects, not just feedback (for example, if you wanted a sound to be played only under certain conditions, you could condition the presentation of the sound object using its "Run if" property).

    Some final comments about the solution described in this post...

    Note that while the feedback is not displayed on the first trial of a block, the code contained in timestamp_response does run every time, which means that a value is set for the inter_response_interval variable every time. So, in your data log, you will see a value for that variable even on the first trial of a block (it will be empty for the first trial of the whole experiment, and it will have a larger value for the first trial of the next block) - see the data output below. Just ignore these values in your analysis; the trial counter makes it easy to identify the first trials (trials with a blocktrial_counter value of zero).

    In order to make it easier for you to analyze your data and discard the data from the practice trials, I added a variable that codes for the experimental phase (Practice vs Test). A single code line does the trick.

    I then did something similar before the experimental block. That way, your output will contain an exp_stage column that identifies Practice and Test trials.
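    A sketch of the idea, assuming the variable is named exp_stage as above (SimpleNamespace again stands in for OpenSesame's var object; in the actual task, each assignment would sit in its own inline_script):

```python
from types import SimpleNamespace

var = SimpleNamespace()  # stand-in for OpenSesame's var object

# In an inline_script placed just before the practice block:
var.exp_stage = "Practice"

# ...and in a second inline_script placed just before the test block:
var.exp_stage = "Test"

print(var.exp_stage)  # Test
```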

    This is what it will look like in the raw data:

    You can download my example here:

    Hope this helps. Good luck!

    Fabrice.


