
Gaze-contingent online stimulus updating

adkinsty Posts: 4

I would like to create a task in which participants are presented with six words, where the word they are currently reading (looking at) is visible, but all other words are masked.

For example:

Not looking at any word: #### #### #### #### #### ####
Looking at word 1: dart #### #### #### #### ####
Looking at word 2: #### home #### #### #### ####
etc.

It is important that the code updates the stimuli quickly enough that reading the words feels natural to the participants. So, as the participant scans from word to word, the stimuli should be masked and unmasked without the participant noticing.

Is this possible to do using PyGaze? If so, do you have any suggestions about where I should start? I will be using an Eyelink eye tracker.

Thank you very much!!!

-Tyler

Comments

  • eduard Posts: 875

    Hi Tyler,

    That is very possible.

    If you use PyGaze to design your entire experiment, you would probably start with an example experiment from the website (www.pygaze.org) and then add the gaze-contingent functionality step by step (see for example this script: http://www.pygaze.org/documentation/examples/#game).
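
    To give you an idea, the bare bones could look something like this (just a rough sketch based on the examples, not tested; the display settings and TRACKERTYPE = 'eyelink' would come from your constants.py):

        from pygaze.display import Display
        from pygaze.screen import Screen
        from pygaze.eyetracker import EyeTracker

        disp = Display()             # display settings are read from constants.py
        tracker = EyeTracker(disp)   # with TRACKERTYPE = 'eyelink' this
                                     # connects to your EyeLink

        tracker.calibrate()          # standard calibration routine
        tracker.start_recording()

        scr = Screen()
        scr.draw_text(text='#### #### #### #### #### ####', fontsize=24)
        disp.fill(scr)
        disp.show()

        print(tracker.sample())      # newest gaze position as an (x, y) tuple

        tracker.stop_recording()
        tracker.close()
        disp.close()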

    Does this make sense?

    Eduard

    Thanked by adkinsty
  • adkinsty Posts: 4
    edited March 28

    Thanks for the reply!

    I think that in order for the stimuli to update without the participant noticing, the code will need to predict where the participant will look next. Say the participant is looking at word one and a saccade begins in the direction of word two; the code should update the stimuli so that word two becomes visible just before the participant actually fixates it. With this in mind, do you still think this is possible to accomplish using PyGaze?

  • eduard Posts: 875

    Hi,

    Well, what you describe is not really possible: you can't predict where participants will look based on where they just looked. However, I don't think you necessarily need to predict it. If you are just very quick with updating the words, you might still be able to hide the gaze-contingent aspect from the participant. The hard part is tweaking the settings (spatial and temporal accuracy) so that it works most of the time.

    The procedure I have in mind is based on continuously sampling the current eye position (pygaze.eyetracker.sample() or something like that) and checking where it is. If it is within the limits of word 1, you show the first word; once the gaze moves on, you update the screen and show the second word instead. Actually, now that I think about it, this should work pretty neatly.
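
    Something along these lines, say (a sketch only; the word list, band width, and build_screen helper are my own placeholders, while Display, Screen, EyeTracker, and Keyboard are the standard PyGaze classes):

        from pygaze.display import Display
        from pygaze.screen import Screen
        from pygaze.eyetracker import EyeTracker
        from pygaze.keyboard import Keyboard

        WORDS = ['dart', 'home', 'lake', 'fork', 'mint', 'rope']  # placeholders
        DISPW, DISPH = 1024, 768        # should match DISPSIZE in constants.py
        REGION_W = DISPW // len(WORDS)  # one horizontal band per word; tweak this

        disp = Display()
        tracker = EyeTracker(disp)
        kb = Keyboard(keylist=['space'], timeout=1)

        def build_screen(visible):
            # one Screen per state: `visible` is the index of the unmasked
            # word, or None for the all-masked display
            scr = Screen()
            for i, word in enumerate(WORDS):
                text = word if i == visible else '#' * len(word)
                x = i * REGION_W + REGION_W // 2
                scr.draw_text(text=text, pos=(x, DISPH // 2), fontsize=24)
            return scr

        # prepare every possible display in advance (see the PS below)
        screens = [build_screen(i) for i in range(len(WORDS))]
        masked = build_screen(None)

        tracker.calibrate()
        tracker.start_recording()

        shown = None      # index of the unmasked word (None = all masked)
        disp.fill(masked)
        disp.show()

        while kb.get_key()[0] is None:    # stop when space is pressed
            x, y = tracker.sample()       # newest gaze sample, in pixels
            if 0 <= x < DISPW:
                # integer division maps the gaze to a word band; leftover
                # pixels on the right are folded into the last band
                current = min(int(x) // REGION_W, len(WORDS) - 1)
            else:                         # blink or off-screen sample
                current = None
            if current != shown:          # only flip when the band changes
                disp.fill(screens[current] if current is not None else masked)
                disp.show()
                shown = current

        tracker.stop_recording()
        tracker.close()
        disp.close()

    The trick is that the loop only flips the display when the gaze crosses into a different word's band, so most iterations cost nothing but a sample() call.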

    Let me know how it is going!

    Eduard

    PS. Make sure that you prepare all the different screens (word 1 shown, word 2 shown, etc.) in advance, otherwise you will have some annoying delays.
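
    A quick way to check whether you are fast enough is to time a single swap between two pre-drawn screens (again a rough sketch; libtime is PyGaze's timing module, and times are in milliseconds):

        from pygaze import libtime
        from pygaze.display import Display
        from pygaze.screen import Screen

        libtime.expstart()
        disp = Display()

        masked = Screen()
        masked.draw_text(text='#### #### #### #### #### ####', fontsize=24)
        word1 = Screen()
        word1.draw_text(text='dart #### #### #### #### ####', fontsize=24)

        disp.fill(masked)
        disp.show()

        t0 = libtime.get_time()
        disp.fill(word1)     # cheap: the Screen was drawn in advance
        disp.show()          # the flip itself waits for the monitor refresh
        t1 = libtime.get_time()
        print('swap took %.1f ms' % (t1 - t0))

        disp.close()

    A reading saccade only lasts a few tens of milliseconds, so ideally the swap completes within one monitor refresh (about 17 ms at 60 Hz).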

    Thanked by adkinsty