Eye-Tracking with EyeLink: How to set backdrop on host PC?
Dear all,
I want to do a reading experiment where each trial consists of the following:
- they see the stimulus: a sentence [pretarget][target][posttarget] in a sketchpad
- they have to press space to continue to the question
- a yes/no question [question] is shown in a sketchpad
- they have to answer the question by either pressing "d" for yes or "l" for no
I use the PyGaze plugins with an EyeLink Portable Duo. The experiment runs without a problem, but I can see neither the sentence nor the question on the host PC (or in the Data Viewer afterwards). I know that I have to use an inline script to set the backdrop, but none of my attempts have worked so far. Currently, I am using this script, placed in the trial sequence before the drift correction and in the run phase:
bd_s = self.copy_sketchpad("sentence")
exp.pygaze_eyetracker.set_backdrop(bd_s)
I get the following error message: AttributeError: 'libeyelink' object has no attribute 'set_backdrop'
I am not really able to program at all, and I am also new to OpenSesame and eye-tracking. How could I solve the problem of setting the backdrop?
And one additional question: ideally, I would also like to see the question after the sentence, but I really do not know how to do this. It is not strictly necessary for my experiment, but I would like to implement it as well if it is not too complicated.
Many thanks in advance!
Vanessa
Comments
Hi Vanessa,
Have you seen this discussion:
It sounds like @Jessica Bourgin managed to set the backdrop fine. She also shared the code that worked for her.
Ideally, I would also like to see the question after the sentence, but I really do not know how to do this.
Not sure what exactly you mean here. Do you mean setting the backdrop both for the sentence and for the question? That shouldn't be a problem: if you can get one to work, the other will work the same way. Or am I misunderstanding?
Hope this helps,
Eduard
Hi Eduard,
thank you for your reply! I had seen this discussion, but I was not sure how I could adapt it for my own experiment, or whether I could use it at all, because I do not have a fixed image but a sketchpad that varies from trial to trial (and, as I said, I am not able to program at all). I have now just copied the whole script, and it says that my pos_x is not defined. I assume that I have to adjust the script somehow so it works with the sketchpad. How would I do that? What would the width and height of a sketchpad be, and can I just hard-code them, like h = xxxx and w = xxxx? Maybe I could use the resolution of my PC? Sorry for my very unknowledgeable questions!
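Would something like this be right for the width and height? I am only guessing here that OpenSesame keeps the display size in the experiment variables var.width and var.height:

# my guess: take the backdrop size from the experiment's display resolution
# instead of hard-coding it (I am not sure this is the right way)
w = var.width
h = var.height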
My second question was meant like this: I have to put the inline script for the backdrop at the very beginning of the trial. So how can I specify in the script that the sentence backdrop is shown until the space bar is pressed, and that after that the backdrop should change to the question? I can understand if this question is too specific and you do not want to write the code just for me! ;) As I said, it is not strictly necessary.
Regards,
Vanessa
Quick update: I got the script running, and it now shows a grey backdrop (instead of the standard black one), but unfortunately I cannot see the sentence on it. Do you have any idea why this could be?
This is now my script:
Many thanks in advance,
Vanessa
Hi Vanessa,
Yeah, I see. You would need to extract the individual RGBA values for each pixel from your canvas. I had a quick look, but I couldn't find an easy way to extract these pixel values from a canvas. @sebastiaan, is there an easy way?
It would be easier if you could provide your sentences as image stimuli. Would that be feasible for your specific experiment?
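For example, you could generate the sentence images once with a small standalone script outside of OpenSesame, roughly like this (just a sketch; the font file, resolution, and file names are placeholders you would need to adapt):

# Standalone sketch (run once, not inside OpenSesame): render each sentence
# to a PNG so the same file can be shown in a sketchpad and later sent to
# the Host PC as a backdrop. Font, resolution and file names are placeholders.
from PIL import Image, ImageDraw, ImageFont

sentences = ["The cat sat on the mat.", "The dog chased the ball."]
width, height = 1920, 1080  # match your display resolution
font = ImageFont.truetype("arial.ttf", 32)

for i, sentence in enumerate(sentences):
    img = Image.new("RGB", (width, height), "white")
    draw = ImageDraw.Draw(img)
    draw.text((100, height // 2), sentence, font=font, fill="black")
    img.save("sentence_%d.png" % i)

You could then add these PNGs to the file pool and show them with an image element in the sketchpad.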
Eduard
Hi @vanessadcx and @eduard ,
There is no easy way to set the OpenSesame canvas as a backdrop on the EyeLink PC. Indeed, as Eduard says, the most practical way would be to use bitmaps for the stimuli and then send these bitmaps to the EyeLink using the pylink API, as you're already doing in the example.
— Sebastiaan
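PS: a minimal sketch of what such an inline_script could look like, assuming that pylink.getEYELINK() returns the connection that the PyGaze plugin has already opened, and that each trial has a pre-made image whose file name is in a loop variable called sentence_image (the variable and file names are just placeholders):

# Sketch only: send this trial's sentence image to the Host PC as a backdrop.
import pylink
from PIL import Image

el = pylink.getEYELINK()  # the active EyeLink connection
img = Image.open(pool[var.sentence_image]).convert('RGB')
w, h = img.size
# Pack each pixel as 0x00RRGGBB, one list per row, which is the format
# that bitmapBackdrop() expects.
pixels = [
    [(r << 16) | (g << 8) | b for (r, g, b) in (img.getpixel((x, y)) for x in range(w))]
    for y in range(h)
]
# Draw the image on the Host PC with its top-left corner at (0, 0).
el.bitmapBackdrop(w, h, pixels, 0, 0, w, h, 0, 0, pylink.BX_MAXCONTRAST)

To also show the question on the Host PC after the space press, you could place a second inline_script with the same code (but the question image) after the keyboard_response and before the question sketchpad.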
Hello Eduard, hello Sebastiaan,
many thanks for your help! I've now solved my problem by using images instead of text and switching to the EyeLink plugins.
Regards,
Vanessa