
SMI 500: OpenSesame crashes with PyGaze in full-screen mode, but runs fine otherwise.

LJGS Posts: 19

Hi, this seems like a tough one, but I'm wondering if anyone has had any experience with something like this.

My OpenSesame experiment runs fine in full screen until I add any PyGaze items to it. In the simple example below, I have a version without PyGaze that runs fine (stop-signal-task-keypress), and a version with a pygaze_init item added, which already stops me from running in full screen (but note it runs fine when not maximized).

https://www.dropbox.com/s/bhkqhdiyerqk7na/stop-signal-task_keypress.zip?dl=0

Does anyone have any idea what could cause this? The only clue I've got is that the SMI PC tells me 'Windows 7 switched to basic colour' when I run it with PyGaze, due to something in the iView program, and I get no such message for the keypress version.

Comments

  • LJGS Posts: 19

    Oh, it looks like the problem is more minor than I first thought. If I press "escape" twice, I am returned to the full-screen experiment and can carry on from there. So it looks like the eye-tracker calibration is taking over the screen and I just need OpenSesame to grab it back. Would this be an OpenSesame question rather than a PyGaze one?

  • LJGS Posts: 19

    Hi all, for posterity: I resolved this by switching my back-end to PyGame. Now I am able to go full screen after the SMI calibration runs, although I have to alt-tab back in, which is a little inconvenient.

    Cheers

  • eduard Posts: 848

    Great that you sorted it out!

  • LJGS Posts: 19

    Cheers.

    I actually have another question I am hoping to ask - it's a quickie, and you were helpful before, so I might chuck it here rather than spam the front page with threads.

    Do you know how to set up the SMI, from within OpenSesame, to automatically keep attempting calibration until the calibration is valid? When I run calibrate, it automatically switches auto-accept to 'on' in my SMI iView program, even if I have just switched it off, and then it only has one go at it.

    Eye tracking is tough and buggy stuff! I am very thankful to have this package to make it a little bit easier.

  • sebastiaan Posts: 2,701

    I don't have an SMI to test this with, but in principle eyetracker.calibrate() should return True if the calibration was successful and False otherwise. Therefore, a script like this should repeat calibration until success. But again, I'm not entirely sure how the SMI behaves, and whether this accomplishes what you need. @Edwin?

    while not eyetracker.calibrate():
        pass
    


  • cesco Posts: 24
    edited August 2

    My script might help you out. However, I use it when calibration is valid but the errors are too large for my liking. It is essentially a function that you can use anywhere in the script: you can use it between blocks or you can activate it whenever you want to recalibrate (requires a few lines in an inline script). I put it directly after calibration so it gives me the option of rerunning it.

    It makes use of the GazeCursor function by @Edwin and displays a fixation dot with a circle drawn around it (not coincidentally this circle corresponds to my fixationAOI). I ask my participant to look at the fixation dot. If the gaze cursor is 'jumpy' or outside my AOI, I can press C to recalibrate or D for drift correction. You can press X to exit / continue.

    Let me know if you think that this might help and I'll post it here.

  • LJGS Posts: 19

    Thanks for the suggestion, sebastiaan. Unfortunately it seems that the SMI returns True no matter what happens during calibration, so I cannot vet the calibrations that way. I am also having problems with drift_correct: it seems to lag out the task. Would I get bad data running without a drift_correct on every trial? Currently I have a fixation cross for them to look at, and once eyetracker.sample() determines they have looked at it, the task moves on. From what I understand, drift_correct is much better than that.

    Hi Cesco, that might be my way to keep calibrating until I get good measures. I do think it may help.

    Cheers

  • cesco Posts: 24
    edited August 3

    Hi @LJGS. A few suggestions before I post the code. I'm having similar issues with drift corrections with the SMI. I suppose this is due to PyGaze's support for SMI only being experimental. Also keep in mind that you can't do drift corrections whilst iView is recording. Furthermore, I found that eyetracker.sample() plus a timer (say a 100 ms fixation) is very memory-heavy. Instead, I use the eyetracker.wait_for_fixation_start() function. This may decrease timing precision, but that's not essential for my paradigm, and it vastly increased stability for me.

    1. I tend to have an "Import and Settings" script at the beginning of all my experiments. These are the relevant lines:

    # Importing the pygaze AOI functionality for contingency. 
    from pygaze.plugins.aoi import AOI
    
    # Displays cursor at position of gaze
    from pygaze.plugins.gazecursor import GazeCursor as Cursor
    
    # Basic screen settings based on my screen. Not strictly necessary because we could use 'self' variables, but I wanted to keep the same size even when debugging on a different screen size
    my_resolution = (1680, 1050)
    my_width = my_resolution[0]
    my_height = my_resolution[1]
    my_center_hor = (my_width/2)
    my_center_ver = (my_height/2)
    
    # This keyboard is my keyboard
    my_keyboard = keyboard(timeout=0)

    2. Then after Pygaze initialisation (and calibration), I put the following function in an inline script called calib_check:

    def calibration_check():
    
        """Called by the q key in the experiment. Presents a screen with a circle   
        that indicates the AOI, a fixation dot, and live gaze position. From    
        here, different actions are possible: C = calibrate, D = drift correction,
        x = exit. 
    
        arguments
        None
    
        returns
        Nothing
        """
    
        while True:
            # Get gaze position from PyGaze and draw a fixation dot (see the PyGaze
            # functions) and a circle around it. If you have an AOI at that position,
            # you could use your AOI coordinates to be fancy.
            gx, gy = eyetracker.sample()
            my_canvas.clear()
            my_canvas.circle(840, 525, 100, fill=False, color='red')
            my_canvas.fixdot(my_center_hor, my_center_ver, color='white')
            my_canvas.fixdot(gx, gy, color='green')
            my_canvas.show()    
    
            key, timestamp = my_keyboard.get_key()
            if key == 'x': # Exit / continue
                break
            elif key == 'c': # Calibrate
                eyetracker.calibrate()
            elif key == 'd': # Drift correction: buggy with SMI. At least avoid it whilst it is in a recording state.
                eyetracker.drift_correction(pos=None, fix_triggered=True)

    3. Between blocks, I call the following in an inline script called calibration_time_come_on (sorry not sorry):

    # Prepare
    calib_check_canvas = canvas()
    calib_check_canvas.text('Experimenter: Recalibration? Yes = Q, no = X')
    
    # Run
    calib_check_canvas.show()
    
    while True:
        key, timestamp = my_keyboard.get_key()
        if key == 'x':
            # Tudu
            break
        elif key == 'space':
            # Tudu du
            break
        elif key == 'q':
            # It's a calibration
            calibration_check()
    
    calib_check_canvas.clear()

    4. As another example, I inserted the following in every inline script that works with gaze contingency, so I can potentially recalibrate if things appear to be a bit iffy. This will result in a break in your current trial before picking up where you left off, but I only used it during debugging, at the end of trials, or on the first fixation screen of a trial at worst.

     # Insert your gaze-contingent loop here
        elif key == 'q':
            calibration_check() 

    Let me know how you're getting on. Apologies for any sloppy coding; things could probably be more efficient (and I'd like to hear how). I hope it makes sense!

  • cesco Posts: 24
    edited August 4

    I just realised that in the function above, the line

    my_canvas.circle(840, 525, 100, fill=False, color='red')

    should be

    my_canvas.circle(my_center_hor, my_center_ver, 100, fill=False, color='red')
  • LJGS Posts: 19

    Cesco, this function is great. It gives me a very good idea of how my calibration went and allows me to keep re-running it until I get a good one. One thing, though (in case anyone else uses it): I had to add these lines to make it run:

    from openexp.canvas import canvas
    my_canvas = canvas(exp)
    

    I don't really know what they do. Anyway, thanks very much. I am still stuck on the lack of drift_correct; maybe it will suffice to just have an eye-triggered fixation point trigger the trial. My task just requires them to look left or right to respond - only the x axis, and it can be relatively coarse. RT matters, though. Any opinions on that?

  • cesco Posts: 24

    Hi Luke,

    I don't think you need to import canvas specifically, but I did forget to include the my_canvas = canvas() line above. Good job picking that up.

    There are a number of ways to use gaze contingency to trigger the trial, but a disadvantage (to me) of drift correction is that, even when it works, it needs to take place outside a recording. If timing precision matters to you, I would avoid that.
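
    If you do want to attempt a drift correction mid-experiment anyway, the pattern would be something like the sketch below: stop the recording, run the drift correction, and start recording again. This is untested on my side, so I can't vouch for how gracefully the SMI back-end handles it.

    eyetracker.stop_recording()
    eyetracker.drift_correction(pos=None, fix_triggered=True)
    eyetracker.start_recording()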

    Either way, you'll need to import aoi from pygaze. You can then define your aoi as the centre of your screen (it is common to use an area of about 1º visual angle, but go by the literature that's relevant to your experiment).

    My preferred method is presenting the sketchpad with the fixation dot for a duration (say 100 ms, 500 ms, a random duration, etc.) and then using an inline_script that makes use of the eyetracker.wait_for_fixation_start() and aoi.contains() functions.
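
    Roughly like this (an untested sketch from memory, so double-check the AOI arguments and return values against the PyGaze docs; it assumes the my_center_hor / my_center_ver variables from my settings script above):

    from pygaze.plugins.aoi import AOI
    
    # A 100 x 100 px square AOI around the centre of the screen.
    # (As far as I know, pos is the top-left corner for a 'rectangle' AOI.)
    fix_aoi = AOI('rectangle', pos=(my_center_hor - 50, my_center_ver - 50), size=(100, 100))
    
    # Wait until a fixation starts inside the central AOI before moving on
    while True:
        fix_time, fix_pos = eyetracker.wait_for_fixation_start()
        if fix_aoi.contains(fix_pos):
            break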

    Alternatively, if the fixation itself needs to have a certain duration, you could use a while loop that starts a timer when eyetracker.sample() is within the AOI and exits the loop when the timer hits your required duration.
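
    Again only a rough sketch (it assumes OpenSesame's clock object and the fix_aoi defined above), but the idea would be:

    required_fix = 100  # required fixation duration in ms
    fix_onset = None
    while True:
        if fix_aoi.contains(eyetracker.sample()):
            if fix_onset is None:
                fix_onset = clock.time()  # gaze just entered the AOI: start the timer
            elif clock.time() - fix_onset >= required_fix:
                break  # gaze stayed inside the AOI long enough
        else:
            fix_onset = None  # gaze left the AOI: reset the timer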

    For the looking-left-or-right part, you could look at eyetracker.wait_for_saccade_end().
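
    For example (untested; as far as I know wait_for_saccade_end() returns the end time plus the start and end positions of the saccade, but check this for your version):

    sacc_time, sacc_start, sacc_end = eyetracker.wait_for_saccade_end()
    if sacc_end[0] < my_center_hor:
        response = 'left'
    else:
        response = 'right'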

    I hope that's clear, I don't have access to my code at the moment.

  • LJGS Posts: 19

    Hi all,

    The eyetracker experiment is going well now, hopefully close to done. I am hoping to run something by anyone who is familiar with the best way to record responses using fixations.

    Currently, I am aiming to record when participants look left or right in a stop-signal experiment, by using eyetracker.sample() in the generator function for the stop-signal coroutine. If the x coordinate of the sample gets past a certain point, I record a response. The relevant generator function is below.

    One issue I had is that blinking, or looking away from the monitor, seemed to register a response. I believe what was happening was that doing so returned a sample of (0, 0), which would actually come out as a left response. I think I resolved that by ignoring any cases where eyetracker.sample() spits out (0, 0) (in the function below), but I'm wondering whether that problem reflects a more general problem with this approach? Are there other factors I'm missing? I am guessing that these types of issues would have come up when coding the more standard ways to record fixation responses with PyGaze.

    relevant function:


    def break_coroutines():
        SS_staircase = self.get('SS_staircase')
        fixposx = my_center_hor
        # range surrounding fixation cross
        xmin = fixposx - 300
        xmax = fixposx + 300
        # approximate location of fixation cross on screen ... centre
        # left and right response cutoff locations
        leftx = xmin
        rightx = xmax
        response = None
        exp.set('response', None)
        # var.set(var.response, response)
        yield
        while True:
            localpos = eyetracker.sample()
            localx = localpos[0]
            # localy = localpos[1]
            if localpos[0] != 0 and localpos[1] != 0:
                if localx < leftx:
                    if self.get('correct_response') == 'left' and self.get('stop') == 0:
                        exp.set('correct', 1)
                    else:
                        exp.set('correct', 0)
                    end = time.time()
                    exp.set('end', end)
                    exp.set('response', 'left')
                    response = 'left'
                elif localx > rightx:
                    if self.get('correct_response') == 'right' and self.get('stop') == 0:
                        exp.set('correct', 1)
                    else:
                        exp.set('correct', 0)
                    end = time.time()
                    exp.set('end', end)
                    exp.set('response', 'right')
                    response = 'right'
            if response is not None:
                print localpos
                var.set(var.response, response)
                items['stop_signal_coroutines'].var.duration = 0
            # Also break if the coroutines item signals that it's over
            keep_going = yield
            if not keep_going:
                exp.set('end', time.time())
                break

    relevant osexp:

    https://www.dropbox.com/s/n6szte6z3fcq0a0/SS_neardone.osexp?dl=0

  • eduard Posts: 848

    Hi,

    If you are only interested in whether the eyes are to the left or the right of the middle, this procedure is fine conceptually. If you want to know when fixations occurred, you would want to make sure that the current eye position is stable across time (the end coordinates after, say, 50 ms are not much different from those at the beginning).

    To make your check a little more robust, you can define regions of interest and use them in a function, like this for example:

    def checkSacDirection(curpos, leftbox, rightbox):
        """
        curpos:         a tuple with the current coordinates (as given by eyetracker.sample())
        left/rightbox:  a list with 4 values, specifying the x, y of the top-left corner of a
                        rectangle defining a ROI, and its width and height
        returns None if the eyes are not clearly left or right, 'left' or 'right' otherwise
        """
        # The function only checks the x-dimension; if you also want to check y, you have to add those conditions
        if curpos[0] > leftbox[0] and curpos[0] < (leftbox[0] + leftbox[2]):
            return 'left'
        elif curpos[0] > rightbox[0] and curpos[0] < (rightbox[0] + rightbox[2]):
            return 'right'
        else:
            return None
    

    You can put this function in a while loop, like so:

    timeout = False  # change to True if the trial duration is exceeded
    while True:
        fixatedSide = checkSacDirection(eyetracker.sample(), [0, 0, 50, 50], [100, 100, 50, 50])
        if fixatedSide is not None or timeout:
            break
    

    In case you don't do that yet: if you are interested in a left/right decision, it is important to make sure that the eyes were in the middle at the beginning of every trial. Otherwise, you might get biases towards one side.

    Hope this helps.

    Eduard

  • LJGS Posts: 19
    edited August 18

    Cheers Eduard, that looks good; I will implement it.

    One thing I don't quite understand:

    "If you want to know when fixations occurred, you would want to make sure that the current eye position is stable across time (end coordinates after, say 50 ms, are not much different than in the beginning."

    Are you suggesting I log the eye position once at the start of the trial, and again at 50 ms into the trial, to determine how stable it is? When I check my calibrations (by drawing eyetracker.sample() on screen as cesco showed above), I definitely see some degree of uncertainty around the detected fixations: they seem to rapidly jump around a reasonably small fixation area. On a 1680 x 1050 display they seem to jump around within roughly a 60 x 60 pixel area. Is that weird? I had assumed it was a fairly standard degree of eye-tracking error.

    Another thing: is there any way I could detect when participants blink or the eye tracker loses contact with them? I have noticed that if I blink repeatedly, the sample will sometimes throw some weird numbers and hence I may spuriously record a response. I think it is because my eyes look somewhere weird after blinking, or because the eye tracker is fooled by my eyes only being half open. I am wondering if I should do so by logging when eyetracker.sample() returns (0, 0), but maybe there is something more sophisticated.

    Thanks for the tip about keeping gaze in the middle. I think I am already ensuring that by having them trigger each trial by looking at a fixation dot.

  • eduard Posts: 848

    What I meant is that just taking a sample of the current eye position doesn't tell you whether the participant was fixating at that location or whether his/her eyes just happened to be at that location. It is possible that the eye position is part of a saccade or even a blink (before the eyes are entirely closed, the eye tracker doesn't necessarily know whether a blink is a blink or a downward saccade).

    By sampling the eye position twice within 50 ms, you can evaluate how much the eyes moved: if they moved a lot, it was probably a saccade; if they didn't move much, it was probably a fixation. See what I mean?
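
    Something like this, for example (only a sketch; the 50 ms interval and the 30 px threshold are arbitrary, and it assumes OpenSesame's clock object):

    x1, y1 = eyetracker.sample()
    clock.sleep(50)  # wait 50 ms between the two samples
    x2, y2 = eyetracker.sample()
    if abs(x2 - x1) < 30 and abs(y2 - y1) < 30:
        # the eyes barely moved, so treat this as a fixation at (x2, y2)
        fixating = True
    else:
        # a large displacement suggests a saccade (or a blink) rather than a fixation
        fixating = False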

    Eduard

  • LJGS Posts: 19

    Ah, yes, I see. We are using a saccade to the left or right as the response, not a fixation (the idea is to get highly ballistic responses that participants cannot 'stop' once they are already in the pipeline); my bad, I used the wrong term in the previous post. I am hoping that your code, which only accepts eyetracker.sample() positions within an area of interest as a response, will make the procedure fairly robust against blinking. I think all I can do is test and see. Thanks very much for your help.
