
[open] Mousetracking and Visual World Paradigm

edited March 2013 in OpenSesame

Good afternoon,

I am attempting to use OpenSesame for a mouse-tracking experiment in the visual world paradigm. The trial procedure is basically as follows:

Participant sees a fixation dot for a fixed amount of time.
After the delay, a sound file plays while the fixation dot remains.
They click on the dot and three pictures appear, one each in the lower left, top right, and bottom right corners.
A sound file also plays immediately.
They move the mouse to click on one of the pictures.
End of trial. Repeat randomly with other stimuli.

Most of this is quite straightforward, though I am hitting a few problems I've been unable to solve so far. I will mention the biggest one first.

1) I can display three images in the right locations using the standard Sketchpad. Plus, I can make the fixation dot follow the mouse until a mouse click occurs. I am using the inline code from http://osdoc.cogsci.nl/python-inline-code/mouse-functions/ for the mouse action. What I don't understand yet is how to integrate the two, if that's possible. For instance, the "follow the mouse" inline code continually clears the canvas (I could remove that line if needed). I assume I need to draw the canvas inside the same inline code that handles the mouse tracking, but my attempts to do that so far using the documented canvas functions have been unsuccessful.

2) Displaying the mouse during a fixation cross. I'm guessing this one is trivial to the knowledgeable. The fixation dot appears and remains until a mouse click. The purpose of that is so that the mouse is at the center when the pictures appear and the sound sample plays. However, the mouse does not show during this fixation-cross period, so the participant doesn't know where their mouse is. (It does show with my tracking inline code.) I've been trying to use set_visible, but I'm apparently not placing it in the right location.

3) Finally, I'm having trouble getting it to write the continuous mouse positions (pos and time) to the log file. Advice here?

Apologies that my first support request is so needy, but I am not solving these problems myself fast enough for a student who needs this up and running.

Thanks for any time you have for this,

Hunter

Comments

  • edited March 2013

    Hi Hunter,

    Welcome to the forum!

    I can display three images in the right locations using the standard Sketchpad. Plus, I can make the fixation dot follow the mouse until a mouse-click occurs. (...) What I don't understand yet is how to integrate the two if possible.

    Generally speaking, if you do part of the drawing with an inline_script item, you have to do all of it that way (at least for things that happen simultaneously). So instead of a sketchpad item, you would need to use openexp.canvas, as described here:

    Basically, instead of drawing just the fixation dot, you would also draw the three images to the canvas and omit the sketchpad altogether.
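
    For instance, a minimal sketch of such an inline_script might look like the following (the picture file names and positions are just placeholders for images in your file pool):

    from openexp.canvas import canvas
    
    w, h = exp.resolution()
    my_canvas = canvas(exp)
    #fixation dot in the center of the screen
    my_canvas.fixdot(w/2, h/2, color='black')
    #the three pictures; file names and positions are placeholders
    my_canvas.image(exp.get_file('pic_bottomleft.png'), x=0.15*w, y=0.85*h)
    my_canvas.image(exp.get_file('pic_topright.png'), x=0.85*w, y=0.15*h)
    my_canvas.image(exp.get_file('pic_bottomright.png'), x=0.85*w, y=0.85*h)
    my_canvas.show()

    The mouse-polling loop can then run after my_canvas.show(); if you keep the mouse-contingent dot, redraw the images on every iteration rather than only clearing the canvas.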

    Incidentally, might I ask why you are drawing a mouse-contingent fixation dot, instead of using the regular mouse cursor? If the mouse cursor doesn't show (after calling mouse.set_visible()), this might be due to the back-end. Some back-end configurations prevent the mouse cursor from being shown (in fullscreen only), in which case you could try a different back-end.
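
    In an inline_script, showing the regular cursor comes down to something like this (a minimal sketch):

    from openexp.mouse import mouse
    
    my_mouse = mouse(exp, visible=True)
    my_mouse.set_visible()  #show the system cursor (back-end permitting)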

    However, the mouse does not show during this fixation cross time, so the participant doesn't know where their mouse is.

    The easiest way is probably to insert a mouse_response after the item used to draw the fixation cross (a sketchpad, probably?) and tick the 'Visible mouse cursor' box. Make sure to set the duration of the preceding item to 0, so that it advances immediately to the mouse_response.

    Finally, I'm having trouble getting it to write the continuous mouse positions (pos and time) to the log file. Advice here?

    Polling the mouse position is done with mouse.get_pos(). However, logging a continuous signal into a row-per-trial logfile is problematic, because it's not clear how the data should be organized in that case. Ideally speaking, how would you like to organize your data file?

    Good luck and cheers!

    Sebastiaan

  • edited 10:36AM

    Hi Hunter

    For an experiment, I did something similar to what you need to do. The problem I had using openexp.canvas is that there's a lot of jitter in the cursor trajectory, so I preferred to use Pygame in an inline script to present the mouse cursor.

  • edited March 2013

    I think I can help with the data logging here.
    I've been building a template for something similar for touchscreens (so I haven't needed to show the cursor).
    What I've done is: after the stimuli have been presented (most conveniently with the sketchpad, although you can control this in an inline script for more flexibility), I create an inline_script object called "Tracker".
    In the prepare phase of this, I put:

    global xList, yList, tList
    xList, yList, tList = [], [], []
    

    which sets things up for logging.

    In the run phase, use a variation of what's below.

    start = self.time()
    tracking = True
    while tracking:
        position, timestamp = my_mouse.get_pos()
        x, y = position
        #print str(x)+', '+str(y)
        t = timestamp - start
        #uncomment this to print to the debug window.
        #print str(t)+', '+str(x)+', '+str(y)
        xList.append(x)
        yList.append(y)
        tList.append(t)
        if x < 350 and y < 175:
            resp = 1
            print 'left'
            tracking = False
            break
        if x > 930 and y < 175:
            resp = 2
            print 'right'
            tracking = False
            break
        #This dictates sampling rate. 50 hertz is more than enough
        self.sleep(20)
    my_canvas.clear()
    print 'answered'    
    self.experiment.set("response", resp)
    if resp == self.get('correct'):
        self.experiment.set("accuracy", 1)
    else:
        self.experiment.set("accuracy", 0)
    self.experiment.set("xTrajectory", str(xList))
    self.experiment.set("yTrajectory", str(yList))
    self.experiment.set("tTrajectory", str(tList))
    self.experiment.set("position", position)
    

    Because on the touchscreen, I just need the cursor to be over the response buttons in the two corners to register a response (touching the screen is one big click).
    As of today, I know that you can place

    from pygame.mouse import get_pressed
    global get_pressed
    

    at the start of your experiment somewhere, call it in the 'while tracking' loop above as

    lclick, mclick, rclick =  get_pressed()
    

    and change my:
    if x < 350 and y < 175:
    to
    if x < 350 and y < 175 and (lclick == 1):

    Hope this helps!
    Eoin.

    Edit: You obviously need to stick a logger item after this each time, and log xTrajectory, yTrajectory, tTrajectory, response, accuracy, and position.

  • edited 10:36AM

    Thanks, everyone, for your assistance and apologies for being so slow to get back. Basically, I'm able to work on research Saturday - Monday and then disappear into lectures Tuesday - Friday right now. It's Saturday here in New Zealand and so I'm back on this. :)

    First, before I forget: Eoin, my collaborator and I have been working on an open-source Android platform that's under development (I've seen OpenSesame's work in this area as well). The project is located at https://github.com/Otago-PsyAn-Lab/Otago-PsyAn-Lab. Yell if you'd like to discuss projects. It's only at the stage where it runs designed experiments but has no way to design them; the designer is currently being created. (And I hope mentioning that work here is not inappropriate.)

    Sebastiaan, there's actually no reason to have the fixation dot follow the mouse. It simply did no harm and was ready-made inline code. If I remove that and just track the cursor, then the canvas-clearing line is unneeded, which helps with presenting the pictures.

    Yes, how to write the tracking information? Ideally, I believe it would be something like:
    x-pos ; y-pos ; timestamp ; item number.

    where item number is some identification of which item the participant is currently engaged with. One would then append these rows at an appropriate sample rate.
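
    Roughly, a sketch of how one such row per sample might be appended from an inline_script (the file name is only a placeholder, and count_sequence stands in for whatever identifies the current item):

    #opened once, e.g. in an inline_script at the start of the experiment
    myLog = open('mousetrack.csv', 'a')
    
    #inside the sampling loop
    position, timestamp = my_mouse.get_pos()
    x, y = position
    myLog.write(str(x) + ',' + str(y) + ',' + str(timestamp) + ',' + str(self.get('count_sequence')) + chr(10))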

    Which gets us to Eoin's code. Thanks very much indeed for this. Reading your code, am I correct in understanding that you are comparing the location of the mouse against a determined pixel number in order to categorise? I don't believe we will need that as we're just interested in raw location with the hope of comparing curvature towards pictures on the right as a result of the sentence played. (The picture on the left is a distractor for non-filler items.) Is there more going on in that section of the code that I am not following?

    OK, I will attempt to implement the suggestions so far and report back.

  • edited March 2013

    Psy-An lab looks interesting. I'll be sure to take a look at it later in the week.
    You're right about my code: I have a response button in the top-left and top-right corners (as in Freeman's MouseTracker program), and the trial ends and logs a response as soon as the cursor is over either button.
    My code saves the x and y positions and the timestamps as three Python lists (values in square brackets, separated by commas), so that they can be relatively easily opened and manipulated with SciPy/NumPy, although I haven't got as far as this yet. Because I actually wrote this code for a simple lexical decision task to show my supervisors OpenSesame's potential, on each trial I log the following:

    LoggerCount (or similar) - chronological trial number.
    stimulus
    response
    accuracy
    RT
    xTrajectory (list of values on x axis)
    yTrajectory
    tTrajectory (a misleading name: the timestamps corresponding to each entry on x and y trajectories).

    Finally, I had a go at showing the cursor position last week.

        position, timestamp = my_mouse.get_pos()
        x, y = position
        my_canvas.clear()
        my_canvas.fixdot(x, y, color = 'red')
        ...
        #the rest of your canvas (response buttons, stimuli, etc)
        ...
        my_canvas.show()
        ...
        #the rest of your script
        ...
        self.sleep(5)
    

    ...worked fine on my desktop in the office (Intel 3.4GHz, 8GB), but I haven't tried it on the Nexus 7.
    Eoin

  • edited March 2013

    Thanks Eoin and everyone,

    I'm making a good bit of progress on this, and I think it's going to work once I have all the little details sorted out. When the whole thing is ready, I will post the entire code here, because I think this is an experimental design others will be interested in.

    The particular issue I'm dealing with at the moment is "flushing". Based upon Eoin's code, I am using get_pressed() from pygame to register the mouse click. Everything goes as expected for the first trial. However, when I go through the second trial, the while loop ends immediately, as if the mouse has already been pressed. I think I need some way to clear out the value of get_pressed from the previous trial.

    The format is roughly: inline code 1 presents a fixation dot and waits for a mouse click. (This is where my problem eventually appears.) After the click, I have a couple of samplers and a sketchpad using the standard OpenSesame building tools. Then there's the final Tracker inline code, based largely on Eoin's sample. I've commented out a few things that may be deleted or implemented as I keep working.

    Fixation Dot script (where I see the problem on trial 2)

    #Use inline script so as to call the set_visible function
    my_mouse.set_visible()
    w,h = exp.resolution()
    #print (get_pressed)
    
    #a while loop that presents a dot until click
    showdot = True
    while showdot:
        my_canvas.clear()
        my_canvas.fixdot(w/2,h/2,color='black')
        my_canvas.show()
        lclick2, mclick2, rclick2 =  get_pressed()
        get()
        #print (lclick2)
        if lclick2 == 1:
            print 'Fixdot Clicked'
            showdot = False
            break
    

    And the second Tracker script (still has clutter to be removed later)


    start = self.time()
    my_mouse.set_visible()
    tracking = True
    while tracking:
        position, timestamp = my_mouse.get_pos()
        x, y = position
        #print str(x)+', '+str(y)
        t = timestamp - start
        #uncomment this to print to the debug window.
        #print str(t)+', '+str(x)+', '+str(y)
        xList.append(x)
        yList.append(y)
        tList.append(t)
        lclick, mclick, rclick = get_pressed()
        get()
        if lclick == 1:
            #resp = 1
            print 'Clicked'
            tracking = False
            break
        #if x > 930 and y < 175:
            #resp = 2
            #print 'right'
            #tracking = False
            #break
        #This dictates sampling rate. 50 hertz is more than enough
        self.sleep(20)
    my_canvas.clear()
    print 'answered'
    #self.experiment.set("response", resp)
    #if resp == self.get('correct'):
        #self.experiment.set("accuracy", 1)
    #else:
        #self.experiment.set("accuracy", 0)
    self.experiment.set("xTrajectory", str(xList))
    self.experiment.set("yTrajectory", str(yList))
    self.experiment.set("tTrajectory", str(tList))
    self.experiment.set("position", position)
    self.experiment.set("flush","yes")

    I should add that it's possible I have not diagnosed the error correctly. Both scripts use get_pressed(), and on trial 2 only the first script immediately advances as if the mouse has already been clicked. The second script (Tracker) correctly waits for the mouse click.

  • edited 10:36AM

    You're supposed to call pygame.event.get() before calling pygame.mouse.get_pressed(). You could try this and hope it works (it makes for a slightly less unpredictable script). Apart from this, note that a print statement sometimes has the same effect.
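
    In other words, a minimal sketch of the intended ordering:

    from pygame.event import get
    from pygame.mouse import get_pressed
    
    get()                                   #pump the event queue first
    lclick, mclick, rclick = get_pressed()  #now reflects the current button state
    if lclick == 1:
        print 'left button is down'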

    By the way: might I suggest trying the playground version of OpenSesame (github.com/smathot/OpenSesame, then switch to the 'playground' branch)? A get_pressed function was added to the openexp mouse class very recently, so why not try that one?

    Good luck!

  • edited 10:36AM

    Hi Edwin,

    Thanks for catching the get() issue so quickly. Yep, that was it. I reordered the calls and that problem went away. I've been making further modifications and I am 95% certain we've pulled this off. I'm playing with alternate data logging a bit and then will post the final solution here for posterity, in case someone else wants to do mouse tracking in OpenSesame. Thanks again.

  • edited 10:36AM

    I think we have the mouse-tracking experiment working now. Thanks to everyone for their help again. I am going to document it here because I think this is a fairly common design. The experiment shown here is missing a training period and some more instructions, but the critical trial sequence is displayed in the image below. Below that, I copy in the necessary inline scripts. One odd thing in the current design is that we're sending some things to the logger and some things to a custom myLog file. The purpose of the latter is that it makes for pretty easy analysis in R later without heavy manipulation. However, some items, such as the location and area of the final click, are only appropriate in the logger. Clearly, one could remove the trajectories from the logger since myLog is implemented.

    Design:
    The participant, during trials, is presented with a fixation dot. When they left-click on the dot (and only there, or within a small square around it), a question is played as a sound file. After the question, three images are displayed and a further sentence is played through a sampler. The unusual thing about the design here is that there are three possible destinations rather than the usual two. Mouse position and timestamps are recorded until the participant left-clicks on one of the three designated areas. There is a slight delay and then the next trial proceeds.

    [image: trial sequence]

    Script opening up MyLog. Run Phase only.

    #initialise myLog
    
    #note that this will append data to myLog, not overwrite it.
    global myLog
    myLog = open('T:/Supervision/Ryosuke/mylog.csv','a')
    
    #Remove comment if you want a Header.
    #Will add a single header for each run through experiment.
    #Operating with a file that already has a header to prevent duplicate headers.
    #myLog.write('x_coor' + ',' + 'y_coor' + ',' + 'timestamp' + ',' + 'count_seq' + ',' + 'participant' + ',' + 'condition' + ',' + 'snumber' + chr(10))
    
    

    Show Dot prepare phase

    #Initialise mouse object
    from openexp.mouse import mouse
    global my_mouse
    my_mouse = mouse(exp, visible = True)
    my_mouse.flush()
    my_mouse.set_visible()
    
    #Initialise canvas
    from openexp.canvas import canvas
    global my_canvas
    my_canvas = canvas(exp)
    
    #initialise pygame.mouse
    from pygame.mouse import get_pressed
    global get_pressed
    
    from pygame.event import get
    global get
    

    Show Dot Run Phase

    self.experiment.set("flush","yes")
    
    #Use inline script so as to call the set_visible function
    my_mouse.set_visible()
    #get screen size and define a click region extending 2% of the screen width around the center
    w,h = exp.resolution()
    c_right = w/2+.02*w
    c_left = w/2-.02*w
    c_above = h/2+.02*w
    c_below = h/2-.02*w
    
    #print (get_pressed)
    
    #a while loop that presents a dot until click
    showdot = True
    while showdot:
        #clear and display fixdot
        my_canvas.clear()
        my_canvas.fixdot(w/2,h/2,color='black')
        my_canvas.show()
        #get position of mouse to compare to window
        position, timestamp = my_mouse.get_pos()
        x1, y1 = position   
        get()
        lclick2, mclick2, rclick2 =  get_pressed()
        #If click is inside border, break   
        if x1 < c_right and x1 > c_left and y1 < c_above and y1 > c_below and lclick2 == 1:     
            print 'Fixdot Clicked'
            showdot = False
            break
    
    

    Tracker Prepare Phase

    global xList, yList, tList
    xList, yList, tList = [], [], []
    
    global myLog
    
    #initialise sampler
    from openexp.sampler import sampler
    global my_sampler
    src = exp.get_file(self.get("sent") + ".wav")
    my_sampler = sampler(exp, src)
    
    

    Tracker Run Phase

    #initialise
    start = self.time()
    my_mouse.set_visible()
    
    #initialise borders
    w,h = exp.resolution()
    #define the top-right response region; (0,0) is the top-left corner of the screen
    tr_left = w-0.35*w
    tr_below = 0+0.30*h
    #define the bottom-right response region
    br_left = w-0.35*w
    br_above = h-0.30*h
    #define the bottom-left response region
    bl_right = 0+0.35*w
    bl_above = h-0.30*h
    
    #initialise logging variables; not necessary, but makes the write command cleaner.
    part = self.get('subject_nr')
    seq = self.get('count_sequence')
    condition = self.get('cond')
    snumber = self.get('sent_num')
    
    #play sound - implemented here so that tracking starts with the sound, not after.
    my_sampler.volume(1.0)
    my_sampler.play()
    
    #set a tracking loop
    tracking = True
    
    while tracking:
        position, timestamp = my_mouse.get_pos()
        x, y = position
        t = timestamp - start
        #uncomment this to print to the debug window.
        #print str(t)+', '+str(x)+', '+str(y)
        #write to myLog
        myLog.write(str(x) + ',' + str(y) + ',' + str(t) + ',' + str(seq) + ',' + str(part) + ',' + str(condition) + ',' + str(snumber) + chr(10))
        #assemble to write trajectories to OS logger (duplicate data and could be removed)
        xList.append(x)
        yList.append(y)
        tList.append(t)
        #pump the event queue, then poll the mouse buttons (get() before get_pressed(), as noted above)
        get()
        lclick, mclick, rclick = get_pressed()
        #three defined areas, one for each picture
        if x > tr_left and y < tr_below and lclick == 1:
            resp = 1
            mouse_rt = timestamp - start
            #print 'Clicked'
            #stop the sound if still playing
            if my_sampler.is_playing():
                my_sampler.stop()
            my_canvas.clear()
            tracking = False
            break
        if x > br_left and y > br_above and lclick == 1:
            resp = 2
            mouse_rt = timestamp - start
            #print 'Clicked'        
            #stop the sound if still playing
            if my_sampler.is_playing():
                my_sampler.stop()
            my_canvas.clear()
            tracking = False
            break
        if x < bl_right and y > bl_above and lclick == 1:
            resp = 3
            #print 'Clicked'
            mouse_rt = timestamp - start        
            #stop the sound if still playing
            if my_sampler.is_playing():
                my_sampler.stop()
            my_canvas.clear()
            tracking = False
            break
        #This dictates sampling rate. 50 hertz is more than enough
        self.sleep(20)
    my_canvas.clear()
    #print 'answered'
    #if resp == self.get('correct'):
        #self.experiment.set("accuracy", 1)
    #else:
        #self.experiment.set("accuracy", 0)
    self.experiment.set("Mouse_RT", mouse_rt)
    self.experiment.set("xTrajectory", str(xList))
    self.experiment.set("yTrajectory", str(yList))
    self.experiment.set("tTrajectory", str(tList))
    self.experiment.set("position", position)
    #response areas 1, 2, 3 are top right, bottom right, bottom left, respectively.
    self.experiment.set("pic_response", resp)
    
    

    Finally, the CloseLog Run phase


    global myLog
    myLog.close()

    There's likely some clutter in there and simpler ways of doing things, but this appears functional.

    Note on MyLog:

    The data format is:
    x_coord, y_coord, timestamp, sequence, participant, condition, sentence number
    So it's essentially an indexed vertical list of x and y coordinates for the mouse over time.

    Cheers.

  • edited April 2014

    Hi,

    Actually, I am also going to use mouse-tracking inline code. However, because I am an amateur, I cannot manage it. In my experiment, two pictures are shown at the top-left and top-right of the screen and the participant should select one of them. Consider that I have 30 trials, so I should have 30 choices at the end; their choices are important for me. I have designed this part with a form-based object. Besides that, I would also like to record the mouse trajectories, and this is the part where I ran into a problem.

    With these changes, when I run the program the first trial starts and I can even select the first picture, but then the program suddenly ends with an error related to the myLog code in the run phase of the Tracker, namely this line:

    myLog.write(str(x) + ',' + str(y) + ',' + str(t) + ',' + str(seq) + ',' + str(part) + ',' + str(condition) + ',' + str(snumber) + chr(10))

    Actually, I eliminated the line related to the log file in the run phase of the Tracker (I mean the line above), and just by eliminating that line the program runs without error. However, as expected, I do not get the mouse-trajectory data in my results!

    I would really appreciate it if you could help me in this regard, and sorry if the question is really basic. I have also explained my experiment here:

    Regards,
    Pa

  • edited April 2014

    Hi :)
    I just want to let you know that I solved the problem and I now get the mouse-tracking data. However, it is not finalized yet and I am still working on it! :)
    Bests,
    PA

