[open] Mousetracking and Visual World Paradigm
Good afternoon,
I am attempting to use OpenSesame for a mouse-tracking experiment in the visual world paradigm. The trial procedure is basically this:
Participant sees a fixation dot for a fixed amount of time.
After the delay, a sound file plays while the fixation dot remains.
They click on the dot and three pictures appear, one each in the lower left, top right, and bottom right corners.
A sound file also plays immediately.
They move the mouse to click on one of the pictures.
End of trial. Repeat randomly with other stimuli.
Most of this is quite straightforward, though I am hitting a few problems I've been unable to solve so far. I will mention the biggest one first.
1) I can display three images in the right locations using the standard sketchpad. Plus, I can make the fixation dot follow the mouse until a mouse click occurs. I am using the inline code from http://osdoc.cogsci.nl/python-inline-code/mouse-functions/ for the mouse action. What I don't understand yet is how to integrate the two, if that's possible. For instance, the "follow the mouse" inline code continually clears the canvas (I could remove that line if needed). I assume I need to draw the canvas inside the same inline code that handles the mouse tracking, but my attempts to do that so far using the documented canvas functions have been unsuccessful.
2) Displaying the mouse during the fixation dot. I'm guessing this one is trivial to the knowledgeable. The fixation dot appears and remains until a mouse click. The purpose of that is so that the mouse is at the center when the pictures appear and the sound sample plays. However, the mouse cursor does not show during this fixation period, so the participant doesn't know where their mouse is. (It does show with my tracking inline code.) I've been trying to use set_visible, but I'm apparently not placing it in the right location.
3) Finally, I'm having trouble getting it to write the continuous mouse positions (pos and time) to the log file. Advice here?
Apologies to have my first support request be so needy, but I am not solving these problems myself fast enough for a student who needs this up and going.
Thanks for any time you have for this,
Hunter
Comments
Hi Hunter,
Welcome to the forum!
Generally speaking, if you do part of the drawing with an inline_script item, you have to do all of it that way (at least for things that happen simultaneously). So instead of a sketchpad item, you would need to use openexp.canvas, as described in the inline-code documentation. Basically, instead of drawing just the fixation dot, you would also draw the three images to the canvas and omit the sketchpad altogether.
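In outline, that advice amounts to: draw the dot and all three images on one canvas, then show it once. Since openexp.canvas only works inside a running experiment, the stand-in Canvas class below simply records the draw calls so the structure is runnable anywhere; the resolution, margins, and image filenames are made-up examples, and in OpenSesame you would use the real openexp.canvas instead.

```python
class Canvas:
    """Stand-in for openexp.canvas that records draw calls instead of rendering."""
    def __init__(self):
        self.calls = []
    def fixdot(self, x, y):
        self.calls.append(('fixdot', x, y))
    def image(self, path, x=None, y=None):
        self.calls.append(('image', path, x, y))
    def show(self):
        # In a real canvas this flips the display; one flip shows everything.
        self.calls.append(('show',))

W, H = 1024, 768   # assumed resolution
margin = 100       # assumed offset from the corners

cv = Canvas()
cv.fixdot(W // 2, H // 2)                               # fixation dot, centre
cv.image('distractor.png', x=margin, y=H - margin)      # lower left (made-up file)
cv.image('target.png', x=W - margin, y=margin)          # top right (made-up file)
cv.image('competitor.png', x=W - margin, y=H - margin)  # bottom right (made-up file)
cv.show()
```

The point is that everything that appears simultaneously goes through one canvas and one show() call, with no sketchpad in between.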
Incidentally, might I ask why you are drawing a mouse-contingent fixation dot, instead of using the regular mouse cursor? If the mouse cursor doesn't show (after calling mouse.set_visible()), this might be due to the back-end. Some back-end configurations prevent the mouse cursor from being shown (in fullscreen only), in which case you could try a different back-end. The easiest way is probably to insert a mouse_response after the item used to draw the fixation cross (a sketchpad, probably?) and tick the 'Visible mouse cursor' box. Make sure to set the duration of the preceding item to 0, so that it advances immediately to the mouse_response.
Polling the mouse position is done with mouse.get_pos(). However, logging a continuous signal into a row-per-trial logfile is problematic, because it's not clear how the data should be organized in that case. Ideally speaking, how would you like to organize your data file?
Good luck and cheers!
Sebastiaan
Check out SigmundAI.eu for our OpenSesame AI assistant!
Hi Hunter
For an experiment, I did something similar to what you need to do. The problem I had using openexp.canvas is that there's a lot of jitter in the cursor trajectory, so I preferred to use Pygame in an inline script to present the mouse cursor.
I think I can help with the data logging here.
I've been building a template for something similar for touchscreens (so haven't needed to see the cursor).
What I've done is, after the stimuli have been presented (most conveniently with the sketchpad, although you can control this in the inline script for more flexibility), I create an inline object called "Tracker".
In the prepare phase of this, I put:
which sets things up for logging.
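A minimal prepare-phase setup consistent with the variables Eoin logs later in this post (xTrajectory, yTrajectory, tTrajectory) would be little more than emptying three lists; the now_ms helper below is an assumption standing in for the experiment clock (self.time() in an OpenSesame inline_script).

```python
import time

# Prepare phase: reset the trajectory buffers for this trial.
xTrajectory = []   # x coordinate of each mouse sample
yTrajectory = []   # y coordinate of each mouse sample
tTrajectory = []   # timestamp of each sample, in ms

def now_ms():
    # Stand-in for the experiment clock; plain wall-clock milliseconds here.
    return int(time.time() * 1000)

trial_start = now_ms()
```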
In the run phase, use a variation of what's below.
That's because, on the touchscreen, I just need the cursor to be over the response buttons in the two corners to register a response (touching the screen is one big click).
As of today, I know that you can place
at the start of your experiment somewhere, call it in the 'while tracking' loop above as
and change my:
if x < 350 and y < 175:
to
if x < 350 and y < 175 and (lclick == 1):
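Putting those pieces together, the run phase is a polling loop: sample the cursor, append to the trajectory lists, and stop once the cursor is inside a response region with the left button down. The pygame calls (pygame.event.get(), pygame.mouse.get_pos(), pygame.mouse.get_pressed()) are replaced here by a canned sample stream so the loop's logic is runnable as written; the 350/175 thresholds and the lclick flag come from Eoin's post, while the region labels, mirrored top-right region, and ~10 ms polling interval are assumptions.

```python
# Canned (x, y, lclick) samples standing in for pygame.mouse.get_pos() and
# pygame.mouse.get_pressed()[0]. In the real inline_script, call
# pygame.event.get() before get_pressed(), or the button state goes stale.
samples = [(512, 700, 0), (450, 500, 0), (380, 300, 0),
           (340, 170, 0), (330, 160, 1)]

xTrajectory, yTrajectory, tTrajectory = [], [], []
response = None
t = 0
for x, y, lclick in samples:       # the real script loops: while response is None
    xTrajectory.append(x)
    yTrajectory.append(y)
    tTrajectory.append(t)
    if x < 350 and y < 175 and lclick == 1:        # Eoin's top-left region
        response = 'top_left'                      # label is an assumption
        break
    elif x > 674 and y < 175 and lclick == 1:      # mirrored top-right (assumed)
        response = 'top_right'
        break
    t += 10                                        # ~10 ms between polls (assumed)
```

Note that the fourth sample sits inside the top-left region but has no click, so the loop keeps tracking until the click arrives.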
Hope this helps!
Eoin.
Edit: You obviously need to stick a logger item after this each time, and log xTrajectory, yTrajectory, tTrajectory, response, accuracy, and position.
Thanks, everyone, for your assistance and apologies for being so slow to get back. Basically, I'm able to work on research Saturday - Monday and then disappear into lectures Tuesday - Friday right now. It's Saturday here in New Zealand and so I'm back on this.
First, before I forget: Eoin, my collaborator and I have been working on an open-source Android platform that's under development (I've seen OpenSesame's work on this as well). The project is located at https://github.com/Otago-PsyAn-Lab/Otago-PsyAn-Lab. Yell if you'd like to discuss projects. It's only at a state where it runs designed experiments but has no way to design them; the latter is currently being created. (And I hope mentioning that work here is not inappropriate.)
Sebastiaan, there's actually no reason to have the fixation dot follow the mouse. It simply did no harm and was ready-made inline code. If I remove that and just track the cursor, then the canvas-clearing line is unneeded, and that should help with my presentation of the pictures.
Yes, how to write the tracking information? Ideally, I believe it would be something like:
x-pos ; y-pos ; timestamp ; item number.
where item number is some identification of which item the participant is currently engaged with. One would then append these rows at an appropriate sample rate.
Which gets us to Eoin's code. Thanks very much indeed for this. Reading your code, am I correct in understanding that you are comparing the location of the mouse against a determined pixel number in order to categorise? I don't believe we will need that as we're just interested in raw location with the hope of comparing curvature towards pictures on the right as a result of the sentence played. (The picture on the left is a distractor for non-filler items.) Is there more going on in that section of the code that I am not following?
OK, I will attempt to implement the suggestions so far and report back.
Psy-An lab looks interesting. I'll be sure to take a look at it later in the week.
You're right about my code, I have a response button in the top left and top right corners (as in Freeman's MouseTracker program), and the trial ends and logs a response as soon as the cursor is over either button.
My code saves the x and y positions and the timestamps as three Python lists (values in square brackets, separated by commas), so that they can be relatively easily opened and manipulated with SciPy/NumPy, although I haven't got as far as this yet. Because I actually wrote this code for a simple lexical decision task to show my supervisors OpenSesame's potential, on each trial I log the following:
LoggerCount (or similar) - chronological trial number.
stimulus
response
accuracy
RT
xTrajectory (list of values on x axis)
yTrajectory
tTrajectory (a misleading name: the timestamps corresponding to each entry on x and y trajectories).
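Collected like that, the per-trial record can be handed to a logger item by setting experiment variables; in an inline_script of that era this would be something like self.experiment.set(name, value), sketched here with a plain dict standing in for the experiment object (all sample values are made up).

```python
# Stand-in for the OpenSesame experiment object: in an inline_script you
# would set these as experiment variables so a logger item picks them up.
trial_vars = {}
def set_var(name, value):
    trial_vars[name] = value

xTrajectory = [512, 450, 380]   # example samples (made up)
yTrajectory = [700, 500, 300]
tTrajectory = [0, 10, 20]

set_var('response', 'left')     # example values (made up)
set_var('accuracy', 1)
set_var('RT', tTrajectory[-1])  # time of the final sample
# Store the lists as strings so they survive a flat row-per-trial logfile;
# str() keeps the square brackets, so they can be parsed back out later.
set_var('xTrajectory', str(xTrajectory))
set_var('yTrajectory', str(yTrajectory))
set_var('tTrajectory', str(tTrajectory))
```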
Finally, I had a go at showing the cursor position last week.
That worked fine on my desktop in the office (Intel 3.4 GHz, 8 GB), but I haven't tried it on the Nexus 7.
Eoin
Thanks Eoin and everyone,
I'm making a good bit of progress on this and I think it's going to work out when I have all the little details worked out. When the whole thing is ready, I will post the entire code here, because I think this is an experimental design others will be interested in.
The particular issue I'm dealing with at the moment is "flushing". Based upon Eoin's code I am using get_pressed() from pygame to register the mouse click. Everything goes as expected on the first trial. However, when I go through the second trial, the while loop ends immediately, as if the mouse has already been pressed. I think I need some way to clear out the value of get_pressed() from the previous trial.
The format is roughly: inline code 1 presents a fixation dot and waits for a mouse click. (This is where my problem eventually comes in.) After the click, I have a couple of samplers and a sketchpad using the standard OpenSesame building tools. Then there's the final Tracker inline code, based largely on Eoin's sample. I've commented out a few things that may be deleted or implemented as I keep working.
Fixation Dot script (where I see the problem on trial 2)
And the second Tracker script (still has clutter to be removed later)
I should add that it's possible that I have not diagnosed the error correctly. Both scripts use get_pressed(), and on trial 2 only the first script immediately advances as if the mouse has already been clicked; the second script (Tracker) correctly waits for the mouse click.
You're supposed to call pygame.event.get() before calling pygame.mouse.get_pressed(). You could try this and hope it works (it makes for a slightly less unpredictable script). Apart from this, note that a print statement sometimes has the same effect.
By the way: might I suggest trying the playground version of OpenSesame? (github.com/smathot/OpenSesame, then click through to the 'playground' branch.) A get_pressed was added to the openexp mouse class very recently. Why not try this one?
Good luck!
Hi Edwin,
Thanks for catching the get() issue so quickly. Yep, that was it. I reordered the calls and that problem went away. I've been making further modifications and I am 95% certain we've pulled this off. I'm playing with alternate data logging a bit and then will post the final solution here for posterity, in case someone else wants to do mouse tracking in OpenSesame. Thanks again.
I think we have the mouse-tracking experiment working now. Thanks to everyone for their help again. I am going to document it here because I think this is a fairly common design. The experiment shown here is missing a training period and some further instructions, but the critical trial sequence is displayed in the image below. Below that, I copy in the necessary inline scripts. One odd thing in the current design is that we're sending some things to the logger and some things to a custom myLog file. The purpose of the latter is that it makes for pretty easy analysis in R later without heavy manipulation. However, some items, such as the location and area of the final click, are only appropriate in the logger. Clearly, one could remove the trajectories from the logger since myLog is implemented.
Design:
During trials, the participant is presented with a fixation dot. When they left-click on the dot (and only there, or a square around it), a question is played as a sound file. After the question, three images are displayed and a further sentence is played through a sampler. The unusual thing about the design here is that there are three possible destinations rather than the usual two. Mouse position and timestamps are recorded until they left-click on one of the three designated areas. There is a slight delay and then the next trial proceeds.
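The three-destination part boils down to a point-in-rectangle test for each picture area. The rectangles below are assumptions for a 1024x768 display (the thread doesn't give the exact coordinates), but the structure is what the Tracker loop needs.

```python
# Hypothetical response regions as (left, top, right, bottom) rectangles
# around the lower-left, top-right and bottom-right images (1024x768 assumed).
REGIONS = {
    'lower_left':   (0,   518, 250,  768),
    'top_right':    (774, 0,   1024, 250),
    'bottom_right': (774, 518, 1024, 768),
}

def hit_region(x, y):
    """Return the name of the region containing (x, y), or None."""
    for name, (l, t, r, b) in REGIONS.items():
        if l <= x <= r and t <= y <= b:
            return name
    return None
```

A left-click only ends the trial when hit_region returns a name; anywhere else, the loop keeps sampling.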
Script opening up MyLog (Run Phase only)
Show Dot Prepare Phase
Show Dot Run Phase
Tracker Prepare Phase
Tracker Run Phase
Finally, the CloseLog Run Phase
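Taken together, the MyLog lifecycle across these scripts is: open once at the start of the experiment, append one row per mouse sample in the Tracker run phase, and close at the end. The sketch below shows that lifecycle only, not the actual scripts; the filename, samples, and trial values are all made up, and in OpenSesame you might build the filename from the subject number.

```python
import os, tempfile

# OpenLog run phase: open the file once for the whole session.
path = os.path.join(tempfile.gettempdir(), 'myLog_subject0.csv')
myLog = open(path, 'w')

# Tracker run phase: one row per mouse sample, in the format
# x_coord, y_coord, timestamp, sequence, participant, condition, sentence number
for x, y, t in [(512, 700, 0), (450, 500, 10)]:         # made-up samples
    seq, part, condition, snumber = 1, 0, 'filler', 12  # made-up trial values
    myLog.write(','.join(map(str, (x, y, t, seq, part, condition, snumber))) + chr(10))

# CloseLog run phase: close the file so the final rows are flushed to disk.
myLog.close()

with open(path) as f:   # read back to show the resulting rows
    rows = f.read().splitlines()
```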
There's likely some clutter in there and simpler ways of doing things, but this appears functional.
Note on MyLog:
The data format is:
x_coord, y_coord, timestamp, sequence, participant, condition, sentence number
So it's essentially an indexed vertical list of x and y coordinates for the mouse over time.
Cheers.
Hi,
Actually, I am also going to use mouse-tracking inline code. However, because I am an amateur, I cannot manage it. In my experiment, two pictures are shown at the top-left and top-right of the screen, and the participant should select one of the two pictures. Consider that I have 30 trials, so I should have 30 choices at the end; their choice is important to me. I have designed this part with a form-based object. Besides that, I would also like to record their mouse trajectories, and this is the part where I ran into a problem.
With these changes, when I run the program the first trial starts, and I can even select the first picture, but then the program suddenly ends with an error related to the MyLog code in the run phase of the Tracker, namely this line:
myLog.write(str(x) + ',' + str(y) + ',' + str(t) + ',' + str(seq) + ',' + str(part) + ',' + str(condition) + ',' + str(snumber) + chr(10))
Actually, I eliminated the line related to the log file in the run phase of the Tracker (I mean the above line), and just by eliminating that line the program runs without error. However, as expected, I then do not get the mouse-trajectory data in my results!
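For what it's worth, the usual cause of an error on exactly that line is that one of the names it uses (seq, part, condition, snumber) was never defined in this particular experiment, or that myLog was never opened because the OpenLog script didn't run. A small defensive variant, using the names from the quoted line, makes the failure explicit instead of a bare NameError mid-trial:

```python
import io

def log_row(myLog, x, y, t, seq, part, condition, snumber):
    """Write one sample row, failing with a clear message if the log isn't open."""
    if myLog is None or myLog.closed:
        raise RuntimeError('myLog is not open: did the OpenLog script run?')
    fields = (x, y, t, seq, part, condition, snumber)
    myLog.write(','.join(str(f) for f in fields) + chr(10))

# Demonstration with an in-memory buffer standing in for the real file.
buf = io.StringIO()
log_row(buf, 512, 700, 0, 1, 0, 'filler', 12)
row = buf.getvalue()
```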
I would really appreciate it if you could help me in this regard, and sorry if the question is really basic. I have also explained my experiment here:
Regards,
Pa
Hi
I just would like to inform you that I solved the problem and now get the mouse-tracking data. However, it is not finalized and I am still working on it! :)
Bests,
PA