Question about mouse-tracking measures
Hello! 🤠
I prepared an experiment in OpenSesame and computed mouse-tracking measures in R. I compared three conditions. While there seems to be a significant difference in AUC among the three conditions (as we expected), there is no significant difference in MAD. Is this normal? Is something wrong with my data? Should I also expect a difference in MAD? Which measure should I rely on? Is it okay to report only AUC?
Thank you 🤠
Comments
Hi there,
when you say that you compared 3 cases, are you referring to 3 different conditions with multiple observations each? If so, how many observations do you have in each condition? Which statistical test did you use to compare them?
Usually, MAD and AUC are quite strongly correlated (I would guess r > .9) and should lead to very similar results. Could you check the correlation in your dataset?
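In case it helps, here is a minimal sketch of such a check with the mousetrap R package, using its built-in mt_example data as a stand-in for your own mousetrap object:

```r
library(mousetrap)

# mt_example ships with the package; replace it with your own mousetrap object
mt <- mt_measures(mt_example)  # computes MAD, AUC, and other trial-level measures

# Trial-level correlation between MAD and AUC
cor(mt$measures$MAD, mt$measures$AUC)
```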
Best,
Pascal
Hello,
I have solved the problem. There was something I was missing with the data. All values are correlated like you said. Thank you for the answer. =)
Hi @Pascal, I hope you are well.
I have a methodological question if you don't mind.
What is your opinion on using mt_plot_aggregate to plot curvature differences found between participants rather than between stimuli conditions? I know this sounds vague, but can you think of any reasons not to take this approach?
To illustrate, would it be possible, for instance, to compare trajectories between native and non-native speakers of English? There would be no difference in terms of stimuli/conditions. The only difference would be the groups of participants.
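Concretely, I'm imagining something like the sketch below, using the package's built-in mt_example data; speaker_group is a hypothetical participant-level variable marking native vs. non-native speakers (here filled in at random purely so the sketch runs on the example data):

```r
library(mousetrap)

mt <- mt_time_normalize(mt_example)  # time-normalize before aggregating

# Hypothetical participant-level grouping variable (native vs. non-native);
# assigned at random here only so the example is self-contained
set.seed(1)
subjects <- unique(mt$data$subject_nr)
group <- setNames(sample(c("native", "non-native"), length(subjects), replace = TRUE),
                  subjects)
mt$data$speaker_group <- group[as.character(mt$data$subject_nr)]

# Aggregate time-normalized trajectories per group and plot them
mt_plot_aggregate(mt, use = "tn_trajectories", color = "speaker_group")
```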
Cheers
Hi there,
in my opinion, you can also use mouse-tracking to compare differences between participants, most typically between groups of participants (e.g., native and non-native speakers, as in your example).
In this regard, I would particularly see two things that should be considered:
1) Mouse-tracking differences could also be explained by other variables that differ between the participant groups, particularly variables that plausibly affect how participants use the mouse in general (e.g., if the age distribution also differs between the groups).
2) In general, there is a discussion around whether it makes sense to compare the groups based on aggregate trajectories / aggregate curvature indices or to use a trial-level analysis, e.g., one focusing on trajectory types. See, e.g., Wulff, D. U., Haslbeck, J. M. B., Kieslich, P. J., Henninger, F., & Schulte-Mecklenbeck, M. (2019). Mouse-tracking: Detecting types in movement trajectories. In M. Schulte-Mecklenbeck, A. Kühberger, & J. G. Johnson (Eds.), A Handbook of Process Tracing Methods (pp. 131-145). New York, NY: Routledge. (preprint: https://psyarxiv.com/6edca/)
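If you want to try the trial-level route, here is a minimal sketch of mousetrap's prototype mapping, using the built-in mt_example data and (as far as I remember) the current defaults of mt_spatialize and mt_map:

```r
library(mousetrap)

mt <- mt_time_normalize(mt_example)
mt <- mt_spatialize(mt, use = "tn_trajectories")  # resample to an equal number of points
mt <- mt_map(mt, use = "sp_trajectories")         # assign each trial to the closest prototype

# Cross-tabulate trajectory types against the grouping of interest
# (replace Condition with your participant-level grouping variable)
table(mt$data$Condition, mt$prototyping$prototype_label)
```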
Best,
Pascal
Thank you so much for this, @Pascal !
You are the best!
Hi Pascal,
Happy New Year!
I have a similar question to the above regarding how to interpret such findings. I hope that's okay.
In the example below (average deviation, p < .05), would it make sense to say that individuals in Group A (e.g., non-native speakers of English) were more biased towards the competing response options than Group B (e.g., native speakers of English), as indicated by their less direct mouse trajectories?
Also, could you recommend any readings on power for mouse-tracking measures, in terms of both the number of participants and the number of trials needed? I understand the answer will vary depending on the research question and design, but I would like to have a rough idea of the minimum needed in general.
Best
@Pascal
Hi there,
thanks & happy New Year to you as well!
Regarding the first question ("would it make sense to say that individuals in Group A (e.g., non-native speakers of English) were more biased towards the competing response options than Group B (e.g., native speakers of English), as indicated by their less direct mouse trajectories"):
I would phrase it as follows: individuals in Group A were on average more attracted towards the competing response option (at some point during the decision process) than Group B, which could indicate that they on average experienced more conflict in their choice. However, whether this "on average" is meaningful / valid depends on the distribution of the individual trajectories. We have a new preprint (https://psyarxiv.com/v685r) of a mousetrap tutorial (its main author is Dirk Wulff, who co-developed the mousetrap R package) in which we discuss this issue in detail in the section "Advanced mouse- and hand-tracking analysis".
Regarding the second question ("could you recommend any readings on power for mouse-tracking measures, in terms of both the number of participants and the number of trials needed?"):
That's a good question. I am only aware of one mouse-tracking paper, by Robert Wirth and colleagues (https://dx.doi.org/10.3758%2Fs13428-020-01409-0), that addresses the question of the number of trials (in the subsection "Measurement precision"). But it could well be that I am missing something here, as I left academia some time ago to transition into an industry position (since then I have continued to maintain open-source software packages like mousetrap, but I no longer actively follow all the new papers).
Hope this helps!
Best,
Pascal