Mousetrap: velocity and bimodality

I have two questions, both of which are inspired by Freeman (2014) "Abrupt category shifts during real-time person perception": https://pdfs.semanticscholar.org/4116/e3240369ac70da21253f29e9b0f735636407.pdf

One, is it possible to produce velocity plots like the ones shown in panels B and D below from Freeman (2014)?

Given that the y-axis is x-coordinate velocity, I'm interpreting the positive and negative values as indicating velocity toward the left or right. This looks like an extremely useful way to plot data. Is this possible in the mousetrap R package?

Two, the paper uses maximum deviation (MD) to help identify abrupt changes in trajectory. Specifically, it states that "trajectories exceeding an MD threshold of 0.9 were marked as reversals." Although this is never explicitly stated, I'm assuming it refers to the standardized (z-scored) MD values, which I can produce using the mt_standardize function in R. Does this assumption sound correct?
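For reference, the assumed procedure would correspond to something like the following sketch (note: mousetrap stores maximum absolute deviation under the name MAD, and a subject_nr column is assumed to exist in the data):

```r
library(mousetrap)

# sketch of the assumed procedure (not necessarily Freeman's):
# compute trial-level measures, then z-standardize MD per participant
mt_example <- mt_measures(mt_example)
mt_example <- mt_standardize(mt_example,
  use_variables = "MAD", within = "subject_nr")

# standardized values are stored with a "z_" prefix (here: z_MAD)
summary(mt_example$measures$z_MAD)
```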

Thank you to anyone who can provide assistance.

Comments

  • Hi Mike,

    Regarding one: yes, this is possible in the mousetrap R package.

    Here are two examples:

    Example 1 that is close to the Freeman (2014) plots:

    # calculate signed velocity based on movements along x dimension
    mt_example <- mt_derivatives(mt_example, dimensions = "xpos", absolute = FALSE)
    
    # average values for 100 ms intervals
    mt_example <- mt_average(mt_example, interval_size = 100)
    
    # plot mean velocity across time per condition
    # (problem: trial length differs so not every trial contributes to every time point)
    mt_plot_aggregate(mt_example, use = "av_trajectories", 
      x = "timestamps", y = "vel", color = "Condition")
    

    Example 2 based on time-normalized trajectories:

    # time-normalize trajectories
    mt_example <- mt_time_normalize(mt_example)
    
    # calculate signed velocity for time-normalized trajectories based on movements along x dimension
    mt_example <- mt_derivatives(mt_example, use = "tn_trajectories",
      dimensions = "xpos", absolute = FALSE)
    
    # plot mean velocity across time steps per condition
    mt_plot_aggregate(mt_example, use = "tn_trajectories", 
      x = "steps", y = "vel", color = "Condition")
    
    

    Best,

    Pascal

  • Regarding two:

    I don't think this assumption is correct. I think the MD threshold of 0.9 is applied to the raw MD values calculated by MouseTracker. As MouseTracker transforms the original pixel values into its own metric (x values between -1 and 1, y values between 0 and 1.5), the resulting MD values (and, accordingly, the threshold) are also in this metric.

    Together with Barnabas Szaszi and a few colleagues from Budapest, we recently generalized the MD threshold technique so that it can identify multiple changes of mind (one paper that uses this method can be found here). I have also implemented this in the mousetrap R package, including a function that translates Freeman's threshold into a value that fits the specific layout and metric of any new study. The code is not yet included in the package, but I will make it available in the next weeks; just let me know if you are interested earlier.
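    In the meantime, applying the threshold on raw values could be sketched like this (assuming the trajectories are already in the MouseTracker metric; the reversal column name is made up for illustration):

    ```r
    library(mousetrap)

    # compute trial-level measures (MAD = signed maximum absolute deviation)
    mt_example <- mt_measures(mt_example)

    # flag trajectories exceeding Freeman's raw threshold of 0.9
    # (only meaningful if coordinates use the MouseTracker metric)
    mt_example$measures$reversal <- abs(mt_example$measures$MAD) > 0.9

    table(mt_example$measures$reversal)
    ```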

  • Hi Pascal,
    Thank you for the helpful response! I have a couple of follow-up questions.

    Regarding velocity (1):
    I noticed that you did not include the remapping code.
    MT <- mt_remap_symmetric(MT)

    Should remapping be skipped for velocity? Or did you assume that remapping would already have been done earlier?

    Regarding velocity (2):
    I'm running a memory recollection study in which participants are allowed up to 8 seconds to recall their response. As you can imagine, the response-time histogram for this task has a very long tail, with the number of trials dropping off precipitously after about 3 seconds. Given this, I feel I should use time-normalized trajectories to look at velocity. I know you prefer to look at absolute time, but given my very wide range of response times, time-normalized trajectories seem more appropriate. Does this seem reasonable? There is also a secondary concern that the interval size for averaged trajectories seems somewhat arbitrary; for example, I'm not quite sure why Freeman (2014) used a size of 60 ms.

    Regarding your plotting code:
    I noticed that you did not include the subject_id = "subject_nr" argument in the plotting code. When should that argument be specified, and when should it not be? Don't we always want to aggregate within participants before aggregating within a condition?

    Regarding your MD code:
    Thank you for that paper. It looks extremely helpful. And thank you for correcting my false assumption. I do not need the code immediately and can easily wait until the package is updated. I have signed up to the mailing list and will keep my eyes out for an email about it.

    Infinite thanks and appreciation for your help!

  • Hi Mike,

    Regarding velocity (1): yes, it definitely is best to remap the trajectories beforehand (the example trajectories in mt_example are already remapped).
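    For completeness, the remapping step would simply precede the velocity calculation, e.g. (a sketch using the package's built-in example data):

    ```r
    library(mousetrap)

    # remap trajectories so all movements share the same direction
    mt_example <- mt_remap_symmetric(mt_example)

    # then compute signed velocity as before
    mt_example <- mt_derivatives(mt_example,
      dimensions = "xpos", absolute = FALSE)
    ```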

    Regarding velocity (2): that's a good point, and unfortunately (in my view) there is no really good solution in cases where response times differ substantially across trials. In principle, I agree with you that it makes sense to use the time-normalized trajectories. However, this does not completely solve the problem when comparing trajectories of different lengths, since the temporal resolution would then differ considerably between trials. You could think about using time-normalized trajectories but doing separate analyses for trials of different lengths (e.g., below and above 4 s).
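    The split-by-length suggestion could look roughly like this (a sketch only: it assumes the trial data contain a response-time column RT in ms, and the 4000 ms cutoff is just the example value from above):

    ```r
    library(mousetrap)

    # split trials by response time (assumes an "RT" column in the trial data)
    fast <- mt_subset(mt_example, RT < 4000)
    slow <- mt_subset(mt_example, RT >= 4000)

    # time-normalize and analyze each subset separately
    fast <- mt_time_normalize(fast)
    slow <- mt_time_normalize(slow)
    ```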

    Regarding the aggregation procedure: good point. I think that, in general, it makes sense to first aggregate within and then across participants. However, if the number of trials differs substantially between participants (which might happen, e.g., in the example above if you split your analyses across different subsets of trials), this could lead to strange results, so in that case it is probably best to leave it out.
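    With per-participant aggregation, the time-normalized plotting call from Example 2 above would become (a sketch; subject_nr is the participant column in the example data):

    ```r
    library(mousetrap)

    # prepare time-normalized signed velocity as in Example 2 above
    mt_example <- mt_time_normalize(mt_example)
    mt_example <- mt_derivatives(mt_example, use = "tn_trajectories",
      dimensions = "xpos", absolute = FALSE)

    # first aggregate within each participant, then across participants
    mt_plot_aggregate(mt_example, use = "tn_trajectories",
      x = "steps", y = "vel", color = "Condition",
      subject_id = "subject_nr")
    ```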

    Hope this helps! I think that you are raising important questions and in many cases I can only provide a suggestion and no definite recommendation.

    Best,

    Pascal

  • Thank you for all your assistance!
