[open]Playing multiple background sounds in eye-tracking study

edited November 2015 in OpenSesame

Hi guys,

I'm new here. I'm working on an eye-tracking study on how background music distracts from reading. I have a set of music pieces and a set of articles, and I want the pieces to play in random order while the subject is reading the articles. It is important for me to synchronize the music stream with the eye-tracking data, so that I can identify at which point in the music the subject makes an erratic eye movement. I wonder how to achieve this synchronization. I know some Python, so I can write a bit of script myself. I am using a Tobii 60 Hz eye tracker. I know that pygame's timing is not very precise, but a jitter of around 30 ms would not be unacceptable. It would be nice to do this with PyAudio, but that seems more complicated than pygame.

So basically I'm seeking help on:

  1. Playing the music pieces consecutively in random order (a rough sketch of what I mean follows below this list).
  2. Stopping playback as soon as the subject finishes an article, and resuming with a piece that has not been played yet as soon as the next article is presented.
  3. Synchronizing each piece of music (start and stop) with the on-line eye-tracking recording. PyGaze would be acceptable; PyAudio would be perfect.
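
To give an idea of what I mean with points 1 and 2, here is a rough, untested sketch using pygame.mixer.music (the file names are just placeholders):

    import random
    import pygame

    pygame.mixer.init()

    # Placeholder file names; shuffle once so each piece is played at most once
    music_files = ['piece_01.ogg', 'piece_02.ogg', 'piece_03.ogg']
    random.shuffle(music_files)

    def start_next_piece():
        """Load and start the next unplayed piece; return its name, or None."""
        if not music_files:
            return None
        piece = music_files.pop(0)
        pygame.mixer.music.load(piece)
        pygame.mixer.music.play()
        return piece

    # While an article is on screen (inside the main loop):
    if not pygame.mixer.music.get_busy():   # previous piece has finished
        current_piece = start_next_piece()  # keep the music going

    # When the subject presses the 'finished reading' key:
    pygame.mixer.music.stop()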

Any help would be appreciated!

Han

Comments

  • edited 10:02PM

    Hi Han,

    Your design is not entirely clear to me.

    • Are you simply playing one long sound file during each article, or are you continuously playing individual sounds?
    • And how does the subject indicate that he/she has finished reading the article? By pressing a button, or with a gaze-contingent algorithm to check when the eyes reach the end of the text?

    In terms of timing, if you have a 60 Hz eye tracker, then I don't think you will gain much by using PyAudio over PyGame. In both cases, however, I would check the latency, that is, the delay between the moment you tell the computer to start sound playback and the moment the sound actually starts playing. This is different from jitter, which is the variance in the latency. (Usually, latency is fairly high, but relatively constant.)

    The basic procedure is to simultaneously show a display and play a sound, and then, using an external device, register the actual delay between the two events.

    It's a bit of a pain, I know; but because you're interested in absolute timing, benchmarking is important.
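
    Just to illustrate the software side, something along these lines would do (a minimal sketch with pygame; the actual delay between the physical events still has to be captured with external equipment, such as a photodiode and a microphone):

        import time
        import pygame

        pygame.init()
        screen = pygame.display.set_mode((1024, 768))
        sound = pygame.mixer.Sound('beep.ogg')  # placeholder file name

        # Flip to a white screen (for a photodiode) and start the sound
        # (for a microphone) as close together as possible, logging the
        # moments at which both commands are issued.
        screen.fill((255, 255, 255))
        t_flip = time.time()
        pygame.display.flip()
        sound.play()
        t_play = time.time()

        print('flip issued at %.6f, play issued at %.6f' % (t_flip, t_play))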

    Cheers,
    Sebastiaan

  • edited November 2015

    Hi Sebastiaan,

    Thanks for your reply! We are playing multiple pieces of music, and subjects press a key to indicate they have finished reading an article. We use pygame.mixer.music to play the music, and we keep track of the names and lengths of the pieces in a few variables. We have to synchronize the eye-tracking data and the music stream manually, but this is the best approach we could find :)
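
    In case it helps someone else, this is roughly what our playback and logging look like (a simplified sketch; we assume PyGaze's EyeTracker.log() writes messages into the eye-tracking data, and the file name is a placeholder):

        import pygame
        from pygaze import libtime
        from pygaze.display import Display
        from pygaze.eyetracker import EyeTracker

        libtime.expstart()
        disp = Display()
        tracker = EyeTracker(disp)
        pygame.mixer.init()

        tracker.start_recording()

        piece = 'piece_01.ogg'  # placeholder
        pygame.mixer.music.load(piece)
        pygame.mixer.music.play()
        # mark the music onset in the eye-tracking data stream
        tracker.log('MUSIC_ONSET %s %.0f' % (piece, libtime.get_time()))

        # ... subject reads the article ...

        pygame.mixer.music.stop()
        tracker.log('MUSIC_OFFSET %s %.0f' % (piece, libtime.get_time()))

        tracker.stop_recording()

    The idea is that the MUSIC_ONSET / MUSIC_OFFSET messages end up alongside the gaze samples, so the two streams can be lined up offline.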

    I do have another question, though. Since we are using PyGaze to collect the eye-tracking data and we are interested in analyzing fixations, does PyGaze provide any scripts to compute fixations from the raw gaze data? I've looked at the PyGaze Analyser, but it seems to be designed specifically for drawing visualizations. Is there any way to get the fixation data as output?

    Thanks so much!

    Han

  • edited 10:02PM

    Hi Han,

    This is something that may be provided by the manufacturer of your specific eye tracker. SR Research, for example, has software to process EyeLink data.
    If worst comes to worst, you could always parse the raw data with a script of your own. This is not easy, but also not that much more difficult than retrieving fixation durations from an already-parsed logfile.
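
    Just to give an idea of what such a script could look like, here is a very bare-bones dispersion-based (I-DT-style) detector over raw (timestamp, x, y) samples; the thresholds below are arbitrary and you would have to tune them for your own data:

        def detect_fixations(samples, max_dispersion=30, min_duration=100):
            """Very simple dispersion-based fixation detection.

            samples: list of (t_ms, x_px, y_px) tuples, ordered in time.
            Returns a list of (start_ms, end_ms, mean_x, mean_y) tuples.
            """
            fixations = []
            window = []
            for sample in samples:
                window.append(sample)
                xs = [s[1] for s in window]
                ys = [s[2] for s in window]
                dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
                if dispersion > max_dispersion:
                    # the window (minus the newest sample) is over; keep it
                    # as a fixation if it lasted long enough
                    fix = window[:-1]
                    if len(fix) > 1 and fix[-1][0] - fix[0][0] >= min_duration:
                        mean_x = sum(s[1] for s in fix) / len(fix)
                        mean_y = sum(s[2] for s in fix) / len(fix)
                        fixations.append((fix[0][0], fix[-1][0], mean_x, mean_y))
                    window = [sample]
            # (a fixation still open at the end of the data is ignored here)
            return fixations

    With 60 Hz data this gives you fixation onsets, offsets and average positions that you can then relate to the timestamps of your music messages.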

    Cheers,
    Josh
