[open] Implementing an Infant Preferential Looking (Listening) Procedure (PLP)
Hello,
I'm new to OpenSesame. I followed the tutorials and got a basic sense of how it works. I am interested in implementing a version of the so-called Preferential Looking Procedure (PLP) for infant studies. In the test phase, a child sits in front of a monitor and is presented with sound stimuli paired with attention-getting videos. The dependent variable is the amount of looking time to the monitor while the sound test items are played. Sound items come from two different categories (e.g., words versus part-words, or some other manipulation).
The test starts with the monitor displaying a silent attention-getter video while a sound test item begins to play. Importantly, as long as the infant maintains their gaze on the central monitor and does not look away, the test trial continues up to a maximum of X seconds (e.g., 10 seconds). If the infant looks away for 2 consecutive seconds, the experimenter, who is watching the infant through a camera, ends the test trial and the next test item is presented.
A few functionalities are required:
- How to play a silent video and a sound item at the same time?
- How to have the sound test item (e.g., "bago") play repeatedly (e.g., "bago ... bago ... bago ...") for up to X seconds?
- How to implement the keypress control such that the sound test item keeps playing as long as a button is held down, or if the experimenter releases it for less than 2 consecutive seconds?
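The trial-control rules described above can be sketched in plain Python. This is a hypothetical illustration, not OpenSesame API: the function name, the polling loop, and the `is_looking` callable are all assumptions, and in a real experiment the looking state would come from the experimenter's button.

```python
# Hypothetical sketch of the trial timing rules: the trial runs for up to
# max_duration seconds and ends early once a look-away has lasted
# lookaway_limit consecutive seconds.

def run_trial(is_looking, max_duration=10.0, lookaway_limit=2.0, step=0.05):
    """is_looking(t) -> bool: whether the infant looks at the monitor at
    time t (seconds). Returns the time at which the trial ended."""
    lookaway_start = None
    for i in range(int(max_duration / step)):
        t = i * step
        if is_looking(t):
            lookaway_start = None       # gaze returned: reset the timer
        elif lookaway_start is None:
            lookaway_start = t          # a look-away begins
        elif t - lookaway_start >= lookaway_limit:
            return t                    # 2 s of continuous look-away: end early
    return max_duration                 # infant kept looking: full trial
```

For example, an infant who looks away from 3 s onward would end the trial at roughly 5 s, while one who never looks away gets the full 10 s.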
If this worked, I believe several researchers running infant studies would be interested in using OpenSesame.
Thank you,
Luca
Comments
It would be quite simple if you'd allow an active key press instead of a key release. A very simple solution would entail creating >X-second sound files that consist of the individual items looped (though there are other ways, too). I can outline how to do that if you're willing to use key presses.
Though with some inline scripting, it should be possible with key releases, too.
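As one concrete way to create such looped sound files, here is a sketch using only Python's standard-library wave module. The function name and parameters are my own, not part of OpenSesame; it assumes an uncompressed WAV file and uses digital silence for the inter-stimulus interval.

```python
import wave

def build_looped_wav(src_path, dst_path, repeats, isi_s):
    """Write dst_path containing `repeats` copies of src_path, separated by
    isi_s seconds of silence, so it can be played as one long sound item."""
    with wave.open(src_path, "rb") as src:
        params = src.getparams()
        frames = src.readframes(src.getnframes())
    # One frame = sampwidth bytes per channel; silence is all zero bytes.
    n_silence = int(params.framerate * isi_s)
    silence = b"\x00" * (n_silence * params.sampwidth * params.nchannels)
    with wave.open(dst_path, "wb") as dst:
        dst.setparams(params)
        for i in range(repeats):
            dst.writeframes(frames)
            if i < repeats - 1:         # no trailing silence after the last item
                dst.writeframes(silence)
```

The resulting file can then be assigned to a single sampler item, so no timing logic is needed during the trial itself.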
Hi Jona,
Thank you. Yes, I can create a sound file that loops individual items, say, 10 times with a given inter-stimulus interval. If I understand correctly, the task of the experimenter would be reversed. That is: a) do nothing if the infant is attending to a visual anchor on the screen while the sound stimulus is playing; b) keep a button pressed when the child does not attend to the visual stimulus. If this time is longer than X seconds, the program interrupts the sound stimulus and the visual anchor and moves on to the next sound. Is that how you were conceptualizing the task?
I'm not sure whether this question is still relevant, but here are a few pointers:

You can insert a "sampler" item before a "media_player_vlc" item (or some other item that plays a video) and set the duration of the "sampler" to 0. This will cause the sound to be played in the background while the experiment moves on to the next item, effectively combining sound with video.

I agree with Jona that it would be easiest to just create looped sound files. Alternatively, you can write a Python script that periodically starts a new sound. This will be more flexible, but slightly more complicated.
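The "periodically starts a new sound" approach essentially boils down to scheduling repetition onsets. A minimal sketch, with a hypothetical function name and parameters (in OpenSesame, the actual playback would happen in an inline_script that polls the clock):

```python
def repetition_onsets(sound_dur, isi, max_time):
    """Return the times (in seconds) at which the sound item should be
    (re)started so that every repetition finishes within max_time seconds."""
    onsets, t = [], 0.0
    while t + sound_dur <= max_time:
        onsets.append(round(t, 3))
        t += sound_dur + isi
    return onsets
```

An inline script would then start the sampler whenever the next onset time is reached, and simply abandon the remaining onsets as soon as the trial ends.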
The exact implementation depends on what exactly you want to do, and on whether you want to do this while the video is playing or not. In general, everything you describe is certainly possible, although some scripting may be required, and the exact solution depends on the specifics of your experiment. But please don't hesitate to post again if you get stuck.
Check out SigmundAI.eu for our OpenSesame AI assistant!