Audio files as stimuli in an IAT
Hey guys, I'm very new to OpenSesame and am trying to implement an IAT that (for now) uses only audio files as stimuli. Since my OpenSesame skills are still rudimentary, I'm struggling a lot with how to present the audio stimuli. For the basic IAT structure I use this template: OSF | An IAT Template for OpenSesame Wiki. As far as I know, audio is included via a sampler item, and from this discussion post (Using audio stimuli for IAT on Open Sesame — Forum (cogsci.nl)) I gather that referencing my audio files in the sampler should work by setting the file path to something like "__pool__/[sound].wav". Still, I'm pretty lost on how to actually implement this.
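For reference, the sampler's "Sound file" field accepts OpenSesame's square-bracket variable substitution, so the usual pattern is to define the file names in a loop column and reference that column in the sampler. A minimal sketch, assuming a loop column named `sound` and `.wav` files placed in the file pool (column and file names here are made up):

```
# block_loop (loop item) — hypothetical table:
#   sound
#   pleasant_01
#   unpleasant_01
#
# sampler item — "Sound file" field:
[sound].wav
```

For files in the file pool, the bare file name is normally enough; OpenSesame resolves pool files automatically, so the explicit `__pool__/` prefix should not be required.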
In the experiment structure, adapted from the template, I have a sketchpad ("first_chance_stim") through which the (usually visual) stimuli are presented to participants. I'd like my adaptation to work the same way, meaning that participants still see the typical sketchpad interface showing the categories and their response keys, while the audio stimulus is played (by the sampler, I guess).
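One way to combine the two: give the sketchpad a duration of 0 so it only draws the category labels and key reminders and immediately passes control on, then let the sampler play the sound on top of that display. A possible trial sequence (item names and option values are assumptions, not from the template):

```
# trial_sequence (sequence item) — hypothetical layout:
#   first_chance_stim    sketchpad, Duration = 0
#                        (draws categories + key symbols, stays on screen)
#   play_stimulus        sampler, Sound file = [sound].wav, Duration = sound
#                        ("sound" should make it wait until playback ends)
#   response_collection  keyboard_response or inline_script for custom RT
```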
It would also be important that the reaction-time measurement for each stimulus starts only after the audio has finished playing. Though that might be a bit much to ask here without providing the inline script used to start and calculate the reaction time.
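The timing logic itself is simple: wait for the sound to end, take a timestamp, and subtract it from the keypress timestamp. Below is a standalone sketch of that logic. `MockSampler` is a stand-in so the snippet runs outside OpenSesame; the comments name the OpenSesame 3 workspace objects (`Sampler`, `Keyboard`, `clock`, `pool`, `var`) that would replace it, but the variable names and the `['e', 'i']` key list are assumptions.

```python
import time

class MockSampler:
    """Stand-in for OpenSesame's Sampler so this sketch runs standalone;
    it pretends playback lasts 0.3 seconds."""
    def __init__(self, duration=0.3):
        self.duration = duration
        self._end = 0.0

    def play(self):
        self._end = time.monotonic() + self.duration

    def is_playing(self):
        return time.monotonic() < self._end

# In an OpenSesame inline_script this would be something like:
#   my_sampler = Sampler(pool[var.sound])
my_sampler = MockSampler()

start = time.monotonic()
my_sampler.play()

# Wait until the audio has finished, so RT is counted from sound offset
while my_sampler.is_playing():
    time.sleep(0.01)           # in OpenSesame: clock.sleep(10)

elapsed = time.monotonic() - start
t0 = time.monotonic()          # in OpenSesame: t0 = clock.time()

# From here, collect the response; in OpenSesame roughly:
#   my_keyboard = Keyboard(keylist=['e', 'i'])   # keys are an assumption
#   key, timestamp = my_keyboard.get_key()
#   var.rt_from_sound_offset = timestamp - t0
```

The `while is_playing()` loop is the key piece: it guarantees the reference timestamp `t0` is taken at the sound offset rather than at stimulus onset.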
Thanks in advance for your help!