Need advice on making a quick version of a short vigilance/sustained attention task
Hello, I’m a novice in OpenSesame and Python (though I have experience with C++ and a few other languages). I’ve completed a few tutorials on here (e.g., cats and dogs, visual search, and the gaze cue). For my dissertation, I would like to create a short, visual sustained attention task and an equivalent auditory task. I found a potential pair of tasks that I could use, and I have supplied a snippet from a paper below describing the tasks (e.g., used in Galinsky, Rosa, Warm, & Dember, 1993; Scerra, 2013?; Shaw, 2006). Although these tasks have been used by many researchers, I haven’t found code, scripts, etc. that I can reuse. I want to adapt the task to have 1-minute trials instead of 10+ minute trials. Additionally, I plan on having a cross-modal condition (visual task + auditory stimulus presented) and an intra-modal condition (auditory task + additional auditory stimulus).
The closest task that I found on here is the conjunctive continuous performance task, but it would need to be modified to work (and I need a task that is more demanding over shorter durations). Before I go through the trouble of designing and implementing the vigilance tasks below, I want to make a quick, hard-coded version of the vigilance task (visual or auditory) to see if it is suitable for my experiment. I’m thinking the following would work for now: a few trials (each trial: 1-min duration with 25 events); 1 critical target per trial; and different ISIs for the critical target and the noncritical events (not randomized for now) within a trial. How can I implement this quickly with as little effort as possible? Any advice would be greatly appreciated!
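For what it's worth, here is a minimal sketch of what a hard-coded schedule for one 1-minute trial could look like in plain Python (so it could be dropped into an OpenSesame inline_script, which would then loop over the schedule and present each stimulus). The function name `build_trial_schedule` and the fixed ISI value are my own placeholders, not from the paper; the durations follow the visual condition described below, and the ISI was picked so that 25 events fit into roughly 60 s:

```python
# Hypothetical sketch: a fixed (non-randomized) event schedule for one
# 1-minute trial with 25 events and a single critical target. Durations
# follow the visual condition (noncritical 247.5 ms, critical 125 ms);
# the fixed ISI is a placeholder: (247.5 + 2150) ms * 25 events ~= 60 s.

NONCRITICAL_MS = 247.5   # noncritical stimulus duration
CRITICAL_MS = 125.0      # critical (target) stimulus duration
ISI_MS = 2150.0          # fixed inter-stimulus interval (placeholder)

def build_trial_schedule(n_events=25, target_index=12):
    """Return a list of (onset_ms, duration_ms, is_target) tuples."""
    schedule = []
    t = 0.0
    for i in range(n_events):
        is_target = (i == target_index)
        duration = CRITICAL_MS if is_target else NONCRITICAL_MS
        schedule.append((t, duration, is_target))
        t += duration + ISI_MS
    return schedule

schedule = build_trial_schedule()
```

A presentation loop would then iterate over `schedule`, show the bar (or play the tone) for `duration_ms`, and wait out the ISI, which keeps the stimulus logic and the timing logic separate and easy to swap out later.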
From Scerra (mostly copied, but changed some):
“Vigilance Task. The experiment employed a 40-minute, high event-rate (25 events/min) task, wherein participants were asked to make successive critical-target discriminations. To avoid possible sex differences in spatial discriminatory ability (Dittmar et al., 1993), and to ensure equal discriminability across sensory modalities, critical targets were differentiated from noncritical targets on the basis of duration. To make comparisons across sensory modalities, perceptual load must be equated; however, the threshold for temporal discrimination varies by modality (Dember & Warm, 1979).
- In the visual condition, the stimulus for both noncritical (247.5 ms) and critical (125 ms) events was a 3 x 9.5 cm horizontally oriented white bar presented against a dark grey background.
- The auditory stimulus was a 440 Hz tone, presented binaurally for 247.5 ms for noncritical events, and 200 ms for critical signals.
- The task event rate was a mean of 25 events/min based on non-target event interstimulus intervals (ISI) randomly selected from the values of 1.2, 2.0, and 2.7 s. The ISI values were selected to introduce temporal uncertainty in the presentation of events, thus circumventing rote responding, while still maintaining the rate of 25 events/min.
- Critical events were set to comprise 4% of total signals, or approximately ten per every ten-minute period of the task, with critical signal ISI randomly selected from the values of 24, 36, 48, 60, 92, and 100 s (Dittmar et al., 1993)”
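To adapt this scheme to my shorter trials, I imagine something like the following (my own sketch, not from the paper): draw noncritical ISIs from the stated {1.2, 2.0, 2.7} s set, and, since the 24–100 s critical-signal ISIs obviously cannot be used as-is in a 1-minute trial, simply place the critical event at a random position instead. The function name `build_event_stream` is my own:

```python
import random

# Sketch: randomized event stream loosely following the described scheme.
# Noncritical ISIs come from {1.2, 2.0, 2.7} s (mean ~1.97 s, which with a
# ~0.25 s stimulus gives roughly 25 events/min). The long critical-signal
# ISIs from the paper do not fit a 1-minute trial, so the critical event
# is assigned to a random position in the stream instead.

NONCRITICAL_ISI_S = [1.2, 2.0, 2.7]

def build_event_stream(n_events=25, n_targets=1, rng=None):
    """Return a list of dicts with 'is_target' and 'isi_s' per event."""
    rng = rng or random.Random()
    target_positions = set(rng.sample(range(n_events), n_targets))
    return [
        {"is_target": i in target_positions,
         "isi_s": rng.choice(NONCRITICAL_ISI_S)}
        for i in range(n_events)
    ]

stream = build_event_stream(rng=random.Random(0))
```

Passing a seeded `random.Random` makes the stream reproducible across participants for piloting, and can later be replaced with a fresh generator per trial.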
Galinsky, T. L., Rosa, R. R., Warm, J. S., & Dember, W. N. (1993). Psychophysical determinants of stress in sustained attention. Human Factors: The Journal of the Human Factors and Ergonomics Society, 35(4), 603-614.
Scerra. Effect of sensory modality on performance, workload, and frontal lobe oxygenation in a standard vigilance task.
Shaw, T. H. (2006). Effects of signal modality and event asynchrony on vigilance performance and cerebral hemovelocity (Doctoral dissertation, University of Cincinnati).