How to use coroutines to collect microphone and joystick responses simultaneously?

Hey there!

In my experiment, participants decide whether or not they have seen a light. Answers are given by (a) moving a joystick and (b) talking into a microphone. I want the participants to give the joystick and microphone responses simultaneously. Accordingly, OpenSesame needs to assess both answers at the same time as well.

I have an inline script for joystick movement and one for voice detection, but how can I run these two scripts simultaneously in OpenSesame?

It seems like coroutines are the way to go. Unfortunately, I haven't been able to figure out how to implement my scripts.

For joystick movement detection I use:

import time
import pygame

pygame.init()
# The joystick object is assumed to be provided by OpenSesame's joystick
# plugin (initialized with device=0, joybuttonlist=None, timeout=None).

running = True
t0 = time.time()
# down = no light seen, up = light seen
while running:
    # Get input (blocks until axis movement, or for up to 50 s)
    position, timestamp = joystick.get_joyaxes(timeout=50000)
    if position[1] > 0.8:
        joy_dir = "down"
        joy_rt = time.time() - t0
        joy_cor = 0 if var.stimulus_occ == "yes" else 1
        running = False
    elif position[1] < -0.8:
        joy_dir = "up"
        joy_rt = time.time() - t0
        joy_cor = 1 if var.stimulus_occ == "yes" else 0
        running = False

print(joy_dir)
print(joy_rt)
print(joy_cor)
print(position[1])
print("done")

var.joy_y = position[1]
var.direc = joy_dir
var.rt = joy_rt
var.cor = joy_cor

And the microphone works with:

import pyaudio
import struct
import math
import time
import wave

sound_threshold = 0.3
timeout = 3

FORMAT = pyaudio.paInt16
SHORT_NORMALIZE = 1.0 / 32768.0
CHANNELS = 2
RATE = 44100
INPUT_BLOCK_TIME = 0.01
INPUT_FRAMES_PER_BLOCK = int(RATE * INPUT_BLOCK_TIME)
CHUNK = 1024

OUTPUT_FOLDER = "C:\\Users\\Admin\\Desktop\\2017_2018 Ian Masterarbeit\\Preparation\\1_Sound\\Sound_output\\"
WAVE_OUTPUT_FILENAME = "sound_" + str(var.mic_part) + "_" + str(self.get("count_mic_tr_1_15")) + ".wav"
FINAL_FILENAME = OUTPUT_FOLDER + WAVE_OUTPUT_FILENAME

def get_rms(block):
    """Get root mean square as a measure of loudness"""

    count = len(block) // 2  # number of 16-bit samples in the block
    format = "%dh" % count
    shorts = struct.unpack(format, block)
    sum_squares = 0.0
    for sample in shorts:
        n = sample * SHORT_NORMALIZE
        sum_squares += n * n
    return math.sqrt(sum_squares / count)

# Open the mic (keep a reference to the PyAudio instance so that it can
# be terminated later)
pa = pyaudio.PyAudio()
stream = pa.open(
    format=FORMAT,
    channels=CHANNELS,
    rate=RATE,
    input=True,
    input_device_index=0,
    frames_per_buffer=INPUT_FRAMES_PER_BLOCK
    )

# Record until a sound is detected or until timeout
print("* recording")
frames = []
start_time = time.time()

recording = True
while recording:
    if time.time() - start_time >= timeout:
        mic_rt = timeout
        response = "no response found"
        break
    try:
        block = stream.read(CHUNK)
    except IOError as e:
        print(e)
        continue
    frames.append(block)
    loudness = get_rms(block)
    if loudness > sound_threshold:
        mic_rt = time.time() - start_time
        response = "response found"
        recording = False

print(mic_rt)
print(response)

# Process the response
var.rt = mic_rt
var.response = response

# Close the audio stream
stream.stop_stream()
stream.close()

# Save the audio input to a wave file
wf = wave.open(FINAL_FILENAME, 'wb')
wf.setnchannels(CHANNELS)
wf.setsampwidth(pa.get_sample_size(FORMAT))
wf.setframerate(RATE)
wf.writeframes(b''.join(frames))
wf.close()

pa.terminate()

(Microphone script credit to Sebastiaan at http://forum.cogsci.nl/index.php?p=/discussion/1772/)

Appreciate your help!

Comments

  • Hi,

    Coroutines are generator functions: functions that contain a yield statement, which allows them to suspend and resume later. So if you want to have an inline_script that works together with items in a coroutines item, then you would need to write such a generator function, as described here:
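
For illustration, here is a minimal, self-contained sketch of such a generator function. The exact protocol expected by the coroutines item should be checked against the documentation; this sketch assumes the item runs the generator up to the first yield during preparation, then sends True once per cycle and False to stop:

```python
def detect_response():
    # Hypothetical coroutine: everything before the first yield is the
    # preparation phase; the while-loop body runs once per cycle.
    cycles = 0
    yield  # Preparation done
    while True:
        cycles += 1  # One unit of work per cycle (e.g. poll a device)
        keep_going = yield
        if not keep_going:
            break
    return cycles  # Clean-up phase (the return value ends the generator)
```

Driving it by hand shows the flow: next(g) runs the preparation, each g.send(True) runs one cycle, and g.send(False) lets the clean-up code run.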

    However, this doesn't strike me as the best solution in your case. Rather, what you want to do is integrate both loops (for recording sound and for collecting joystick responses) into a single loop. That is, you create a single while loop that breaks either when a joystick response has been given or when a sound has been detected.

    Does that make sense? So forget about coroutines—they're not necessary here. Just a single while loop in which two things happen. Do you think you'll be able to manage?

    Cheers,
    Sebastiaan

    There's much bigger issues in the world, I know. But I first have to take care of the world I know.
    cogsci.nl/smathot

  • Hey Sebastiaan,

    Thanks for your swift reply! I'm afraid I'm going to need some help with the loop. Specifically, the program needs to fulfill the following:

    1) I need answers from both the joystick and the microphone. That is, I want the program to keep assessing answers until both the joystick has been moved and an answer has been spoken into the microphone. Only then is it supposed to break.
    2) The order of the answers is open: people can answer first with the joystick and then with the microphone, or vice versa. This is the tricky part for me...

    Here is a rough version of what I tried out:

    # initialize joystick and prepare microphone beforehand

    run_all = True
    run_mic = True
    run_joy = True

    while run_all:
        while run_joy:
            # the joystick part. If an answer is recorded...
            run_joy = False
        while run_mic:
            # the mic part. If an answer is recorded...
            run_mic = False
        if not run_mic and not run_joy:
            run_all = False
    

    The program runs until there has been both microphone and joystick input, fulfilling (1).

    However, there is a fixed order in which the answers have to be given: in this version, the joystick has to be moved first, and only then is the microphone answer recorded. In other words, my program doesn't fulfill (2).

    Can you help me out?

    The problem here is that you put the code that checks joystick and mic input into while loops of its own, which block until input has been collected (first for the joystick, then for the mic). But that's not the flow that you want.

    Instead, you want to briefly check whether there has been joystick input (without blocking). If so, you exit the main loop. If not, you do nothing. Next, you briefly check whether there has been mic input. If so, you exit the main loop. If not, you do nothing.

    Does that make sense? So the logic should be like this (I used functions to make the flow clearer, but that's optional):

    def joy_input():
        # This function should check whether there has been joystick input
        # but return right away if there hasn't. (i.e. non-blocking)
        pass
    
    
    def mic_input():
        # This function should check whether there has been microphone input
        # but return right away if there hasn't. (i.e. non-blocking)
        pass
    
    
    while True:
        if joy_input():
            break
        if mic_input():
            break
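
The round-robin polling itself can be sketched in a testable way, independent of the hardware. Here collect_first is a hypothetical helper; in the experiment, the check functions would wrap non-blocking reads of the joystick axes and of one short audio block, as in the opening post:

```python
def collect_first(checks):
    """Cycle over (label, check) pairs and return the label of the
    first check that reports input. Every check must be non-blocking:
    it returns True or False immediately."""
    while True:
        for label, check in checks:
            if check():
                return label
```

With real devices, a joystick check would poll the current axis position, and a microphone check would read one short audio block and compare its RMS loudness against a threshold.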
    


  • Hey!

    I applied part of Sebastiaan's solution in my script (it includes an "eye" function, because I'm also assessing eye movements as answers with an eye tracker):

    ## Previously defined functions for microphone, joystick, and eye-tracker
    ## assessment. Each function returns True if a voice answer / joystick
    ## movement / eye movement has been recognized, and passes otherwise.
    
    a=0
    b=0
    c=0
    while True:
        if a==0:
            if run_mic():
                a=1
        else:
            pass
        if b==0:    
            if run_joy():
                b=1
        else:
            pass
        if c==0:
            if run_eye():
                c=1
        else:
            pass
        if a==1 and b==1 and c==1:
            break
        else:
            pass
    

    This loop repeats until the participant has given answers via the microphone, the eyes, and the joystick. Basically, it checks whether, say, a microphone answer has been recognized. If so, the microphone function won't be checked again. If not, it skips the microphone function for now and checks the next function, in this case the joystick function.
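
As an aside, the same logic reads a little more clearly with booleans instead of the 0/1 flags. Here it is wrapped in a hypothetical function so that the flow is easy to test (run_mic, run_joy, and run_eye are the non-blocking check functions from the script above):

```python
def collect_all(run_mic, run_joy, run_eye):
    # Loop until all three checks have reported a response, in any order.
    # A check that has already fired is not called again.
    done_mic = done_joy = done_eye = False
    while not (done_mic and done_joy and done_eye):
        if not done_mic and run_mic():
            done_mic = True
        if not done_joy and run_joy():
            done_joy = True
        if not done_eye and run_eye():
            done_eye = True
    return done_mic, done_joy, done_eye
```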

    The script works for my purpose. There is a small problem, however:

    The audio recordings are saved as .wav files. If I run the microphone function on its own (without assessing joystick and eye movements), the recordings are long enough to tell a "yes" from a "no" answer. But the audio files created with the abc-loop above are extremely short: you cannot hear whether a person said "yes" or "no", which is crucial for my experiment.

    Now I wonder if and how I can increase the recording time without blocking the other functions of my abc-loop. Does anyone have an idea?

    Appreciate it!
