Live Playback

Hello,

I am new to OpenSesame and I want to build an experiment where participants' breathing is recorded and saved as one wav file per trial. In some of the trials I want to play back their breathing live while recording, so they can hear their own breathing through their headphones.

I have found out how to record and save wav files. However, I would like to kindly ask for your assistance on how to implement the live playback, if that is possible. Any help would be greatly appreciated!

Thank you in advance for your time and consideration.

Best regards,

Effie

Comments

  • Hi Effie,

    Which solution makes the most sense depends on the implementation that you have so far. How about you share your solution to record and save wav files, and we start from there?

    Generally, I see two options:

    1) Redirect the audio stream to the output before/after you save it.

    2) Live-read the wav files that you save in parallel with recording the breathing.

    For both options I am not 100% certain that you can have multiple audio streams open simultaneously. So, I would need to try it out.
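For option 1, PyAudio can open a single full-duplex stream (`input=True` and `output=True` together), so each recorded chunk can be written straight back to the headphones without a second stream. A minimal, untested sketch along those lines — `record_with_monitoring` and `n_chunks` are hypothetical names, and it assumes the default input/output devices:

```python
import wave

CHUNK = 1024    # samples per buffer
FS = 44100      # sample rate (Hz)

def n_chunks(fs, chunk, seconds):
    """Number of buffers needed to cover `seconds` of audio."""
    return int(fs / chunk * seconds)

def record_with_monitoring(filename, seconds=3):
    """Record mono 16-bit audio while echoing each chunk to the output device."""
    import pyaudio  # imported here so the helper above is readable without PortAudio
    p = pyaudio.PyAudio()
    # One full-duplex stream: input=True reads the mic, output=True plays back
    stream = p.open(format=pyaudio.paInt16, channels=1, rate=FS,
                    frames_per_buffer=CHUNK, input=True, output=True)
    frames = []
    for _ in range(n_chunks(FS, CHUNK, seconds)):
        data = stream.read(CHUNK)
        stream.write(data)  # live playback of the chunk that was just recorded
        frames.append(data)
    stream.stop_stream()
    stream.close()
    width = p.get_sample_size(pyaudio.paInt16)
    p.terminate()
    # Save exactly what was recorded (and heard) as a WAV file
    with wave.open(filename, 'wb') as wf:
        wf.setnchannels(1)
        wf.setsampwidth(width)
        wf.setframerate(FS)
        wf.writeframes(b''.join(frames))

# Usage (needs a working microphone and output device):
# record_with_monitoring('trial_01.wav')
```

Note that with blocking reads and writes like this, the monitoring latency is at least one buffer (about 23 ms at 1024 samples / 44.1 kHz), which is usually acceptable for breathing sounds.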

    Eduard


  • edited October 2023

    Hi Eduard,

    Thank you for your reply. Below is my code to record and save wav files (apologies if there is a specific way to attach scripts):

    import pyaudio
    import wave
    import os

    chunk = 1024  # Record in chunks of 1024 samples
    sample_format = pyaudio.paInt16  # 16 bits per sample
    channels = 1
    fs = 44100  # Record at 44100 samples per second
    seconds = 3
    output_folder = 'C:\\Users\\user\\Desktop\\recording_file'
    filename = os.path.join(output_folder, f"{trial}.wav")  # trial is an experimental variable

    p = pyaudio.PyAudio()  # Create an interface to PortAudio

    # Open the mic
    stream = p.open(format=sample_format,
                    channels=channels,
                    rate=fs,
                    frames_per_buffer=chunk,
                    input=True)

    frames = []  # Initialize list to store frames

    # Store data in chunks for 3 seconds
    for i in range(0, int(fs / chunk * seconds)):
        data = stream.read(chunk)
        frames.append(data)

    # Stop and close the stream
    stream.stop_stream()
    stream.close()
    # Terminate the PortAudio interface
    p.terminate()

    # Save the recorded data as a WAV file
    wf = wave.open(filename, 'wb')
    wf.setnchannels(channels)
    wf.setsampwidth(p.get_sample_size(sample_format))
    wf.setframerate(fs)
    wf.writeframes(b''.join(frames))
    wf.close()
    


    What I basically want to do is stream the incoming audio in real time, so I think the first option you suggested is the more suitable one, right? If so, any assistance with the implementation would be a massive help.


    Thank you again for your time.


    Best,

    Effie

  • Hi Effie,

    I had some trouble getting your script to work on my system (I think because of my system...). Anyway, looking at this website here: https://thepythoncode.com/article/play-and-record-audio-sound-in-python

    it seems that you can simply call stream.write(data) to play back the recording. Unfortunately, I can't try it myself because of the issues mentioned above.
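Concretely, applied to the script above, that would mean two small changes: open the stream with `output=True` as well, and write each chunk back inside the read loop. An untested sketch of that loop, pulled out into a hypothetical helper called `read_and_echo`:

```python
def read_and_echo(stream, chunk, n_buffers, echo=True):
    """Read n_buffers chunks from `stream`; write each one back immediately if echo.

    For live monitoring, the stream would need to be opened full-duplex, e.g.:
        stream = p.open(format=pyaudio.paInt16, channels=1, rate=44100,
                        frames_per_buffer=chunk, input=True, output=True)
    """
    frames = []
    for _ in range(n_buffers):
        data = stream.read(chunk)
        if echo:
            stream.write(data)  # immediate playback of the recorded chunk
        frames.append(data)
    return frames
```

The `echo` flag would let the same loop serve both trial types (with and without live playback).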

    Hope this helps,

    Eduard


  • Hello Eduard,

    Thank you for your assistance. I will try it out and get back to you if I have more questions.


    That was a great help! Thanks again.


    Best,

    Efi

  • Hi Eduard,

    I am returning to this topic, as I recently found the audio_low_latency plugin, with which I can record and play back sound files in the background while my participants are watching a media_player_mpy item.

    Here is the link I used: https://github.com/dev-jam/opensesame-plugin-audio_low_latency

    However, I was wondering if there is a way, via this plugin, to play back the incoming audio in real time.

    At the moment I am using the same audio filename as in the audio_low_latency_record_start item, but it doesn't seem to recognize the directory.

    Any help would be greatly appreciated.

    Best,

    Effie

  • Hi Effie,

    "affective breathing" is not a valid file name. If it is a variable, it must be enclosed in {} or [], depending on whether you use OS4 or OS3. Also, for this to work, you need to start recording (and storing) before you start playing back the sound. Otherwise, there is no sound file available that can be put into the audio filename box of this toolbox. Can you try that?
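For example, assuming the recording item writes a file named after an experimental variable called trial (a hypothetical name), the playback item's audio filename box would need the same template:

    OS4:  {trial}.wav
    OS3:  [trial].wav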

    Eduard


  • Hi Eduard,


    Thank you for your immediate reply. I am using OS4, so I assume the variables need to be enclosed in {}. I am interested in recording/storing and playing back participants' breathing simultaneously while they are watching a video file, so I was wondering if there is a way to achieve this via this toolbox.


    Best,

    Effie

  • Hi Effie,

    I am not familiar with the toolbox, unfortunately. In any case, you should start with the easy things: try to make the playback and record parts work separately. If they do, you can think about how to combine them. In the meantime, you can ask a question via the issues tab of their GitHub repo. The developers should know best whether your goal is achievable with their software.

    Eduard

