Integrating webcam image into experiment
Hi everyone,
I've been looking for a solution to integrate a webcam window into my experiment in OpenSesame for quite a while, but couldn't really find one until now. I want to show full-screen images to the participants, and in the middle of the images I want to place the webcam image to simulate an online meeting. Is there a way to either place the camera window over the images, i.e. overlay the windows, or integrate the webcam image into the visual stimuli? I think the challenge is that the images should change every 10 seconds, but the camera image should remain throughout the experiment. I'd be really grateful if someone could come up with a solution.
Comments
Hi Anna,
Do you plan to run it online or in the lab? If in the lab, there is a chance you can capture the input from the webcam with OpenCV (the cv2 module) and include it as video in your experiment. I am not sure, though, whether it works and, if so, how easy it is to make it work. In any case, you may need to open a new window that is not a canvas to show the video, which means that you are leaving the experiment temporarily, which makes things complicated. But maybe I am wrong here. Could you try, as a first step, to capture the input of the webcam and present it in an inline_script? This link should be useful: https://www.geeksforgeeks.org/python-opencv-capture-video-from-camera/
If it works, we can think about how to proceed.
Good luck,
Eduard
Hi Eduard,
thank you so much, that worked. The question is now, how can I overlay this camera image over the visual stimuli? Do you have any idea on that?
Thanks again!
Anna
Hi Anna,
The straightforward approach would be to directly access the pixels in the OpenCV image object that you have, and change them according to your stimuli. I am not sure whether OpenCV has some basic drawing operations. Perhaps it is possible to integrate Pygame functions with OpenCV. But I am also mostly guessing here. If you share the code that is working so far, I can try to play around with it myself.
Good luck,
Eduard
Hi Eduard,
currently, only the OpenCV code is working. So I was using this one:
import cv2

# Open the default webcam (device 0)
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Scale the frame down to half size
    frame = cv2.resize(frame, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)
    cv2.imshow('Input', frame)
    # Exit when Esc (key code 27) is pressed
    c = cv2.waitKey(1)
    if c == 27:
        break
cap.release()
cv2.destroyAllWindows()
I also thought about building the visual stimuli around the webcam frame, using coroutines to do so. But I guess it will be much easier if it is possible to overlay the webcam frame on top of the whole picture with the visual stimuli. What do you think about it?
Thanks again,
Anna
Hi Anna,
sorry for the late reply!
"But I guess it will be much easier if it's possible to overlay the webcam frame over the whole picture with the visual stimuli."
I agree. I am not 100% sure, but the frame variable in your script is essentially not much more than a canvas, which you should be able to draw on. The only tricky part (and it might be the dealbreaker) is whether there are easy ways to draw on it without specifying the RGB values for each pixel. What are your stimuli? If they are images themselves, you can probably load them with OpenCV and then just replace the relevant pixels of the video frame. Does that make sense? Another issue that might pop up is performance: you would need to do this on every video frame, and I am not sure the process is quick enough to avoid any noticeable lag. But I guess there is no way around trying. In any case, it sounds quite cool, let me know how it goes :)
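The "replace the relevant pixels" idea can actually be turned around: instead of drawing the stimulus onto the video frame, copy the (smaller) webcam frame into the centre of the full-screen stimulus with NumPy slicing. A sketch with placeholder arrays (in the experiment, the stimulus would come from cv2.imread and the webcam frame from cap.read; sizes here are illustrative):

```python
import numpy as np

# Stand-ins: a grey full-screen stimulus image and a smaller camera frame
stimulus = np.full((768, 1024, 3), 128, dtype=np.uint8)
webcam_frame = np.zeros((240, 320, 3), dtype=np.uint8)

# Top-left corner that centres the camera frame on the stimulus
h, w = webcam_frame.shape[:2]
H, W = stimulus.shape[:2]
y0, x0 = (H - h) // 2, (W - w) // 2

# Replace the central pixel block with the webcam frame; this is a single
# NumPy slice assignment, so repeating it on every video frame is cheap
composite = stimulus.copy()
composite[y0:y0 + h, x0:x0 + w] = webcam_frame
```

Inside your capture loop you would then show the composite (e.g. with cv2.imshow) instead of the raw frame, and swap the stimulus array every 10 seconds.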
Eduard