PsychoPy installation breaks OpenSesame
Hi! I'm doing an experiment in which the plan is to show two stills on the display and subsequently animate one of the stills (with an .mp4, if that's possible). Now I found on these forums that PsychoPy would be preferable for coding this. As the computers used at the lab run Windows, I'm trying to run this on Windows myself as well. For good measure: I'm using 'opensesame_3.2.7-py3.6-win64-2.exe'.
If I follow the guide at https://osdoc.cogsci.nl/3.2/manual/environment/ and use it to install PsychoPy via the debug window, I always end up with the error 'PermissionError: [Errno 13] Permission denied: 'C:\\Program Files (x86)\\OpenSesame\\Lib\\site-packages\\PyQt5\\Qt.pyd''. Apparently, PsychoPy wants to replace the Qt file, which won't work because the running OpenSesame window is using it. If I then run something along the lines of '."C:\Program Files (x86)\OpenSesame\Scripts\pip" install psychopy', it does install - but then I'm greeted with the error attached to this post ('error.png'). Does that mean it's currently impossible to run PsychoPy on Windows 10? I'm using the pip that was installed by OpenSesame, not the most up-to-date version, if that matters.
What would now be the best way to proceed? The goal is to have, for 3 seconds, a still next to a video. The still could also be a video (just 1 frame for 3 seconds) if that makes programming any easier. FWIW, I have re-installed OpenSesame 3 times by now, so I don't suppose another re-install will fix it. System-wide I'm running Python 3.7.2 and pip 19.1. Thanks in advance for any replies!
Comments
Well then: if I had read properly, I would have seen this: "Some Python packages, notably PyGaze and PsychoPy, are not compatible with Python 3, and are therefore not available in the Python 3 packages of OpenSesame."
That explains a lot. Since we are using an eye tracker, would it be recommended to downgrade to Python 2.7, or is there a way in Python 3.6 to show a still (or a one-frame video) next to a moving video?
Hi,
Yes, if you want to use eye tracking and PsychoPy, I would stick to the Python 2 release for now. For the purpose of what you're trying to do, I don't think it matters much whether you're using Python 2 or 3. However, presenting a video and static text together will probably require a bit of a hack in any case!
Cheers!
Sebastiaan
PS. Recent versions of PsychoPy are actually compatible with Python 3 (this is a relatively recent development). However, we haven't tested this in combination with OpenSesame yet, and so I'm not surprised that you're running into some issues.
Check out SigmundAI.eu for our OpenSesame AI assistant!
Hi Sebastiaan,
Thanks for the reply! To clarify: the idea is not to have text, but a black screen for five seconds, followed by two images (stills of videos, both first frames) left and right of the center for three seconds, after which one of the pictures starts animating (turns into a video) for three seconds. And all the while we continuously track what the participants' eyes are doing.
It seemed like PsychoPy would be the way to go. Indeed, I went for Python 2.7 and now things are starting to work. Sadly, however, playing a video on its own already presents some difficulties. We are using the MovieStim3 class from PsychoPy to draw to the screen. However, the MovieStim3 window continuously overlaps the 'current' experiment window and takes on a different background colour.
Taking inspiration from an earlier thread kinda helped: http://forum.cogsci.nl/index.php?p=/discussion/137/solved-simultaneous-image-video-and-sound-presentation
Whether I use .mp4 or .avi doesn't matter - in the center only the picture is rendered; the video is nowhere to be found.
The adapted code I'm using is along these lines (filenames and positions are placeholders for our actual stimuli):
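```python
# Sketch of the adapted script (psycho backend). 'win' is the PsychoPy
# window that OpenSesame exposes to inline scripts; filenames and
# positions are placeholders.
from psychopy import visual, core

still = visual.ImageStim(win, pool['left.png'], pos=(-256, 0))
movie = visual.MovieStim3(win, pool['right.mp4'], pos=(256, 0))

clock = core.Clock()
while clock.getTime() < 3.0:  # present the still and the movie for 3 s
    still.draw()
    movie.draw()
    win.flip()
```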
Could you maybe give a pointer in the right direction? What would you say would be necessary to achieve the black screen/two stills/one still one video scenario?
And could it be that this is due to the following error? That would be odd, because the package is installed.
That has been fixed now as well. We have taken the cv2/numpy/pygame route to make it work :-) We can now properly render a video next to an image.
The very last thing I'm having issues with: the images/video are mirrored, and the videos are slightly jittery. We cannot quite figure out why that is. We suppose it has to do with the 'cv2.waitKey(33)' call - could that be done differently? For completeness, I'll include the code below (in outline; filenames and positions are placeholders):
Prepare:
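```python
# Prepare phase (outline; legacy backend, filenames are placeholders).
# Load the still and open the video here, so the run phase only draws.
import cv2
import numpy
import pygame

still = cv2.imread(pool['left.png'])         # still shown next to the video
video = cv2.VideoCapture(pool['right.mp4'])  # video to animate
```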
Run:
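```python
# Run phase (outline). cv2 frames are (height, width, 3) BGR arrays, while
# pygame.surfarray.make_surface() treats the first array axis as x - which
# is presumably where the mirrored output comes from.
surface = pygame.display.get_surface()
still_surf = pygame.surfarray.make_surface(
    cv2.cvtColor(still, cv2.COLOR_BGR2RGB))

while True:
    ret, frame = video.read()
    if not ret:
        break  # end of the video
    frame_surf = pygame.surfarray.make_surface(
        cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    surface.blit(still_surf, (64, 200))   # placeholder positions
    surface.blit(frame_surf, (544, 200))
    pygame.display.flip()
    cv2.waitKey(33)  # ~33 ms between frames (the suspected jitter source)
```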
> That has been fixed now as well. We have taken the cv2/numpy/pygame route to make it work :-) We can now properly render a video next to an image.
Good!
> the images/video are mirrored
numpy.rot90() has a k keyword that allows you to specify how often the array should be rotated. If you specify k=-1, then I suspect the mirroring should be fixed.

> We suppose it has to do with the 'cv2.waitKey(33)' call - could that be done any differently?
Yes. There's probably some variability in how long the operations take, and it's best to take this into account in the waiting period. One simple way would be to get the time before reading the frame and the time after flipping the display. Something like this (a sketch reusing the names from your run phase; adjust FRAME_DUR to your video's actual frame rate):
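```python
import time

FRAME_DUR = 1 / 30.  # assumed ~30 fps target, i.e. the 33 ms of waitKey(33)

while True:
    t0 = time.time()  # time before reading the frame
    ret, frame = video.read()
    if not ret:
        break
    # Rotate to compensate for the surfarray axis swap (see the rot90()
    # note above; the exact k, or an extra flip, may depend on the source)
    frame = numpy.rot90(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB), k=-1)
    surface.blit(pygame.surfarray.make_surface(frame), (544, 200))
    pygame.display.flip()
    t1 = time.time()  # time after flipping the display
    # Sleep only for what remains of this frame's slot, so variability in
    # reading/converting/flipping doesn't accumulate as jitter
    time.sleep(max(0, FRAME_DUR - (t1 - t0)))
```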
Does that make sense?
Cheers!
Sebastiaan
Check out SigmundAI.eu for our OpenSesame AI assistant!