Psychopy installation breaks OpenSesame

Hi! I'm doing an experiment in which the plan is to show two stills on the display and subsequently animate one of the stills (with an .mp4 if that's possible). Now I found on these forums that PsychoPy would be preferable for coding this. As the computers used at the lab run Windows, I'm trying to run this in Windows myself as well. I'm using the 'opensesame_3.2.7-py3.6-win64-2.exe' for good measure.

If I follow the guide and use the Debug Window to install PsychoPy, I always end up with the error 'PermissionError: [Errno 13] Permission denied: 'C:\\Program Files (x86)\\OpenSesame\\Lib\\site-packages\\PyQt5\\Qt.pyd''. This is, apparently, because PsychoPy wants to replace the Qt file, which won't work because the running OpenSesame window is using it. If I instead run something along the lines of '"C:\Program Files (x86)\OpenSesame\Scripts\pip" install psychopy' from outside OpenSesame, the installation does finish - but then I am greeted with the error attached to this post ('error.png'). Does that mean it is currently impossible to run PsychoPy under OpenSesame on Windows 10? I'm using the pip that was installed by OpenSesame, not the most up-to-date version, if that matters.

What would now be the best way to proceed? The goal is to show, for 3 seconds, a still next to a video. The still could also be a video (just 1 frame for 3 seconds) if that makes programming any easier. FWIW, I have re-installed OpenSesame 3 times by now, so I don't suppose another reinstall will fix it. System-wide I'm running Python 3.7.2 and pip 19.1. Thanks in advance for any replies!


  • Well then: if I had read properly, I would have seen this: "Some Python packages, notably PyGaze and PsychoPy, are not compatible with Python 3, and are therefore not available in the Python 3 packages of OpenSesame."

    That explains a lot. Since we are using an eye tracker, would it be recommended to downgrade to Python 2.7, or is it possible in Python 3.6 to show a still (or a one-frame video) next to a moving video?

  • Hi,

    Yes, if you want to use eye tracking and PsychoPy, I would stick to the Python 2 release for now. For the purpose of what you're trying to do, I don't think it matters much whether you're using Python 2 or 3. However, presenting a video and static text together will probably require a bit of a hack in any case!



    PS. Recent versions of PsychoPy are actually compatible with Python 3 (this is a relatively recent development). However, we haven't tested this in combination with OpenSesame yet, and so I'm not surprised that you're running into some issues.

  • edited April 2019

    Hi Sebastiaan,

    Thanks for the reply! To clarify, the idea is not to show text, but a black screen for five seconds, followed by two images (stills of videos, both their first frames) left and right of the center for three seconds, after which one of the pictures starts animating (turns into a video) for three seconds. Meanwhile we continuously track what the participants' eyes are doing.

    It seemed like PsychoPy would be the way to go. Indeed, I went for Python 2.7 and now things are starting to work. Sadly, however, playing a video on its own already gives some difficulties. We are using the MovieStim3 class from PsychoPy to draw the movie. However, the MovieStim3 continuously draws on top of the current experiment window and takes on a different background colour.

    Taking inspiration from an earlier thread helped somewhat:

    Using .mp4 or .avi doesn't matter - in the center only the picture is rendered, the video is nowhere to be found.

    The adapted code I'm using then is

    from psychopy import visual
    # 'win' is the window that the PsychoPy backend has already opened
    mov = visual.MovieStim3(win, r"CUT\Apple.avi")
    pic = visual.ImageStim(win, image=r"CUT\Apples4-1.jpg")
    while mov.status != visual.FINISHED:
        mov.draw()
        pic.draw()
        win.flip()

    Could you maybe give a pointer in the right direction? What would you say would be necessary to achieve the black screen/two stills/one still one video scenario?

    And could it be that this is caused by the following error? That would be odd, because the package is installed.

    [2019-04-30 15:57:22,737:process:152:INFO] Starting experiment as ExperimentProcess-24
    [2019-04-30 15:57:23,403:psycho:142:INFO] waitblanking = True
    [2019-04-30 15:57:23,403:psycho:143:INFO] monitor = testMonitor
    [2019-04-30 15:57:23,403:psycho:144:INFO] screen = 0
    [2019-04-30 15:57:24,015:legacy:185:INFO] sampling freq = 48000, buffer size = 1024
    [2019-04-30 15:57:24,332:experiment:450:INFO] experiment started
    [2019-04-30 15:57:24,332:experiment:454:INFO] disabling garbage collection
    [2019-04-30 15:57:28,506:experiment:462:INFO] experiment finished
    0.6400 	ERROR 	avbin.dll failed to load. Try importing psychopy.visual
      as the first library (before anything that uses scipy)
      and make sure that avbin is installed.
    2.5449 	WARNING 	pyo audio lib was requested but not loaded: ImportError('No module named pyo',)
    [2019-04-30 15:57:28,591:experiment:538:INFO] enabling garbage collection
    [2019-04-30 15:57:28,599:process:158:INFO] experiment finished!
  • Also that has been fixed now. We have taken the cv2/numpy/pygame route to make it work :-) We can now properly render a video next to an image.

    Then the very last thing I'm having issues with: the images/video are mirrored and the videos are slightly jittery. We cannot quite figure out why that is the case. We suppose it has to do with the 'cv2.waitKey(33)' call - could that be done any differently? For good measure, I'll include the code below:


    # Full path to the video and image files
    path = exp.get_file(var.filmExhaustive)
    pathImg = exp.get_file(var.staticDistributive)
    # Some code to make sure we can scale the images to their right locations
    scale = 0.90
    # Don't hug the edge
    offset = 25
    X = int(640*scale)
    Y = int(480*scale)
    left = (0+offset, 332)
    right = (1280-X-offset, 332)
    # Video/image left/right chance variable
    rightSide = var.random
    # Open the video and the image in colour
    video = cv2.VideoCapture(path)
    img = cv2.imread(pathImg,1)
    # Rotate the image because for some unknown reason it's projected rotated
    img = numpy.rot90(img)
    # The image uses BGR colors and PyGame needs RGB
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)


    # A loop to play the video file next to an image
    while True:
        # Get a frame
        ret, frame = video.read()
        # No more frames left? Break.
        if not ret:
            break
        # Rotate the video, because for some reason it otherwise appears flipped.
        # Not the culprit for mirroring the image
        frame = numpy.rot90(frame)
        # The video uses BGR colors and PyGame needs RGB
        # Not the culprit for mirroring the image
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        # Create PyGame surfaces and scale them to the right dimensions
        surf = pygame.transform.smoothscale(pygame.surfarray.make_surface(frame), (X, Y))
        surf2 = pygame.transform.smoothscale(pygame.surfarray.make_surface(img), (X, Y))
        # Chance function for the left/right image/video
        if rightSide == 0:
            exp.surface.blit(surf, left)
            exp.surface.blit(surf2, right)
        else:
            exp.surface.blit(surf, right)
            exp.surface.blit(surf2, left)
        # Write the frame to the display
        pygame.display.flip()
        # Use 30 fps and quit on q
        if cv2.waitKey(33) & 0xFF == ord('q'):
            break
  • > Also that has been fixed now. We have taken the cv2/numpy/pygame route to make it work :-) We can now properly render a video next to an image.


    > the images/video are mirrored

    numpy.rot90() has a k keyword that allows you to specify how often the array should be rotated. If you specify k=-1, then I suspect the mirroring should be fixed.
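
    For good measure, the transpose-vs-rotate interaction can be checked with a few lines of NumPy. This is a small sketch, assuming (as pygame's surfarray documentation states) that make_surface reads arrays in (x, y) order, i.e. transposed relative to cv2's (row, column) frames, so what ends up on screen is effectively the transpose of what you pass in:

    ```python
    import numpy as np

    # Tiny stand-in for a cv2 frame: shape (rows, cols) = (height, width)
    frame = np.array([[1, 2, 3],
                      [4, 5, 6]])

    # What rot90(frame) looks like once surfarray transposes it:
    # the horizontally mirrored original.
    assert np.array_equal(np.rot90(frame).T, np.fliplr(frame))

    # Swapping the axes instead of rotating leaves the image unmirrored.
    assert np.array_equal(frame.swapaxes(0, 1).T, frame)
    ```

    So frame.swapaxes(0, 1) (or an extra numpy.fliplr() after rotating) may also be what undoes the mirroring.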

    >  We suppose it has to do with the 'cv2.waitKey(33)' call - could that be done any differently?

    Yes. There's probably some variability in how long the operations take, and it's best to take this into account in the waiting period. One simple way would be to get the time before reading the frame and the time after flipping the display. Something like this:

    t0 = clock.time()
    # Do time consuming stuff
    t1 = clock.time()
    # clock.time() is in milliseconds; clamp to at least 1 ms,
    # because cv2.waitKey() expects a positive integer
    waiting_period = max(1, int(33 - (t1 - t0)))
    if cv2.waitKey(waiting_period) & 0xFF == ord('q'):
        break

    Does that make sense?
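
    The same idea as a self-contained sketch (remaining_wait_ms and frame_ms are illustrative names, not part of cv2; note that OpenSesame's clock.time() returns milliseconds, while this standalone version uses time.perf_counter(), which returns seconds):

    ```python
    import time

    def remaining_wait_ms(t0, t1, frame_ms=33):
        """Milliseconds left in the frame budget after the work between t0 and t1.

        cv2.waitKey() expects a positive integer, so clamp to at least 1 ms.
        A 33 ms budget approximates 30 fps.
        """
        elapsed_ms = (t1 - t0) * 1000.0
        return max(1, int(round(frame_ms - elapsed_ms)))

    t0 = time.perf_counter()
    # ... read a frame, convert colors, blit, flip ...
    t1 = time.perf_counter()
    wait = remaining_wait_ms(t0, t1)   # pass this to cv2.waitKey()
    ```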


