Hello everyone,

I want to present a video and some images simultaneously on one screen.

I've looked for a topic like that in the forum and found this one:


So I adapted the paths and created an inline script with the code I found in the link above.
But when I executed the experiment, I received an error.

You can see the structure of the experiment and the error message in the debug window in the attached image file.

I think the problem may be that OpenSesame does not know in which window it should display the images and the video, but I don't know how to program this (for example, how to make a reference to the "mywin" sketchpad defined before the inline script).

The installed OpenSesame version is "opensesame_3.1.9-py2.7-win32-1.exe" on a Windows 8.1 machine.

Does anybody have an idea how to realize this?



  • Carsten

    Hi there...

    Over the last few days I found out how to fix this issue:

    The first thing I had to do was install OpenCV. You can get it from here:


    I used the Windows self-extracting archive. How to install it is described here:


    Some remarks about the installation:

    Please don't install a separate Python 2.7. Just copy "cv2.pyd" into the OpenSesame Python path, in my case:


    You will find "cv2.pyd" in the archive that you extracted from the first link.
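    To check that the copy worked, you can open the "Idle" shell that ships with OpenSesame (see below for where to find it) and try to import the module. This is just a quick sanity check, nothing OpenSesame-specific:

        # Run this in the Idle shell of the OpenSesame Python installation
        import cv2
        # If the import succeeds, this prints the OpenCV version that was found
        print(cv2.__version__)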

    You can find "Idle" in my installation here:


    Then, in OpenSesame, choose "legacy" as the back-end under "New experiment".

    Insert a new "inline_script" item and paste the following code into its Run phase:

        import cv2
        import numpy
        import pygame

        # Video filename in the file pool
        path_video = pool['VO1LHi12.mp4']
        # Eye-tracking marker image (top left) in the file pool.
        # 'marker_OL.png' is a placeholder: replace it with the actual
        # filename of your marker image.
        path_Marker_OL = pool['marker_OL.png']
        # Open the video
        video = cv2.VideoCapture(path_video)
        # Load and scale the marker image once, before the loop
        image = pygame.image.load(path_Marker_OL)
        image_surface = pygame.transform.scale(image, (150, 150))
        # Set the return value to True to get into the while loop
        retval = True
        # Background color: white
        white = [255, 255, 255]
        # Loop until the whole video has been played
        while retval:
            # Get a frame
            retval, frame = video.read()
            # If no frame is available, the video is finished: exit the loop
            if not retval:
                break
            # Resize the video frame
            frame = cv2.resize(frame, (1280, 800))
            # Rotate it, because for some reason it otherwise appears flipped
            frame = numpy.rot90(frame)
            # The video uses BGR colors and PyGame needs RGB
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            # Create a PyGame surface
            video_surface = pygame.surfarray.make_surface(frame)
            # The video is mirrored along the x-axis; correct this
            video_surface = pygame.transform.flip(video_surface, True, False)
            # Calculations: x_pos = (BS_x_Exp - Videosize_x) / 2 = (1680 - 1280) / 2 = 200
            #               y_pos = (BS_y_Exp - Videosize_y) / 2 = (1050 - 800) / 2 = 125
            exp.surface.fill(white)
            exp.surface.blit(video_surface, (200, 125))
            exp.surface.blit(image_surface, (0, 0))
            # Update the display so the current frame actually becomes visible
            pygame.display.flip()

        # Release the video file when playback is finished
        video.release()

    Make sure that the video and the image are in the file pool.
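    If you also want to be able to abort playback with a key press, you can poll PyGame events inside the while loop, right after pygame.display.flip(). This is only a sketch on top of the code above (plain PyGame event handling, no OpenSesame-specific calls):

        # Inside the while loop, after pygame.display.flip():
        for event in pygame.event.get():
            # Any key press ends playback after the current frame
            if event.type == pygame.KEYDOWN:
                retval = False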

    That's it...

