Synching video playback and joystick axis data collection (multiple times per trial)
Dear All,
Apologies in advance for the long post. We are designing an experiment in which participants will be shown a video and will be asked to continuously rate an attribute of the video along one dimension, for example low to high synchrony between two musicians, using a joystick. We want to log joystick positional data every 200 milliseconds. The joystick and video playback need to be synched.
We have tried various options. The code for the latest attempt is below, but we are at a point where some feedback and guidance would be very helpful and greatly appreciated.
First attempt:
We first tried using the recommended media_player_mpy plugin together with the joystick plugin. We can get the video to play and the joystick to log axis position data every 200 ms, but not simultaneously. We thought perhaps it might require coroutines; however, it doesn't look like the media player plugin works within a coroutine. Is there a different media player plugin that would work better with the joystick plugin?
Next step:
We decided we needed to move away from the media player and joystick plugins altogether and run both the video and the joystick from an inline script, which, although rather challenging, is looking promising. We can now play a short excerpt of the video clip while concurrently collecting positional data from the joystick; however, there are still issues to resolve, the main ones being:
- making sure the start times of joystick response collection and video playback are fully synchronised. They seem very close, but the data output suggests that joystick data collection currently starts slightly earlier than video playback. Is there a specific function that can trigger video playback and the start of joystick data collection at exactly the same time?
- we've had to switch back from mp4 files to avi files because the mp4 files were not being recognised properly; the problem was instantly fixed when .avi files were used instead. It would be good to be able to play mp4 files, however. Is there recommended documentation of the video codecs that are supported when using OpenCV in inline scripting?
- The video playback is a bit "jerky" at the moment, as though the transition from one frame to the next is not seamless. We need to solve this as we don't want artifacts of the video display process to affect participants' ratings of visual synchrony.
All help appreciated!
Inline script follows (using the legacy backend):
import pygame
import numpy
import cv2

pygame.init()
pygame.joystick.init()

# Containers for the joystick samples and their timestamps
position_x = []
position_y = []
position_z = []
timedata = []

# Initialize the joystick
self.my_joystick = pygame.joystick.Joystick(0)
self.my_joystick.init()

# Record the start time of the trial
Start_time = self.time()
exp.set("Start_time", Start_time)

# Get the surface to draw on, depending on the canvas back-end
if self.get('canvas_backend') == 'legacy':
    surface = exp.surface
elif self.get('canvas_backend') == 'xpyriment':
    surface = exp.window

# Set a regularly occurring event every 200 ms. Joystick data will be
# collected every time one of these events occurs (sampling rate of 5 Hz).
BEEP = pygame.USEREVENT + 1
pygame.time.set_timer(BEEP, 200)
FPSCLOCK = pygame.time.Clock()

# Open the video
path = exp.get_file('FreeJazz.avi')
cap = cv2.VideoCapture(path)

# Play the first 1000 frames of the video (eventually the whole clip will
# be played, but it is kept short for diagnostic purposes)
for i in range(1000):
    # Cap the frame rate at 25 fps
    FPSCLOCK.tick_busy_loop(25)
    # Read the next video frame
    retval, frame = cap.read()
    # Convert from OpenCV's BGR color order to RGB
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # Rotate so the video doesn't appear sideways
    frame = numpy.rot90(frame)
    # Display the frame
    surf = pygame.surfarray.make_surface(frame)
    surface.blit(surf, (0, 0))
    pygame.display.flip()
    # If the 200 ms timer event has fired, sample the joystick
    if pygame.event.get(BEEP):
        # Get x, y, and z positional data from the joystick plus a timestamp
        position_x.append(self.my_joystick.get_axis(0))
        position_y.append(self.my_joystick.get_axis(1))
        position_z.append(self.my_joystick.get_axis(2))
        timedata.append(pygame.time.get_ticks())

# Clean up: stop the timer and release the video
pygame.time.set_timer(BEEP, 0)
cap.release()

# Save the collected data as experimental variables
exp.set("position_x", position_x)
exp.set("position_y", position_y)
exp.set("position_z", position_z)
exp.set("timedata", timedata)
Comments
Hi Palacegreen,
I am the author of the media_player_mpy plugin. You should be able to do what you want by using the script box that is provided in the plugin, if you set 'Call custom Python code' to 'after every frame'. Have you tried this? It should be quite easy to translate the script you posted above into something that works in the mpy script box. If you use OpenSesame's built-in var object, you can save variables between items (useful if you want to log the data you collect in the script box later).

I'm giving it a go below, but I have not tested this script, so you might need to do some tinkering of your own to make it work. It might look a bit wonky, but keep in mind that this script is called after each displayed video frame (and thus could in fact be considered a loop), so there is no clean way to set the starting variables without reinitializing them on every call; I simply use a try-except statement to account for this. Also, I don't know if it is wise to use pygame events here if you simply want to measure an interval; you might as well monitor OpenSesame's own clock instead:
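# Untested sketch for the media_player_mpy script box. It assumes the
# script's namespace persists between calls and that 'clock' and 'exp'
# are available there (check the plugin's help file).
try:
    # On all calls after the first one, t1 already exists
    t1
except NameError:
    # First call: initialize the joystick and the data containers
    import pygame
    pygame.joystick.init()
    my_joystick = pygame.joystick.Joystick(0)
    my_joystick.init()
    position_x = []
    position_y = []
    position_z = []
    timedata = []
    t1 = clock.time()

# Sample the joystick once at least 200 ms have passed since the last
# sample, using OpenSesame's own clock instead of a pygame timer event
if clock.time() - t1 >= 200:
    position_x.append(my_joystick.get_axis(0))
    position_y.append(my_joystick.get_axis(1))
    position_z.append(my_joystick.get_axis(2))
    timedata.append(clock.time())
    t1 = clock.time()
    # Store the data on the experiment so it can be logged later
    exp.set("position_x", position_x)
    exp.set("position_y", position_y)
    exp.set("position_z", position_z)
    exp.set("timedata", timedata)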
Let me know if this works for you. Scripting in media_player_mpy might be a bit complex at first, but it saves you the hassle of rendering the video.
I don't know enough about rendering videos with OpenCV, so maybe someone else can help you there if you want to go down that road.
Hi Daniel. Thanks very much for your help and quick response! I was able to get the experiment to do mostly what it needs to do by first inserting an inline script that initialises the variables and joystick, roughly along these lines:
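# Inline script that runs before the media player item. The joystick and
# the data containers are stored on the experiment object so that the
# media player's event-handling script can reach them. (Sketch of the
# setup; exact variable names are illustrative.)
import pygame

pygame.joystick.init()
exp.my_joystick = pygame.joystick.Joystick(0)
exp.my_joystick.init()
exp.position_x = []
exp.position_y = []
exp.position_z = []
exp.timedata = []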
And then using the event handler within the media_player_mpy plugin, roughly as such:
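# Event-handling script in the media_player_mpy item; it is called
# repeatedly while the video plays. As in Daniel's sketch, this assumes
# the script's namespace persists between calls.
try:
    t1
except NameError:
    t1 = clock.time()

# Only take a joystick sample once at least 200 ms have elapsed since
# the previous one
if clock.time() - t1 > 200:
    exp.position_x.append(exp.my_joystick.get_axis(0))
    exp.position_y.append(exp.my_joystick.get_axis(1))
    exp.position_z.append(exp.my_joystick.get_axis(2))
    exp.timedata.append(clock.time())
    t1 = clock.time()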
One limitation here is that I'm still not getting joystick positional data exactly every 200 ms as originally desired (e.g. sometimes the interval between data points is 201 ms, sometimes it is as high as around 240 ms). I suppose this is because the custom code is not being called every millisecond but only after every frame. It would be useful if you could explain exactly what the event handler does when you tell it to call custom code 'after every frame'. I initially assumed this meant the code would be called once every 40 ms with the video running at 25 fps; however, when I called the custom code after every frame without the conditional statement 'if clock.time() - t1 > 200', I got many more data points than expected if the code were only being called once per frame.
Thanks for your time.
You're right, the code is called on each screen refresh, not only after a new frame has been shown. I decided to do it like this in the end because it allows more fine-grained time measurements. That should be changed in the documentation.
Regardless, you do have a variable frame available in the script, which contains the number of the current frame being displayed (see the help file for the media_player_mpy item). If you wanted the code to run only once per frame, you could incorporate an extra check such as the one below, but that is not relevant to you right now, if I understand correctly.
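# Untested sketch: skip the rest of the script unless a new frame has
# been displayed since the last call, using the plugin's 'frame' variable
try:
    last_frame
except NameError:
    last_frame = 0

if frame != last_frame:
    last_frame = frame
    # ... code that should run only once per displayed frame goes here ...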
Did you manage to get the timing exactly at 200 ms in the end? I find it odd that there is so much variance in your interval measurements. One thing I can think of is that the time needed to decode a video frame varies across the video: complex frames (e.g. with a lot of detail, many different pixel colors, or a lot of movement) are more time-intensive to decode, which also delays other processes. If you want to check whether your clip has such expensive frames, you could time the decoding step itself, for example with a standalone OpenCV script along these lines (untested):
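import time
import cv2

# Time each frame-decode call to see how much the decode duration
# varies across the clip (untested sketch; adjust the filename)
cap = cv2.VideoCapture('FreeJazz.avi')
decode_ms = []
while True:
    t0 = time.time()
    retval, frame = cap.read()
    if not retval:
        break  # end of the video
    decode_ms.append((time.time() - t0) * 1000.0)
cap.release()
print('decode time per frame: min %.1f ms, max %.1f ms'
      % (min(decode_ms), max(decode_ms)))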