FLICKERING PROBLEM

Good Morning,
I'm Arianna and I'm developing an experiment using Expyriment and the LEAP Motion device. In this study each trial is made up of four stimuli: a forward mask lasting 49 ms, a prime lasting 83 ms, a backward mask lasting 49 ms, and a target lasting until response. I manipulated the SOA between the prime and the target: the prime can appear 100 ms, 50 ms, or 0 ms before the target. When the SOA is 100 everything works, but when the SOA is either 50 or 0 the result is a flickering stimulation.
Below are the lines of the script that handle the chronological presentation of the stimuli:
```
pic = target

screen_pic = stimuli.BlankScreen()
pic.plot(screen_pic)
screen_pic_prime = stimuli.BlankScreen()
pic.plot(screen_pic_prime)
stimuli.TextLine(text=str(prime), text_size=46).plot(screen_pic_prime)
prime_screen = stimuli.TextLine(text=str(prime), text_size=46)

if SOA == 0:
    screen_pic_prime.present()
    exp.clock.reset_stopwatch()  # IMPORTANT: this is now the point in time that your "correctRT" starts counting!
    exp.clock.wait(t_prime)
    pic.present(clear=True)
    screen_pic.present()  # a problem here is that during all this time, the program doesn't check for keyups
else:
    prime_screen.present()  # prime appears
    if SOA == 50:
        exp.clock.wait(SOA)  # for 50 ms, that is, the prime needs 33 ms more
        screen_pic_prime.present()  # target appears
        exp.clock.reset_stopwatch()
        exp.clock.wait(t_prime - SOA)  # for 33 ms
        pic.present(clear=True)
    elif SOA == 100:
        exp.clock.wait(t_prime)  # for 83 ms
        exp.clock.wait(SOA - t_prime)  # for the rest of the SOA-time (17 ms)
        exp.clock.reset_stopwatch()
        exp.clock.wait(t_mask - (SOA - t_prime))  # for the rest of the mask-time (32 ms)
    screen_pic.present()  # only pic
```

Arianna F.


• Dear Arianna,

What is your screen refresh rate? I am asking, because you use timings (49ms) that do not seem to fall at multiples of common refresh rates (e.g. 60Hz, 100Hz, 120Hz).
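To illustrate the point: on a fixed-refresh display, any requested duration is effectively quantized to whole frames. A plain-Python sketch (not Expyriment code; 60 Hz is just an assumed example rate):

```python
# Durations on a fixed-refresh display are quantized to whole frames.
# Plain-Python sketch; 60 Hz assumed purely for illustration.
REFRESH_HZ = 60.0
FRAME_MS = 1000.0 / REFRESH_HZ  # ~16.67 ms per frame


def frames_for(duration_ms, frame_ms=FRAME_MS):
    """Whole frames closest to duration_ms, and the duration actually shown."""
    n = int(round(duration_ms / frame_ms))
    return n, n * frame_ms


for requested in (49, 50, 83, 100):
    n, actual = frames_for(requested)
    print("%3d ms requested -> %d frames = %.2f ms shown" % (requested, n, actual))
```

So a requested 49 ms becomes 3 frames (50 ms) at 60 Hz, and 83 ms becomes 5 frames (83.33 ms).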

From what I understand, your script does the following if SOA is 50:

• it presents the screen_mask stimulus for x frames that fit into 49 ms (depends on your screen refresh rate)
• it presents the prime_screen stimulus for x frames that fit into 50 ms (depends on your screen refresh rate)
• it presents the screen_pic_prime stimulus for x frames that fit into 33 ms (depends on your screen refresh rate)
• it presents the pic stimulus for 1 single frame (this will hence not really be visible as it is too fast)
• it presents the mask stimulus for 1 single frame (this will hence not really be visible as it is too fast)
• it presents the screen_pic_mask stimulus for x frames that fit into 49 ms (depends on your screen refresh rate)

You hence present 6 stimuli in rapid succession, from which two will be so quick (only one frame) that they will not be visible.

Is this what you intend to do?

• Dear prof. Krause,
The refresh rate is 60 Hz.
My script doesn't work with either SOA = 50 or SOA = 0.
Maybe this is due to the fact that in those cases, unlike with SOA = 100, the mask should appear in place of the prime on the target (pic).
The target consists of a big picture, the mask and the prime should alternate at the center of it.
As you correctly understood, I want the mask to appear subliminally, the prime consciously but too briefly to react to, and the target to remain until the participant's response.

• Given the refresh rate of 60Hz, here is what your script does in case SOA is 50:

• it presents the screen_mask stimulus for 3 frames (50 ms)
• it presents the prime_screen stimulus for 3 frames (50 ms)
• it presents the screen_pic_prime stimulus for 2 frames (33.33333... ms)
• it presents the pic stimulus for 1 frame (16.66666...ms)
• it presents the mask stimulus for 1 frame (16.66666...ms)
• it presents the screen_pic_mask stimulus for 3 frames (50 ms)

Hence, all of these six stimuli are probably not consciously perceived.

Make sure to always use the test suite to check that your computer and graphics card report accurate timings of visual presentations (http://docs.expyriment.org/Testsuite.html).

• What I see now with SOA = 50 is that when the mask replaces the prime, the result is a flickering stimulation.
If I cut the following line:
pic.present(clear=True)
I see that the prime appears as the last event, in place of the backward mask.

• It is not clear to me what your goal is. I understand that your code apparently does not do what you want it to do, but I don't understand what it is you want it to do.

You first mentioned that each trial consists of 4 stimuli, but you present 6 stimuli in the script.

Given your initial description of what a trial should look like, the code would simply be this:

```
forward_mask.present()
exp.clock.wait(50)  # because 49 is not possible with 60 Hz
prime.present()
exp.clock.wait(83)  # will actually be 83.3333... ms, because exactly 83 is not possible with 60 Hz
backward_mask.present()
exp.clock.wait(50)  # because 49 is not possible with 60 Hz
target.present()
exp.keyboard.wait()  # or some other response (e.g. from a button box)
```

The manipulation of the SOA between prime and target, however, is not clear to me, since there is another stimulus in between.

• I understood the source of our misunderstanding: by "stimuli" I meant the single items that make up a trial (mask, prime, mask and target), not the screens displayed. Let me explain in more detail:
With SOA = 50

1° the forward mask appears on the blank screen for 50 ms

2° after 50 ms the mask disappears from the blank screen and the prime appears for 50 ms (= SOA)

3° after 50 ms (= SOA) the target appears and the prime remains (on the target) for another 33 ms (83 - SOA)

4° after those 33 ms, while the target is still present, the prime disappears and the mask appears (on the target) again, lasting 50 ms

5° after 50 ms the mask disappears and the target stays on the screen until the response
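Written out as a nominal event schedule, the five steps look like this (a plain-Python sketch, assuming the 49 ms masks are rounded to 50 ms at 60 Hz):

```python
# Nominal event schedule for SOA = 50, following the steps above.
# Times in ms; None = "until response". Plain-Python sketch, not Expyriment code.
SOA = 50
T_PRIME = 83  # total prime duration (blank-screen part + on-target part)
T_MASK = 50   # forward/backward mask duration (49 ms rounded to 50 at 60 Hz)

schedule = [
    # (event, onset, offset)
    ("forward mask on blank screen", 0, T_MASK),
    ("prime on blank screen", T_MASK, T_MASK + SOA),
    ("prime on target", T_MASK + SOA, T_MASK + T_PRIME),
    ("backward mask on target", T_MASK + T_PRIME, T_MASK + T_PRIME + T_MASK),
    ("target alone, until response", T_MASK + T_PRIME + T_MASK, None),
]

# total prime exposure: SOA ms alone plus (T_PRIME - SOA) ms on the target
prime_total = (schedule[1][2] - schedule[1][1]) + (schedule[2][2] - schedule[2][1])

for event, onset, offset in schedule:
    print("%4d -> %s : %s" % (onset, str(offset), event))
```

The prime is on screen for 83 ms in total, and the target onset falls 50 ms (= SOA) after the prime onset.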

• Prepare the following stimuli before stimulus presentation:

• `mask = expyriment.stimuli.Picture("...")`
• `prime = expyriment.stimuli.Picture("...")`
• `target = expyriment.stimuli.Picture("...")`

As well as:

• `prime_on_target = target.copy(); prime.plot(prime_on_target)`
• `mask_on_target = target.copy(); mask.plot(mask_on_target)`

Then simply do:

```
mask.present()
exp.clock.wait(50)
prime.present()
exp.clock.wait(SOA)
prime_on_target.present()
exp.clock.wait(83 - SOA)
mask_on_target.present()
exp.clock.wait(50)
target.present()
exp.keyboard.wait()  # or a response from any other device
```
• Thank you for all your clear teachings!
I've just implemented the modifications you suggested in my script.
Since the prime is a digit and not a picture, what can I use in place of "plot"?

• plot will work with all visual stimuli

• I receive an error message telling me that "prime" object has no attribute plot

• Then `prime` is not of any of the Expyriment (visual) stimulus types.

• "prime" is a digit ranging from 1 to 10
should I use a different approach (e.g., overlaying)?

• I thought prime is a stimulus you want to show on the screen?

• "#"= forward mask lasting 49 (now 50 ms)
"1" = digit meaning the prime lasting 83 ms
"#" = backward mask lasting 49 (now 50 ms)
picture of two hands = target stimulus on the screen

with SOA = 50 the digit should appear on the blank screen 50 ms before the picture; after 50 ms the target appears and the digit should stay on the picture for another 33 ms before being replaced by the backward mask

• Then I don't understand what the problem is.

This should just work:

```
mask = expyriment.stimuli.TextLine("#")
prime = expyriment.stimuli.TextLine(str(digit))
target = expyriment.stimuli.Picture("...")
```
•
```
SOA = trial.get_factor("t_SOA")

if trial.get_factor("fingers") == 0:
    prime = trial.get_factor("digit")
else:
    pic = stimuli.Picture("{}fingers_{}_{}small.JPG".format(trial.get_factor("fingers"), posturetype, trial.get_factor("smallhand")))
    prime = trial.get_factor("fingers") + trial.get_factor("distance")

#screen_pic = stimuli.BlankScreen()
#pic.plot(screen_pic)
#screen_pic_prime = stimuli.BlankScreen()
#pic.plot(screen_pic_prime)
#stimuli.TextLine(text=str(prime), text_size=46).plot(screen_pic_prime)
#prime_screen = stimuli.TextLine(text=str(prime), text_size=46)
prime_on_pic = pic.copy()
prime.plot(prime_on_pic)

rand_fixdot = design.randomize.rand_int(t_fixdot[0], t_fixdot[1])
fixdot.present()
exp.clock.wait(rand_fixdot)

if SOA == 0:
    prime_on_pic.present()
    exp.clock.reset_stopwatch()  # IMPORTANT: this is now the point in time that "correctRT" starts counting! (when SOA equals 0)
    exp.clock.wait(t_prime)
    pic.present()  # a problem here is that during all this time, the program doesn't check for keyups
else:
    prime.present()  # prime appears
    if SOA == 50:
        exp.clock.wait(SOA)  # for 50 ms, that is, the prime needs 33 ms more
        prime_on_pic.present()  # target appears
        exp.clock.reset_stopwatch()
        exp.clock.wait(t_prime - SOA)  # for 33 ms
    elif SOA == 100:
        exp.clock.wait(t_prime)  # for 83 ms
        exp.clock.wait(SOA - t_prime)  # for the rest of the SOA-time
        exp.clock.reset_stopwatch()
        exp.clock.wait(t_mask - (SOA - t_prime))  # for the rest of the mask-time
    pic.present()  # only pic
```
• It keeps on giving me the following error message:
```
prime.plot(prime_on_pic)
AttributeError: 'int' object has no attribute 'plot'
```

NOTE I used pic instead of target

• Well, yes, of course. An integer is a built-in Python data type representing a number, not an Expyriment stimulus.

You first need to create a stimulus. Please see my example above.
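The error arises because `prime` holds a plain `int`, and only stimulus objects have a `plot()` method. A plain-Python illustration with a toy stand-in class (`ToyTextLine` is hypothetical, standing in for `stimuli.TextLine`):

```python
# Why 'int' object has no attribute 'plot': plot() belongs to stimulus objects.
# ToyTextLine is a hypothetical stand-in for expyriment.stimuli.TextLine.
class ToyTextLine(object):
    def __init__(self, text):
        self.text = text

    def plot(self, canvas):
        canvas.append(self.text)  # the real method draws onto another stimulus


prime = 7                      # an int: prime.plot(...) would raise AttributeError
assert not hasattr(prime, "plot")

prime = ToyTextLine(str(7))    # first wrap the digit in a text stimulus
canvas = []
prime.plot(canvas)             # now the digit is part of the composed display
```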

•
```
prime_on_pic = pic.copy()
prime.plot(prime_on_pic)
```

these lines give me "invalid syntax" error

• All of them? That is very strange.

It is in general very difficult to judge errors without seeing the entire script. Could you provide the entire script?

• Yes, all of them. I tried deleting each of them and the error moved to the next line. Now I attach the whole script.

• I apologize; since I can't manage to attach the file, I have to copy and paste the script.

```
# -*- coding: cp1252 -*-

import Leap, cv2, math, ctypes, sys, thread, time, ctypes, os
from expyriment import misc, design, control, stimuli, io
import numpy as np

control.set_develop_mode(False)

# design

exp = design.Experiment(name="PostureProduction",
                        background_colour=misc.constants.C_BLACK,
                        foreground_colour=misc.constants.C_WHITE)

trials_per_block = 81
Block_Anzahl = 2  # only 1 block. the second block is for the training.
posturetype = "count"  # "mont" here for the other version

control.initialize(exp)

block = design.Block(name="Experiment")
for t_SOA in [100, 50, 0]:
    for digit in range(1, 10):
        trial = design.Trial()
        trial.set_factor("digit", digit)
        trial.set_factor("t_SOA", t_SOA)
        trial.set_factor("fingers", 0)
        trial.set_factor("catch", 1)
    for distance in [-2, 0, 2]:
        for posturetype in ["count", "count"]:
            for smallhand in ["left", "right"]:
                for fingers in [3, 4, 6, 7]:
                    trial = design.Trial()
                    trial.set_factor("catch", 0)
                    trial.set_factor("distance", distance)
                    trial.set_factor("t_SOA", t_SOA)
                    trial.set_factor("fingers", fingers)
                    trial.set_factor("smallhand", smallhand)
                    trial.set_factor("count", posturetype)
                    block.add_trial(trial, copies=3)

for block in exp.blocks:
    block.shuffle_trials()
exp.shuffle_blocks()

exp.data_variable_names = ["block_cnt", "trial_cnt", "t_fixdot", "catch_trial",
                           "smallhand", "posturetype",
                           "target", "distance", "prime", "t_SOA",
                           "first_button_released", "RT_1", "second_button_released", "RT_2", "RT_1_plus_2",
                           "falsePostures", "FirstFalseRT", "SecondFalseRT", "ThirdFalseRT", "FourthFalseRT",
                           "LeapRT", "Execution_times",
                           "First_back_button", "Post_RT_1", "Second_back_button", "Post_RT_2", "Post_RT_1_plus_2",
                           "LeftHandFingers", "RightHandFingers", "correct"]

ITI = 300

block_cnt = 0
trial_cnt = 0
cnt = 0
t_fixdot = [600, 1200]
t_prime = 83.333333333335

mouse = io.Mouse(show_cursor=False)

blankscreen = stimuli.BlankScreen()

control.start(exp)

controller = Leap.Controller()
controller.set_policy_flags(Leap.Controller.POLICY_IMAGES)

finger_names = ['Thumb', 'Index', 'Middle', 'Ring', 'Pinky']


def convert(image, width, height):
    # wrap image data in numpy array
    ctype_array_def = ctypes.c_ubyte * image.height * image.width
    as_ctype_array = ctype_array_def.from_address(int(image.data_pointer))  # as ctypes array
    as_numpy_array = np.ctypeslib.as_array(as_ctype_array)  # as numpy array
    img = np.reshape(as_numpy_array, (image.height, image.width))

    # resize output to desired destination size
    destination = cv2.resize(img, (width*2, height*2), 0, 0, cv2.INTER_LINEAR)
    return destination


# define a trial

def do_block(cnt, block, experiment):
    block.shuffle_trials()

    stimuli.TextLine(text="At the push of the two buttons we start! (Remember: keep pressing)").present()
    exp.keyboard.wait()
    exp.keyboard.wait(duration=1000)
    blankscreen.present()

    whichFingersLeft = []
    whichFingersRight = []

    # training_list = []

    for trial_cnt, trial in enumerate(block.trials):

        if trial_cnt == 81:
            stimuli.TextLine(text="short break (keep the two buttons pressed to continue)").present()
            exp.keyboard.wait()

        if experiment == 0:
            # if trial.get_factor("fingers") in training_list:
            trial_cnt = trial_cnt - 1
            if trial_cnt in range(19, 200, 20):
                stimuli.TextScreen("Training", "More Training? yes/no (y / n)").present()
                tr_button, sth = exp.keyboard.wait([misc.constants.K_y, misc.constants.K_n, misc.constants.K_z])
                if tr_button == misc.constants.K_n:
                    break
                # continue

        false_postures = dict()
        false_RTs = dict()
        false_RTs[1] = false_RTs[2] = false_RTs[3] = false_RTs[4] = -5

        SOA = trial.get_factor("t_SOA")

        if trial.get_factor("fingers") == 0:
            prime = trial.get_factor("digit")
        else:
            pic = stimuli.Picture("{}fingers_{}_{}small.JPG".format(trial.get_factor("fingers"), posturetype, trial.get_factor("smallhand")))  # MODIFIED
            prime = trial.get_factor("fingers") + trial.get_factor("distance")

        #screen_pic = stimuli.BlankScreen()
        #pic.plot(screen_pic)
        #screen_pic_prime = stimuli.BlankScreen()
        #pic.plot(screen_pic_prime)
        #stimuli.TextLine(text=str(prime), text_size=46).plot(screen_pic_prime)
        #prime_screen = stimuli.TextLine(text=str(prime), text_size=46)
        prime = expyriment.stimuli.TextLine(text=str(prime), text_size=46)
        pic = expyriment.stimuli.Picture("{}fingers_{}_{}small.JPG".format(trial.get_factor("fingers"), posturetype, trial.get_factor("smallhand"))
        prime_on_pic = pic.copy()
        prime.plot(prime_on_pic)

        rand_fixdot = design.randomize.rand_int(t_fixdot[0], t_fixdot[1])
        fixdot.present()
        exp.clock.wait(rand_fixdot)

        if SOA == 0:
            prime_on_pic.present()
            exp.clock.reset_stopwatch()  # IMPORTANT: this is now the point in time that "correctRT" starts counting! (when SOA equals 0)
            exp.clock.wait(t_prime)
            pic.present()  # a problem here is that during all this time, the program doesn't check for keyups
        else:
            prime.present()  # prime appears
            if SOA == 50:
                exp.clock.wait(SOA)  # for 50 ms, that is, the prime needs 33 ms more
                prime_on_pic.present()  # target appears
                exp.clock.reset_stopwatch()
                exp.clock.wait(t_prime - SOA)  # for 33 ms
            elif SOA == 100:
                exp.clock.wait(t_prime)  # for 83 ms
                exp.clock.wait(SOA - t_prime)  # for the rest of the SOA-time (17 ms)
                exp.clock.reset_stopwatch()
                exp.clock.wait(t_mask - (SOA - t_prime))  # for the rest of the mask-time
            pic.present()  # only pic

        if trial.get_factor("fingers") != 0:  # would "trial.get_factor("catch") == 1" make more sense?
            first_release, rt_1 = exp.keyboard.wait(wait_for_keyup=True)  # rt_1 gives the RT of the first button-lift-off time (starting from the last event)
            second_release, rt_2 = exp.keyboard.wait(duration=150, wait_for_keyup=True)  # rt_2 gives the RT of the second button-lift-off time (starting from the last event)
            correct_number = False
            second_release, rt_2 = exp.keyboard.wait(wait_for_keyup=True, duration=50)  # to avoid crash, is it correct?  ## now it waits for keyup twice... I would expect a lot more problems now.
        else:
            first_release, rt_1 = exp.keyboard.wait(wait_for_keyup=True, duration=2000)  # rt_1 gives the RT of the first button-lift-off time (starting from the last event)
            correct_number = True
            correctRT = -1
            if first_release is not None:
                stimuli.TextLine(text="You should keep both buttons pressed").present()
                exp.clock.wait(1200)
                blankscreen.present()
            else:
                stimuli.TextLine(text="Well done!").present()
                exp.clock.wait(1200)
                blankscreen.present()
        # the lift-off time "counts" from the last event, so the picture will already have been presented for some time
        if trial.get_factor("fingers") == 0 and second_release is not None:
            blankscreen.present()

        posNr = 0
        lastPosture = -1

        exp.keyboard.clear()

        while not correct_number:

            if lastPosture != -1 and exp.keyboard.check() is not None:
                correctRT = -1
                break
            exp.keyboard.clear()

            if exp.clock.stopwatch_time > rt_1 + 3500:
                stimuli.TextLine(text="No correct pose detected.").present()
                exp.clock.wait(1200)
                correctRT = -2
                first_release = second_release = rt_1 = rt_2 = -7
                blankscreen.present()
                break

            isPosture = False

            frame = controller.frame()
            vR = frame.hands.rightmost.palm_velocity
            vR_value = abs(vR[0]) + abs(vR[1]) + abs(vR[2])
            vL = frame.hands.leftmost.palm_velocity
            vL_value = abs(vL[0]) + abs(vL[1]) + abs(vL[2])
            if vR_value < 120 and vL_value < 120 and not frame.fingers.is_empty:

                while not isPosture:
                    extended_fingers_list = controller.frame().fingers.extended()
                    nr_fingers = len(extended_fingers_list)
                    timeout = time.time() + 0.2
                    RT = exp.clock.stopwatch_time
                    while time.time() < timeout:
                        frame = controller.frame()
                        nr_fingers_B = len(frame.fingers.extended())
                        if nr_fingers != nr_fingers_B or frame.fingers.is_empty:  # or len(frame.hands) != 2:
                            isPosture = False
                            break
                        else:
                            isPosture = True
                            nr_fingers = nr_fingers_B
                    fingerframe = controller.frame()

                image = controller.frame().images[0]
                if image.is_valid:
                    undistorted_left = convert(image, 400, 400)
                RT = exp.clock.stopwatch_time
                if nr_fingers == trial.get_factor("fingers"):
                    correctRT = RT
                    correct_number = True
                elif lastPosture != nr_fingers:
                    posNr += 1
                    lastPosture = nr_fingers
                    false_postures[posNr] = nr_fingers
                    false_RTs[posNr] = RT

                whichFingersLeft = []
                whichFingersRight = []
                for hand in controller.frame().hands:
                    if hand.is_left:
                        for finger in hand.fingers.extended():
                            whichFingersLeft.append(finger.type)
                    elif hand.is_right:
                        for finger in hand.fingers.extended():
                            whichFingersRight.append(finger.type)

                stimuli.TextLine(text="{}---{}".format(whichFingersLeft, whichFingersRight)).present()

        blankscreen.present()

        fixdot_white.present()

        if correctRT != -1:
            first_backbutton, post_rt_1 = exp.keyboard.wait()
            second_backbutton, post_rt_2 = exp.keyboard.wait(duration=150)
        else:
            first_backbutton = post_rt_1 = second_backbutton = post_rt_2 = -5

        fixdot.present()

        if second_release is None:
            second_release = 0
            rt_2 = 0
        if second_backbutton is None:
            second_backbutton = 0
            post_rt_2 = 0

        # Execution_times = time.time()  -- the intention was to create the "Execution_times" dependent variable by subtracting 0.2 (the time of steady fingers' posing) from the dependent variable "LeapRT"

        exp.data.add([cnt, trial_cnt, rand_fixdot, 1 if trial.get_factor("catch") == 1 else 0,
                      trial.get_factor("smallhand"), posturetype,
                      trial.get_factor("fingers"), trial.get_factor("distance"), prime, trial.get_factor("t_SOA"),
                      first_release, rt_1, second_release, rt_2, rt_1 + rt_2 if rt_1 is not None else -10,
                      str(false_postures).replace(",", ";"), false_RTs[1], false_RTs[2], false_RTs[3], false_RTs[4],
                      correctRT, Execution_times,
                      first_backbutton, post_rt_1, second_backbutton, post_rt_2, post_rt_1 + post_rt_2,
                      str(whichFingersLeft).replace(",", ";"), str(whichFingersRight).replace(",", ";"), int(correct_number)])

        exp.clock.wait(ITI)


block_cnt = 0
trial_cnt = 0

stimuli.TextScreen("Instructions", """In the experiment, pictures showing two hands are presented to you. You should imitate the configurations with your fingers.\n\n
- Each trial is always introduced by neutral visual signals, do not pay attention to them\n
- Hold down the two buttons until you are ready to execute the finger pose\n
- Always do the finger pose as you feel natural and normal\n
- Always try to be fast but above all accurate\n
- Always show both hands (even if you only need one hand for the pose, display the other hand as a fist)\n
- Sometimes the target stimulus consists of a scrambled two-hand photo, in this case DO NOT MOVE YOUR FINGERS, KEEP THE BUTTONS PRESSED until you feel ready to imitate the next target\n
- The LEAP Motion will detect your finger pose as soon as you keep your hands and fingers in its "viewing area" steady enough\n
\nQuestions?""").present()
exp.keyboard.wait()
stimuli.TextScreen("Suggestion", """The Leap Motion does not react perfectly.\n
If the LEAP Motion does not recognize you correctly for too long of a time after releasing the buttons, a feedback will be displayed before the white dot.\n
This can be due to the fact that you showed the wrong number of fingers or you were out of the "viewing area".\n
\nQuestions?""").present()
exp.keyboard.wait()

os.mkdir(path, 0755)

for cnt, block in enumerate(exp.blocks, 0):
    if cnt == 0:
        stimuli.TextScreen("Training", "").present()
        exp.keyboard.wait()
    else:
        stimuli.TextScreen("Experiment", "Attention, let's start now!").present()
        exp.keyboard.wait()
    blankscreen.present()
    exp.clock.wait(500)
    do_block(cnt, block, cnt)

control.end(goodbye_text="Many Thanks!", goodbye_delay=3000)
```

• I am afraid that, due to automatic formatting in the forum, a lot of information is lost.

You can embed code here in the forum within three backticks on either side (Markdown syntax).