Running experiments online using OSWeb
Hi,
I have been using OpenSesame for several years and it has worked really well. I am now trying to run a lexical decision study online, but I get the following error:
Uncaught TypeError: Cannot read properties of undefined (reading 'apply'). See the console for further details.
The source of the error seems to be my two inline scripts: one records participants' handedness (so that they press the 'yes' button with their dominant hand), and the other handles counterbalancing.
Inline script 1:
if var.response_handedness_response == 'l':
    var.y_response = 'z'
    var.n_response = 'm'
else:
    var.y_response = 'm'
    var.n_response = 'z'
Inline script 2:
if self.get('subject_nr') % 8 == 1:
    b1 = 0
    b2 = 10
    b3 = 10
    b4 = 10
    b5 = 10
    b6 = 10
    b7 = 10
    b8 = 10
elif self.get('subject_nr') % 8 == 2:
    b1 = 10
    b2 = 0
    b3 = 10
    b4 = 10
    b5 = 10
    b6 = 10
    b7 = 10
    b8 = 10
elif self.get('subject_nr') % 8 == 3:
    b1 = 10
    b2 = 10
    b3 = 0
    b4 = 10
    b5 = 10
    b6 = 10
    b7 = 10
    b8 = 10
elif self.get('subject_nr') % 8 == 4:
    b1 = 10
    b2 = 10
    b3 = 10
    b4 = 0
    b5 = 10
    b6 = 10
    b7 = 10
    b8 = 10
elif self.get('subject_nr') % 8 == 5:
    b1 = 10
    b2 = 10
    b3 = 10
    b4 = 10
    b5 = 0
    b6 = 10
    b7 = 10
    b8 = 10
elif self.get('subject_nr') % 8 == 6:
    b1 = 10
    b2 = 10
    b3 = 10
    b4 = 10
    b5 = 10
    b6 = 0
    b7 = 10
    b8 = 10
elif self.get('subject_nr') % 8 == 7:
    b1 = 10
    b2 = 10
    b3 = 10
    b4 = 10
    b5 = 10
    b6 = 10
    b7 = 0
    b8 = 10
else:
    b1 = 10
    b2 = 10
    b3 = 10
    b4 = 10
    b5 = 10
    b6 = 10
    b7 = 10
    b8 = 0
exp.set('b1', b1)
exp.set('b2', b2)
exp.set('b3', b3)
exp.set('b4', b4)
exp.set('b5', b5)
exp.set('b6', b6)
exp.set('b7', b7)
exp.set('b8', b8)
I deleted the first inline script and converted the second into the following Javascript using an online converter, but I still get the same error message.
var b1, b2, b3, b4, b5, b6, b7, b8;
if (this.get("subject_nr") % 8 === 1) {
    b1 = 0;
    b2 = 10;
    b3 = 10;
    b4 = 10;
    b5 = 10;
    b6 = 10;
    b7 = 10;
    b8 = 10;
} else if (this.get("subject_nr") % 8 === 2) {
    b1 = 10;
    b2 = 0;
    b3 = 10;
    b4 = 10;
    b5 = 10;
    b6 = 10;
    b7 = 10;
    b8 = 10;
} else if (this.get("subject_nr") % 8 === 3) {
    b1 = 10;
    b2 = 10;
    b3 = 0;
    b4 = 10;
    b5 = 10;
    b6 = 10;
    b7 = 10;
    b8 = 10;
} else if (this.get("subject_nr") % 8 === 4) {
    b1 = 10;
    b2 = 10;
    b3 = 10;
    b4 = 0;
    b5 = 10;
    b6 = 10;
    b7 = 10;
    b8 = 10;
} else if (this.get("subject_nr") % 8 === 5) {
    b1 = 10;
    b2 = 10;
    b3 = 10;
    b4 = 10;
    b5 = 0;
    b6 = 10;
    b7 = 10;
    b8 = 10;
} else if (this.get("subject_nr") % 8 === 6) {
    b1 = 10;
    b2 = 10;
    b3 = 10;
    b4 = 10;
    b5 = 10;
    b6 = 0;
    b7 = 10;
    b8 = 10;
} else if (this.get("subject_nr") % 8 === 7) {
    b1 = 10;
    b2 = 10;
    b3 = 10;
    b4 = 10;
    b5 = 10;
    b6 = 10;
    b7 = 0;
    b8 = 10;
} else {
    b1 = 10;
    b2 = 10;
    b3 = 10;
    b4 = 10;
    b5 = 10;
    b6 = 10;
    b7 = 10;
    b8 = 0;
}
exp.set("b1", b1);
exp.set("b2", b2);
exp.set("b3", b3);
exp.set("b4", b4);
exp.set("b5", b5);
exp.set("b6", b6);
exp.set("b7", b7);
exp.set("b8", b8);
I am now unsure of what to do to solve the error. Do you have any suggestions? Thanks so much in advance!
Comments
Hi @Yuyanxue,
Just a quick comment... I would need to try your program to be sure, but it looks like you're using Python code in your inline_javascript objects... "this.get" and "exp.set" are Python commands.
To access the subject number in Javascript, use vars.subject_nr. To set a variable's value task-wide, replace exp.set with vars. For example,
exp.set("b1", b1)
becomes
vars.b1 = b1
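Applied to your first inline script, for instance, the Javascript version would look something like this (an untested sketch):
// Map the yes/no keys according to the participant's handedness.
if (vars.response_handedness_response == 'l') {
    vars.y_response = 'z';
    vars.n_response = 'm';
} else {
    vars.y_response = 'm';
    vars.n_response = 'z';
}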
That's my first suggestion anyway. Try it out to see if it helps...
Best,
Fabrice.
PS: Please note that it is best to post messages related to OSWeb to the OSWeb forum. I'm moving this thread to that forum so that it can benefit other OSWeb users too.
Hi Fabrice,
Thanks a lot for your suggestion. I have replaced exp.set with vars, but the same error occurred. Also, when I tried to run the experiment without the two inline scripts, another error message appeared:
"Uncaught ReferenceError: Variable 'y_response' not present in var store
See the console for further details".
'y_response' is a variable specified in one of the inline scripts, and also in the stimuli table.
I have attached a simpler version of the experiment which has only one trial in each block (four blocks per list). It contains the two inline scripts: one is "handedness_script" and the other is "counterbalance". The experiment works perfectly in OpenSesame, but it won't run in OSWeb.
Could you please help solve the problems?
Thanks a lot,
Mengzhu
Hi @Yuyanxue,
First, many thanks for the ☕️! Very nice of you; it's very much appreciated!
Second, just for your information: when addressing messages to specific forum users, it helps to use their handle. You can do so by typing "@" and the first letters of the user's name; a pull-down menu then appears, from which you can select the user you're addressing (for example, typing @Yu brings up a short list of matching users). Using handles means that your correspondent gets a notification when they visit the forum. As many messages get posted on the forum, using my handle makes it more likely that I see there is a message for me.
Third, moving on to your task... I had a quick look at the task you sent me. I could see that many objects were inline_script objects containing Python code. These need to be replaced by inline_javascript objects containing Javascript code that does the equivalent of what your Python code was doing. Sometimes this is relatively easy, sometimes it is harder (for example, when it comes to shuffling an array). You must then make sure to delete the inline_script objects containing Python code and to permanently erase them from the unused objects (otherwise, even though you're no longer using these objects, the task will still try to interpret them and won't run in a browser).
A lot of the rewriting follows the instructions I gave you in my earlier message. That is, instead of using self.get, you'd use vars. So, for example, self.get('subject_nr') in Python becomes vars.subject_nr in Javascript.
Your counterbalance inline_script code (quoted above) needs to be rewritten in Javascript within an inline_javascript object. Your original code was quite redundant (all b variables are equal to 10 except one, depending on the subject's number). I simplified it as follows:
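Something along these lines (a sketch of the simplified logic; the exact code in the attached task may differ slightly):
// All b variables default to 10; one is set to 0 depending on the
// subject number modulo 8 (a remainder of 0 maps to b8).
var remainder = vars.subject_nr % 8;
var b = [10, 10, 10, 10, 10, 10, 10, 10];
b[(remainder + 7) % 8] = 0;  // remainder 1 -> b1, ..., remainder 0 -> b8
vars.b1 = b[0];
vars.b2 = b[1];
vars.b3 = b[2];
vars.b4 = b[3];
vars.b5 = b[4];
vars.b6 = b[5];
vars.b7 = b[6];
vars.b8 = b[7];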
The shuffling of the block_list array is a little harder because Javascript does not have a built-in shuffling function. In Python, it is easy to import one ("from random import shuffle"), but in Javascript you must define and declare such a function before you can call it. There are various ways to shuffle an array in Javascript. Some are compatible with ES5 and others only with ES6 (see this webpage for a discussion of the difference: https://www.javatpoint.com/es5-vs-es6). As far as I know, it is best to stick to ES5-compatible methods for the moment (as I believe that OSWeb does not yet support ES6). I rewrote your randomisation code into an inline_javascript object as follows:
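In outline, it uses an ES5-compatible Fisher-Yates shuffle (a sketch; the block names below are placeholders, so adapt them to your task):
// Declare a Fisher-Yates shuffle function (ES5-compatible).
function shuffle(array) {
    var i, j, tmp;
    for (i = array.length - 1; i > 0; i--) {
        j = Math.floor(Math.random() * (i + 1));  // random index in 0..i
        tmp = array[i];
        array[i] = array[j];
        array[j] = tmp;
    }
    return array;
}
// Shuffle the eight blocks and store the resulting order in L0 to L7.
var block_list = ['b1', 'b2', 'b3', 'b4', 'b5', 'b6', 'b7', 'b8'];
shuffle(block_list);
vars.L0 = block_list[0];
vars.L1 = block_list[1];
vars.L2 = block_list[2];
vars.L3 = block_list[3];
vars.L4 = block_list[4];
vars.L5 = block_list[5];
vars.L6 = block_list[6];
vars.L7 = block_list[7];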
Note: Your task structure is quite complex. I suspect it could be simplified, but this is outside the help I can provide. Seeing that you're using 8 blocks, I took the liberty of creating a block_list array with 8 elements and creating variables L0 to L7. Please revise and modify as needed.
This was not enough, however... When testing your task in OSWeb, it is important to try it through your browser (making the console visible, e.g. with CTRL+SHIFT+I, helps you track what is going on and identify problems, at least sometimes). Some features of the task worked in OS but not in OSWeb. Such things can be as small as not declaring variables the way Javascript requires. So, for example, b1 = 10 will run when the task is executed in OS, but it will generate an error when run in OSWeb (saying that the variable is not defined). Hence, it needs to be var b1 = 10. Always testing your task through OSWeb as you develop it will help detect possible problems (if you develop and test it only through OS, you may have to do quite a lot of debugging afterwards, as I had to do with your task).
Also, you were using Python code in the "Run if" conditions of numerous sequences.
Expressions such as self.get are Python instructions. They will work when you run the task in OS, but not when you run it through the browser (i.e., when using OSWeb). So, all of the "Run if" conditions need to be rewritten to avoid Python code.
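For example (with a hypothetical variable name), a desktop-only condition such as self.get('practice') == 'yes' can be rewritten in OpenSesame's square-bracket form, [practice] = yes, which both OS and OSWeb understand.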
I had to do the same for all block_seq sequences, as well as for the mainsequence.
Finally, when testing your task through the browser, OSWeb will run a compatibility check and highlight certain problems.
For example, in OSWeb the logger cannot be set to record all variables (this would take too many resources and too much bandwidth in a browser), so the logger must be configured manually. To test your task in a browser, I therefore had to disable the "Log all variables" option. You'll need to manually add all the variables you want to appear in the data log; using the variable inspector can help greatly with this.
I attach my modification of the task you posted in this forum:
The explanations above and the code should allow you to follow all the changes I made. I'm on holiday, but I decided to take a moment to go through this so that you can progress with your task. Please note that I did not test the whole task and that I'm not familiar with your design or objectives, so complete and modify the task as required, and make sure to thoroughly check your task and data output before running it with real participants.
Hope this helps.
Kind regards,
Fabrice.
Hi Fabrice @Fab
Thanks so much for your detailed explanations and for taking the time to help! Very much appreciated.
The explanations are really helpful, and the changes needed were much more complicated than I thought. I have run your modified version in OSWeb and it works perfectly. I will process your explanations and complete and test the whole experiment.
Happy new year!
Best,
Mengzhu
Hi @Fab ,
Thanks very much again for your help. I have constructed the whole experiment and it worked in my browser fine. However, when my collaborator tried to run the experiment on the web, she got an error "compatibility check failed"... Do you have an idea about what might be going wrong?
Our counterbalancing Javascript (thanks again for helping convert and simplify the original Python script!) distributes participants into 8 different groups: the 1st/9th participant would get List 1, the 8th/16th participant would get List 8, and so on. As OSWeb does not ask for a subject number (OS does), and although we have set the possible subject numbers to [0,1,2,3,4,5,6,7], we are not sure whether the first participant who clicks the link will be assigned to List 1, and whether the 9th participant will again be assigned to List 1.
Also, our experiment largely exceeds the recommended size. I deployed it once on the JATOS tryout server, and the loading time was indeed quite long. As our experiment collects reaction time data, would the loading time (and logging too many variables) affect the recording/accuracy of participants' reaction times?
Thanks very much in advance!
Best regards,
Mengzhu
Hi @Yuyanxue,
I'm glad my help was of use and that your experiment is now almost running.
To answer your questions: the subject number is randomly allocated by the program, which means it is not possible to ensure that you get the same number of participants with id numbers 0 to 7. There is currently no easy solution to that problem. The issue is that the task runs on the participant's machine, not on a central server. This allows for proper temporal resolution and makes the task a lot more reliable, but the downside is that each individual instance of the task has no way of knowing how many participants have been tested with each id across all the different computers and locations involved. If you have a set list of participants to invite, one thing you can try, to get roughly similar numbers in each of the 8 conditions, is to modify the task to ask participants to enter an id number and to instruct each individual beforehand which number to use. However, in most cases this is not practical.
As for the size of the task, it should not affect the accuracy of RTs but it will affect the time it takes for the task to load up when it starts. Most people have a relatively fast connection speed, so that should not be an issue.
The time taken to log the data is fairly short but will vary with the number of variables you collect (for this reason, OSWeb requires "Log all variables" to be disabled). Unless you're logging tens of variables, this should not be an issue.
Hope this helps.
Kind regards,
Fabrice.
Dear @Fab
Thanks very much again for your detailed suggestions. As for the size of the experiment, I have changed all sound files from .wav to .mp3 to reduce it, but the experiment now returns the following error right after a few sketchpads showing the experiment instructions, and before the inline Javascript:
Uncaught TypeError: Cannot read properties of null (reading 'cloneNode')
See the console for further details
Does this mean that OSWeb does not support mp3 files?
Thanks in advance!
Best regards,
Mengzhu
Hi again @Fab ,
Thanks to your generous help, our experiment is almost ready to test :)
We have added some inline_html to create a questionnaire collecting participants' info, following the instructions here: https://osdoc.cogsci.nl/3.3/manual/forms/html/. The problem we had with the inline_html is that, when we tested the experiment in an external browser in full screen, the page was unresponsive when we typed or clicked anything (e.g., the 'next' button that was supposed to bring us to the next page). The page worked fine when we did not run it in full screen... Does that mean we have to run our experiment without full screen, or are there ways to make it work in full screen? It would be better if the experiment could run in full screen.
Thanks a lot,
Mengzhu
Hi @Yuyanxue,
Sorry I didn't get to reply to your message dating back to January; not sure why, I didn't spot it. I take it the sound issue is now solved.
Regarding your recent message and the full screen issue: I just ran a test setting OSWeb to full screen mode and experienced no issue registering key presses.
I assume that you are using the "Make browser fullscreen" option to display the task in full screen mode. I'm not sure where the issue you're experiencing comes from (hard to tell without seeing the code). One thing you can do is make sure that you are using the latest version of OSWeb (currently 1.4.14.0).
To update OSWeb, open OpenSesame as administrator, then in the console, type and execute:
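The command is the standard pip upgrade for the OSWeb extension package; assuming the console accepts shell commands (OpenSesame's IPython console does), that would be, for example:
!pip install opensesame-extension-osweb --upgrade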
This will update OSWeb to the latest version available. Close OpenSesame and restart it; the latest version of OSWeb will then be running.
Try your task again to see if it helps.
If not, I'm not sure what the issue may be. But you could try a manual solution as described below.
Unselect the "Make browser fullscreen" feature.
Use the following code in an inline_javascript at the beginning of your task. This code detects the browser and saves it to a variable (in case you want to include it in the data log); it then checks whether the browser supports full screen mode and, if it does, switches to it.
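A sketch along those lines (the user-agent checks are illustrative; the exact code in the attached file may differ):
// Detect the browser from the user agent and save it to a variable.
// Note: this rough check does not separate Chrome from Chromium-based Edge.
var ua = navigator.userAgent;
var browser = 'unknown';
if (ua.indexOf('Firefox') !== -1) {
    browser = 'Firefox';
} else if (ua.indexOf('Chrome') !== -1) {
    browser = 'Chrome/Edge';
} else if (ua.indexOf('Safari') !== -1) {
    browser = 'Safari';
}
vars.browser = browser;  // available for the data log

// Switch to full screen if the browser supports it, trying the
// standard call first and then the vendor-prefixed variants.
var elem = document.documentElement;
if (elem.requestFullscreen) {
    elem.requestFullscreen();
} else if (elem.mozRequestFullScreen) {        // older Firefox
    elem.mozRequestFullScreen();
} else if (elem.webkitRequestFullscreen) {     // Chrome, Safari, Edge
    elem.webkitRequestFullscreen();
} else if (elem.msRequestFullscreen) {         // IE11 / legacy Edge
    elem.msRequestFullscreen();
}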
You can try it with this basic example. I tried it with Chrome, Edge, and Firefox.
Note that this method does not differentiate between Chrome and Edge (there may be more sophisticated ways to do so, but I haven't found one that works well).
Perhaps this method will work for you and your task will become responsive. However, it does not explain why it currently is not, and that can't be determined without seeing your task and trying it out.
I hope this helps!
Fabrice.
Hi @Fab ,
Thanks so much for your detailed suggestions!
I have tried the example you provided, and it worked well. However, I may not have explained the issue clearly: it was OK to register key responses on a sketchpad in full-screen mode, but there was an issue with inline_html (e.g., the inline_html was unresponsive in the attached example), and updating to the latest version did not solve it.
A similar issue was posted by @Uros back in Jan 2022: https://forum.cogsci.nl/discussion/4737/text-input-form-for-use-with-jatos/p2. One solution would be to run the experiment in full screen via Mozilla Firefox, but it probably is unrealistic to ask all participants to use Firefox... Are there other solutions?
Thanks so much!
Mengzhu
Hi Mengzhu,
but it probably is unrealistic to ask all participants to use Firefox.
Every modern browser has a fullscreen mode, commonly toggled with the F11 key. So you wouldn't have to ask your participants to use Firefox; you would just need to tell them to press F11 in their browser to toggle fullscreen. I am not sure, though, whether you can check that they actually did it (in case that is important to you).
That being said, it would of course be better to solve the issue with the inline_html. But the way it sounds, it is a problem that we can't really fix.
Eduard
Hi @Yuyanxue,
I tried your task as a JATOS experiment with "Make browser fullscreen" enabled in OpenSesame, and I could reproduce the problem you described: the inline_html doesn't appear as it should and does not take input. As @lvanderlinden and @sebastiaan pointed out, this is due to some security restrictions in the browser.
The problem seems to be restricted to OSWeb/JATOS experiments that are set to full screen via OS's "Make browser fullscreen" option, however. Starting from the task you uploaded to this forum, I managed to get it to work in two ways:
Version with "Make browser fullscreen" disabled: https://jatos.mindprobe.eu/publix/DVeP4JTY8ZQ
If you press F11 (as suggested by @eduard) or select full screen from the browser's menu once the task is open, it does appear full screen and is responsive (I tried it with Chrome, Edge and Firefox).
Version with "Make browser fullscreen" disabled + javascript commande to go to full screen: https://jatos.mindprobe.eu/publix/s0fDqbOcjoO
This version with the javascript code I uploaded in my earlier reply works with the task full screen without having to ask participants to activate the full screen view manually, and it appears to work with inline_html objects too. I tried it successfully with Chrome, Edge and Firefox. Haven't tried other browsers.
I think you could implement the second solution, and perhaps add a question item at the end of your task asking participants whether or not the experiment ran in full screen on their computer (so that you at least have that information).
Hope this helps,
Fabrice.
cc @Uros in case this is helpful
Thanks so much @Fab @eduard !! Both ways work!
Dear @Fab
I hope this will be the last question before the experiment runs to collect data :)
The setting with two choices like the one below (in the inline_html called 'consentForm2' in the version "Exp-test_fab.osexp" that you posted) needs to be modified: if participants click 'yes' and then want to change to 'no', the current setting does not allow them to undo 'yes'. Would it be possible to untick one box and then choose another, so they can clear their selection if they make a mistake?
Thanks a lot again!
Mengzhu
Hi @Yuyanxue,
Your form is written in HTML. In that language, to make radio buttons mutually exclusive, you just need to give them the same name (while of course keeping their different values).
So, in the consentForm2 form, you currently use:
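(Simplified here to the relevant attributes; your exact markup may differ:)
<input type="radio" name="summary_yes" value="yes"> Yes
<input type="radio" name="summary_no" value="no"> No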
Your two radio buttons have different names ("summary_yes" and "summary_no"). That's why clicking on one does not automatically unselect the other.
Try the following, giving them both the same name ("summary"; the values shown are illustrative):
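<input type="radio" name="summary" value="yes"> Yes
<input type="radio" name="summary" value="no"> No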
Now the two options are mutually exclusive, and clicking on one deselects the other.
You should use the same principle on your other forms. For example, in your "gender" form, you should not use different names for the different options ("Man", "Woman", or other labels if you want to use them), but the same name.
Doing so has two important advantages:
(1) it makes them mutually exclusive
(2) it allows you to save the response under the same variable.
I'm not sure how you currently plan to log the data, but using the same name for the radio buttons ensures that the response is saved under a variable with that name. So, in your "gender" form, instead of using three different radio button names, just use "gender" for all of them. You can then access the response via the "gender" variable (e.g., in Javascript, through vars.gender), and add "gender" to the logger for the response to appear in your output file.
Note that if you plan to run your task online, the logger cannot be set to log all variables and that you'll have to specify the specific variables you want saved in the data log.
Best,
Fabrice.
@Yuyanxue
PS: I have uploaded here the stripped-down version of the task with the corrections above, as well as corrections to the "occupation" form (so that selecting "Yes" now deselects "No", and vice versa).
Thanks so much, @Fab. Now the inline_html scripts work really well.
We are running an OpenSesame experiment on MindProbe, recruiting participants from Prolific. We used the instructions here to set this up:
https://osdoc.cogsci.nl/3.3/manual/osweb/prolific/
We have just run 32 participants. For most of them, the experiment seems to have worked as intended, and their Prolific IDs were collected. However, there are three participants whose responses look normal but for whom all of the automatically generated info (Prolific ID, location, date and time, browser, etc.) is missing. We are not sure why or how this could happen.
Possibly related: we got a message from one participant on Prolific saying that they had done the experiment, but that it got stuck on the final screen while transferring data. We are not sure if this is one of the three participants without any ID data... There are no other 'unaccounted for' participants in the Prolific record for the study.
Any help in understanding what might be happening here is much appreciated.
Best,
Mengzhu
Hi @Yuyanxue,
This is puzzling. I have no experience with Prolific myself. In online studies, the task running in the browser sends the data to JATOS at the end of the experiment, so I think a connection failure at that precise moment would prevent any data from reaching the server. In other words, my impression is that what happened to the participant who contacted you is probably unrelated to the other problem (the missing Prolific participant data).
Is the code collecting the Prolific information conditional on anything in your experiment? If so, could it be that in some specific instances that code is not executed? Was there something specific about these three participants with respect to the experiment's events?
If the issue is not with your task, then the only other reason I can think of for getting empty values for the Prolific ID etc. would be that somehow some participants used a link that did not contain that information. Hard to know. It would be surprising, though, as I assume they would just click on a link provided on Prolific, and that that link is formatted the same for all participants. Still, are you sure these three participants reached your experiment through Prolific?
One way to catch participants whose Prolific ID somehow isn't passed to the task would be to collect the Prolific info at the beginning of the task (I assume you do) and to program your task so that, if any of these pieces of information is missing, a screen comes up informing participants that their Prolific ID has not been detected and giving them the opportunity to enter it manually.
You may also want to reach out to Prolific to see whether it is possible that the failure occurred at the time of passing information from Prolific to your experiment.
That's all I can think of at the moment, I'm afraid.
Hope this helps!
Best,
Fabrice.