
# Counting points in OpenSesame

Hello


I am creating an experiment for my studies. Subjects see colored circles (red, green, or blue), each containing a line, and must classify the line as horizontal or vertical by tapping C (horizontal) or M (vertical). After each circle, the subject should receive feedback. The possible scores depend on the circle's color.

If the circle is red and the respondent taps correctly, he does not get a point. However, if he taps incorrectly, 20 points are deducted.

If the circle is green and the respondent guesses correctly, he gets 20 points. If he guesses wrong, however, no points are deducted.

If the circle is blue, the respondent gets one point if he guesses correctly. If he guesses wrong, one point is deducted.

After each circle the respondent automatically gets feedback on whether his answer was correct or wrong. For this I created 6 different sketchpads, each with a condition assigned in the sequence. For example: if the circle is green and the respondent responds correctly, he receives the sketchpad response_green_correct as feedback.

In one run, all combinations of color and alignment of the line should then be presented twice.

Afterwards, the respondent should receive feedback on the total of 12 processed circles. For this, however, the score mentioned above must be taken into account.

Is it possible to count in the script how often certain sketchpads were displayed and then multiply this number by the score?

I hope you can help me. I thank you in advance!

If you have any further questions about the experiment or something was incomprehensible, please feel free to ask.

Thank you very much!

Jonas

• Hi @jonase,

What kind of feedback would you like to present to participants after a block? Just an overall score based on the rules you described, or more detailed feedback (such as a count of each type of trial and the points corresponding to each)?

The general idea

Whether the first or the second, the solution requires some coding in Python or in Javascript (if you're planning to run your experiment online, you'd need to use Javascript).

Let's focus on the first option (calculating a general score) for now... (The second option would follow the same overall logic as below, but it'd require programming more variables)

It looks as if you're using a `colour` variable to code for the circle's color for every trial, and you can access whether the response is correct or incorrect using the `correct` variable. So, the solution is to implement code that would do something like this:

You'd have to set the score to a baseline level before the block starts. Presumably, subjects start with zero points. Let's say we want to store the points in a variable we call `game_score`. So you'd set `game_score` to zero before the block. Then, on every trial, you'd run an inline_script (Python) or an inline_javascript object. Use whichever language you're most familiar with (unless you plan to run the task in a browser, in which case you'd have to go for Javascript). In that code, you'd specify something like:

If the `colour` is red and the response is incorrect, then `game_score` = `game_score` - 20

If the `colour` is green and the response is correct, then `game_score` = `game_score` + 20

If the `colour` is blue and the response is correct, then `game_score` = `game_score` + 1

If the `colour` is blue and the response is incorrect, then `game_score` = `game_score` - 1
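If you went the color-based route in the browser, the four rules above could be sketched in an inline_javascript object roughly like this (a sketch, not the thread's actual code; the `vars` stub at the top only makes it runnable outside OpenSesame, where the runtime provides `vars`):

```
// Stub so the sketch runs standalone; in OSWeb, `vars` is provided by the runtime
var vars = { colour: "green", correct: 1, game_score: 0 };

// The four scoring rules, one branch each
if (vars.colour == "red" && vars.correct == 0) {
    vars.game_score = vars.game_score - 20;
} else if (vars.colour == "green" && vars.correct == 1) {
    vars.game_score = vars.game_score + 20;
} else if (vars.colour == "blue" && vars.correct == 1) {
    vars.game_score = vars.game_score + 1;
} else if (vars.colour == "blue" && vars.correct == 0) {
    vars.game_score = vars.game_score - 1;
}
```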

Now, there is actually something simpler and more flexible: to define the reward and penalty in columns within the loop table (say, `reward` and `penalty`) and apply these systematically in every trial:

If the response is incorrect, then `game_score` = `game_score` + `penalty`

If the response is correct, then `game_score` = `game_score` + `reward`

(1) it takes less code - no more checking which colored circle was presented in the current trial (faster execution and, most of all, less risk of programming error or confusion)

(2) you get to see the reward and penalty for each trial in your data output (provided that the logger includes it, which it does if you leave it set to "Log all variables"; otherwise you'd have to add it manually to the variables being logged)

(3) it gives you the flexibility to use different reward and penalty regimes across blocks without having to edit the code.

Implementation

Here's how I implemented it in a scaled-down version of your task based on your description (I used some Python code, but you could achieve the same result with Javascript):

(1) I set game_score to 0 before the block:

```
var.game_score = 0
```

(2) I defined a number of useful variables in the loop, which will be used to keep track of the score while minimizing the need for code

(3) Inside the trial sequence, I used this code to update game_score using information from the loop:

```
if var.correct == 0:
    var.game_score = var.game_score + var.penalty
else:
    var.game_score = var.game_score + var.reward
```

Note that I add the penalty (I don't subtract it) because the penalty is defined with a negative value in the loop table.

(4) I display feedback, both trial by trial, and for the whole block. The trial feedback is delivered by showing one of two feedback objects, depending on response accuracy:

I displayed quite a bit of information on the trial feedback, so that you can see how to do it should you decide to display that much information:

Need more detailed block feedback?

If you wanted to display more specific feedback (e.g., the number of errors for each circle color, the score per circle color, etc.), you could do it with a bit more code and some additional variables. For example, to count the number of errors for red circles, you'd set a `red_errors` variable to zero before the block:

```
var.red_errors = 0
```

And inside the trial, in the Python inline object, you'd have to use some code like:

```
if var.colour == 'red' and var.correct == 0:
    var.red_errors += 1
```

and the same for the other colors. You get the idea...
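As a variation on the same idea, a single object keyed by color could keep all three tallies in one place (a sketch; `error_counts` is an illustrative name, and the `vars` stub stands in for OSWeb's runtime-provided object):

```
// Stub standing in for OSWeb's runtime-provided `vars` object
var vars = { colour: "red", correct: 0 };

// One tally object instead of separate red_errors, green_errors, blue_errors
var error_counts = { red: 0, green: 0, blue: 0 };

// Run once per trial, after the response has been collected
if (vars.correct == 0) {
    error_counts[vars.colour] += 1;
}
```

Note that, as far as I know, only properties stored on `vars` persist across OSWeb inline objects, so you'd copy the tallies back (e.g., `vars.red_errors = error_counts.red`) before the block feedback screen.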

Hope this helps. Good luck with your experiment!

Fabrice.

PS: In your description, you don't mention counterbalancing the response keys but it'd be a good thing to do (you can use code to allocate one key arrangement to odd-numbered subjects and the reverse arrangement for even-numbered subjects). You can see how to do that and another example of scoring based on condition in this post: https://forum.cogsci.nl/discussion/comment/23990#Comment_23990

• Hi,

first of all, thank you very much! The points are now counted, but when the second circle of the same colour comes up I get an error report saying "TypeError: this._variables[i[0]] is undefined. See the console for further details"

I don't know how to handle this problem.

Yes, I want it to be usable online.

About the feedback: I would like to give feedback after every circle simply saying "Correct" or "Incorrect", and after each block feedback with the game_score.

Maybe you can also help me with this!

Thank you

Jonas

• edited September 15

Hi @jonase,

Glad I could help. In the example I sent you, each circle was presented twice and no error comes up. Can you confirm my example works correctly on your computer too? If you run it in a browser, it won't work because OSWeb does not support coding in Python. If you use Python code in your task and run it in the browser, you'll get errors. Otherwise, if you run it in an OS window and still get that error message, there must be some error in the code you implemented in your task, but I can't tell without seeing the actual task (could you possibly upload it to this forum?). OS should display information about the object and the line where the error occurs. Can you pinpoint the origin?

About the feedback, just edit the correct and incorrect feedback object to display "Correct" and "Incorrect" respectively.

OSWeb 1.4 includes new features not previously supported. This said, it is important that you read a little about the features of OS that OSWeb does not support, so that you can design your task with that in mind: https://osdoc.cogsci.nl/3.3/manual/osweb/osweb/

You would need to translate the Python code into Javascript (https://osdoc.cogsci.nl/3.3/manual/javascript/about/). This is not very difficult, now that you know the logic of the solution you want to apply. Just note that you should declare variables using `vars.` instead of `var.` to make the variable available beyond the object in which it is defined. I'm pressed for time right now, but if you get stuck, I'll take a look and post the Javascript code here later on.

Best,

Fabrice.

PS: For questions about experiments you intend to run in a browser, the OSWeb forum section might be more relevant (you'll also find lots of interesting threads there that might be helpful). Otherwise, if you post on the OS forum, it's best to indicate that your task should run in a browser (so that you get replies that take it into account).

PPS: When replying in the forum, you might want to use handles so that your correspondent gets notified of your message (otherwise they won't know unless they manually revisit the thread). You just need to type @ followed by the first letters of your correspondent's username and a pulldown menu will pop up. For example, if you type @fa, this will appear:

• Hi @Fab,

maybe the best option is to upload my experiment. I tried to use Javascript but I'm not sure if it's correct. Now there are no problems with the colour, but OSWeb says that there is no variable called game_score.

Some of the text is in German; I hope that's not a problem. The structure is in English.

• Hi @jonase,

(1) You tried to use Javascript in an `inline_script` object instead of an `inline_javascript` object... That can't work... Use `inline_script` for Python code, and `inline_javascript` for Javascript code. Writing Javascript in a Python code object (or vice versa) will not work.

(2) Your Javascript code contains errors. For some reason you added parentheses everywhere you use the variable: game_score(). That won't work... Any reason you added the parentheses? Parentheses are used for functions, not variables. Also, when testing the condition of the "if" statement, you have to use "==" instead of "=". Here's the corrected code:

```if (vars.correct == 0) {
vars.game_score = vars.game_score + vars.penalty;
} else {
vars.game_score = vars.game_score + vars.reward;
}
```

Some further problems I spotted:

(3) You're using two different loggers. That is not a good idea, as it can create a messy output file and can result in errors. You should use one and the same logger throughout (if you copy and paste it in different places, use a linked copy). I fixed that to have only one logger.

Since you're using different tasks in the same experiment (the circle task, the probe task, and another task yet to be programmed), it would be a good idea to define a variable that appears in the data log and codes for the task currently running. That way, your output will contain a column coding for the task (making it easier to process the data separately for each task).

You'll need to make sure to include `task_stage` in the list of variables being logged (see next point).

(4) You must set the logged variables manually in OSWeb. Check the diagnostic of the OSWeb compatibility check (Tools > OSWeb). It'll give you some indication of whether something in your experiment is incompatible with OSWeb. You may want to read the documentation about OSWeb; it'll save you time and possible problems. Apart from using inline_script objects (Python), another problematic thing is how the logger is configured. By default, "Log all variables" is checked. This is great when you run the task in an OS window in the lab, but not for running it in a browser (because logging everything takes a toll on the bandwidth and can slow the browser down), so you must disable the "Log all variables" option and specify which variables you want to log in the data output. The easiest way to do this is by opening the variable inspector and dragging the variables you want into the data output. Please do this carefully, as you need to make sure to include all the variables that will be necessary for your analysis of the data.

Here, I inserted a few but you might want to revisit this to make sure you have what you want...

(5) Wrong penalty values in your loop table. The rewards and penalties you implemented in the task do not correspond to the rules you described in your initial post, as you use positive values...

As I mentioned in my earlier post, if you use the code I suggested, you must define the penalties as negative values. Alternatively, you need to change the code to subtract penalties instead of adding them.

I corrected it to:

Some more considerations:

Responses are measured from the keyboard object, not from the circle

I noticed that you display the circles for 195 ms before the fixation cross returns. That should work out if you don't expect participants to respond faster than 195 ms. They probably won't, but if they did, their response would not be recorded (because the keyboard object only starts after the 195 ms have gone by). If you wanted to make sure that responses can be recorded from the onset of the circle, you'd have to use something a little more complicated involving two keyboard objects, which I described in this post: https://forum.cogsci.nl/discussion/comment/23090#Comment_23090

Note that if response times matter in your experiment, your task currently measures RTs from the onset of the keyboard object, not that of the circle. That means that if a subject presses C or M 450 ms after the onset of the circle, the RT recorded by your task will be 255 ms (450 ms - 195 ms). So in that case you need to remember to add 195 ms to all RTs before you analyze your data.
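If you'd rather log a corrected RT directly, a tiny inline script placed after the keyboard object could store it (a sketch; `rt_corrected` is an illustrative variable name, and the stubbed `vars` with its example value only makes the snippet runnable on its own):

```
// Stub for the runtime-provided `vars`; response_time here is the RT
// measured from keyboard onset (i.e., after the circle's 195 ms)
var vars = { response_time: 255 };

// Add the circle's 195 ms presentation time back to obtain RT from circle onset
vars.rt_corrected = vars.response_time + 195;
```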

Debugging in the browser

When developing a task to be executed in a browser, do test it in the browser:

... and use your browser's console to check output to the console and error messages.

Counterbalancing the response keys across subjects

Your task currently sets the responses in a fixed way: C for horizontal bars and M for vertical bars in the game task. Unless you have specific reasons to do it this way, I would recommend counterbalancing the responses across subjects. You would need to use code to do this. See https://forum.cogsci.nl/discussion/comment/23990#Comment_23990 for an example using Javascript code. I took the liberty of including it in your task. You'll see that I added an instruction screen for the game task to show you that the instructions can be adapted dynamically to display whatever keys are appropriate for a specific subject. I used this code to create variables storing the correct responses for horizontal and vertical bars; the code runs at the onset of the experiment:

```
if (vars.subject_parity == "odd") {
    vars.horizontal_response = "c";
    vars.vertical_response = "m";
} else {
    vars.horizontal_response = "m";
    vars.vertical_response = "c";
}
```

Note that when running an experiment online using JATOS, the subject number will be sampled from the configuration you set in the OSWeb section:

To use the subject's parity as a key to counterbalance the response keys, you must make sure to include at least one odd and one even number there. Note that JATOS will not sample these equally, though, so you may not end up with exactly the same number of participants with each key arrangement. There may be ways to get around this by playing around with JATOS features, but I'm not sure how to do this yet (might be something to check in the JATOS documentation/forum).

That's all I have time for, I'm afraid. But I believe your task now runs and that I fixed the existing problems. Hopefully this should provide you with enough information to finish programming your task.

The forum and the web site contain a lot of useful information, and you can find a lot of information about Javascript online too (I started using OS in March and had no idea of Javascript or Python, but found a lot of useful information out there).

Good luck!

Fabrice.

• Hi @Fab

Thank you very much for your help! The experiment now works perfectly! Do you know how to embed the experiment into SoSci Survey?

• Hi @jonase,

Regarding your new question... I'm not familiar with SoSci Survey. I imagine it is possible to link it with your OSWeb experiment in JATOS, but it will depend on how flexible SoSci Survey is with passing parameters to, or reading parameters from, URLs.

You'll have to consult the Sosci Survey documentation to work out how to process information from the URL (https://www.soscisurvey.de/help/doku.php/en:survey:url#parameter_in_the_questionnaire_link).

The rationale is this: from JATOS to a questionnaire on another platform, you can set JATOS to redirect participants to the questionnaire's link, adding parameters to the URL so as to pass information from variables. From a questionnaire to JATOS, you can set up your experiment to read the content of certain parameters from the URL, which you can then use in your task. The JATOS side is described in several posts about linking JATOS with Prolific, or JATOS with Qualtrics: https://forum.cogsci.nl/discussion/comment/24127#Comment_24127, https://forum.cogsci.nl/discussion/5564/linking-osweb-jatos-with-qualtrics, https://forum.cogsci.nl/discussion/6993/using-prolific-ids-in-osweb-experiment, for example.

Good luck!

Fabrice.

• Hi,

sorry for asking for your help again, @Fab. I forgot a little thing in the experiment. I have to randomly assign the three colours to a negative, positive, and neutral valence. The neutral colour has a 1-point reward and penalty, the positive colour a 20-point reward and 0-point penalty, and the negative colour a 0-point reward and 20-point penalty.

Do you know how I can implement that feature? Subject one, for example, should get green as negative (+0, -20), red as neutral (+1, -1), and blue as positive (+20, -0). Another subject could have green as neutral (+1, -1), red as positive (+20, -0), and blue as negative (+0, -20). It should be random.

I hope you understand my intention. I will upload the newest version of the experiment to make it easier for you to understand.

Thank you!

• edited October 13

Hi @Fab

so here is my attempt at counterbalancing the three colours' valences across six different (3x2x1) sequences. Unfortunately this doesn't really work, but I think my intention is not too bad. Maybe you can tell me which mistakes I have made.

Edit: it now works, but I don't know if it counterbalances correctly

• @Fab It now works properly. However, only in such a way that the subjects are assigned to the different conditions in order. Is it possible to assign the subjects randomly?

• Hi @jonase,

I had a quick look at your task. Your method based on the subject number seems to work (though I did not check it in detail; you'd need to run the experiment as subjects 1 to 6 once fully and then look at the logged data to make sure it is working). However, I spotted three issues. The first is that the method you implemented counterbalances the rules across participants, whereas you indicated earlier that you wanted the rule to be selected at random. A second issue is that you implemented the different rules by creating multiple loops with unlinked copies of their objects: that makes the task unnecessarily long, and unlinked copies are independent, so if you decided to make a correction to a trial sequence, you'd have to go and make that correction in all trial sequences (this can easily lead to errors and inconsistencies). It is better to use nested loops and linked copies of objects.

There's no need for so much duplication, and the use of several independent loggers can really mess up the data output. So if you'd like to go down the route of your current version, I'd strongly recommend using linked copies of sequences.

A third issue is that in your implementation, the counterbalancing of the response keys is not orthogonally crossed with that of the rule sets. Indeed, the keys vary based on the parity of the participant's number. Since you then set the color rules based on their number too (using only 1-6), participants 1, 7, 13, etc. will have the same color rule without counterbalancing of the response keys. To fix that, you'd have to consider 2 x 6 cases. This would make your task structure quite cumbersome. Most importantly, however, it would not allocate rules to participants in a random fashion.

You have to decide whether you want a random allocation of rules or a counterbalanced one.

Based on your earlier post, I'll assume that you're after the random allocation and describe a solution below.

One way to solve the problem is to define the rewards and penalties in a dynamic way and not in a fixed way in the loop table:

Here we use values that are actually variables (the first letter codes for the color, the second for Reward/Penalty). So, RR = Red Reward, RP = Red Penalty, BR = Blue Reward, BP = Blue Penalty, GR = Green Reward, GP = Green Penalty.

That way, we can set the values for the penalties and reward from code. Here, a solution is to create an array containing three elements, and to shuffle it to obtain a random order of the rules, which will then be applied to the red, blue and green objects respectively. So for example, if the shuffled array gives "Rule 3, Rule 2, Rule 1", Rule 3 will be applied to red objects, Rule 2 to blue objects, and Rule 1 to green objects.

To achieve this, we need code inserted at the onset of the task:

Here is the code with annotations:

```// defines function to shuffle the content of an array
function shuffleArray(array) {
let curId = array.length;
// While there remain elements to shuffle
while (0 !== curId) {
// Pick a remaining element
let randId = Math.floor(Math.random() * curId);
curId -= 1;
// Swap it with the current element.
let tmp = array[curId];
array[curId] = array[randId];
array[randId] = tmp;
}
return array;
}
// Usage of shuffle
// arr contains the list of targets
let arr = ["Rule1", "Rule2", "Rule3"];
arr = shuffleArray(arr);
console.log("Shuffled order of rules:"+arr);
console.log ("Rule for red circles: "+arr[0]);
console.log ("Rule for blue circles: "+arr[1]);
console.log ("Rule for green circles: "+arr[2]);

// Allocate reward (R) and penalty (P)
// for the red (R), blue (B) and green (G) circles
// Sets rule for red circle
if (arr[0] == "Rule1") {
vars.RR=0; // RR = Red Reward
vars.RP=-20; // RP = Red Penalty etc.
} else if (arr[0] == "Rule2") {
vars.RR=1;
vars.RP=-1;
} else {
vars.RR=20;
vars.RP=0;
}

// Sets rule for blue circle
if (arr[1] == "Rule1") {
vars.BR=0;
vars.BP=-20;
} else if (arr[1] == "Rule2") {
vars.BR=1;
vars.BP=-1;
} else {
vars.BR=20;
vars.BP=0;
}

// Sets rule for green circle
if (arr[2] == "Rule1") {
vars.GR=0;
vars.GP=-20;
} else if (arr[2] == "Rule2") {
vars.GR=1;
vars.GP=-1;
} else {
vars.GR=20;
vars.GP=0;
}

console.log("Check on reward and penalties to be applied:")
console.log("Red reward:" +vars.RR)
console.log("Red penalty:" +vars.RP)
console.log("Blue reward:" +vars.BR)
console.log("Blue penalty:" +vars.BP)
console.log("Green reward:" +vars.GR)
console.log("Green penalty:" +vars.GP)
```

I included many console outputs to make it easy to check that everything works as it is supposed to. You can see that this code contains a function to shuffle an array, an array containing the three rule labels, and then a series of "if... else if... else" statements to set the values of RR, RP, BR, BP, GR, GP. These values will then be called in the loop table. That way, you just need one loop table instead of multiple ones.
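As a side note, the chain of "if... else if... else" statements could be condensed by storing each rule as a small object and assigning by position after the shuffle - a sketch under the same assumptions (the rule values match those above, and the `vars` stub stands in for the runtime-provided object):

```
// Fisher-Yates shuffle, same idea as the shuffleArray function above
function shuffleArray(array) {
    for (let i = array.length - 1; i > 0; i--) {
        const j = Math.floor(Math.random() * (i + 1));
        [array[i], array[j]] = [array[j], array[i]];
    }
    return array;
}

// Each rule bundles its reward and penalty
let rules = shuffleArray([
    { reward: 0, penalty: -20 },  // negative rule
    { reward: 1, penalty: -1 },   // neutral rule
    { reward: 20, penalty: 0 }    // positive rule
]);

// Stub for OSWeb's runtime-provided `vars` object
var vars = {};

// Assign the shuffled rules to red, blue and green, in that order
vars.RR = rules[0].reward; vars.RP = rules[0].penalty;
vars.BR = rules[1].reward; vars.BP = rules[1].penalty;
vars.GR = rules[2].reward; vars.GP = rules[2].penalty;
```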

Hope this helps.

Fabrice.

My version:

• edited October 13

Thank you very much @Fab. The problem here, however, is that the rule is redefined for each block loop in the Game Task. However, the same rule should exist for all runs of the Game Task.

Subject 1, for example, should receive red as negative, blue as neutral and green as positive in the Game Task.

Subject 2, for example, should receive red as positive, blue as neutral and green as negative.

This rule should apply randomly, as it does in your experiment. However, for the entire Game Task there should be the same rule.

The points must also always be the same: +20 -0 for the positive color, +1 -1 for the neutral and +0 -20 for the negative. Only the assignment of the color should be randomized.

I hope you get my point. Otherwise just ask me :)

Is there also a possibility to randomize counterbalance_keys, so that the allocation happens randomly here too?

After completion of the experiment you have really deserved some coffee ;)

• Hi @jonase,

Just to clarify... The solution I suggested does set the same rule for all blocks of the game task presented to a participant. The rule is not redefined in each block. It is set at the onset of the experiment and applies consistently in all game task blocks.

The allocation between points and stimulus color is random but remains fixed within-participant. So, participant 1 might get the positive scoring for the red color, the neutral for the blue color, and the negative for the green color. This remains so across all blocks. The next participant will get a new random allocation of points to colors, so they might get, for example, the positive scoring for the green color, the neutral scoring for the red color, and the negative scoring for the blue color.

As for the randomization of the keys across participants, that can easily be set up by modifying the code in the `counterbalance_keys` inline object:

```
let keypick = Math.floor(Math.random() * 2);

if (keypick == 1) {
    vars.horizontal_response = "c";
    vars.vertical_response = "m";
} else {
    vars.horizontal_response = "m";
    vars.vertical_response = "c";
}
```

This picks a value of 0 or 1 and sets the keys accordingly: Math.random() * 2 generates values between 0 and 1.99999..., and Math.floor rounds down to the integer below (so, e.g., 1.234 ends up as 1).

So, unless I misunderstood you, I think the solution I suggested does what you're after.

Hope this helps.

Fabrice.

Modified example:

• edited October 14

• Hi @jonase,

Saw the empty post above. Not sure whether you sent it by mistake or whether something went wrong and the content went missing. Just letting you know in case you did post something.

Best,

Fabrice.

• Hi @Fab. I solved the problem of my last post so I deleted it.

I have (hopefully) one last question. At the beginning of the task, the participants should read how many points they can earn and lose with each colour. I tried to implement this in "instruction_colour" but the variables are not available.

• Hi @jonase,

The problem you described is due to the fact that sketchpads are prepared before their parent object is executed - so in this case, before the Experiment object is run. Since you refer to variables that do not exist yet (they are only created from Javascript), OS complains that the variables RR, RP, etc. do not exist.

The solution is to use a feedback object instead of a sketchpad. Feedback objects are generated on the fly (so their content can depend on what's going on in the current context). So, if you simply replace the `instruction_colour` sketchpad with an `instruction_colour` feedback object, it'll work (if you wish to give the feedback object the same name as the sketchpad, you'll need to delete the sketchpad and empty the Unused items).

By the way, since OS does not let you copy a text object from a sketchpad and paste it elsewhere, you can instead copy the script of the sketchpad into the script of the feedback object (that way the feedback object will show the text exactly as it was on the sketchpad).

I haven't checked the rest of the task (it's quite long), but now the instructions run as you wished.

I attached my modification:

Hope this helps. Best of luck with your experiment!

Fabrice.