Reducing jatos.appendResultData in OSWeb Study
I am running an OSWeb experiment online through a JATOS server that was made public to host experiments during the pandemic, and I am having some data storage issues. The server admins tell me that my experiment sends about 10 `jatos.appendResultData` calls per second, each of which triggers a full database update; all of these calls are being logged and are bogging down the server's available storage. How would I go about reducing the number of `jatos.appendResultData` calls my study sends from within the OpenSesame framework? Could this result from having more than one logger object in the experimental flow? I'm not really sure where this backend behaviour comes from, so any suggestions would be greatly appreciated. Thanks!
Comments
Hi @quirkm
This could well be due to the fact that you are using multiple loggers, especially if they follow each other in rapid succession. Without further details it is hard to tell why the call to `appendResultData` happens so frequently in your experiment. Maybe you can post the experiment here and we can have a closer look.
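In the meantime, in case it helps: the usual way to cut these calls down is to buffer the trial rows yourself and write everything to JATOS in a single request at the end. Below is a minimal jatos.js sketch of that pattern, assuming it runs in an inline JavaScript item; the `trialBuffer`, `logTrial`, and `finishStudy` names are purely illustrative and not part of OSWeb or jatos.js.

```javascript
// Minimal sketch: collect trial rows locally and write them to JATOS once,
// instead of calling jatos.appendResultData() after every logger event.
// trialBuffer, logTrial() and finishStudy() are illustrative names.
var trialBuffer = [];

// Call once per trial with whatever variables you would normally log.
function logTrial(row) {
  trialBuffer.push(row);
}

// Call once, at the very end of the experiment.
function finishStudy() {
  // submitResultData() writes the whole result in one database update;
  // appendResultData() hits the database on every single call.
  jatos.submitResultData(
    JSON.stringify(trialBuffer),
    function () { jatos.endStudy(); },                       // on success
    function (err) { console.error('Saving failed:', err); } // on error
  );
}
```

Whether you can wire this into OSWeb's own logger items is another matter, since (as this thread suggests) it is the loggers that trigger the `appendResultData` calls internally, so treat this as the general jatos.js pattern rather than a drop-in fix.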
Hi Daniel, I went back through my program to make sure that all loggers were linked, although I still have several loggers in the experimental flow, and this alone didn't solve the problem. I also did a few test runs with just one logger at the end, which did solve the `appendResultData` issue, but then, as expected, each participant run only had one row of data that looked like an average for the whole experiment. I'm trying to avoid having to re-program the experiment in another package just to bypass this issue, so I would really appreciate any thoughts you may have if something pops out!
Hi @quirkm
I looked through your experiment, but there is nothing out of the ordinary, except that in some sequences you have two loggers directly after each other, which seems a bit peculiar.
> each participant run only had one row of data that looked like an average for the whole experiment
This sounds weird too. Could it be that you are looking at the `avg_rt` and `acc` variables? Those are cumulative by design, whereas `response_time` and `correct` log the results per trial. Otherwise I am not entirely sure what you mean and have too little to go on...