Hi! Really enjoy using it!
I wanted to suggest one improvement to the already amazing touch response:
the ability to define margins. Otherwise, the row and column calculations start from the very edge.
Thanks for all your work!
@labovich That's a good idea, actually.
Thanks! So I have a follow-up idea: adding grid lines to the sketchpad (invisible to subjects) marking the borders of every 'zone', to ensure that stimuli in the different zones don't cross the zone's borders.
Someone to answer problems on the help forum would be nice, too.
I am deeply grateful for this software and for the countless hours of hard work that the developers put into it. However, I have lost dozens of paid participants to runtime errors involving Python repeatedly crashing, or IO errors involving supposedly missing files that I can see in the file pool. For example:
I appreciate the effort of the moderators to help with the problems, but echoing @DanSolo, for me there are too many unpredictable crashes and not enough support to address them (which I understand--you have other things to do!). This is meant as purely constructive feedback. I am still a huge fan of the software and grateful to its creators and maintainers! It's just not working consistently for me (nor my collaborators using the same experiments). Perhaps we wouldn't have issues if we didn't use the Eye Tribe? I'm trying to switch to scripting the experiments myself in Python, as another experiment with scripts by @Edwin went smoothly (using the Eye Tribe and PyGaze).
Again, none of this is meant as blame or criticism--just feedback that might be useful as you think about where to take OpenSesame. Thanks again for the great software, even if it doesn't always work for me :)
@DanSolo and @TomArmstrong There are indeed periods on the forum where we have a hard time keeping up. We're doing our best though!
@TomArmstrong I've been peripherally following your discussion about the IOError. It's very strange, and I've never seen it happen myself. But it's on the radar now!
Hi @sebastiaan I think it would be great to be able to specify global lists of stimuli outside the block loop structures, so that one could then sample randomly from those stimulus lists to present in each loop.
Also for eyetracking with video stimuli, it would be useful to be able to store the frame numbers to the eyetracking file, so that those are matched.
> @heliocuve I think it would be great to be able to specify global lists of stimuli outside the block loop structures so that one could then be able to sample randomly from stimuli lists to present in each loop.
This sounds like what E-Prime calls 'nested loops'. Is that what you're referring to? In most cases, you can accomplish the same thing more elegantly by using advanced loop operations, for example by shuffling one column randomly from the other. Is there a specific scenario in which this wouldn't get you what you want?
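The 'shuffle one column' idea can be sketched in plain Python (the stimulus names here are made up for illustration):

```python
import random

# Hypothetical two-column stimulus table; the rows start out paired.
words = ['cat', 'dog', 'bird', 'fish']
images = ['cat.png', 'dog.png', 'bird.png', 'fish.png']

# Shuffling only one column relative to the other re-pairs the rows,
# which is the effect of the 'shuffle one column' loop operation.
shuffled_images = images[:]
random.shuffle(shuffled_images)

trials = list(zip(words, shuffled_images))
```

Every image still appears exactly once per block, but which word it is paired with varies from run to run, which covers most random-sampling designs without a separate global list.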
You are right! I think that probably does it.
I hope that OpenSesame can make recordings properly.
Order of variables in the output files: I would prefer to have the variables in the order they appear during the experiment instead of in alphabetical order. Maybe an option in the logger item to select between chronological and alphabetical ordering of the variables would be nice.
Not sure whether the alternative is chronological order, but if you don't use the logger item and instead log within an inline_script (see https://osdoc.cogsci.nl/3.2/manual/python/log/#function-log46write95vars40var95listnone41), the variables don't get sorted, unless you explicitly do so.
But indeed, an added option to the logger item might be a handy little addition.
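The difference can be illustrated in plain Python (this is not an actual inline_script; the variable names are made up):

```python
# The logger item sorts variable names alphabetically before writing,
# whereas log.write_vars() keeps the order of the list you pass in.
recorded = ['response_time', 'correct', 'block', 'trial']

alphabetic = sorted(recorded)    # order the logger item would use
chronological = list(recorded)   # order write_vars(recorded) preserves

print(alphabetic)
print(chronological)
```

So passing your own list of variable names to `log.write_vars()` gives you full control over the column order.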
@eduard @DahmSF Thanks, that's easy to add and might be useful to some people.
Hi OS team,
It goes without saying that all of us in this big community owe you immense gratitude for your tireless work in making OpenSesame always better. Thank you!
I don't know how much of a crowd there is for this request, but I was wondering if OpenSesame could add support for the self-report dial and biometrics kit that come with the Gazepoint eye tracker? Theoretically I think they should be accessible through the same API used to support the Gazepoint eye tracker in OpenSesame. The self-report dial would be useful for studies where self-report ratings (that involve rotating a dial, a slider, etc.) are required. And the biometrics kit would allow syncing physiological data like skin conductance and heart rate. Because these devices are cheap I see the appeal of getting them, and I know some people have been getting them to use with OS and PyGaze (I just don't know if it is enough people to justify providing this support).
Another more general request is for something that has probably been discussed a couple of times here: a solid slider implementation. I know and have used the available options and have adapted some, but I always find rating experiments with sliders a bit of a pain in OS, as the current options are very specific, and optimizing them has proven tricky as well. In particular, I think a slider that can be used during and after stimulus presentation (for instance during the presentation of a video or a photo), and that can record single responses as well as continuous ratings, would be a really valuable addition to the OS arsenal of tools.
That's great to hear, thank you!
> I don't know how much of a crowd there is for this request, but I was wondering if OpenSesame could add support for the self-report dial and biometrics kit that come with the Gazepoint eye tracker?
In terms of people from the OpenSesame team supporting this: that probably won't happen. We're stretched thin enough as it is, and I don't know of anyone using these devices. But if someone is interested in developing that, then of course we'd fully support that. It could be a set of third-party plugins.
> I know and have used the available options and have adapted some, but always find rating experiments with sliders a bit of a pain in OS, as the current options are very specific, and optimizing them has proven to be tricky as well.
That's a recurring question, and it might indeed be useful to add a slider to the form widgets. In your view, what would an ideal slider look like?
Thanks for your response @sebastiaan
> In terms of people from the OpenSesame team supporting this: that probably won't happen. We're stretched thin enough as it is, and I don't know of anyone using these devices.
That is totally fair and understandable; it was a long shot anyway.
I am working on a few alternatives myself, and will let you know if I manage to get it working.
> That's a recurring question, and it might indeed be useful to add a slider to the form widgets. In your view, what would an ideal slider look like?
I think an ideal slider would be one that can be drawn onto any stimulus surface (by that I mean used concurrently with the stimulus presentation, e.g. images or videos) as well as on its own. The first would allow ratings to be provided while the stimulus is visible, and the latter would be for cases where ratings are only needed after stimulus presentation.
In both cases, it would be good to have an optional parameter to decide whether to collect continuous ratings instead of only a single rating. For instance, imagine people watching a video and continuously rating how they feel from positive to negative. The slider would then have to collect ratings continuously at a constant rate (e.g. every few milliseconds, or every few seconds), instead of recording only one rating.
Minor things would include:
Those are the things I think would be useful for a wide variety of situations.
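A continuous-rating loop like the one described could be sketched as follows, assuming a `get_rating()` callable that returns the current slider position (a stand-in for reading the actual widget, not a real OpenSesame API):

```python
import time

# Sample a rating source at a fixed interval for the duration of a
# stimulus, collecting (timestamp, rating) pairs.
def collect_ratings(get_rating, duration_s, interval_s):
    samples = []
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        samples.append((time.monotonic() - start, get_rating()))
        time.sleep(interval_s)
    return samples

# Example: a constant rating of 0.5, sampled every 10 ms for 50 ms.
samples = collect_ratings(lambda: 0.5, duration_s=0.05, interval_s=0.01)
```

In a real experiment, the sampling interval would be a parameter of the slider widget, and the collected pairs would be written to the log at the end of the trial.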
@DahmSF A solution to this issue is to comment out line 55 of the logger.py file in the libopensesame folder (the location varies depending on your operating system), which reads self._logvars.sort()