[wishlist] What would you like to see in OpenSesame?

24 Comments

  • edited July 2013

    @Edwin

    Honestly, I've not tried it at all (I still haven't found time to finish what I was doing on the mobile app).
    I do know that it works with the HTML5 canvas, so it seems plausible that a new back end could be added that draws to pyjamas.canvas.Canvas instead of the PyGame canvas.
    It could be a job for the future!

    There's an example of the HTML5 canvas at http://pyjs.org/examples/Space.html (source code at https://github.com/pyjs/pyjs/tree/master/examples/asteroids/)

  • edited 12:31AM

    Hi @Wouter, @elizabeth, @cnvanderwal, and @sviter,

    Thanks for all the feedback, requests, and issues!

    Regarding the possibility of creating an online runtime for OpenSesame, as discussed by @Evadeh, @Edwin, and @EoinTravers: I think there is a lot of demand for this, so let me say a few words.

    Technological feasibility

    Technically, I think it's feasible. If a Python-to-JavaScript compiler works well enough, that would be perfect. But even if we have to write a native JavaScript runtime for OpenSesame, it should be possible, although embedding Python code would probably not be possible in that case.

    Who will do it?

    I think the more pressing question is not whether it's possible (because the answer is yes), but who will implement it. Let's assume that we want to take this on seriously, as a software package that really works and is not just one of the many half-finished solutions for running online experiments. In that case, it would be a sizable project, and someone with considerable expertise would need to invest some serious time, possibly as part of a post-doc or PhD project. OpenSesame as a project doesn't have sufficient budget to finance this. I was initially hoping that one or more university departments would step in to finance this development, because they seemed to be very interested. @Evadeh actually initiated this, so she will probably confirm: Everybody is interested, but universities are not willing to invest.

    Possible grants

    If we want to make this happen, I think the best route would be through a grant proposal. I have not seen serious grants that are purely for the development of scientific software, except perhaps from the CoS (see below). But it could be an applied part of a grant that relies on online experiments, for example in the field of social psychology, where I think the interest in online experiments is greatest. I would be very happy to be part of such a grant proposal, so if anyone would like to do this let me know.

    Another option would be to contact the Center for Open Science. They actually organized a conference-call meeting (which was a first for me) with a number of developers to discuss online experiments. No concrete plans have yet come from this, but the topic is on their mind, and the center does give out grants. So we could apply for a grant there, specifically for developing a package to run experiments online. Again, I would love to be part of this.

    To sum up: I think some external help will be necessary to develop an online runtime for OpenSesame, because the project is already saturated when it comes to manpower and budget. But if someone takes the initiative, for example by writing a grant proposal, I, and I imagine the other team members as well, will support it 100%.

    Cheers!
    Sebastiaan

    There's much bigger issues in the world, I know. But I first have to take care of the world I know.
    cogsci.nl/smathot

  • edited September 2013

    Since no one has suggested these, I guess I may simply not have found out how to do them yet:

    • Shuffle the order of operations by dragging them up or down
    • Move stimuli placed on the sketchpad by dragging them
  • edited 12:31AM

    Being able to run a particular section of an experiment would save a lot of time (e.g. not having to run 5/6 blocks just to test block 4).

  • edited October 2013
    • I agree with froukehe on the option of being able to drag already-drawn stimuli on a sketchpad to a new position! That really is a must-have in my experience
    • More "spreadsheet"/Excel-like behavior of the table in a list item. For instance

      • Changing the order of columns (by dragging them to a new position)
      • Deleting intermediate rows (thereby making the contents of all cells below shift up)
      • Undo function/Ctrl-Z
      • Easier way of copy-pasting values into cells. As it is at the moment, sometimes this works as expected, but at other times it doesn't. Maybe even add a 'fill handle' at the bottom right of the cell
      • Improved weight system. Even though OpenSesame allows one to pin a weight to several rows now, once these weights are applied the whole table is irreversibly changed. It would be nice if the weights were just a number in the first column that effectively changes how often its corresponding row is 'run', without changing the table itself (as it is in E-Prime now)
    • Being able to exit an experiment by pressing ESC at any time in the experiment, even when there is no keyboard_ or mouse_response item listening for user input. It might also be a good idea to use a less obvious key combination than simply ESC (In E-Prime, again, it is for instance CTRL-SHIFT-ALT, which is harder to guess)

    • More informative debug messages for inline_script. Now it simply states something like "Error in on line 32", and often also not in which inline_script the error occurred (at least not in the debug window; it does appear in the notification window, but you have to close that to continue working with OpenSesame)
    • Be able to change the order of items by dragging them around in the Overview window
    • With the EyeLink plugins: the ability to specify where to save the EDF file, instead of it always being written to the OpenSesame program directory by default.
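The E-Prime-style weight idea above could be implemented by expanding rows at runtime rather than rewriting the table. A minimal sketch (the `weight` column name and dict-based rows are assumptions, not OpenSesame's actual internals):

```python
def expand_by_weight(rows):
    """Repeat each trial row `weight` times at runtime, leaving the
    source table itself untouched."""
    expanded = []
    for row in rows:
        # Copy the row so repeats don't share state
        expanded.extend(dict(row) for _ in range(int(row.get('weight', 1))))
    return expanded

table = [
    {'target': 'left',  'weight': 2},
    {'target': 'right', 'weight': 1},
]
trials = expand_by_weight(table)  # 3 trials; `table` is unchanged
```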
  • edited October 2013

    I completely agree with both of froukehe's suggestions. Furthermore:

    • Being able to add a directory of files to the file pool. Dragging works, but otherwise, you have to add files one by one.

    • Being able to copy a loop (for instance) in the GUI without getting an exact linked replica, but an editable copy instead. Now, when you fill a loop with an existing sequence, any changes made to that loop/sequence are also made in the original. It would be nice if there were a copy option that does not have these properties, but only uses the loop that you copy as a template, so to speak, completely separate from its original.

  • edited October 2013

    Hi,
    I am new to OpenSesame. I am designing a basic shape discrimination task. I have to display two shapes simultaneously, and clicking on one particular shape will be considered the correct response. The positions of the shapes (I am using only two shapes) will be randomized across trials. Will it be possible to design this in OpenSesame, or will I have to do Python programming for it?
    I am facing a problem designing the task in OpenSesame: it shows two identical shapes (for example, a triangle and a triangle) in a trial, which I don't want, instead of two different shapes in one trial.

  • edited 12:31AM

    Hi Pooja,

    Could you perhaps open a new thread for your question, instead of posting it in the wishlist?

    Thanks in advance! :)

    Lotje

    Lotje van der Linden - http://www.cogsci.nl

  • edited October 2013

    I wish for a tutorial on the most basic aspects of variable usage and scripting in OpenSesame (I find that simply typing in Python doesn't always work, possibly because of OpenSesame's distinction between the prepare phase and the run phase).

  • edited 12:31AM

    I saw the voice response request in an earlier comment on this post. My colleague, who does language research, was also asking for it. I would have had them converted to OpenSesame if this feature (a voice key) had been included.

  • edited 12:31AM

    I can see if I can add a voice response feature to the sound recorder plugin. It should not be that hard if I can figure out how to recognize if there is a "signal" in a wave packet.

    On that note, I would like to see block commenting in the inline script editor! Now you have to manually add a # to each line you want to comment, or place sections between triple quotes, but it would be nice if you could, for instance, select a block of text, press CTRL+1, and then this whole block gets commented. This is how it works in Spyder now, at least, and I use it a lot!
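For the voice key mentioned above, a simple amplitude-threshold detector could be sketched like this (a rough sketch under stated assumptions: mono float samples in [-1, 1], NumPy available, and a fixed threshold; real speech onset detection would need smoothing and noise handling):

```python
import numpy as np

def voice_key_onset(samples, threshold=0.1, sample_rate=44100):
    """Return the onset time (ms) of the first sample whose absolute
    amplitude exceeds `threshold`, or None if the recording stays silent."""
    above = np.flatnonzero(np.abs(samples) > threshold)
    if above.size == 0:
        return None
    return above[0] * 1000.0 / sample_rate

# 100 ms of silence followed by a loud burst
recording = np.concatenate([np.zeros(4410), 0.5 * np.ones(100)])
print(voice_key_onset(recording))  # → 100.0
```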

  • edited 12:31AM

    @dschreij: YES! And indentation by selecting a couple of lines and hitting Tab or Shift+Tab, and 'go to line' shortcuts, autocompletion, warnings when a variable name isn't familiar (mostly on a typo), and pretty much all other amazing stuff you can do in a regular code editor. I think expanding the script editor functionality has been on the 'WANT'-list for quite a while now, so hope to see some of it appearing soon!

  • edited 12:31AM

    Lots of good ideas!

    But just a quick note about block commenting: This is actually already possible in 2.8.0, which is now in pre-release. The text editor component has been split off into a separate project QProgEdit, which is much more advanced than the old one.

    Block commenting is Control+M and Control+Shift+M.


  • edited 12:31AM

    It would be very useful to be able to run an experiment online. I mean, having the possibility to share a test online, for example with a link, and ask people to perform it online. That would be a very easy way to find subjects, for some kinds of experiments at least!

  • edited 12:31AM

    It would also be a great help to have consistency in the x/y coordinates of the sketchpad across all element types. For example, drawing an image and drawing a rectangle require different kinds of coordinates.
    Also, the mouse position setting appears to be different in full-screen and quick-run mode.

    Thank you all for your great work!

  • edited 12:31AM

    What people keep asking me about when I introduce them to OpenSesame is reading the stimulus order from a file. This is necessary when doing experiments with probabilistic sequence learning and so on, where a particular order is generated offline using brute-force algorithms. If only the loop had a "load from file" option.

    Another thing I miss is a proper log. It would be great if there was an option to log all events: onsets of all stimuli and all keypresses (even those that are not used in the experiment). This would be great for debugging.
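In the meantime, a "load from file" could be approximated with a short inline script. A minimal standard-library sketch (the CSV layout, one row per trial with one column per variable, is an assumption; the demo file here is made up for illustration):

```python
import csv
import os
import random
import tempfile

def load_trial_list(path, shuffle=False):
    """Read a trial list from a CSV file into a list of dicts,
    optionally shuffling the trial order."""
    with open(path, newline='') as f:
        trials = list(csv.DictReader(f))
    if shuffle:
        random.shuffle(trials)
    return trials

# Tiny demonstration file
path = os.path.join(tempfile.mkdtemp(), 'trials.csv')
with open(path, 'w', newline='') as f:
    f.write('shape,position\ncircle,left\nsquare,right\n')

for trial in load_trial_list(path):
    print(trial['shape'], trial['position'])
```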

  • edited 12:31AM

    It would be very useful to have a way to test/preview individual elements within Opensesame (e.g. test that sound sampler is able to retrieve/play a sound file) without having to run through the experiment.

  • edited 12:31AM

    Hi,

    I'm a novice with OpenSesame, but here are four things that I think would be very helpful.

    1. Virtual Reality (VR) compatibility for head mounted displays such as the Oculus Rift so that we could programme and run psychological experiments in VR.

    2. Ability to directly run an executable file from within OpenSesame itself, as well as run web based applications (one could link to an online application then as part of an experiment).

    3. Support for sending EEG triggers via USB rather than via a parallel port.

    4. Ability to automatically store and retrieve experiment information so that trials could be conducted across multiple test days.

  • edited 12:31AM

    Hi,

    I really like that there is now a comment/uncomment all function. Really helps!

    Unfortunately, the increase/decrease indent function is gone since the new update. I know there are shortcuts for doing that (at least for increasing the indent), but I think it would be handy to have both (commenting and indenting) presented next to each other. This would greatly improve workflow as well as bug-fixing.

    Cheers!

  • edited June 2014

    Good evening, colleagues, is it possible to add "Undo the last action"?

  • edited 12:31AM

    I would ask for the possibility to copy existing items to another place in the overview area.

  • edited 12:31AM

    FYI: Copying existing items is already possible, by using a sequence's Append existing item button.

    This doesn't mean one could not think of a more straightforward way of doing this (e.g. by right-clicking on an item in the overview, or by allowing CTRL+C and CTRL+V keyboard shortcuts). This would probably be a nice addition.

  • edited 12:31AM

    Thank you, Edwin. When we push the Append existing item button, we add exactly the same item, which cannot be edited independently. I meant that I would like to copy an item and change it a little. In the current version, when you change a copied item, you change the original one too. Or am I wrong?

  • edited 12:31AM

    Ah, I see what you mean. You are definitely right: Append existing item produces a copy that is linked to the original; change one and you change the other too. I now understand that you would like to see the option to copy an item without this link to the original.

  • edited 12:31AM

    Hi,
    A useful facility would be a random number generator available from a drop-down box, providing simple generation of values from-to in steps of a given size. This would be very helpful for creating random inter-trial intervals (ITIs).
    Students who are new to creating experiments need this option but are unaware of the coding required. I am not sure how easily this could be implemented, but it would be very useful.
    Thanks
    Anthony McGuffie
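Until such a drop-down exists, a random ITI in "from-to in steps of" form is a one-liner with Python's standard library, usable from an inline script (the 500-1500 ms range and 50 ms step below are example values, not defaults from OpenSesame):

```python
import random

def random_iti(start=500, stop=1500, step=50):
    """Draw a random inter-trial interval (ms) from `start` to `stop`
    inclusive, in multiples of `step`."""
    return random.randrange(start, stop + step, step)

iti = random_iti()  # e.g. 500, 550, ..., 1500
```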

  • edited 12:31AM

    Hi,
    I wanted to ask if there is any possibility of sending triggers to other Android apps from your Android app. Do you have an API for communication with other Android applications?
    If you want to do serious experiments in neuroscience, and precisely record information about the timing of a visual stimulus and the evoked potential during EEG signal processing, it is essential to use this kind of communication. A CSV file record is not enough, because we must at least send a trigger when the experiment starts in order to be able to make a proper time scale.
    Cheers :),
    Milena

  • edited 12:31AM

    Hi Milena,

    Do you have API for communication with other android applications?

    OpenSesame (or, rather, Python) is very flexible in how it can communicate with other applications. So there is not one specific API, but you can communicate with other applications in pretty much any way you want. In your case, for the mBrainTrain, you'll have to first know what kind of input the mBrainTrain app expects. For example, can your app listen to a UDP socket? For two apps running on the same device, that would seem like a viable way to communicate.
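As a sketch of the UDP idea, sending a fire-and-forget trigger from Python takes only a few lines (the port number and message format here are made up for illustration; whether the receiving app accepts UDP is the thing to check first):

```python
import socket

def send_trigger(message, host='127.0.0.1', port=5005):
    """Send a trigger string as a single UDP datagram to another app
    listening on (host, port)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(message.encode('utf-8'), (host, port))
    finally:
        sock.close()

# Hypothetical usage at stimulus onset:
# send_trigger('stim_onset')
```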

    It would be best if you open a new discussion for this, and describe in more detail what you need, and what the properties of your app are.

    Cheers!
    Sebastiaan


  • edited 12:31AM

    Hi! I would like an 'allowed_responses' field in the touch response item. I've been good ;-)

  • edited 12:31AM

    Hello,
    I am new to PsychoPy. I need to play a set of short movies and get responses from my subjects. I am facing difficulty in playing the movies: the window displaying the movie pops up and then vanishes without playing the movie. Does anybody have a solution?
    Also, I need to integrate an eye tracker with it. Can anyone help me out?

  • edited 12:31AM

    Hi @vinu,

    You might want to open a new thread to ask. Also, it would really help if you could be specific about what you want, how you are trying to implement this, and why that does not work.

    Videos can be played in OpenSesame by using one of several options, see here. Eye trackers (EyeLink, EyeTribe, SMI, and Tobii) can be used via OpenSesame's PyGaze plug-ins.

    Good luck!
