Question on connecting OpenSesame and ACT-R

Dear OpenSesame Team,

Besides doing experimental research, I develop ACT-R models and wonder if and how I could directly link these models to my experimental tasks in OpenSesame. So far, I have "translated" each experimental task into the ACT-R experimental GUI, since I use rather basic learning material (simple geometric shapes). However, this always costs extra time and thought to cover all essential task-related features and to keep the model's task as comparable as possible to the human task.

Is there perhaps a way of running OpenSesame in a native Python environment and connecting both programs that way, or would it be possible to work via TCP/IP?

Thanks in advance for your response!

Regards,
Maria

Comments

  • sebastiaan Posts: 2,947

    Hi Maria,

    It would help if you could provide some info about what ACT-R is, how the software works, and in what sense you want to link OpenSesame and ACT-R. In very concrete and simple terms, what exactly do you want to do?

    To answer your last questions: OpenSesame already runs in a normal Python environment (so yes), and you can use the socket module for TCP networking (so yes again).
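    For illustration, here is a minimal sketch of a TCP server using the built-in socket module, which you could run from an inline_script in OpenSesame (the host and port here are arbitrary and only for this example):

        import socket

        # Listen for a single connection on localhost; the port is arbitrary
        # and must match whatever the ACT-R side connects to.
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(('127.0.0.1', 9000))
        server.listen(1)
        conn, addr = server.accept()  # blocks until a client connects
        data = conn.recv(1024)        # read up to 1024 bytes from the client
        conn.sendall(b'hello from OpenSesame\n')
        conn.close()
        server.close()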

    Cheers!
    Sebastiaan

    There's much bigger issues in the world, I know. But I first have to take care of the world I know.
    cogsci.nl/smathot

  • Dear Sebastiaan,

    Many thanks for your answer and sorry for my delayed response!

    To answer your question about ACT-R (http://act-r.psy.cmu.edu/): it is a cognitive architecture implemented in Lisp. It operates through a set of modules with defined duties (e.g., the motor module initiates and performs manual outputs such as key presses, while the visual module detects and evaluates visual content), which are coordinated via production rules with an if-then structure.
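    Roughly speaking, a production rule is an if-then pair: if its conditions match the current contents of the modules' buffers, its action fires. As a loose analogy in Python (this is only an illustration; real ACT-R productions are written in Lisp and matched by the architecture itself):

        # Toy analogy of a production rule (not actual ACT-R syntax).
        state = {'visual': 'circle', 'goal': 'respond'}

        def respond_to_circle(state):
            # IF a circle is in the visual buffer and the goal is to respond...
            if state['visual'] == 'circle' and state['goal'] == 'respond':
                # ...THEN ask the motor module to press a key.
                return 'press-key z'

        print(respond_to_circle(state))  # press-key z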

    What I want to do is connect such a model, which is able to perform the same task as a human participant, directly to the OpenSesame experiment. This requires that the model can "see" and understand the "visual world" of the task and react to it, and that the model's reactions are received and interpreted by the task environment. So far, I have usually rebuilt the core task structure in the Lisp environment to get the task going, which is OK for simple tasks but always takes time, has severe limitations, and involves getting annoyed by Lisp a lot. I know there are already efforts to make a direct connection (e.g., the JSON Network Interface by Ryan Hope, see https://www.ncbi.nlm.nih.gov/pubmed/24338626), and I even managed to get the Python-based example started, but I am not sure how to transfer this to my code in OpenSesame and establish the connection there.

    However, thanks for your hint about the socket module; I will try to find out how it could work this way. My main problem at the moment is understanding how I can translate model actions into actual responses for the task and, vice versa, how I can translate task content into something the model can see and understand. As I noticed from the example code, most of the "effort" seemed to occur on the task side, which is why I am trying to approach the issue from the OpenSesame side.
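    For instance, I could imagine the OpenSesame side sending a JSON description of the current display over the socket and waiting for a key-press message in return. The sketch below is purely hypothetical: the message fields ('method', 'params', the 'keypress' reply) are made up for illustration and do not follow the actual JSON Network Interface protocol, and which side listens and which connects depends on the setup:

        import json
        import socket

        # Hypothetical exchange: describe the display to the model, then wait
        # for a simulated response. All field names here are invented.
        sock = socket.create_connection(('127.0.0.1', 9000))
        display = {'method': 'update-display',
                   'params': {'shapes': [{'kind': 'circle', 'x': 512, 'y': 384}]}}
        sock.sendall((json.dumps(display) + '\n').encode('utf-8'))

        # Read one newline-terminated JSON reply, e.g. a simulated key press.
        buf = b''
        while not buf.endswith(b'\n'):
            buf += sock.recv(1024)
        reply = json.loads(buf.decode('utf-8'))
        if reply.get('method') == 'keypress':
            key = reply['params']['key']  # treat this as the participant's response
        sock.close()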

    So, I would be really grateful for further advice if you have any ideas.

    Regards,
    Maria
