Port reader. PLEASE HELP!!!!!
Hi guys! I really need some help.
I'm trying to measure galvanic skin response with a NeuLog sensor, but I can't figure out the best way to do it in combination with OpenSesame.
I already tried to follow some instructions from 2014 that are here in the forum, but I'm still really lost. I work on a Mac, so I don't know if that is the beginning of the problem. I already installed the device's .py files in the OpenSesame folders and I also installed the serial_port_trigger plugin, but OpenSesame still doesn't import the GSR data.
The aim of the experiment is to measure the GSR when the participant observes a violent religious image and compare it to the GSR when they observe a violent non-religious image, and so on.
So, first I need the GSR of each participant during the experiment, and second I need something like a timeline that shows me the participant's GSR and the point where each image was shown. That way I can match the points of the GSR data to the images (the images will be displayed in random order) and know how each image affects the participants' GSR.
I was trying to use two computers, one with OpenSesame and the other with NeuLog's software, but it was a complete mess.
I found a discussion in the forum where someone was asking about the NeuLog, but the discussion is too old and I couldn't tell whether they were asking for something like what I need.
Thanks for your time.
Regards
@Edwin I saw some of your responses here in the forum, could you please help me with this? Thanks so much.
Comments
Hi Atenas,
I think the first step will be to establish clearly a) what you want to do, and b) what kind of connection the GSR device uses.
Cheers!
Sebastiaan
Check out SigmundAI.eu for our OpenSesame AI assistant!
To quote from another discussion:
> Hi, I'm dealing with the NeuLog. After many attempts I changed from Mac to PC, so here I am. On the NeuLog web page I found this: https://neulog.com/wp-content/uploads/2014/06/NeuLog-API-version-7.pdf So, my question is: how do I contact it via GET commands in OpenSesame?
This answers my question b) from above. So apparently it uses an HTTP connection to send JSON messages back and forth. This is the same protocol that a web browser uses for opening web pages, so to start with you could see if you can communicate with the device by simply entering the example queries from the NeuLog docs in the address bar of a browser.
Once this works, you can work on implementing it in a Python inline_script. This is not that difficult, and you can use Python's `urllib2` module for it. Note that `localhost` essentially means 'this computer'. If the NeuLog API is running on a different computer from OpenSesame (or from the web browser during testing), then you'd have to replace it with the IP address of the computer that runs the NeuLog API. I imagine that this may sound a bit overwhelming, but it's doable!
PS. Please don't open multiple discussions for the same question!
Hi, Sebastiaan. Sorry for all the questions in the forum. I'm very desperate; I've been dealing with this for almost two weeks.
The good news is that with the code you gave me I was able to communicate with the device, and I'm very happy!
About question a): I want to read GSR values from the device to analyze and log them within OpenSesame. That way I can have the GSR variable and the time_[item_name] variable in the same .csv, and know which image was shown (the images are in random order) and how these images affect the participants' GSR.
Thanks a lot for your time.
Good to hear that you got things more or less working! 💪
I don't have the device to test it myself, but I believe that the code below should first read the sensor information, then parse the `json` string, and store the sensor value in the var store. (But this is blind-coded, so there may be bugs!)
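Something along these lines (the query string and the layout of the JSON response are assumptions based on the NeuLog API PDF, so adjust them to what your device actually returns):

```python
# Read one GSR value from the NeuLog API, parse the JSON response, and store
# the value in the OpenSesame variable store. The query string and the shape
# of the response ({"GetSensorValue": [<value>]}) are assumptions based on
# the NeuLog API documentation.
import json
import urllib2

url = 'http://localhost:22002/NeuLogAPI?GetSensorValue:[GSR],[1]'
raw = urllib2.urlopen(url).read()
data = json.loads(raw)
# Take the first value from the list and make it available as the
# experimental variable 'gsr', so that it ends up in the log file.
var.gsr = data['GetSensorValue'][0]
print('GSR value: %s' % var.gsr)
```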
Hi, Sebastiaan!
I think we're getting closer, but I still have a problem.
The code you gave me works! But at the moment I still can't log the GSR for each of the images; only one GSR value appears in the log file. It's the same value that appears in the API. I tried a few options, but none of them work. So I'm asking again for your help. I'll attach an image of the API, an image of the log, and an image of my experiment.
I'm wondering if my mistake was in how I answered question a), and whether what I really need is to send triggers from OpenSesame to the GSR :( so that I can have the GSR for each image.
Thanks for your time.
Hi Atenas,
It looks like the API may not always respond by giving the actual GSR value. But this is easy to test. You could for example try it out in a browser, as I suggested above. Or you could print out the response to the debug window to see whether the issue is in the response or in the way that you parse the response:
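For example, something like this (a minimal sketch, reusing the assumed GetSensorValue query from above):

```python
# Print the raw response to the debug window, so you can see exactly what the
# NeuLog API sends back before any parsing happens.
import urllib2

url = 'http://localhost:22002/NeuLogAPI?GetSensorValue:[GSR],[1]'
raw = urllib2.urlopen(url).read()
print('Raw NeuLog response: %s' % raw)
```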
Cheers!
Sebastiaan
Hi, Sebastiaan. Me again.
So, after several hours I could figure out what is happening. The 'GetSensorValue' command always gives us a single value from the device (I'll attach some screenshots from the manual). The value only changes when I press enter on the link.
I think the command I need is 'GetExperimentSamples', because with that one I would get all the samples from the current experiment. But when I replace 'GetSensorValue' with 'GetExperimentSamples' in the code, Python and OpenSesame crash.
So, I did a few tests in the browser like you said, and after several tries I figured out that the sensor asks me first to start the experiment (http://localhost:22002/NeuLogAPI?StartExperiment:[GSR],[1],[8],[30]), then to get the samples (http://localhost:22002/NeuLogAPI?GetExperimentSamples:[GSR],[1]), and finally to stop the experiment (http://localhost:22002/NeuLogAPI?StopExperiment:[GSR],[1],[8],[8]).
So I had to put each query in a separate browser window and press enter. The problem is that I don't know what the Python code needs to look like in OpenSesame. And after all this I'm a little disappointed, because I don't know if this is going to work; it doesn't seem close to the results I expected. I was trying to get as close as I could to the results from the original app, without the API.
So, again, I'm asking for your help: first, to figure out what the Python code needs to look like to get all the experiment samples, and second, whether you have another idea for how to solve this. I tried running the experiment on separate computers, but the images are in random order, so I don't know which image affects the GSR in which way, and if I show the images sequentially I think I'll have a time bias.
I've overthought this, so my ideas aren't very clear, sorry.
I was thinking of adding the current time to the variables; maybe that way I can analyze the data.
And thanks for your time.
> So, after several hours I could figure out what is happening. The 'GetSensorValue' command always gives us a single value from the device (I'll attach some screenshots from the manual). The value only changes when I press enter on the link.
Absolutely. Perhaps the confusion comes from thinking that you're asking for a stream of data? But you're not! Instead, you're asking for a single value, and that's exactly what you get. If you want the next value, you have to ask again. In a browser, you'd do that by reloading the page, as you've noticed. In OpenSesame, you'd do it by running the script from my previous post each time that you want to read a new sensor value within OpenSesame. So for example at the start of the trial, or at some other relevant moment.
Does that clear things up?
Hi, Sebastiaan! You were right! And finally, after so much thinking about it, I could put my ideas in order. I think the tiredness didn't let me think straight. The only thing I need to do is put the code in the loop, before the sketchpad. That way I obtain the sensor value for each image.
If I want to get the experiment samples, I just need to put an inline_script with the API code to start the experiment, then another inline_script with the code to get the samples of the experiment, and finally the code to close the experiment. So, thanks a lot for your time and your patience!
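For reference, a rough sketch of how those three inline_scripts could look, using the exact queries from the browser tests above (the parameters after the sensor name and the layout of the returned JSON are taken on trust from the earlier posts and the NeuLog API PDF):

```python
# Sketch of the three steps, using the queries that worked in the browser.
# In practice these go into three separate inline_scripts: start recording
# at the beginning of the experiment, fetch samples where needed, and stop
# recording at the end.
import json
import urllib2

def neulog(query):
    # Send one command to the NeuLog API and return the raw response string.
    return urllib2.urlopen('http://localhost:22002/NeuLogAPI?' + query).read()

# 1) Start recording (parameters as in the browser test above).
neulog('StartExperiment:[GSR],[1],[8],[30]')

# 2) Fetch the samples collected so far and log them as one variable.
raw = neulog('GetExperimentSamples:[GSR],[1]')
samples = json.loads(raw)
var.gsr_samples = str(samples)
print(samples)

# 3) Stop recording.
neulog('StopExperiment:[GSR],[1],[8],[8]')
```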
Great to hear that you figured it out!