
OpenSesame and NETStation Package Plugin

Hello,

I am trying to get OpenSesame properly working with EGI hardware. I have downloaded this library:
https://github.com/imnotamember/Pynetstation-Plug-In

It works pretty well, but I still get a lot more jitter than with E-Prime. I really dislike E-Prime's philosophy and Visual Basic, and I already have some experience with OpenSesame, which I have fully integrated into my other lab.

Do you have any experience with timing your experiments with this plugin and EGI? Maybe I'm missing something, which would be a shame for me, because OpenSesame is an excellent piece of software and I would love to get rid of E-Prime completely :smile:

Thank you in advance!

Michael

Best wishes,
Michael Tesar
neurosciencemike.wordpress.com

Comments

  • Hello Michael,

    I'm actually the developer of the Pynetstation Plug-In. It was a brief side project for exactly the purposes you mentioned (getting out of the E-Prime universe). It works to a point, but I'm a lowly grad student with very little free time for maintaining/perfecting this project. I do plan on revisiting it this summer to make a more advanced and customizable version which you could easily integrate into any experiment. I'm sure you're more concerned with what can be done today, though.

    Yes, when first working with this software there is typically a LOT more jitter than with E-Prime (tested using EGI's offset kit). I typically find offsets for visual onset times in the range of +-15 ms, whereas E-Prime usually comes out to about +-3 ms. That's huge, especially when you're recording at 1000 Hz. In discussing this issue with some colleagues, how much the discrepancy matters depends on which of three situations your hypotheses fall into:
    1) Your hypotheses are based on ERPs of early components (0-150 ms after stimulus onset)
    2) Your hypotheses are based on ERPs of later components (150 ms+ after stimulus onset)
    3) You're performing frequency analysis

    If you're in situation 1, this is really dangerous, as you have smaller, more time-condensed components of interest. Averaging a P50 with +-15 ms jitter would certainly flatten the component heavily and be completely misrepresentative of the actual component being studied (although the P50 is typically auditory and doesn't bear on visual onset times; in that case you have more of an issue with audio timing than visual).

    If you're in situation 2, this is not as dangerous, as your waveforms are more spread out over time (typically the 300 ms+ components) and therefore take less damage from the jitter (it will affect the onset/offset of the components, but less so the amplitude). The 150-300 ms components could be considered in greater danger, but those seem to do alright so far, so I take it that they are such strong effects that the spreading doesn't have as enormous an impact (still an impact, nonetheless).

    Finally, if you're in situation 3, it depends. Are you doing time-frequency analyses or just frequency analyses? Jitter will have more effect on time-frequency analyses, though potentially it isn't as major a problem as it is for ERPs (I haven't had a chance to investigate this well enough to say). I have a sense that plain frequency analyses would be relatively safe, but that's also debatable.
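    To make the early-vs-late distinction concrete, here is a quick pure-Python simulation of onset jitter smearing an averaged ERP. Everything in it is illustrative, not real data: the components are modeled as Gaussians, and the widths (~10 ms for a narrow early component, ~60 ms for a broad late one) are rough assumptions, not measured values.

    ```python
    import math
    import random

    def component(t, peak_latency, width):
        # Gaussian-shaped ERP component with amplitude 1.0 at its peak latency
        return math.exp(-0.5 * ((t - peak_latency) / width) ** 2)

    def averaged_peak(peak_latency, width, jitter_ms, n_trials=500, seed=1):
        # Peak amplitude of the across-trial average when each trial's onset
        # is shifted by a uniform random jitter of +-jitter_ms
        rng = random.Random(seed)
        times = list(range(0, 601))          # 0-600 ms epoch at 1 kHz
        avg = [0.0] * len(times)
        for _ in range(n_trials):
            shift = rng.uniform(-jitter_ms, jitter_ms)
            for i, t in enumerate(times):
                avg[i] += component(t - shift, peak_latency, width)
        return max(a / n_trials for a in avg)

    # Narrow early component (P50-like) vs broad late one (P300-like)
    early = averaged_peak(peak_latency=50, width=10, jitter_ms=15)
    late = averaged_peak(peak_latency=300, width=60, jitter_ms=15)
    print(f"early peak after averaging with +-15 ms jitter: {early:.2f}")
    print(f"late peak after averaging with +-15 ms jitter:  {late:.2f}")
    ```

    With these assumptions the narrow component loses roughly a quarter of its peak amplitude to the jitter, while the broad late component is barely attenuated, which is the point above about later components "taking less damage".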

    In conclusion, I would say that if timing is a serious concern for you, use Psychtoolbox or PsychoPy for the moment. I've tried a number of ways of toying with graphics cards/backends/system configurations to get timing in OpenSesame as precise as in E-Prime, but haven't gotten it to work just yet. This millisecond-precision issue could just be me doing something wrong, but I haven't had enough testers out there yet to confirm anything (welcome to the club!).

    If you have a chance, could you run a simple experiment to test visual offsets with an EGI offset kit and send me the resulting offset file? I'll see if I can help, or at least add it to the files I've already collected on this issue while trying to figure out a resolution.

    TL;DR
    It's not there yet.

    Best,
    Josh Zosky

  • imnotamember Posts: 16

    UPDATES
    Hey Mike,

    I'm in the process of updating the PyNetstation Plug-in for better timing accuracy, and it should be completed very shortly. Can you tell me which graphics card you were using when it reported the offset jitter? Also, which backend were you using? The PsychoPy backend is currently the most reliable, but hopefully the xpyriment backend will soon be just as reliable.

    Thanks,
    Josh

  • neuropacabra Posts: 22

    Hey, we should start communicating again :-)

    A friend of mine and I would like to help with getting OpenSesame working with the EGI system. So testing, some code... feel free to ask. I don't know the exact HW specs of the computer: it's a regular desktop (some OptiPlex 7xxx) running W7 with the Geodesic SW and E-Prime 2 preinstalled.

    We used the Expyriment backend and measured the real onsets with a photodiode to estimate the jitter over about 100 image repetitions (trials = same pictures).
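    Once the photodiode onsets are logged, a few lines of Python summarize the numbers that matter: the mean offset is a fixed display delay you can simply correct for in analysis, while the SD and range are the jitter that actually smears averaged ERPs. The offsets below are made-up stand-ins for real measurements (a ~22 ms fixed delay plus +-15 ms jitter, matching the figures quoted earlier in the thread).

    ```python
    import random
    import statistics

    def jitter_stats(offsets_ms):
        # Mean = constant trigger-to-screen delay (correctable in analysis);
        # SD and range = the jitter that smears averaged ERPs.
        return {
            "mean_ms": round(statistics.mean(offsets_ms), 2),
            "sd_ms": round(statistics.stdev(offsets_ms), 2),
            "range_ms": round(max(offsets_ms) - min(offsets_ms), 2),
        }

    # Hypothetical data for ~100 trials: fixed 22 ms delay, uniform +-15 ms jitter
    rng = random.Random(0)
    offsets = [22 + rng.uniform(-15, 15) for _ in range(100)]
    print(jitter_stats(offsets))
    ```

    Comparing the SD on the OpenSesame machine against the same measurement under E-Prime on the same hardware would make the comparison in this thread concrete.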

    I guess we should take a closer look at it, right?

    Thank you,
    Michael

    Best wishes,
    Michael Tesar
    neurosciencemike.wordpress.com
