OpenSesame and NETStation Package Plugin


I am trying to get OpenSesame working properly with EGI hardware. I have downloaded this library:

It works pretty well, but I still get a lot more jitter than E-Prime does. I really dislike the E-Prime philosophy and Visual Basic, and I already have some experience with OpenSesame, which I have fully integrated into my other lab.

Do you have any experience timing your experiments with this plugin and EGI? Maybe I am missing something, which would be a shame for me, because OpenSesame is an excellent piece of software and I would love to get rid of E-Prime completely :smile:

Thank you in advance!



  • Hello Michael,

    I'm actually the developer of the Pynetstation plug-in. It was a brief side project for exactly the purposes you mentioned (getting out of the E-Prime universe). It worked up to a point, but I'm a lowly grad student with very little free time for maintaining/perfecting this project. I do plan on revisiting it this summer to make a more advanced and customizable version which you could easily integrate into any experiment. I'm sure you're more concerned with what can be done today, though.

    Yes, when first working with this software there is typically a LOT more jitter than in E-Prime (tested using EGI's offset kit). I typically find offsets for visual onset times to be in the range of ±15 ms, while E-Prime usually comes out to about ±3 ms. That's huge, especially when you're recording at 1000 Hz. From discussing this issue with colleagues, how much the discrepancy matters depends on your hypotheses, in three potential situations:
    1) Your hypotheses are based on ERPs of early components (0–150 ms after stimulus onset)
    2) Your hypotheses are based on ERPs of later components (150 ms+ after stimulus onset)
    3) You're performing frequency analysis

    If you're in situation 1, this is really dangerous, as you have smaller, more time-condensed components of interest. Averaging a P50 with ±15 ms jitter would certainly flatten the component heavily and be completely unrepresentative of the actual component being studied (although the P50 is typically auditory and doesn't bear on visual onset times; in that case you have more of an issue with audio timing than visual).

    If you're in situation 2, this is not as dangerous, as your waveforms are more spread out over time (typically the 300 ms+ components) and therefore take less damage from the jitter (it will affect the onset/offset of the components, but less so the amplitude). The 150–300 ms components could be considered in greater danger, but those seem to do alright so far, so I take it that they are such strong effects that the spreading doesn't have as enormous an impact (still an impact, nonetheless).

    Finally, if you're in situation 3, it depends. Are you doing time-frequency analyses or just frequency analyses? The jitter will have more effect on time-frequency analyses, though potentially it isn't as major a problem as it is for ERPs (I haven't had a chance to investigate this well enough to say). My sense is that plain frequency analyses would be relatively safe, but that's also debatable.
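    The flattening effect in situation 1 is easy to demonstrate numerically. Here's a minimal sketch (my own illustration, not part of the plug-in): it averages a Gaussian-shaped "P50-like" component across trials whose onsets are shifted by random Gaussian jitter, and shows how the peak of the grand average shrinks as jitter grows. All numbers and shapes are illustrative assumptions, not measured values.

```python
import numpy as np

def simulate_jitter_attenuation(jitter_sd_ms, n_trials=200, seed=0):
    """Average a Gaussian 'P50-like' component (true peak 1.0 at 50 ms,
    ~10 ms wide) over trials whose onsets are shifted by Gaussian jitter;
    return the peak amplitude of the resulting grand average."""
    rng = np.random.default_rng(seed)
    t = np.arange(-100, 300)  # time axis in ms (1 kHz sampling)
    jitters = rng.normal(0.0, jitter_sd_ms, n_trials)
    trials = [np.exp(-0.5 * ((t - 50 - j) / 10) ** 2) for j in jitters]
    return float(np.mean(trials, axis=0).max())

# The averaged peak shrinks as trial-to-trial jitter grows:
print(simulate_jitter_attenuation(1.5))   # small, E-Prime-like jitter
print(simulate_jitter_attenuation(7.5))   # jitter spanning roughly +-15 ms
```

    A later, slower component (wider Gaussian) loses proportionally less amplitude under the same jitter, which is exactly the situation-2 point above.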

    In conclusion, I would say that if you have something whose timing you're very concerned about, use PsychToolbox or PsychoPy for the moment. I've tried a number of ways to toy around with the timing of graphics cards/backends/system configurations to get timing as precise as E-Prime's in OpenSesame, but haven't gotten it to work just yet. This millisecond-precision issue could just be me doing something wrong, but I haven't had enough testers out there yet to confirm anything (welcome to the club!).

    If you have a chance, could you run a simple experiment to test visual offsets with an EGI offset kit and send me the resulting offset file? I'll see if I can help, or at least add it to the files I've already collected on this issue to try to figure out a resolution.

    It's not there yet.

    Josh Zosky

    Hey Mike,

    I'm in the process of updating the PyNetstation plug-in for better timing accuracy, and it should be completed very shortly. Can you tell me which graphics card you were using when you measured the offset jitter? Also, which backend were you using? The PsychoPy backend is currently the most reliable, but hopefully the xpyriment backend will soon be just as reliable.


  • Hey, we should start some communication again :-)

    A friend of mine and I would like to help get OpenSesame working with the EGI system - testing, some code... feel free to ask. I don't know the HW specs of the computer - it is some Windows 7 machine with the Geodesic software preinstalled alongside E-Prime 2. A regular desktop (some Optiplex 7xxx).

    We used the Expyriment backend and measured the real onsets with a photodiode to estimate the jitter over about 100 image repetitions (trials = same pictures).
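    For anyone repeating this kind of photodiode measurement, a short sketch of how we might summarize it (the function name and the toy numbers are mine, purely illustrative): the mean offset between markers and measured onsets is a constant lag that can be corrected for, while the spread of the offsets is the jitter that actually hurts.

```python
import numpy as np

def onset_jitter(marker_times_ms, photodiode_times_ms):
    """Summarize the offset between event markers and photodiode-measured
    onsets: the mean is a systematic (correctable) delay, the spread is
    the trial-to-trial jitter."""
    offsets = np.asarray(photodiode_times_ms, dtype=float) \
        - np.asarray(marker_times_ms, dtype=float)
    return {
        "mean_offset_ms": float(offsets.mean()),          # constant lag
        "sd_ms": float(offsets.std(ddof=1)),              # jitter
        "range_ms": float(offsets.max() - offsets.min()),
    }

# Toy data for 5 of ~100 repetitions: a ~20 ms constant lag plus jitter
stats = onset_jitter([0, 1000, 2000, 3000, 4000],
                     [21, 1018, 2024, 3019, 4022])
print(stats)
```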

    I guess we should take a closer look at it, right?

    Thank you,

  • Hey Mike,

    Sorry for the delay. The academic year put an end to my earlier endeavors on this plugin.

    The plug-in should definitely be used with the PsychoPy backend at the moment, as it has a specific function which aids the timing accuracy of stimulus onsets. That has proven relatively successful at keeping jitter to a manageable amount (1–4 ms). It definitely depends on the computer's processing power/RAM/graphics card, though.

    It was recently brought to my attention that there is another method, besides the one currently used, to keep timing accurate between PyNetstation and Netstation. Apparently EGI has (as of the 300-series amplifiers) quietly implemented an NTP server which provides greater timing accuracy than the typical method. I'm planning on implementing that soon (within the next 3 weeks) and detailing my success/failure here for others.
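    For context on why an NTP server helps: the core of NTP clock synchronization is a single request/response exchange with four timestamps, from which both the clock offset and the network round-trip delay fall out. This is a generic sketch of that standard calculation (not EGI's actual implementation, whose details I don't have yet), with made-up example timestamps:

```python
def ntp_offset_ms(t0, t1, t2, t3):
    """Classic NTP offset/delay estimate from one exchange (all times in ms):
    t0 = client send, t1 = server receive, t2 = server send, t3 = client
    receive. Returns (clock_offset, round_trip_delay)."""
    offset = ((t1 - t0) + (t2 - t3)) / 2.0   # how far ahead the server clock is
    delay = (t3 - t0) - (t2 - t1)            # time spent on the network
    return offset, delay

# Amplifier clock running 5 ms ahead of the stim PC, 2 ms travel each way:
offset, delay = ntp_offset_ms(t0=100.0, t1=107.0, t2=107.5, t3=104.5)
print(offset, delay)   # offset = 5.0 ms, round trip = 4.0 ms
```

    Repeating the exchange and keeping the estimate with the smallest round-trip delay is the usual way to tighten this up, since asymmetric network delay is the main error source.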

    Once I get to test that, I'll update the repository with a fully tested and functional plug-in that should be as good as it can get.


    P.S. If anyone else is interested in this, please let me know. The more demand there is out there, the more time I'll try to dedicate.

  • Hi everybody,
    We are just about to start an endeavour of using an SMI eye tracker alongside a NetStation 3 setup through OpenSesame, and I was wondering if anybody had an example paradigm they wouldn't mind us taking a look at, in terms of setting up NetStation triggers and that sort of thing.



    Hey Nathan,

    Sorry for the crazy late response. I presume you've got something going already (really sorry about that), but if you or anyone else is interested, I'll be refining the plug-in soon to make it a fully integrated part of OpenSesame. There will also be some major updates to how the underlying code works. Stay tuned! (Or email me directly with questions at:


