Windows error Access violation - crash

edited January 2017 in OpenSesame

Hi,

I have the following error when running my script:

  File "C:\OpenSesame314\lib\site-packages\libopensesame\inline_script.py", line 102, in run
    self.experiment.python_workspace._exec(self.crun)
  File "C:\OpenSesame314\lib\site-packages\libopensesame\python_workspace.py", line 161, in _exec
    exec(bytecode, self._globals)
  File "<string>", line 18, in <module>
  File "C:\OpenSesame314\lib\site-packages\openexp\_canvas\psycho.py", line 144, in show
    stim.draw()
  File "C:\OpenSesame314\lib\site-packages\psychopy\visual\text.py", line 591, in draw
    self._pygletTextObj.draw()
  File "_build/bdist.macosx-10.5-x86_64/egg/pyglet/font/__init__.py", line 565, in draw
    self._layout.draw()
  File "_build/bdist.macosx-10.5-x86_64/egg/pyglet/text/layout.py", line 852, in draw
    self.batch.draw()
  File "_build/bdist.macosx-10.5-x86_64/egg/pyglet/graphics/__init__.py", line 554, in draw
    func()
  File "_build/bdist.macosx-10.5-x86_64/egg/pyglet/graphics/__init__.py", line 486, in <lambda>
    (lambda d, m: lambda: d.draw(m))(domain, mode))
  File "_build/bdist.macosx-10.5-x86_64/egg/pyglet/graphics/vertexdomain.py", line 313, in draw
    glDrawArrays(mode, starts[0], sizes[0])
WindowsError: exception: access violation reading 0x00000010

I am using the most recent version of OpenSesame (3.1.4) to run this script.
I reproduced the error on multiple computers, although the memory address occasionally changes. Looking up this error message, it seems to suggest that Python is trying to access memory that it isn't allowed to, but I am not sure when or how I trigger it. I am running OpenSesame on Windows (7 or 10) and also tried starting the program in administrator mode.

Any help is appreciated!

Michel

Comments

  • edited January 2017

    ...maybe also interesting to know:
    On my computers, the script runs for a few trials before it stops displaying the stimuli (except for the fixation dot) and then finally crashes.

    I just stripped the entire file of all the sequences that follow the first sequence (the practice block) and then ran it again. Now I do not run into the issue described above. Does this maybe have something to do with memory filling up? Is there any way to avoid this without splitting the experiment up into several smaller scripts?

  • Hi Michel,

    Maybe this helps:
    http://forum.cogsci.nl/index.php?p=/discussion/comment/8770/#Comment_8770

    You can use Windows task manager to monitor memory like here:
    https://github.com/smathot/OpenSesame/issues/413

    Best,
    Jarik

  • Hi Michel,

    The error is, in principle, a bug somewhere in OpenGL. But as Jarik also suggests, I think it's probably also due to memory filling up (in combination with buggy memory management).

    You have a lot of sketchpads that are all prepared at the same time, because there is no layer of loop between them, if you see what I mean. As a start, you could change all these instruction sketchpads to feedback items. This should substantially reduce the lag when starting the experiment (which is already a problem in itself, I would say).

    Then, start the experiment in a window with the Windows task manager next to it, to see if memory fills up. If it does, then we can think of a way to fix it.
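    If you'd rather log memory from within the experiment itself, the standard library's tracemalloc module is an option on Python 3 based installs (note that the stock Windows OpenSesame discussed here runs Python 2, where tracemalloc is unavailable). It only sees allocations made by Python itself, not OpenGL or driver buffers, so treat it as a rough indicator. A minimal sketch, in which the bytearray list is just a stand-in for the objects an experiment accumulates:

```python
import tracemalloc

tracemalloc.start()
# Stand-in for the objects an experiment accumulates per trial:
junk = [bytearray(1024) for _ in range(1000)]
current, peak = tracemalloc.get_traced_memory()
print(f'current: {current / 1e6:.1f} MB, peak: {peak / 1e6:.1f} MB')
tracemalloc.stop()
```

    Logging these numbers at the end of each block would show whether Python-level memory climbs steadily over the experiment.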

    Cheers!
    Sebastiaan

    There's much bigger issues in the world, I know. But I first have to take care of the world I know.
    cogsci.nl/smathot

  • edited January 2017

    Hi Jarik, Sebastiaan,

    Thanks for the help. I did not know that sketchpads eat more resources than feedback items; that's good to know. Changing that already helped: I can now get through the first loop, and even further depending on the machine I am using.
    I also monitored my memory during the process and could indeed see that it was almost fully loaded on one computer (it didn't go completely full, but reached roughly 90–95%). However, on my other one (with 8 GB RAM) it was only around 35–40% full and still crashed with the same exception raised. Is there some application limit in OpenSesame (i.e. does it not use all available memory)?
    Could you elaborate on what you mean by there being no loop between them? I currently have 5 feedback items before the first loop, 3 between the first and the second loop, and so on. Is that too many? Is there a way to delete feedback items after they have been presented, to free up memory? For instance, could I delete the 5 feedback items from before the first loop once that loop has been completed?
    Would it be better for performance to transform all feedback items into one inline script that just presents the instructions, or does that not matter?

    Michel

  • Thanks for the help. I did not know that sketchpads eat more resources than feedback items.

    They consume the same amount of resources, but are prepared at a different moment. sketchpad items are all prepared in advance, so if you have a lot of them, the preparation can be very noticeable. feedback items are only prepared when they are shown, so the preparation time is often less noticeable.

    I also monitored my memory during the process and could indeed see that it was almost fully loaded on one computer (it didn't go completely full, but reached roughly 90–95%). However, on my other one (with 8 GB RAM) it was only around 35–40% full and still crashed with the same exception raised. Is there some application limit in OpenSesame (i.e. does it not use all available memory)?

    The Windows Python 2 version is 32-bit, meaning that it uses at most 4 GB of memory. So it makes sense that it crashes with a memory error once memory consumption nears 50% on an 8 GB system.
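    A quick way to check which build you have is to ask Python for its pointer size. This is a stdlib-only sketch you can paste into the debug window, not an official OpenSesame diagnostic:

```python
import struct

# A pointer is 4 bytes on a 32-bit interpreter and 8 bytes on a 64-bit one.
bits = struct.calcsize('P') * 8
print(f'This Python interpreter is {bits}-bit')
# A 32-bit process can address at most 4 GB in total, regardless of how
# much physical RAM the machine has installed.
```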

    Could you elaborate on what you mean by there is no loop between them?

    A sequence prepares all the items in it during its own prepare phase. In contrast, a loop prepares (and runs) all the items in it during its own run phase. In other words, all the sketchpads in your experiments that do not have a loop above them in the hierarchy are prepared at the start of the experiment—and there are many of them. Does that make sense?
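    The prepare/run distinction can be sketched in a few lines of plain Python. This is a toy model to illustrate the ordering, not OpenSesame's actual item classes:

```python
# Toy model (NOT OpenSesame's real code) of the prepare/run phases:
# a sequence prepares all of its children during its own prepare phase,
# while a loop defers preparation of its child to its own run phase.
events = []

class Item:
    def __init__(self, name):
        self.name = name
    def prepare(self):
        events.append('prepare ' + self.name)
    def run(self):
        events.append('run ' + self.name)

class Sequence(Item):
    def __init__(self, name, children):
        super().__init__(name)
        self.children = children
    def prepare(self):
        for child in self.children:   # everything prepared up front
            child.prepare()
    def run(self):
        for child in self.children:
            child.run()

class Loop(Item):
    def __init__(self, name, child, cycles):
        super().__init__(name)
        self.child, self.cycles = child, cycles
    def prepare(self):
        pass                          # nothing prepared in advance
    def run(self):
        for _ in range(self.cycles):  # child prepared just in time,
            self.child.prepare()      # once per cycle
            self.child.run()

exp = Sequence('experiment', [
    Item('instructions1'),
    Item('instructions2'),
    Loop('block_loop', Item('trial_sketchpad'), cycles=2),
])
exp.prepare()  # both instruction items are prepared here, before anything runs
exp.run()
print(events)
```

    So every sketchpad that sits directly in a sequence, with no loop above it, contributes to the preparation work done at the very start of the experiment.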

    Now for a fix: Could you try the following?

    In a text editor, open the following file:

    • [OpenSesame folder]\Lib\site-packages\libopensesame\sketchpad.py

    Then, explicitly delete the canvas after it has been shown, by changing the run() function (line 120) from:

        def run(self):
    
            """See item."""
    
            self._t0 = self.set_item_onset(self.canvas.show())
            base_response_item.run(self)
    

    To:

        def run(self):
    
            """See item."""
    
            self._t0 = self.set_item_onset(self.canvas.show())
            base_response_item.run(self)
            del self.canvas # delete the canvas
    

    (Make sure that you preserve indentation, so no random copy-pasting from the forum into the file!)

    Then run the experiment, again using feedback items where possible and while monitoring memory consumption. Does this resolve the issue?

    Cheers,
    Sebastiaan


  • edited January 2017

    I just tried that, but unfortunately it did not fix the issue. I was at 42% on my 8 GB PC, and it crashed with the same error at around the same point in the experiment as before the change.

  • Hi Michel,

    I just did a bit of testing myself, and explicitly deleting the canvas indeed doesn't appear to make a big difference—which is odd. What does make a difference is re-enabling the automatic garbage collection. You can do that by editing the general script, and changing this line:

    set disable_garbage_collection yes
    

    To:

    set disable_garbage_collection no
    

    A bit of background: Python frees up memory when variables are no longer used. This is called garbage collection. By default, Python does this at unpredictable moments, which can cause a bit of unpredictable delay. Therefore, OpenSesame disables this, and explicitly performs garbage collection at the end of every sequence. But this doesn't appear to be as effective as allowing Python to do it automatically. (I don't understand why not though.)
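    The two settings map directly onto Python's built-in gc module. Here is a minimal stand-alone sketch of what each mode amounts to (the trial code is just a placeholder):

```python
import gc

gc.disable()           # roughly what 'set disable_garbage_collection yes' does
# ... time-critical trial code runs here without surprise GC pauses ...
freed = gc.collect()   # an explicit collection, e.g. at the end of a sequence
print('collected', freed, 'unreachable objects')
gc.enable()            # 'set disable_garbage_collection no': Python decides
                       # on its own when to collect
```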

    In any case, I would re-enable the automatic garbage collection as described above and try again. Please let me know if that resolves the issue.

    Cheers,
    Sebastiaan


  • edited January 2017

    Hi Sebastiaan,

    Thanks for the help. As you suggested, I set disable_garbage_collection in the general script to no (I assume you meant to load the script in an editor).
    Unfortunately, however, this also did not fix the problem on either PC I tested it on. The crash still happens, at similar points in the experiment (although different ones on different PCs) :-(
    This is my first script into which I "translated" much of my old material from the old OpenSesame versions (prior to 3.0). Is there maybe something wrong in the way I did that (e.g. my use of var or something like that)?

  • Hi Michel,

    This is my first script into which I "translated" much of my old material from the old OpenSesame versions (prior to 3.0). Is there maybe something wrong in the way I did that (e.g. my use of var or something like that)?

    No, this is not something that you're doing wrong. It's a problem with how OpenSesame and the underlying libraries deal with memory.

    But I did make some progress. There were so-called cyclic references that prevented canvas objects from being deleted under some circumstances. A cyclic reference is when object A refers to B, and object B refers back to A. In this situation, both objects are always referred to, and therefore never deleted by the garbage collector, which only looks at objects that are no longer referred to.
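    For the curious, such a cycle is easy to reproduce in plain Python (the Node class is purely illustrative). CPython's reference counting alone never frees the pair; only the cyclic garbage collector can reclaim it:

```python
import gc

class Node:
    """Toy object that can point at a partner."""

gc.disable()                 # ensure no automatic collection interferes
a, b = Node(), Node()
a.partner, b.partner = b, a  # A refers to B, and B refers back to A
del a, b                     # refcounts stay above zero because of the cycle...
freed = gc.collect()         # ...so only an explicit cycle collection frees them
gc.enable()
print('cycle collector freed', freed, 'objects')
```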

    Could you try your experiment again with the latest prerelease of 3.1.5 (right now 3.1.5a3)? Please try it with disable_garbage_collection set to both 'yes' and 'no' as described above. (You can change this by clicking on 'General script' in the General Properties tab, changing the script, and then clicking on 'Apply'.)

    I'm not 100% sure that the problem is completely gone now, but I think it might be.

    Cheers,
    Sebastiaan

    @Knante This may be the same issue that caused memory to fill up on your tablet experiments.


  • edited January 2017

    Hi Sebastiaan,

    I just tried it with my 8 GB RAM laptop, using the 3.1.5a3 prerelease of OpenSesame that you provided. Unfortunately, that did not resolve the issue for me either (neither with garbage collection set to 'yes' nor to 'no'). On this computer it still crashes as soon as I start the prac_trainLoop.
    I am gonna try it on my other computer later...

    ...I just tested it on the other PC (Win 7, 4 GB RAM). Here, too, the problem is unfortunately not solved yet. On this PC it always crashes somewhat later, when it is already inside the prac_trainLoop.

    That's weird, and annoying. I cannot reproduce the crash myself. On my Windows 7 system, the experiment gradually eats its way up to about 700 MB of memory. That's clearly excessive, and the fact that memory consumption accumulates over time suggests that memory is still not properly freed up in all cases (although more so than before the fix). But 700 MB is not nearly enough to trigger a memory error, and the experiment finishes without any trouble. (Btw, see this post for a trick to enable auto-responses for debugging.)

    So I don't have any real suggestions left. Try another backend?


  • edited January 2017

    Assuming I step away from using feedback items and instead present all instructions in inline scripts, with a single canvas that I overwrite (or maybe even delete manually) every time, do you think that would solve the issue for now?

  • Looking back at your previous posts: I assumed that you were monitoring memory use of OpenSesame, but that's not actually what you said. These percentages, do they refer to overall memory use of your system, or to the memory use of OpenSesame on its own?


  • edited January 2017

    I monitored overall memory!
    I just ran the same thing while monitoring the memory use of OpenSesame itself. The OpenSesame application's memory does not change on either PC (a few MB). pythonw.exe, however, starts at around 80 MB on one PC and runs up to 450 MB towards the end, before it crashes (around 300 MB after the "practice" sequence). The largest jumps in usage indeed happen when clicking through the feedback items (instruction slides), while only roughly 1 MB is added per actual trial. Either way, that is still well below the theoretically available memory.

    I monitored overall memory! I just ran the same thing while monitoring the memory use of OpenSesame itself. The OpenSesame application's memory does not change on either PC (a few MB). pythonw.exe, however, starts at around 80 MB on one PC and runs up to 450 MB towards the end, before it crashes (around 300 MB after the "practice" sequence).

    Ok, that explains the apparent difference in memory use between us. This also means that memory use wasn't the problem to begin with. So it's probably an obscure, difficult-to-resolve OpenGL issue.

    On the bright side: There was a garbage-collection problem in OpenSesame, and it is fixed now :wink:

    Assuming I step away from using feedback items and instead present all instructions in inline scripts, with a single canvas that I overwrite (or maybe even delete manually) every time, do you think that would solve the issue for now?

    Possibly, but it's very difficult to say. Why not switch to the xpyriment backend?


  • edited January 2017

    Well, that's good news :)
    I chose psychopy because I thought I remembered it being the best when it comes to timing. Given that timing is very critical in this experiment (some stimuli are presented for only a few frames on a 120 Hz monitor), I figured it was the right one.
    Is xpyriment good enough for timing-sensitive experiments? I assume I will have to run some timing tests to make sure...

  • edited January 2017

    Is xpyriment good enough for timing-sensitive experiments? I assume I will have to run some timing tests to make sure...

    In principle, xpyriment and psycho should be nearly identical in terms of timing. But of course, in practice there are sometimes differences (as often in favor of xpyriment as the other way around, though).
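    Whichever backend you use, you can also sanity-check timing yourself by recording the onset timestamps that canvas.show() returns and inspecting the intervals. A small backend-agnostic sketch with synthetic timestamps (the numbers and the helper function are made up for illustration):

```python
FRAME_MS = 1000 / 120  # expected frame period on a 120 Hz monitor, ~8.33 ms

def late_intervals(onsets, frames_per_stim=1, tol=FRAME_MS / 2):
    """Return inter-onset intervals (ms) that deviate from the expected
    duration by more than half a frame, i.e. likely dropped frames."""
    expected = frames_per_stim * FRAME_MS
    intervals = [b - a for a, b in zip(onsets, onsets[1:])]
    return [iv for iv in intervals if abs(iv - expected) > tol]

# Synthetic onsets (ms): the fourth stimulus appeared one frame late.
onsets = [0.0, 8.3, 16.7, 33.3, 41.7]
flagged = late_intervals(onsets)
print(flagged)  # flags the ~16.6 ms gap
```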

    Expyriment actually has a pretty good test suite, which you can start from the debug window:

    import expyriment
    expyriment.control.run_test_suite()
    

    See also:

    Edit: Another thing you could try is updating PsychoPy, again from the debug window:

    import pip
    pip.main(['install', 'psychopy', '--upgrade'])
    pip.main(['install', 'configobj']) # New dependency in PsychoPy
    


  • I just gave the expyriment backend a test spin. When running it, I can't even get into the first block. After 2-3 minutes it crashes with:
    TypeError: delay requires one integer argument

    So, I guess my best shot is to go with inline script(s) instead...

  • Hi,

    Late to the party, but it might help for future projects or developments in the software:

    I had exactly the same problem when running my experiment on a Windows laptop. No matter what I tried, it crashed halfway through. I'm pretty sure it was a memory problem: my stimulus files are high-quality sound files adding up to 507 MB in total. For some reason, it runs perfectly stably on all Mac laptops.

    Maybe this helps in figuring out what the problem is... Or at least you could try running it on Mac/Linux when encountering similar problems.
